29 Aug, 2019

1 commit

  • Pull arm64 fixes from Will Deacon:
    "Hot on the heels of our last set of fixes are a few more for -rc7.

    Two of them are fixing issues with our virtual interrupt controller
    implementation in KVM/arm, while the other is a longstanding but
    straightforward kallsyms fix which has been acked by Masami and
    resolves an initialisation failure in kprobes observed on arm64.

    - Fix GICv2 emulation bug (KVM)

    - Fix deadlock in virtual GIC interrupt injection code (KVM)

    - Fix kprobes blacklist init failure due to broken kallsyms lookup"

    * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
    KVM: arm/arm64: vgic-v2: Handle SGI bits in GICD_I{S,C}PENDR0 as WI
    KVM: arm/arm64: vgic: Fix potential deadlock when ap_list is long
    kallsyms: Don't let kallsyms_lookup_size_offset() fail on retrieving the first symbol

    Linus Torvalds
     

28 Aug, 2019

1 commit

  • A guest is not allowed to inject an SGI (or clear its pending state)
    by writing to GICD_ISPENDR0 (resp. GICD_ICPENDR0), as these bits are
    defined as WI (as per ARM IHI 0048B 4.3.7 and 4.3.8).

    Make sure we correctly emulate the architecture.
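
    A minimal sketch of what WI means for the GICD_I{S,C}PENDR0 write
    handlers (a sketch only, assuming the vgic's usual helpers; the
    surrounding loop over INTIDs is elided):

        /* SGIs (INTIDs 0-15) are WI on a GICv2: skip them so the
         * write leaves their pending state untouched. */
        if (vgic_irq_is_sgi(irq->intid) &&
            vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) {
                vgic_put_irq(vcpu->kvm, irq);
                continue;
        }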

    Fixes: 96b298000db4 ("KVM: arm/arm64: vgic-new: Add PENDING registers handlers")
    Cc: stable@vger.kernel.org # 4.7+
    Reported-by: Andre Przywara
    Signed-off-by: Marc Zyngier
    Signed-off-by: Will Deacon

    Marc Zyngier
     

27 Aug, 2019

1 commit

  • If the ap_list is longer than 256 entries, merge_final() in list_sort()
    will call the comparison callback with the same element twice, causing
    a deadlock in vgic_irq_cmp().

    Fix it by returning early when irqa == irqb.
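
    Since vgic_irq_cmp() takes both interrupts' locks before comparing
    them, being handed the same element twice means acquiring the same
    spinlock twice. A sketch of the guard (comparator shape per the
    existing vgic code; the locking and priority comparison are elided):

        static int vgic_irq_cmp(void *priv, struct list_head *a,
                                struct list_head *b)
        {
                struct vgic_irq *irqa = container_of(a, struct vgic_irq, ap_list);
                struct vgic_irq *irqb = container_of(b, struct vgic_irq, ap_list);

                /* list_sort() may compare an element with itself:
                 * bail out before trying to take its lock twice. */
                if (unlikely(irqa == irqb))
                        return 0;

                /* ... lock both IRQs, compare by priority, unlock ... */
                return 0;
        }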

    Cc: stable@vger.kernel.org # 4.7+
    Fixes: 8e4447457965 ("KVM: arm/arm64: vgic-new: Add IRQ sorting")
    Signed-off-by: Zenghui Yu
    Signed-off-by: Heyi Guo
    [maz: massaged commit log and patch, added Fixes and Cc-stable]
    Signed-off-by: Marc Zyngier
    Signed-off-by: Will Deacon

    Heyi Guo
     

24 Aug, 2019

2 commits

  • …git/kvmarm/kvmarm into kvm/fixes

    Pull KVM/arm fixes from Marc Zyngier as per Paolo's request at:

    https://lkml.kernel.org/r/21ae69a2-2546-29d0-bff6-2ea825e3d968@redhat.com

    "One (hopefully last) set of fixes for KVM/arm for 5.3: an embarassing
    MMIO emulation regression, and a UBSAN splat. Oh well...

    - Don't overskip instructions on MMIO emulation

    - Fix UBSAN splat when initializing PPI priorities"

    * tag 'kvmarm-fixes-for-5.3-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm:
    KVM: arm/arm64: VGIC: Properly initialise private IRQ affinity
    KVM: arm/arm64: Only skip MMIO insn once

    Will Deacon
     
  • At the moment we initialise the target *mask* of a virtual IRQ to the
    VCPU it belongs to, even though this mask is only defined for GICv2 and
    quickly runs out of bits for many GICv3 guests.
    This behaviour triggers an UBSAN complaint for more than 32 VCPUs:
    ------
    [ 5659.462377] UBSAN: Undefined behaviour in virt/kvm/arm/vgic/vgic-init.c:223:21
    [ 5659.471689] shift exponent 32 is too large for 32-bit type 'unsigned int'
    ------
    Also for GICv3 guests the reporting of TARGET in the "vgic-state" debugfs
    dump is wrong, due to this very same problem.
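
    The complaint boils down to this pattern in kvm_vgic_vcpu_init()
    (field names per the vgic code):

        /* Undefined once vcpu_id reaches 32: a 32-bit 1U is shifted
         * by 32 or more. */
        irq->targets = 1U << vcpu->vcpu_id;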

    Because there is no requirement to create the VGIC device before the
    VCPUs (and QEMU actually does it the other way round), we can't safely
    initialise mpidr or targets in kvm_vgic_vcpu_init(). But since we touch
    every private IRQ for each VCPU anyway later (in vgic_init()), we can
    just move the initialisation of those fields into there, where we
    definitely know the VGIC type.

    On the way make sure we really have either a VGICv2 or a VGICv3 device,
    since the existing code is just checking for "VGICv3 or not", silently
    ignoring the uninitialised case.

    Signed-off-by: Andre Przywara
    Reported-by: Dave Martin
    Tested-by: Julien Grall
    Signed-off-by: Marc Zyngier

    Andre Przywara
     

22 Aug, 2019

1 commit

  • If after an MMIO exit to userspace a VCPU is immediately run with an
    immediate_exit request, such as when a signal is delivered or an MMIO
    emulation completion is needed, then the VCPU completes the MMIO
    emulation and immediately returns to userspace. As the exit_reason
    does not get changed from KVM_EXIT_MMIO in these cases, we have to
    be careful not to complete the MMIO emulation again when the VCPU is
    eventually run again, because the emulation does an instruction skip
    (and doing too many skips would be a waste of guest code :-) We need
    additional VCPU state to track whether the emulation is complete.
    As luck would have it, we already have 'mmio_needed', which other
    architectures already appear to use in exactly this way.
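
    A sketch of the resulting guard in the completion path (function
    name per the arm/arm64 MMIO code; the rest of the body is elided):

        int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
        {
                /* Detect an MMIO return that was already completed and
                 * don't skip the instruction a second time. */
                if (unlikely(!vcpu->mmio_needed))
                        return 0;

                vcpu->mmio_needed = 0;
                /* ... complete the emulation, skip the insn once ... */
                return 0;
        }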

    Fixes: 0d640732dbeb ("arm64: KVM: Skip MMIO insn after emulation")
    Acked-by: Mark Rutland
    Signed-off-by: Andrew Jones
    Signed-off-by: Marc Zyngier

    Andrew Jones
     

05 Aug, 2019

5 commits

  • Since commit 328e56647944 ("KVM: arm/arm64: vgic: Defer
    touching GICH_VMCR to vcpu_load/put"), we leave ICH_VMCR_EL2 (or
    its GICv2 equivalent) loaded as long as we can, only syncing it
    back when we're scheduled out.

    There is a small snag with that though: kvm_vgic_vcpu_pending_irq(),
    which is indirectly called from kvm_vcpu_check_block(), needs to
    evaluate the guest's view of ICC_PMR_EL1. At the point where we
    call kvm_vcpu_check_block(), the vcpu is still loaded, and any
    change to PMR is not visible in memory until we do a vcpu_put().

    Things go really south if the guest does the following:

    mov x0, #0 // or any small value masking interrupts
    msr ICC_PMR_EL1, x0

    [vcpu preempted, then rescheduled, VMCR sampled]

    mov x0, #0xff // allow all interrupts
    msr ICC_PMR_EL1, x0
    wfi // traps to EL2, where VMCR is sampled

    [interrupt arrives just after WFI]

    Here, the hypervisor's view of PMR is zero, while the guest has enabled
    its interrupts. kvm_vgic_vcpu_pending_irq() will then say that no
    interrupts are pending (despite an interrupt being received) and we'll
    block for no reason. If the guest doesn't have a periodic interrupt
    firing once it has blocked, it will stay there forever.

    To avoid this unfortunate situation, let's resync VMCR from
    kvm_arch_vcpu_blocking(), ensuring that a following kvm_vcpu_check_block()
    will observe the latest value of PMR.
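
    A sketch of the resync point (helper name as introduced by this
    fix; comment paraphrases the rationale above):

        void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
        {
                /* Sync back the GIC CPU interface state (PMR included)
                 * so that kvm_vgic_vcpu_pending_irq() sees the guest's
                 * latest view when kvm_vcpu_check_block() runs. */
                preempt_disable();
                kvm_vgic_vmcr_sync(vcpu);
                preempt_enable();
        }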

    This has been found by booting an arm64 Linux guest with the pseudo NMI
    feature, and thus using interrupt priorities to mask interrupts instead
    of the usual PSTATE masking.

    Cc: stable@vger.kernel.org # 4.12
    Fixes: 328e56647944 ("KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put")
    Signed-off-by: Marc Zyngier

    Marc Zyngier
     
  • When calling debugfs functions, there is no need to ever check the
    return value. The function can work or not, but the code logic should
    never do something different based on this.

    Also, when doing this, change kvm_arch_create_vcpu_debugfs() to return
    void instead of an integer, as we should not care at all about if this
    function actually does anything or not.
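
    In diff terms, the arch hook's contract simply becomes (sketch):

        -int  kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu);
        +void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu);

    with the caller no longer propagating an error.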

    Cc: Paolo Bonzini
    Cc: "Radim Krčmář"
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Borislav Petkov
    Cc: "H. Peter Anvin"
    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Paolo Bonzini

    Greg KH
     
  • There is no need for this function, as all arches have to implement
    kvm_arch_create_vcpu_debugfs() no matter what. A #define symbol
    lets us actually simplify the code.

    Signed-off-by: Paolo Bonzini

    Paolo Bonzini
     
  • After commit d73eb57b80b (KVM: Boost vCPUs that are delivering interrupts), a
    five-year-old bug was exposed. When running the ebizzy benchmark in three
    80-vCPU VMs on one 80-pCPU Skylake server, many rcu_sched stall warnings
    splatted in the VMs after stress testing:

    INFO: rcu_sched detected stalls on CPUs/tasks: { 4 41 57 62 77} (detected by 15, t=60004 jiffies, g=899, c=898, q=15073)
    Call Trace:
    flush_tlb_mm_range+0x68/0x140
    tlb_flush_mmu.part.75+0x37/0xe0
    tlb_finish_mmu+0x55/0x60
    zap_page_range+0x142/0x190
    SyS_madvise+0x3cd/0x9c0
    system_call_fastpath+0x1c/0x21

    swait_active() remains true before finish_swait() is called in
    kvm_vcpu_block(), and voluntarily preempted vCPUs are taken into account
    by the kvm_vcpu_on_spin() loop, which greatly increases the probability
    that kvm_arch_vcpu_runnable(vcpu) is checked and found true. When APICv
    is enabled, the yield-candidate vCPU's VMCS RVI field then leaks (via
    vmx_sync_pir_to_irr()) into the spinning-on-a-taken-lock vCPU's current
    VMCS.

    This patch fixes it by conservatively checking only a subset of events.
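
    A simplified sketch of that conservative check (the real x86
    implementation consults a few more events, including APICv state):

        /* Called from the kvm_vcpu_on_spin() loop instead of the full
         * kvm_arch_vcpu_runnable(), which would touch the current VMCS. */
        bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
        {
                if (READ_ONCE(vcpu->arch.pv.pv_unhalted))
                        return true;

                if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
                    kvm_test_request(KVM_REQ_EVENT, vcpu))
                        return true;

                return false;
        }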

    Cc: Paolo Bonzini
    Cc: Radim Krčmář
    Cc: Christian Borntraeger
    Cc: Marc Zyngier
    Cc: stable@vger.kernel.org
    Fixes: 98f4a1467 (KVM: add kvm_arch_vcpu_runnable() test to kvm_vcpu_on_spin() loop)
    Signed-off-by: Wanpeng Li
    Signed-off-by: Paolo Bonzini

    Wanpeng Li
     
  • preempted_in_kernel is updated in the preempt notifier when involuntary
    preemption occurs, so it can be stale when voluntarily preempted vCPUs are
    taken into account by the kvm_vcpu_on_spin() loop. This patch makes the
    preempted_in_kernel check apply only to involuntary preemption.

    Cc: Paolo Bonzini
    Cc: Radim Krčmář
    Signed-off-by: Wanpeng Li
    Signed-off-by: Paolo Bonzini

    Wanpeng Li
     

26 Jul, 2019

1 commit

  • When fall-through warnings were enabled by default, the following warnings
    started to show up:

    ../virt/kvm/arm/hyp/vgic-v3-sr.c: In function ‘__vgic_v3_save_aprs’:
    ../virt/kvm/arm/hyp/vgic-v3-sr.c:351:24: warning: this statement may fall
    through [-Wimplicit-fallthrough=]
    cpu_if->vgic_ap0r[2] = __vgic_v3_read_ap0rn(2);
    ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
    ../virt/kvm/arm/hyp/vgic-v3-sr.c:352:2: note: here
    case 6:
    ^~~~
    ../virt/kvm/arm/hyp/vgic-v3-sr.c:353:24: warning: this statement may fall
    through [-Wimplicit-fallthrough=]
    cpu_if->vgic_ap0r[1] = __vgic_v3_read_ap0rn(1);
    ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
    ../virt/kvm/arm/hyp/vgic-v3-sr.c:354:2: note: here
    default:
    ^~~~~~~

    Rework so that the compiler doesn't warn about fall-through.
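
    The rework amounts to marking the intentional fall-throughs so the
    compiler stays quiet (a sketch of the pattern, matching the code
    quoted in the warnings above):

        switch (nr_pre_bits) {
        case 7:
                cpu_if->vgic_ap0r[3] = __vgic_v3_read_ap0rn(3);
                cpu_if->vgic_ap0r[2] = __vgic_v3_read_ap0rn(2);
                /* Fall through */
        case 6:
                cpu_if->vgic_ap0r[1] = __vgic_v3_read_ap0rn(1);
                /* Fall through */
        default:
                cpu_if->vgic_ap0r[0] = __vgic_v3_read_ap0rn(0);
        }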

    Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
    Signed-off-by: Anders Roxell
    Signed-off-by: Marc Zyngier

    Anders Roxell
     

24 Jul, 2019

1 commit

  • Renaming docs seems to be en vogue at the moment, so fix one of the
    grossly misnamed directories. We usually never use "virtual" as
    a shortcut for virtualization in the kernel, but always virt,
    as seen in the virt/ top-level directory. Fix up the documentation
    to match that.

    Fixes: ed16648eb5b8 ("Move kvm, uml, and lguest subdirectories under a common "virtual" directory, I.E:")
    Signed-off-by: Christoph Hellwig
    Signed-off-by: Paolo Bonzini

    Christoph Hellwig
     

23 Jul, 2019

1 commit

  • We use "pmc->idx" and the "chained" bitmap to determine if the pmc is
    chained, in kvm_pmu_pmc_is_chained(). But idx might be uninitialized
    (and random) when we doing this decision, through a KVM_ARM_VCPU_INIT
    ioctl -> kvm_pmu_vcpu_reset(). And the test_bit() against this random
    idx will potentially hit a KASAN BUG [1].

    In general, idx is the static property of a PMU counter that is not
    expected to be modified across resets, as suggested by Julien. It
    looks more reasonable if we can setup the PMU counter idx for a vcpu
    in its creation time. Introduce a new function - kvm_pmu_vcpu_init()
    for this basic setup. Oh, and the KASAN BUG will get fixed this way.

    [1] https://www.spinics.net/lists/kvm-arm/msg36700.html
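
    A sketch of the new hook (field names per the kvm_pmu layout):

        static void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
        {
                int i;
                struct kvm_pmu *pmu = &vcpu->arch.pmu;

                /* idx is a static property of each counter: set it
                 * once, at vcpu creation, rather than on every reset. */
                for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
                        pmu->pmc[i].idx = i;
        }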

    Fixes: 80f393a23be6 ("KVM: arm/arm64: Support chained PMU counters")
    Suggested-by: Andrew Murray
    Suggested-by: Julien Thierry
    Acked-by: Julien Thierry
    Signed-off-by: Zenghui Yu
    Signed-off-by: Marc Zyngier

    Zenghui Yu
     

20 Jul, 2019

1 commit

  • Inspired by commit 9cac38dd5d (KVM/s390: Set preempted flag during
    vcpu wakeup and interrupt delivery), we want to boost not just
    lock holders but also vCPUs that are delivering interrupts. Most
    smp_call_function_many calls are synchronous, so the IPI target vCPUs
    are also good yield candidates. This patch introduces vcpu->ready to
    boost vCPUs during wakeup and interrupt delivery time; unlike s390 we do
    not reuse vcpu->preempted so that voluntarily preempted vCPUs are taken
    into account by kvm_vcpu_on_spin, but vmx_vcpu_pi_put is not affected
    (VT-d PI handles voluntary preemption separately, in pi_pre_block).
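
    The mechanics, in sketch form (the spin-loop side is simplified):

        /* kvm_vcpu_wake_up()/interrupt delivery: flag the target as a
         * good yield candidate without touching vcpu->preempted. */
        WRITE_ONCE(vcpu->ready, true);

        /* kvm_vcpu_on_spin() then treats 'ready' vCPUs as eligible
         * yield targets alongside kernel-mode-preempted ones. */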

    Testing on an 80-HT, 2-socket Xeon Skylake server, with 80-vCPU,
    80GB-RAM VMs running ebizzy -M:

            vanilla    boosting    improved
    1VM       21443       23520          9%
    2VM        2800        8000        180%
    3VM        1800        3100         72%

    Testing on my 8-HT Haswell desktop, with two 8-vCPU, 8GB-RAM VMs,
    one running ebizzy -M, the other running 'stress --cpu 2':

    w/ boosting + w/o pv sched yield (vanilla):

        vanilla    boosting    improved
           1570        4000        155%

    w/ boosting + w/ pv sched yield (vanilla):

        vanilla    boosting    improved
           1844        5157        179%

    w/o boosting, perf top in VM:

     72.33%  [kernel]       [k] smp_call_function_many
      4.22%  [kernel]       [k] call_function_i
      3.71%  [kernel]       [k] async_page_fault

    w/ boosting, perf top in VM:

     38.43%  [kernel]       [k] smp_call_function_many
      6.31%  [kernel]       [k] async_page_fault
      6.13%  libc-2.23.so   [.] __memcpy_avx_unaligned
      4.88%  [kernel]       [k] call_function_interrupt

    Cc: Paolo Bonzini
    Cc: Radim Krčmář
    Cc: Christian Borntraeger
    Cc: Paul Mackerras
    Cc: Marc Zyngier
    Signed-off-by: Wanpeng Li
    Signed-off-by: Paolo Bonzini

    Wanpeng Li
     

13 Jul, 2019

2 commits

  • Pull KVM updates from Paolo Bonzini:
    "ARM:
    - support for chained PMU counters in guests
    - improved SError handling
    - handle Neoverse N1 erratum #1349291
    - allow side-channel mitigation status to be migrated
    - standardise most AArch64 system register accesses to msr_s/mrs_s
    - fix host MPIDR corruption on 32bit
    - selftests cleanups

    x86:
    - PMU event {white,black}listing
    - ability for the guest to disable host-side interrupt polling
    - fixes for enlightened VMCS (Hyper-V pv nested virtualization),
    - new hypercall to yield to IPI target
    - support for passing cstate MSRs through to the guest
    - lots of cleanups and optimizations

    Generic:
    - Some txt->rST conversions for the documentation"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (128 commits)
    Documentation: virtual: Add toctree hooks
    Documentation: kvm: Convert cpuid.txt to .rst
    Documentation: virtual: Convert paravirt_ops.txt to .rst
    KVM: x86: Unconditionally enable irqs in guest context
    KVM: x86: PMU Event Filter
    kvm: x86: Fix -Wmissing-prototypes warnings
    KVM: Properly check if "page" is valid in kvm_vcpu_unmap
    KVM: arm/arm64: Initialise host's MPIDRs by reading the actual register
    KVM: LAPIC: Retry tune per-vCPU timer_advance_ns if adaptive tuning goes insane
    kvm: LAPIC: write down valid APIC registers
    KVM: arm64: Migrate _elx sysreg accessors to msr_s/mrs_s
    KVM: doc: Add API documentation on the KVM_REG_ARM_WORKAROUNDS register
    KVM: arm/arm64: Add save/restore support for firmware workaround state
    arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
    KVM: arm/arm64: Support chained PMU counters
    KVM: arm/arm64: Remove pmc->bitmask
    KVM: arm/arm64: Re-create event when setting counter value
    KVM: arm/arm64: Extract duplicated code to own function
    KVM: arm/arm64: Rename kvm_pmu_{enable/disable}_counter functions
    KVM: LAPIC: ARBPRI is a reserved register for x2APIC
    ...

    Linus Torvalds
     
  • The PTE allocations in arm64 are identical to the generic ones modulo the
    GFP flags.

    Using the generic pte_alloc_one() functions ensures that the user page
    tables are allocated with __GFP_ACCOUNT set.

    The arm64 definition of PGALLOC_GFP is removed and replaced with
    GFP_PGTABLE_USER for p[gum]d_alloc_one() for the user page tables and
    GFP_PGTABLE_KERNEL for the kernel page tables. The KVM memory cache is now
    using GFP_PGTABLE_USER.

    The mappings created with create_pgd_mapping() are now using
    GFP_PGTABLE_KERNEL.
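
    For reference, the flag combinations used above (a sketch of the
    definitions introduced earlier in the series):

        #define GFP_PGTABLE_KERNEL      (GFP_KERNEL | __GFP_ZERO)
        #define GFP_PGTABLE_USER        (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)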

    The conversion to the generic version of pte_free_kernel() removes the NULL
    check for pte.

    The pte_free() version on arm64 is identical to the generic one and
    can be simply dropped.

    [cai@lca.pw: fix a bogus GFP flag in pgd_alloc()]
    Link: https://lore.kernel.org/r/1559656836-24940-1-git-send-email-cai@lca.pw/
    [and fix it more]
    Link: https://lore.kernel.org/linux-mm/20190617151252.GF16810@rapoport-lnx/
    Link: http://lkml.kernel.org/r/1557296232-15361-5-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport
    Cc: Albert Ou
    Cc: Anshuman Khandual
    Cc: Anton Ivanov
    Cc: Arnd Bergmann
    Cc: Catalin Marinas
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Guo Ren
    Cc: Helge Deller
    Cc: Ley Foon Tan
    Cc: Matthew Wilcox
    Cc: Matt Turner
    Cc: Michael Ellerman
    Cc: Michal Hocko
    Cc: Palmer Dabbelt
    Cc: Paul Burton
    Cc: Ralf Baechle
    Cc: Richard Kuo
    Cc: Richard Weinberger
    Cc: Russell King
    Cc: Sam Creasey
    Cc: Vincent Chen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

10 Jul, 2019

1 commit

  • The field "page" is initialized to KVM_UNMAPPED_PAGE when it is not used
    (i.e. when the memory lives outside kernel control). So this check will
    always end up using kunmap even for memremap regions.
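
    The fix, in essence (a sketch of the corrected check):

        /* 'page' is KVM_UNMAPPED_PAGE -- not NULL -- for memremap'd
         * regions, so compare against the sentinel explicitly. */
        if (map->page != KVM_UNMAPPED_PAGE)
                kunmap(map->page);
        else
                memunmap(map->hva);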

    Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
    Cc: stable@vger.kernel.org
    Signed-off-by: KarimAllah Ahmed
    Signed-off-by: Paolo Bonzini

    KarimAllah Ahmed
     

08 Jul, 2019

1 commit

  • As part of setting up the host context, we populate its
    MPIDR by using cpu_logical_map(). It turns out that contrary
    to arm64, cpu_logical_map() on 32bit ARM doesn't return the
    *full* MPIDR, but a truncated version.

    This leaves the host MPIDR slightly corrupted after the first
    run of a VM, since we won't correctly restore the MPIDR on
    exit. Oops.

    Since we cannot trust cpu_logical_map(), let's adopt a different
    strategy. We move the initialization of the host CPU context as
    part of the per-CPU initialization (which, in retrospect, makes
    a lot of sense), and directly read the MPIDR from the HW. This
    is guaranteed to work on both arm and arm64.
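
    A sketch of the idea for the arm64 side (the 32-bit code stores the
    value in its equivalent cp15 slot; exact context layout may differ):

        /* Run once per CPU during per-CPU init: the host's MPIDR is
         * immutable, so read the real register instead of trusting
         * cpu_logical_map(). */
        cpu_ctxt->sys_regs[MPIDR_EL1] = read_cpuid_mpidr();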

    Reported-by: Andre Przywara
    Tested-by: Andre Przywara
    Fixes: 32f139551954 ("arm/arm64: KVM: Statically configure the host's view of MPIDR")
    Signed-off-by: Marc Zyngier

    Marc Zyngier
     

05 Jul, 2019

8 commits

  • Currently, the {read,write}_sysreg_el*() accessors for accessing
    particular ELs' sysregs in the presence of VHE rely on some local
    hacks and define their system register encodings in a way that is
    inconsistent with the core definitions in <asm/sysreg.h>.

    As a result, it is necessary to add duplicate definitions for any
    system register that already needs a definition in sysreg.h for
    other reasons.

    This is a bit of a maintenance headache, and the reasons for the
    _el*() accessors working the way they do are a bit historical.

    This patch gets rid of the shadow sysreg definitions in
    <asm/kvm_hyp.h>, converts the _el*() accessors to use the core
    __msr_s/__mrs_s interface, and converts all call sites to use the
    standard sysreg #define names (i.e., upper case, with SYS_ prefix).

    This patch will conflict heavily anyway, so we take the opportunity
    to clean up some bad whitespace in the context of the changes.

    The change exposes a few system registers that have no sysreg.h
    definition, due to msr_s/mrs_s being used in place of msr/mrs:
    additions are made in order to fill in the gaps.
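
    Illustrative before/after at a call site (register chosen
    arbitrarily; sketch only):

        -  val = read_sysreg_el1(spsr);
        +  val = read_sysreg_el1(SYS_SPSR);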

    Signed-off-by: Dave Martin
    Cc: Catalin Marinas
    Cc: Christoffer Dall
    Cc: Mark Rutland
    Cc: Will Deacon
    Link: https://www.spinics.net/lists/kvm-arm/msg31717.html
    [Rebased to v4.21-rc1]
    Signed-off-by: Sudeep Holla
    [Rebased to v5.2-rc5, changelog updates]
    Signed-off-by: Marc Zyngier

    Dave Martin
     
  • KVM implements the firmware interface for mitigating cache speculation
    vulnerabilities. Guests may use this interface to ensure mitigation is
    active.
    If we want to migrate such a guest to a host with a different support
    level for those workarounds, migration might need to fail, to ensure that
    critical guests don't lose their protection.

    Introduce a way for userland to save and restore the workarounds state.
    On restoring we do checks that make sure we don't downgrade our
    mitigation level.
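
    From userspace the state then travels through the usual ONE_REG
    interface; a sketch, using the KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1
    ID documented by this series:

        __u64 val;
        struct kvm_one_reg reg = {
                .id   = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1,
                .addr = (__u64)&val,
        };

        ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);  /* save on the source */
        ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);  /* restore; fails rather
                                                   than downgrade */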

    Signed-off-by: Andre Przywara
    Reviewed-by: Eric Auger
    Reviewed-by: Steven Price
    Signed-off-by: Marc Zyngier

    Andre Przywara
     
  • Recent commits added the explicit notion of "workaround not required" to
    the state of the Spectre v2 (aka. BP_HARDENING) workaround, where we
    just had "needed" and "unknown" before.

    Export this knowledge to the rest of the kernel and enhance the existing
    kvm_arm_harden_branch_predictor() to report this new state as well.
    Export this new state to guests when they use KVM's firmware interface
    emulation.
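
    A sketch of the tri-state this exposes (constant names per the
    series; exact values illustrative):

        /* kvm_arm_harden_branch_predictor() now reports one of: */
        #define KVM_BP_HARDEN_UNKNOWN          -1
        #define KVM_BP_HARDEN_WA_NEEDED         0
        #define KVM_BP_HARDEN_NOT_REQUIRED      1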

    Signed-off-by: Andre Przywara
    Reviewed-by: Steven Price
    Signed-off-by: Marc Zyngier

    Andre Przywara
     
  • ARMv8 provides support for chained PMU counters: when the event type
    0x001E is set for an odd-numbered counter, that counter increments by
    one for each overflow of the preceding even-numbered counter. Let's
    emulate this in KVM by creating a 64 bit perf counter when a user
    chains two emulated counters together.

    For chained events we only support generating an overflow interrupt
    on the high counter. We use the attributes of the low counter to
    determine the attributes of the perf event.
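
    A sketch of the chaining test this implies (helper shape
    illustrative; the event number is from the text above):

        #define ARMV8_PMUV3_PERFCTR_CHAIN       0x1E

        /* A pair is chained when its odd (high) counter is programmed
         * with the CHAIN event. */
        static bool pmc_is_high_of_chain(u64 idx, u64 eventsel)
        {
                return (idx & 1) && (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN);
        }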

    Suggested-by: Marc Zyngier
    Signed-off-by: Andrew Murray
    Reviewed-by: Julien Thierry
    Reviewed-by: Suzuki K Poulose
    Signed-off-by: Marc Zyngier

    Andrew Murray
     
  • We currently use pmc->bitmask to determine the width of the pmc - however
    it's superfluous as the pmc index already describes if the pmc is a cycle
    counter or event counter. The architecture clearly describes the widths of
    these counters.

    Let's remove the bitmask to simplify the code.

    Signed-off-by: Andrew Murray
    Signed-off-by: Marc Zyngier

    Andrew Murray
     
  • The perf event sample_period is currently set based upon the current
    counter value, when PMXEVTYPER is written to and the perf event is created.
    However the user may choose to write the type before the counter value, in
    which case sample_period will be set incorrectly. Let's instead decouple
    event creation from PMXEVTYPER and (re)create the event in either
    situation.

    Signed-off-by: Andrew Murray
    Reviewed-by: Julien Thierry
    Reviewed-by: Suzuki K Poulose
    Signed-off-by: Marc Zyngier

    Andrew Murray
     
  • Let's reduce code duplication by extracting common code to its own
    function.

    Signed-off-by: Andrew Murray
    Reviewed-by: Suzuki K Poulose
    Signed-off-by: Marc Zyngier

    Andrew Murray
     
  • The kvm_pmu_{enable/disable}_counter functions can enable/disable
    multiple counters at once, as they operate on a bitmask. Let's
    make this clearer by renaming them.
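
    In diff terms (a sketch of the new names):

        -void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
        +void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
        -void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
        +void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val);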

    Suggested-by: Suzuki K Poulose
    Signed-off-by: Andrew Murray
    Reviewed-by: Julien Thierry
    Reviewed-by: Suzuki K Poulose
    Signed-off-by: Marc Zyngier

    Andrew Murray
     

22 Jun, 2019

1 commit

  • Pull still more SPDX updates from Greg KH:
    "Another round of SPDX updates for 5.2-rc6

    Here is what I am guessing is going to be the last "big" SPDX update
    for 5.2. It contains all of the remaining GPLv2 and GPLv2+ updates
    that were "easy" to determine by pattern matching. The ones after this
    are going to be a bit more difficult and the people on the spdx list
    will be discussing them on a case-by-case basis now.

    Another 5000+ files are fixed up, so our overall totals are:
    Files checked: 64545
    Files with SPDX: 45529

    Compared to the 5.1 kernel which was:
    Files checked: 63848
    Files with SPDX: 22576

    This is a huge improvement.

    Also, we deleted another 20000 lines of boilerplate license crud,
    always nice to see in a diffstat"

    * tag 'spdx-5.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx: (65 commits)
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 507
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 506
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 505
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 504
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 503
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 502
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 501
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 499
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 498
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 497
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 496
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 495
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 491
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 490
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 489
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 488
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 487
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 486
    treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 485
    ...

    Linus Torvalds
     

21 Jun, 2019

2 commits

  • Pull kvm fixes from Paolo Bonzini:
    "Fixes for ARM and x86, plus selftest patches and nicer structs for
    nested state save/restore"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
    KVM: nVMX: reorganize initial steps of vmx_set_nested_state
    KVM: arm/arm64: Fix emulated ptimer irq injection
    tests: kvm: Check for a kernel warning
    kvm: tests: Sort tests in the Makefile alphabetically
    KVM: x86/mmu: Allocate PAE root array when using SVM's 32-bit NPT
    KVM: x86: Modify struct kvm_nested_state to have explicit fields for data
    KVM: fix typo in documentation
    KVM: nVMX: use correct clean fields when copying from eVMCS
    KVM: arm/arm64: vgic: Fix kvm_device leak in vgic_its_destroy
    KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
    KVM: arm64: Implement vq_present() as a macro

    Linus Torvalds
     
  • …git/kvmarm/kvmarm into HEAD

    KVM/arm fixes for 5.2, take #2

    - SVE cleanup killing a warning with ancient GCC versions
    - Don't report non-existent system registers to userspace
    - Fix memory leak when freeing the vgic ITS
    - Properly lower the interrupt on the emulated physical timer

    Paolo Bonzini
     

19 Jun, 2019

5 commits

  • Based on 1 normalized pattern(s):

    this file is free software you can redistribute it and or modify it
    under the terms of version 2 of the gnu general public license as
    published by the free software foundation this program is
    distributed in the hope that it will be useful but without any
    warranty without even the implied warranty of merchantability or
    fitness for a particular purpose see the gnu general public license
    for more details you should have received a copy of the gnu general
    public license along with this program if not write to the free
    software foundation inc 51 franklin st fifth floor boston ma 02110
    1301 usa

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 8 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Reviewed-by: Enrico Weigelt
    Reviewed-by: Kate Stewart
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081207.443595178@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     
  • Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation #

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 4122 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Enrico Weigelt
    Reviewed-by: Kate Stewart
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     
  • Based on 1 normalized pattern(s):

    this work is licensed under the terms of the gnu gpl version 2 see
    the copying file in the top level directory

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 35 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Kate Stewart
    Reviewed-by: Enrico Weigelt
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.797835076@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     
  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation this program is
    distributed in the hope that it will be useful but without any
    warranty without even the implied warranty of merchantability or
    fitness for a particular purpose see the gnu general public license
    for more details you should have received a copy of the gnu general
    public license along with this program if not see http www gnu org
    licenses

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 503 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Alexios Zavras
    Reviewed-by: Allison Randal
    Reviewed-by: Enrico Weigelt
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     
  • The emulated ptimer needs to track the level changes, otherwise the
    interrupt will never get deasserted, resulting in the guest getting
    stuck in an interrupt storm if it enables ptimer interrupts. This was
    found with kvm-unit-tests; the ptimer tests hung as soon as interrupts
    were enabled. Typical Linux guests don't have a problem as they prefer
    using the virtual timer.
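
    A sketch of the level tracking in the emulation path (structure per
    the timer_map rework; simplified):

        static void timer_emulate(struct arch_timer_context *ctx)
        {
                bool should_fire = kvm_timer_should_fire(ctx);

                /* Track both edges: lower the line when the condition
                 * clears, instead of only injecting when it fires. */
                if (should_fire != ctx->irq.level)
                        kvm_timer_update_irq(ctx->vcpu, should_fire, ctx);

                /* ... otherwise (re)arm the soft timer ... */
        }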

    Fixes: bee038a674875 ("KVM: arm/arm64: Rework the timer code to use a timer_map")
    Signed-off-by: Andrew Jones
    [Simplified the patch, since we only care about emulated timers here]
    Signed-off-by: Marc Zyngier

    Andrew Jones