2022-03-18  KVM: arm64: fix typos in comments  (Julia Lawall)
Various spelling mistakes in comments. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220318103729.157574-24-Julia.Lawall@inria.fr
2022-03-18  KVM: arm64: Generalise VM features into a set of flags  (Marc Zyngier)
We currently deal with a set of booleans for VM features, while they could be better represented as a set of flags contained in an unsigned long, similarly to what we are doing on the CPU side. Signed-off-by: Marc Zyngier <maz@kernel.org> [Oliver: Flag-ify the 'ran_once' boolean] Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220311174001.605719-2-oupton@google.com
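As a rough illustration of the flag-based approach (not the actual KVM code: the struct, flag indices, and helpers below are hypothetical), a single feature word plus the kernel's generic bit helpers replaces the individual booleans:

#include <linux/bitops.h>

/* Hypothetical flag indices; KVM defines its own names. */
#define DEMO_VM_FEAT_EL1_32BIT	0
#define DEMO_VM_FEAT_RAN_ONCE	1

struct demo_vm_arch {
	unsigned long flags;	/* one word replaces several booleans */
};

static inline void demo_vm_set_ran_once(struct demo_vm_arch *arch)
{
	set_bit(DEMO_VM_FEAT_RAN_ONCE, &arch->flags);
}

static inline bool demo_vm_ran_once(struct demo_vm_arch *arch)
{
	return test_bit(DEMO_VM_FEAT_RAN_ONCE, &arch->flags);
}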
2022-03-09  Merge branch kvm-arm64/psci-1.1 into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/psci-1.1: : . : Limited PSCI-1.1 support from Will Deacon: : : This small series exposes the PSCI SYSTEM_RESET2 call to guests, which : allows the propagation of a "reset_type" and a "cookie" back to the VMM. : Although Linux guests only ever pass 0 for the type ("SYSTEM_WARM_RESET"), : the vendor-defined range can be used by a bootloader to provide additional : information about the reset, such as an error code. : . KVM: arm64: Really propagate PSCI SYSTEM_RESET2 arguments to userspace Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-03-09  KVM: arm64: Really propagate PSCI SYSTEM_RESET2 arguments to userspace  (Will Deacon)
Commit d43583b890e7 ("KVM: arm64: Expose PSCI SYSTEM_RESET2 call to the guest") hooked up the SYSTEM_RESET2 PSCI call for guests but failed to preserve its arguments for userspace, instead overwriting them with zeroes via smccc_set_retval(). As Linux only passes zeroes for these arguments, this appeared to be working for Linux guests. Oh well. Don't call smccc_set_retval() for a SYSTEM_RESET2 heading to userspace and instead set X0 (and only X0) explicitly to PSCI_RET_INTERNAL_FAILURE just in case the vCPU re-enters the guest. Fixes: d43583b890e7 ("KVM: arm64: Expose PSCI SYSTEM_RESET2 call to the guest") Reported-by: Andrew Walbran <qwandor@google.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220309181308.982-1-will@kernel.org
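A hedged sketch of the behaviour the fix describes (not the upstream hunk): leave the SMCCC argument registers intact for the VMM and pre-load only X0 with an error code in case the vCPU is re-entered. vcpu_set_reg() and PSCI_RET_INTERNAL_FAILURE are existing kernel identifiers; the wrapper function is invented for illustration.

static void demo_forward_system_reset2(struct kvm_vcpu *vcpu)
{
	/*
	 * X1 (reset_type) and X2 (cookie) are deliberately left untouched
	 * so userspace can inspect them; only X0 is set, in case the VMM
	 * re-enters the guest without handling the event.
	 */
	vcpu_set_reg(vcpu, 0, PSCI_RET_INTERNAL_FAILURE);

	/* ... then fill in kvm_run->system_event and exit to userspace. */
}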
2022-03-09  Merge branch kvm-arm64/misc-5.18 into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/misc-5.18: : . : Misc fixes for KVM/arm64 5.18: : : - Drop unused kvm parameter to kvm_psci_version() : : - Implement CONFIG_DEBUG_LIST at EL2 : : - Make CONFIG_ARM64_ERRATUM_2077057 default y : : - Only do the interrupt dance if we have exited because of an interrupt : : - Remove traces of 32bit ARM host support from the documentation : . Documentation: KVM: Update documentation to indicate KVM is arm64-only KVM: arm64: Only open the interrupt window on exit due to an interrupt KVM: arm64: Enable Cortex-A510 erratum 2077057 by default Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-03-09  Documentation: KVM: Update documentation to indicate KVM is arm64-only  (Oliver Upton)
KVM support for 32-bit ARM hosts (KVM/arm) has been removed from the kernel since commit 541ad0150ca4 ("arm: Remove 32bit KVM host support"). There still exists some remnants of the old architecture in the KVM documentation. Remove all traces of 32-bit host support from the documentation. Note that AArch32 guests are still supported. Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220308172856.2997250-1-oupton@google.com
2022-03-04  KVM: arm64: Only open the interrupt window on exit due to an interrupt  (Marc Zyngier)
Now that we properly account for interrupts taken whilst the guest was running, it becomes obvious that there is no need to open this accounting window if we didn't exit because of an interrupt. This saves a number of system register accesses and other barriers if we exited for any other reason (such as a trap, for example). Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20220304135914.1464721-1-maz@kernel.org
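Sketched out, the idea looks roughly like the snippet below, under the assumption that the exit code is available at this point; ARM_EXCEPTION_IRQ is the real arm64 exit code for interrupt exits, while the surrounding function is illustrative only.

static void demo_interrupt_window(int exit_code)
{
	if (exit_code != ARM_EXCEPTION_IRQ)
		return;		/* traps, aborts, etc.: no window needed */

	/* Briefly enable interrupts so the pending IRQ is taken and accounted. */
	local_irq_enable();
	local_irq_disable();
}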
2022-03-02  KVM: arm64: Enable Cortex-A510 erratum 2077057 by default  (Mark Brown)
The recently added configuration option for Cortex A510 erratum 2077057 does not have a "default y" unlike other errata fixes. This appears to simply be an oversight since the help text suggests enabling the option if unsure and there's nothing in the commit log to suggest it is intentional. Fixes: 1dd498e5e26ad ("KVM: arm64: Workaround Cortex-A510's single-step and PAC trap errata") Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220225184658.172527-1-broonie@kernel.org
2022-02-25  Merge branch kvm-arm64/psci-1.1 into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/psci-1.1: : . : Limited PSCI-1.1 support from Will Deacon: : : This small series exposes the PSCI SYSTEM_RESET2 call to guests, which : allows the propagation of a "reset_type" and a "cookie" back to the VMM. : Although Linux guests only ever pass 0 for the type ("SYSTEM_WARM_RESET"), : the vendor-defined range can be used by a bootloader to provide additional : information about the reset, such as an error code. : . KVM: arm64: Remove unneeded semicolons KVM: arm64: Indicate SYSTEM_RESET2 in kvm_run::system_event flags field KVM: arm64: Expose PSCI SYSTEM_RESET2 call to the guest KVM: arm64: Bump guest PSCI version to 1.1 Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-25  KVM: arm64: Remove unneeded semicolons  (Changcheng Deng)
Fix the following coccicheck review: ./arch/arm64/kvm/psci.c: 379: 3-4: Unneeded semicolon Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Changcheng Deng <deng.changcheng@zte.com.cn> [maz: squashed another instance of the same issue in the patch] Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20220223092750.1934130-1-deng.changcheng@zte.com.cn Link: https://lore.kernel.org/r/20220225122922.GA19390@willie-the-truck
2022-02-21  KVM: arm64: Indicate SYSTEM_RESET2 in kvm_run::system_event flags field  (Will Deacon)
When handling reset and power-off PSCI calls from the guest, we initialise X0 to PSCI_RET_INTERNAL_FAILURE in case the VMM tries to re-run the vCPU after issuing the call. Unfortunately, this also means that the VMM cannot see which PSCI call was issued and therefore cannot distinguish between PSCI SYSTEM_RESET and SYSTEM_RESET2 calls, which is necessary in order to determine the validity of the "reset_type" in X1. Allocate bit 0 of the previously unused 'flags' field of the system_event structure so that we can indicate the PSCI call used to initiate the reset. Cc: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Alexandru Elisei <alexandru.elisei@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220221153524.15397-4-will@kernel.org
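From the VMM's side, the new bit can be used roughly as below (a userspace sketch; it tests the raw bit 0 that the commit allocates rather than a named macro, since the exact UAPI name isn't quoted here):

#include <linux/kvm.h>
#include <stdio.h>

static void demo_handle_system_event(struct kvm_run *run)
{
	if (run->exit_reason != KVM_EXIT_SYSTEM_EVENT ||
	    run->system_event.type != KVM_SYSTEM_EVENT_RESET)
		return;

	if (run->system_event.flags & (1ULL << 0))
		printf("SYSTEM_RESET2: reset_type is valid in X1\n");
	else
		printf("plain SYSTEM_RESET\n");
}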
2022-02-21  KVM: arm64: Expose PSCI SYSTEM_RESET2 call to the guest  (Will Deacon)
PSCI v1.1 introduces the optional SYSTEM_RESET2 call, which allows the caller to provide a vendor-specific "reset type" and "cookie" to request a particular form of reset or shutdown. Expose this call to the guest and handle it in the same way as PSCI SYSTEM_RESET, along with some basic range checking on the type argument. Cc: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Alexandru Elisei <alexandru.elisei@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220221153524.15397-3-will@kernel.org
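The "basic range checking" can be pictured along these lines, based on the PSCI split of reset_type values (bit[31] set: vendor-specific; bit[31] clear: architectural, of which only 0, SYSTEM_WARM_RESET, is defined). This is an illustrative sketch, not the upstream code.

#include <linux/bits.h>

static bool demo_reset2_type_valid(unsigned long reset_type)
{
	if (reset_type & BIT(31))	/* vendor-specific range: accept as-is */
		return true;

	return reset_type == 0;		/* architectural: only SYSTEM_WARM_RESET */
}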
2022-02-21  KVM: arm64: Bump guest PSCI version to 1.1  (Will Deacon)
Expose PSCI version v1.1 to the guest by default. The only difference for now is that an updated version number is reported by PSCI_VERSION. Cc: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Alexandru Elisei <alexandru.elisei@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220221153524.15397-2-will@kernel.org
2022-02-08  Merge branch kvm-arm64/pmu-bl into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/pmu-bl: : . : Improve PMU support on heterogeneous systems, courtesy of Alexandru Elisei : . KVM: arm64: Refuse to run VCPU if the PMU doesn't match the physical CPU KVM: arm64: Add KVM_ARM_VCPU_PMU_V3_SET_PMU attribute KVM: arm64: Keep a list of probed PMUs KVM: arm64: Keep a per-VM pointer to the default PMU perf: Fix wrong name in comment for struct perf_cpu_context KVM: arm64: Do not change the PMU event filter after a VCPU has run Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-08  KVM: arm64: Refuse to run VCPU if the PMU doesn't match the physical CPU  (Alexandru Elisei)
Userspace can assign a PMU to a VCPU with the KVM_ARM_VCPU_PMU_V3_SET_PMU device ioctl. If the VCPU is scheduled on a physical CPU which has a different PMU, the perf events needed to emulate a guest PMU won't be scheduled in and the guest performance counters will stop counting. Treat it as a userspace error and refuse to run the VCPU in this situation. Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127161759.53553-7-alexandru.elisei@arm.com
2022-02-08  KVM: arm64: Add KVM_ARM_VCPU_PMU_V3_SET_PMU attribute  (Alexandru Elisei)
When KVM creates an event and there is more than one PMU present on the system, perf_init_event() will go through the list of available PMUs and will choose the first one that can create the event. The order of the PMUs in this list depends on the probe order, which can change under various circumstances, for example if the order of the PMU nodes changes in the DTB or if asynchronous driver probing is enabled on the kernel command line (with the driver_async_probe=armv8-pmu option). Another consequence of this approach is that on heterogeneous systems all virtual machines that KVM creates will use the same PMU. This might cause unexpected behaviour for userspace: when a VCPU is executing on the physical CPU that uses this default PMU, PMU events in the guest work correctly; but when the same VCPU executes on another CPU, PMU events in the guest will suddenly stop counting. Fortunately, perf core allows the user to specify on which PMU to create an event by using the perf_event_attr->type field, which is used by perf_init_event() as an index in the radix tree of available PMUs. Add the KVM_ARM_VCPU_PMU_V3_CTRL(KVM_ARM_VCPU_PMU_V3_SET_PMU) VCPU attribute to allow userspace to specify the arm_pmu that KVM will use when creating events for that VCPU. KVM will make no attempt to run the VCPU on the physical CPUs that share the PMU, leaving it up to userspace to manage the VCPU threads' affinity accordingly. To ensure that KVM doesn't expose an asymmetric system to the guest, the PMU set for one VCPU will be used by all other VCPUs. Once a VCPU has run, the PMU cannot be changed, in order to avoid changing the list of available events for a VCPU or changing the semantics of existing events. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127161759.53553-6-alexandru.elisei@arm.com
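A hedged userspace example of how the attribute might be used (assuming UAPI headers new enough to define it; error handling is elided, and the pmu_id value is assumed to come from the selected PMU's sysfs "type" file):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int demo_set_vcpu_pmu(int vcpu_fd, int pmu_id)
{
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_SET_PMU,
		.addr  = (__u64)(unsigned long)&pmu_id,
	};

	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}

On a big.LITTLE system the VMM would typically pair this with pinning the vCPU threads to the CPUs that implement the selected PMU, matching the affinity caveat in the commit message.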
2022-02-08  KVM: arm64: Keep a list of probed PMUs  (Alexandru Elisei)
The ARM PMU driver calls kvm_host_pmu_init() after probing to tell KVM that a hardware PMU is available for guest emulation. Heterogeneous systems can have more than one PMU present, and the callback gets called multiple times, once for each of them. Keep track of all the PMUs available to KVM, as they're going to be needed later. Reviewed-by: Reiji Watanabe <reijiw@google.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127161759.53553-5-alexandru.elisei@arm.com
2022-02-08  KVM: arm64: Keep a per-VM pointer to the default PMU  (Marc Zyngier)
As we are about to allow selection of the PMU exposed to a guest, start by keeping track of the default one instead of only the PMU version. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Link: https://lore.kernel.org/r/20220127161759.53553-4-alexandru.elisei@arm.com
2022-02-08  perf: Fix wrong name in comment for struct perf_cpu_context  (Alexandru Elisei)
Commit 0793a61d4df8 ("performance counters: core code") added the perf subsystem (then called Performance Counters) to Linux, creating the struct perf_cpu_context. The comment for the struct referred to it as a "struct perf_counter_cpu_context". Commit cdd6c482c9ff ("perf: Do the big rename: Performance Counters -> Performance Events") changed the comment to refer to a "struct perf_event_cpu_context", which was still the wrong name for the struct. Change the comment to say "struct perf_cpu_context". CC: Thomas Gleixner <tglx@linutronix.de> CC: Ingo Molnar <mingo@redhat.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127161759.53553-3-alexandru.elisei@arm.com
2022-02-08  KVM: arm64: Do not change the PMU event filter after a VCPU has run  (Marc Zyngier)
Userspace can specify which events a guest is allowed to use with the KVM_ARM_VCPU_PMU_V3_FILTER attribute. The list of allowed events can be identified by a guest from reading the PMCEID{0,1}_EL0 registers. Changing the PMU event filter after a VCPU has run can cause reads of the registers performed before the filter is changed to return different values than reads performed with the new event filter in place. The architecture defines the two registers as read-only, and this behaviour contradicts that. Keep track of when the first VCPU has run and deny changes to the PMU event filter to prevent this from happening. Signed-off-by: Marc Zyngier <maz@kernel.org> [ Alexandru E: Added commit message, updated ioctl documentation ] Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127161759.53553-2-alexandru.elisei@arm.com
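The enforcement amounts to checking the "ran once" state before touching the filter, roughly as in this sketch (reusing the hypothetical helper from the flags sketch earlier; the real code has its own flag and locking):

static int demo_set_pmu_event_filter(struct demo_vm_arch *arch)
{
	if (demo_vm_ran_once(arch))
		return -EBUSY;	/* PMCEID{0,1}_EL0 must not change under the guest */

	/* ... update the event filter bitmap ... */
	return 0;
}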
2022-02-08  Merge branch kvm-arm64/misc-5.18 into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/misc-5.18: : . : Misc fixes for KVM/arm64 5.18: : : - Drop unused kvm parameter to kvm_psci_version() : : - Implement CONFIG_DEBUG_LIST at EL2 : . KVM: arm64: pkvm: Implement CONFIG_DEBUG_LIST at EL2 KVM: arm64: Drop unused param from kvm_psci_version() Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-08  KVM: arm64: pkvm: Implement CONFIG_DEBUG_LIST at EL2  (Keir Fraser)
Currently the check functions are stubbed out at EL2. Implement versions suitable for the constrained EL2 environment. Signed-off-by: Keir Fraser <keirf@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220131124114.3103337-1-keirf@google.com
2022-02-08  KVM: arm64: Drop unused param from kvm_psci_version()  (Oliver Upton)
kvm_psci_version() consumes a pointer to struct kvm in addition to a vcpu pointer. Drop the kvm pointer as it is unused. While the comment suggests the explicit kvm pointer was useful for calling from hyp, no such callsite exists in hyp. Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220208012705.640444-1-oupton@google.com
2022-02-08  Merge branch kvm-arm64/selftest/vgic-5.18 into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/selftest/vgic-5.18: : . : A bunch of selftest fixes, courtesy of Ricardo Koller : . kvm: selftests: aarch64: use a tighter assert in vgic_poke_irq() kvm: selftests: aarch64: fix some vgic related comments kvm: selftests: aarch64: fix the failure check in kvm_set_gsi_routing_irqchip_check kvm: selftests: aarch64: pass vgic_irq guest args as a pointer kvm: selftests: aarch64: fix assert in gicv3_access_reg Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-08  kvm: selftests: aarch64: use a tighter assert in vgic_poke_irq()  (Ricardo Koller)
vgic_poke_irq() checks that the attr argument passed to the vgic device ioctl is sane. Make this check tighter by moving it to after the last attr update. Signed-off-by: Ricardo Koller <ricarkol@google.com> Reported-by: Reiji Watanabe <reijiw@google.com> Cc: Andrew Jones <drjones@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127030858.3269036-6-ricarkol@google.com
2022-02-08  kvm: selftests: aarch64: fix some vgic related comments  (Ricardo Koller)
Fix the formatting of some comments and the wording of one of them (in gicv3_access_reg). Signed-off-by: Ricardo Koller <ricarkol@google.com> Reported-by: Reiji Watanabe <reijiw@google.com> Cc: Andrew Jones <drjones@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127030858.3269036-5-ricarkol@google.com
2022-02-08  kvm: selftests: aarch64: fix the failure check in kvm_set_gsi_routing_irqchip_check  (Ricardo Koller)
kvm_set_gsi_routing_irqchip_check(expect_failure=true) is used to check the error code returned by the kernel when trying to set up an invalid gsi routing table. The ioctl fails if "pin >= KVM_IRQCHIP_NUM_PINS", so kvm_set_gsi_routing_irqchip_check() should test the error only when "intid >= KVM_IRQCHIP_NUM_PINS+32". The issue is that the test check is "intid >= KVM_IRQCHIP_NUM_PINS", so for a case like "intid = KVM_IRQCHIP_NUM_PINS" the test wrongly assumes that the kernel will return an error. Fix this by using the right check. Signed-off-by: Ricardo Koller <ricarkol@google.com> Reported-by: Reiji Watanabe <reijiw@google.com> Cc: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127030858.3269036-4-ricarkol@google.com
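The corrected condition can be read directly from the description: the kernel rejects a pin only when pin >= KVM_IRQCHIP_NUM_PINS, and SPI intids are offset by 32, so (illustrative helper, not the selftest's actual code):

#include <stdint.h>
#include <stdbool.h>

static bool demo_gsi_routing_should_fail(uint32_t intid)
{
	/* pin = intid - 32 (SPI base); only pin >= KVM_IRQCHIP_NUM_PINS fails. */
	return intid >= KVM_IRQCHIP_NUM_PINS + 32;
}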
2022-02-08  kvm: selftests: aarch64: pass vgic_irq guest args as a pointer  (Ricardo Koller)
The guest in vgic_irq gets its arguments in a struct. This struct used to fit nicely in a single register, so vcpu_args_set() was able to pass it by value by setting x0 with it. Unfortunately, this args struct grew after some commits and some guest args became random (specifically kvm_supports_irqfd). Fix this by passing the guest args as a pointer (after allocating some guest memory for it). Signed-off-by: Ricardo Koller <ricarkol@google.com> Reported-by: Reiji Watanabe <reijiw@google.com> Cc: Andrew Jones <drjones@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127030858.3269036-3-ricarkol@google.com
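The pointer-passing pattern looks roughly like this sketch, using common helpers from the selftests' kvm_util.h; the args struct here is a hypothetical stand-in and the helper signatures have shifted between kernel versions:

#include <string.h>
#include "kvm_util.h"

struct demo_vgic_args {			/* stand-in for the real guest args struct */
	uint32_t nr_irqs;
	bool kvm_supports_irqfd;
};

static void demo_setup_guest_args(struct kvm_vm *vm, uint32_t vcpu_id)
{
	struct demo_vgic_args args = { .nr_irqs = 64, .kvm_supports_irqfd = true };
	vm_vaddr_t gva = vm_vaddr_alloc(vm, sizeof(args), KVM_UTIL_MIN_VADDR);

	memcpy(addr_gva2hva(vm, gva), &args, sizeof(args));
	vcpu_args_set(vm, vcpu_id, 1, gva);	/* guest receives a pointer in x0 */
}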
2022-02-08  kvm: selftests: aarch64: fix assert in gicv3_access_reg  (Ricardo Koller)
The val argument in gicv3_access_reg can have any value when used for a read, not necessarily 0. Fix the assert by checking val only for writes. Signed-off-by: Ricardo Koller <ricarkol@google.com> Reported-by: Reiji Watanabe <reijiw@google.com> Cc: Andrew Jones <drjones@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220127030858.3269036-2-ricarkol@google.com
2022-02-08  Merge branch kvm-arm64/vmid-allocator into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/vmid-allocator: : . : VMID allocation rewrite from Shameerali Kolothum Thodi, paving the : way for pinned VMIDs and SVA. : . KVM: arm64: Make active_vmids invalid on vCPU schedule out KVM: arm64: Align the VMID allocation with the arm64 ASID KVM: arm64: Make VMID bits accessible outside of allocator KVM: arm64: Introduce a new VMID allocator for KVM Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-08  KVM: arm64: Make active_vmids invalid on vCPU schedule out  (Shameer Kolothum)
Like the ASID allocator, we copy the active_vmids into the reserved_vmids on a rollover. But it's unlikely that every CPU will have a vCPU as its current task, and we may end up unnecessarily reserving VMID space. Hence, set active_vmids to an invalid one when scheduling out a vCPU. Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20211122121844.867-5-shameerali.kolothum.thodi@huawei.com
2022-02-08  KVM: arm64: Align the VMID allocation with the arm64 ASID  (Julien Grall)
At the moment, the VMID algorithm will send an SGI to all the CPUs to force an exit and then broadcast a full TLB flush and I-cache invalidation. This patch uses the new VMID allocator. The benefits are:
- Aligns with the arm64 ASID algorithm.
- CPUs are not forced to exit at roll-over. Instead, the VMID will be marked reserved and context invalidation is broadcast. This reduces IPI traffic.
- More flexible for adding support for pinned KVM VMIDs in the future.
With the new algorithm, the code is now adapted:
- The call to update_vmid() will be done with preemption disabled, as the new algorithm requires storing information per-CPU.
Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20211122121844.867-4-shameerali.kolothum.thodi@huawei.com
2022-02-08  KVM: arm64: Make VMID bits accessible outside of allocator  (Shameer Kolothum)
Since we already set the kvm_arm_vmid_bits in the VMID allocator init function, make it accessible outside as well so that it can be used in the subsequent patch. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20211122121844.867-3-shameerali.kolothum.thodi@huawei.com
2022-02-08  KVM: arm64: Introduce a new VMID allocator for KVM  (Shameer Kolothum)
A new VMID allocator for arm64 KVM use. This is based on the arm64 ASID allocator algorithm. One major deviation from the ASID allocator is the way we flush the context. Unlike the ASID allocator, we expect less frequent rollovers in the case of VMIDs. Hence, instead of marking the CPU as flush_pending and issuing a local context invalidation on the next context switch, we broadcast TLB flush + I-cache invalidation over the inner shareable domain on rollover. Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20211122121844.867-2-shameerali.kolothum.thodi@huawei.com
2022-02-08  Merge branch kvm-arm64/fpsimd-doc into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/fpsimd-doc: : . : FPSIMD documentation update, courtesy of Mark Brown : . arm64/fpsimd: Clarify the purpose of using last in fpsimd_save() KVM: arm64: Add some more comments in kvm_hyp_handle_fpsimd() KVM: arm64: Add comments for context flush and sync callbacks Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-08  arm64/fpsimd: Clarify the purpose of using last in fpsimd_save()  (Mark Brown)
When saving the floating point context in fpsimd_save() we always reference the state using last-> rather than current->. Looking at the FP code in isolation, the reason for this is not entirely obvious: it's done because when KVM is running it will bind the guest context and rely on the host writing out the guest state on context switch away from the guest. There's a slight trick here in that KVM still uses TIF_FOREIGN_FPSTATE and TIF_SVE to communicate what needs to be saved; it maintains those flags and restores them when it is done running the guest so that the normal restore paths function when we return back to userspace. Add a comment to explain this to help future readers work out what's going on a bit faster. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220124161115.115200-1-broonie@kernel.org
2022-02-08  KVM: arm64: Add some more comments in kvm_hyp_handle_fpsimd()  (Mark Brown)
The handling for FPSIMD/SVE traps is multi-stage and involves some trap manipulation which isn't quite as immediately obvious as might be desired, so add a few more comments. Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220124155720.3943374-3-broonie@kernel.org
2022-02-08  KVM: arm64: Add comments for context flush and sync callbacks  (Mark Brown)
Add a little bit of information on where _ctxflush_fp() and _ctxsync_fp() are called to help people unfamiliar with the code get up to speed. Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220124155720.3943374-2-broonie@kernel.org
2022-02-08  Merge branch kvm-arm64/mmu-rwlock into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/mmu-rwlock: : . : MMU locking optimisations from Jing Zhang, allowing permission : relaxations to occur in parallel. : . KVM: selftests: Add vgic initialization for dirty log perf test for ARM KVM: arm64: Add fast path to handle permission relaxation during dirty logging KVM: arm64: Use read/write spin lock for MMU protection Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-08  KVM: selftests: Add vgic initialization for dirty log perf test for ARM  (Jing Zhang)
For ARM64, if no vgic is setup before the dirty log perf test, the userspace irqchip would be used, which would affect the dirty log perf test result. Signed-off-by: Jing Zhang <jingzhangos@google.com> Tested-by: Fuad Tabba <tabba@google.com> Reviewed-by: Fuad Tabba <tabba@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220118015703.3630552-4-jingzhangos@google.com
2022-02-08  KVM: arm64: Add fast path to handle permission relaxation during dirty logging  (Jing Zhang)
To reduce MMU lock contention during dirty logging, all permission relaxation operations would be performed under read lock. Signed-off-by: Jing Zhang <jingzhangos@google.com> Tested-by: Fuad Tabba <tabba@google.com> Reviewed-by: Fuad Tabba <tabba@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220118015703.3630552-3-jingzhangos@google.com
2022-02-08  KVM: arm64: Use read/write spin lock for MMU protection  (Jing Zhang)
Replace the MMU spinlock with a rwlock and update all instances of the lock being acquired with a write lock acquisition. A future commit will add a fast path for permission relaxation during dirty logging under a read lock. Signed-off-by: Jing Zhang <jingzhangos@google.com> Tested-by: Fuad Tabba <tabba@google.com> Reviewed-by: Fuad Tabba <tabba@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220118015703.3630552-2-jingzhangos@google.com
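Together with the fast-path commit above, the locking split can be pictured as follows (a sketch; names are illustrative, the real lock being kvm->mmu_lock):

#include <linux/spinlock.h>

static DEFINE_RWLOCK(demo_mmu_lock);

static void demo_map_or_unmap(void)
{
	write_lock(&demo_mmu_lock);		/* structural page-table changes */
	/* ... install or tear down mappings ... */
	write_unlock(&demo_mmu_lock);
}

static void demo_relax_permissions(void)
{
	read_lock(&demo_mmu_lock);		/* dirty-logging fault fast path */
	/* ... make a write-protected mapping writable again ... */
	read_unlock(&demo_mmu_lock);
}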
2022-02-08  Merge branch kvm-arm64/oslock into kvmarm-master/next  (Marc Zyngier)
* kvm-arm64/oslock: : . : Debug OS-Lock emulation courtesy of Oliver Upton. From the cover letter: : : "KVM does not implement the debug architecture to the letter of the : specification. One such issue is the fact that KVM treats the OS Lock as : RAZ/WI, rather than emulating its behavior on hardware. This series adds : emulation support for the OS Lock to KVM. Emulation is warranted as the : OS Lock affects debug exceptions taken from all ELs, and is not limited : to only the context of the guest." : . selftests: KVM: Test OS lock behavior selftests: KVM: Add OSLSR_EL1 to the list of blessed regs KVM: arm64: Emulate the OS Lock KVM: arm64: Allow guest to set the OSLK bit KVM: arm64: Stash OSLSR_EL1 in the cpu context KVM: arm64: Correctly treat writes to OSLSR_EL1 as undefined Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-02-08  selftests: KVM: Test OS lock behavior  (Oliver Upton)
KVM now correctly handles the OS Lock for its guests. When set, KVM blocks all debug exceptions originating from the guest. Add test cases to the debug-exceptions test to assert that software breakpoint, hardware breakpoint, watchpoint, and single-step exceptions are in fact blocked. Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220203174159.2887882-7-oupton@google.com
2022-02-08  selftests: KVM: Add OSLSR_EL1 to the list of blessed regs  (Oliver Upton)
OSLSR_EL1 is now part of the visible system register state. Add it to the get-reg-list selftest to ensure we keep it that way. Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220203174159.2887882-6-oupton@google.com
2022-02-08  KVM: arm64: Emulate the OS Lock  (Oliver Upton)
The OS lock blocks all debug exceptions at every EL. To date, KVM has not implemented the OS lock for its guests, despite the fact that it is mandatory per the architecture. Simple context switching between the guest and host is not appropriate, as its effects are not constrained to the guest context. Emulate the OS Lock by clearing MDE and SS in MDSCR_EL1, thereby blocking all but software breakpoint instructions. Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220203174159.2887882-5-oupton@google.com
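A sketch of the masking described above (the helper and the os_lock_set predicate are invented for illustration; DBG_MDSCR_MDE and DBG_MDSCR_SS are the kernel's existing MDSCR_EL1 bit definitions):

static u64 demo_effective_mdscr(u64 guest_mdscr, bool os_lock_set)
{
	/*
	 * With the OS Lock set, suppress hardware breakpoints/watchpoints
	 * (MDE) and single-step (SS); BRK instructions are still delivered.
	 */
	if (os_lock_set)
		guest_mdscr &= ~(DBG_MDSCR_MDE | DBG_MDSCR_SS);

	return guest_mdscr;
}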
2022-02-08  KVM: arm64: Allow guest to set the OSLK bit  (Oliver Upton)
Allow writes to OSLAR and forward the OSLK bit to OSLSR. Do nothing with the value for now. Reviewed-by: Reiji Watanabe <reijiw@google.com> Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220203174159.2887882-4-oupton@google.com
2022-02-08  KVM: arm64: Stash OSLSR_EL1 in the cpu context  (Oliver Upton)
An upcoming change to KVM will emulate the OS Lock from the PoV of the guest. Add OSLSR_EL1 to the cpu context and handle reads using the stored value. Define some mnemonics for handling the OSLM field and use them to make the reset value of OSLSR_EL1 more readable. Wire up a custom handler for writes from userspace and prevent any of the invariant bits from changing. Note that the OSLK bit is not invariant and will be made writable by the aforementioned change. Reviewed-by: Reiji Watanabe <reijiw@google.com> Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220203174159.2887882-3-oupton@google.com
2022-02-08  KVM: arm64: Correctly treat writes to OSLSR_EL1 as undefined  (Oliver Upton)
Writes to OSLSR_EL1 are UNDEFINED and should never trap from EL1 to EL2, but the kvm trap handler for OSLSR_EL1 handles writes via ignore_write(). This is confusing to readers of code, but should have no functional impact. For clarity, use write_to_read_only() rather than ignore_write(). If a trap is unexpectedly taken to EL2 in violation of the architecture, this will WARN_ONCE() and inject an undef into the guest. Reviewed-by: Reiji Watanabe <reijiw@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> [adopted Mark's changelog suggestion, thanks!] Signed-off-by: Oliver Upton <oupton@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220203174159.2887882-2-oupton@google.com
2022-02-06  Linux 5.17-rc3  (Linus Torvalds)