* Rework almost all of KVM's exports to expose symbols only to KVM's x86
   vendor modules (kvm-{amd,intel}.ko) and PPC's kvm-{pr,hv}.ko.
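
   A minimal sketch of what the reworked exports look like, pieced together
   from the x86 hunks later on this page; the tie between
   EXPORT_SYMBOL_FOR_KVM_INTERNAL and KVM_SUB_MODULES is inferred from the
   series description rather than shown verbatim in the hunks:

     /* arch/x86/include/asm/kvm_types.h (added by this series) */
     #if IS_MODULE(CONFIG_KVM_AMD) && IS_MODULE(CONFIG_KVM_INTEL)
     #define KVM_SUB_MODULES kvm-amd,kvm-intel
     #elif IS_MODULE(CONFIG_KVM_AMD)
     #define KVM_SUB_MODULES kvm-amd
     #elif IS_MODULE(CONFIG_KVM_INTEL)
     #define KVM_SUB_MODULES kvm-intel
     #else
     #undef KVM_SUB_MODULES
     #endif

     /*
      * Symbols that used to be EXPORT_SYMBOL_GPL() become visible only to
      * the sub-modules named by KVM_SUB_MODULES, e.g. in arch/x86/kvm/cpuid.c:
      */
     u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
     EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_caps);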
 
 x86:
 
 * Rework almost all of KVM x86's exports to expose symbols only to KVM's
   vendor modules, i.e. to kvm-{amd,intel}.ko.
 
 * Add support for virtualizing Control-flow Enforcement Technology (CET) on
   Intel (Shadow Stacks and Indirect Branch Tracking) and AMD (Shadow Stacks).
   It's worth noting that while SHSTK and IBT can be enabled separately in CPUID,
   it is not really possible to virtualize them separately.  Therefore, Intel
   processors will really allow both SHSTK and IBT under the hood if either is
   made visible in the guest's CPUID.  The alternative would be to intercept
   XSAVES/XRSTORS, which is not feasible for performance reasons.
 
 * Fix a variety of fuzzing WARNs all caused by checking L1 intercepts when
   completing userspace I/O.  KVM has already committed to allowing L2 to
   perform I/O at that point.
 
 * Emulate PERF_CNTR_GLOBAL_STATUS_SET for PerfMonV2 guests, as the MSR is
   supposed to exist for v2 PMUs.
 
 * Allow Centaur CPU leaves (base 0xC000_0000) for Zhaoxin CPUs.
 
 * Add support for the immediate forms of RDMSR and WRMSRNS, sans full
   emulator support (KVM should never need to emulate the MSRs outside of
   forced emulation and other contrived testing scenarios).
 
 * Clean up the MSR APIs in preparation for CET and FRED virtualization, as
   well as mediated vPMU support.
 
 * Clean up a pile of PMU code in anticipation of adding support for mediated
   vPMUs.
 
 * Reject in-kernel IOAPIC/PIT for TDX VMs, as KVM can't obtain EOI vmexits
   needed to faithfully emulate an I/O APIC for such guests.
 
 * Many cleanups and minor fixes.
 
 * Recover possible NX huge pages within the TDP MMU under read lock to
   reduce guest jitter when restoring NX huge pages.
 
 * Return -EAGAIN during prefault if userspace concurrently deletes/moves the
   relevant memslot, to fix an issue where prefaulting could deadlock with the
   memslot update.
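
   A minimal userspace sketch of the resulting retry contract; it assumes
   <linux/kvm.h>, <sys/ioctl.h> and <errno.h> are included and that vcpu_fd
   is an open vCPU file descriptor (other error handling omitted):

     static int prefault_range(int vcpu_fd, __u64 gpa, __u64 size)
     {
             struct kvm_pre_fault_memory range = {
                     .gpa  = gpa,
                     .size = size,
             };
             int ret;

             do {
                     /* -EAGAIN: the backing memslot was being deleted or
                      * moved; retry once the memslot update has finished. */
                     ret = ioctl(vcpu_fd, KVM_PRE_FAULT_MEMORY, &range);
             } while (ret < 0 && errno == EAGAIN);

             return ret;
     }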
 
 x86 (AMD):
 
 * Enable AVIC by default for Zen4+ if x2AVIC (and other prereqs) is supported.
 
 * Require a minimum GHCB version of 2 when starting SEV-SNP guests via
   KVM_SEV_INIT2 so that invalid GHCB versions result in immediate errors
   instead of latent guest failures.
 
 * Add support for SEV-SNP's CipherText Hiding, an opt-in feature that prevents
   unauthorized CPU accesses from reading the ciphertext of SNP guest private
   memory, e.g. to attempt an offline attack.  This feature splits the shared
   SEV-ES/SEV-SNP ASID space into separate ranges for SEV-ES and SEV-SNP guests;
   a new module parameter therefore controls how many ASIDs can be used for VMs
   with CipherText Hiding vs. how many can be used to run SEV-ES guests (see the
   sketch at the end of this section).
 
 * Add support for Secure TSC for SEV-SNP guests, which prevents the untrusted
   host from tampering with the guest's TSC frequency, while still allowing the
   VMM to configure the guest's TSC frequency prior to launch.
 
 * Validate the XCR0 provided by the guest (via the GHCB) to avoid bugs
   resulting from bogus XCR0 values.
 
 * Save an SEV guest's policy if and only if LAUNCH_START fully succeeds to
   avoid leaving behind stale state (thankfully not consumed in KVM).
 
 * Explicitly reject non-positive effective lengths during SNP's LAUNCH_UPDATE
   instead of subtly relying on guest_memfd to deal with them.
 
 * Reload the pre-VMRUN TSC_AUX on #VMEXIT for SEV-ES guests, not the host's
   desired TSC_AUX, to fix a bug where KVM was keeping a different vCPU's
   TSC_AUX in the host MSR until return to userspace.
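
   The sketch referenced above is illustration only (not KVM code); it mirrors
   the ASID ranges documented for the new kvm-amd.ciphertext_hiding_asids
   parameter further down this page:

     /*
      * SEV-SNP guests own ASIDs [1, max_snp_asid]; SEV-ES guests own
      * (max_snp_asid, min_sev_asid), where min_sev_asid (enumerated by
      * CPUID 0x8000_001F[EDX]) is the first ASID usable for plain SEV.
      */
     static unsigned int sev_es_asid_count(unsigned int max_snp_asid,
                                           unsigned int min_sev_asid)
     {
             /* A result of 0 means SEV-ES is effectively unusable. */
             return min_sev_asid - 1 - max_snp_asid;
     }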
 
 x86 (Intel):
 
 * Preparation for FRED support.
 
 * Don't retry in TDX's anti-zero-step mitigation if the target memslot is
   invalid, i.e. is being deleted or moved, to fix a deadlock scenario similar
   to the aforementioned prefaulting case.
 
 * Misc bugfixes and minor cleanups.
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmjjx/0UHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroMLFwf9HXZdqBn6VvkbSL/HIGdNG1BEzeJ0
 MQVEMMdmWJ72JtI6soJ6oN5NWTIJJeMTPuCgRrNxFbIivSdm9vYPTSCNwNBhKb+H
 FEsr62a9T4XgnTqy20h+yZJiKNvwtaggdTWFnUAUqsBSFkEtksAP72odvZx+GNv/
 cndqtxy/84TcJ4ZXFdxElylCcQ9xRoRkqkU8KaVfg88wqMIMbSR3OBSH/g8bqR+3
 cjvDGNC7TPHPEN2Wmq2AYluRlBxB2ZhsOauArsdidPXHAevO+AFnbS27fz6bixZK
 LTS/qwKOsvhFzyHngemuG6s6HgkgBEshfcKk5i7d2ReRjaGP4EvkhmlImA==
 =k49c
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull x86 kvm updates from Paolo Bonzini:
 "Generic:

   - Rework almost all of KVM's exports to expose symbols only to KVM's
     x86 vendor modules (kvm-{amd,intel}.ko) and PPC's kvm-{pr,hv}.ko

  x86:

   - Rework almost all of KVM x86's exports to expose symbols only to
     KVM's vendor modules, i.e. to kvm-{amd,intel}.ko

   - Add support for virtualizing Control-flow Enforcement Technology
     (CET) on Intel (Shadow Stacks and Indirect Branch Tracking) and AMD
     (Shadow Stacks).

     It is worth noting that while SHSTK and IBT can be enabled
     separately in CPUID, it is not really possible to virtualize them
     separately. Therefore, Intel processors will really allow both
     SHSTK and IBT under the hood if either is made visible in the
     guest's CPUID. The alternative would be to intercept
     XSAVES/XRSTORS, which is not feasible for performance reasons

   - Fix a variety of fuzzing WARNs all caused by checking L1 intercepts
     when completing userspace I/O. KVM has already committed to
     allowing L2 to perform I/O at that point

   - Emulate PERF_CNTR_GLOBAL_STATUS_SET for PerfMonV2 guests, as the
     MSR is supposed to exist for v2 PMUs

   - Allow Centaur CPU leaves (base 0xC000_0000) for Zhaoxin CPUs

   - Add support for the immediate forms of RDMSR and WRMSRNS, sans full
     emulator support (KVM should never need to emulate the MSRs outside
     of forced emulation and other contrived testing scenarios)

   - Clean up the MSR APIs in preparation for CET and FRED
     virtualization, as well as mediated vPMU support

   - Clean up a pile of PMU code in anticipation of adding support for
     mediated vPMUs

   - Reject in-kernel IOAPIC/PIT for TDX VMs, as KVM can't obtain EOI
     vmexits needed to faithfully emulate an I/O APIC for such guests

   - Many cleanups and minor fixes

   - Recover possible NX huge pages within the TDP MMU under read lock
     to reduce guest jitter when restoring NX huge pages

   - Return -EAGAIN during prefault if userspace concurrently
     deletes/moves the relevant memslot, to fix an issue where
     prefaulting could deadlock with the memslot update

  x86 (AMD):

   - Enable AVIC by default for Zen4+ if x2AVIC (and other prereqs) is
     supported

   - Require a minimum GHCB version of 2 when starting SEV-SNP guests
     via KVM_SEV_INIT2 so that invalid GHCB versions result in immediate
     errors instead of latent guest failures

   - Add support for SEV-SNP's CipherText Hiding, an opt-in feature that
     prevents unauthorized CPU accesses from reading the ciphertext of
     SNP guest private memory, e.g. to attempt an offline attack. This
     feature splits the shared SEV-ES/SEV-SNP ASID space into separate
     ranges for SEV-ES and SEV-SNP guests; a new module parameter
     therefore controls the number of ASIDs that can be used for VMs
     with CipherText Hiding vs. how many can be used to run SEV-ES
     guests

   - Add support for Secure TSC for SEV-SNP guests, which prevents the
     untrusted host from tampering with the guest's TSC frequency, while
     still allowing the VMM to configure the guest's TSC frequency
     prior to launch

   - Validate the XCR0 provided by the guest (via the GHCB) to avoid
     bugs resulting from bogus XCR0 values

   - Save an SEV guest's policy if and only if LAUNCH_START fully
     succeeds to avoid leaving behind stale state (thankfully not
     consumed in KVM)

   - Explicitly reject non-positive effective lengths during SNP's
     LAUNCH_UPDATE instead of subtly relying on guest_memfd to deal with
     them

   - Reload the pre-VMRUN TSC_AUX on #VMEXIT for SEV-ES guests, not the
     host's desired TSC_AUX, to fix a bug where KVM was keeping a
     different vCPU's TSC_AUX in the host MSR until return to userspace

  x86 (Intel):

   - Preparation for FRED support

   - Don't retry in TDX's anti-zero-step mitigation if the target
     memslot is invalid, i.e. is being deleted or moved, to fix a
     deadlock scenario similar to the aforementioned prefaulting case

   - Misc bugfixes and minor cleanups"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (142 commits)
  KVM: x86: Export KVM-internal symbols for sub-modules only
  KVM: x86: Drop pointless exports of kvm_arch_xxx() hooks
  KVM: x86: Move kvm_intr_is_single_vcpu() to lapic.c
  KVM: Export KVM-internal symbols for sub-modules only
  KVM: s390/vfio-ap: Use kvm_is_gpa_in_memslot() instead of open coded equivalent
  KVM: VMX: Make CR4.CET a guest owned bit
  KVM: selftests: Verify MSRs are (not) in save/restore list when (un)supported
  KVM: selftests: Add coverage for KVM-defined registers in MSRs test
  KVM: selftests: Add KVM_{G,S}ET_ONE_REG coverage to MSRs test
  KVM: selftests: Extend MSRs test to validate vCPUs without supported features
  KVM: selftests: Add support for MSR_IA32_{S,U}_CET to MSRs test
  KVM: selftests: Add an MSR test to exercise guest/host and read/write
  KVM: x86: Define AMD's #HV, #VC, and #SX exception vectors
  KVM: x86: Define Control Protection Exception (#CP) vector
  KVM: x86: Add human friendly formatting for #XM, and #VE
  KVM: SVM: Enable shadow stack virtualization for SVM
  KVM: SEV: Synchronize MSR_IA32_XSS from the GHCB when it's valid
  KVM: SVM: Pass through shadow stack MSRs as appropriate
  KVM: SVM: Update dump_vmcb with shadow stack save area additions
  KVM: nSVM: Save/load CET Shadow Stack state to/from vmcb12/vmcb02
  ...
Linus Torvalds 2025-10-06 12:37:34 -07:00
commit 256e341706
71 changed files with 3205 additions and 1236 deletions


@ -2962,6 +2962,27 @@
(enabled). Disable by KVM if hardware lacks support
for NPT.
kvm-amd.ciphertext_hiding_asids=
[KVM,AMD] Ciphertext hiding prevents disallowed accesses
to SNP private memory from reading ciphertext. Instead,
reads will see constant default values (0xff).
If ciphertext hiding is enabled, the joint SEV-ES and
SEV-SNP ASID space is partitioned into separate SEV-ES
and SEV-SNP ASID ranges, with the SEV-SNP range being
[1..max_snp_asid] and the SEV-ES range being
(max_snp_asid..min_sev_asid), where min_sev_asid is
enumerated by CPUID.0x8000_001F[EDX].
A non-zero value enables SEV-SNP ciphertext hiding and
adjusts the ASID ranges for SEV-ES and SEV-SNP guests.
KVM caps the number of SEV-SNP ASIDs at the maximum
possible value, e.g. specifying -1u will assign all
joint SEV-ES and SEV-SNP ASIDs to SEV-SNP. Note,
assigning all joint ASIDs to SEV-SNP, i.e. configuring
max_snp_asid == min_sev_asid-1, will effectively make
SEV-ES unusable.
kvm-arm.mode=
[KVM,ARM,EARLY] Select one of KVM/arm64's modes of
operation.


@ -2908,6 +2908,16 @@ such as set vcpu counter or reset vcpu, and they have the following id bit patte
0x9030 0000 0002 <reg:16>
x86 MSR registers have the following id bit patterns::
0x2030 0002 <msr number:32>
Following are the KVM-defined registers for x86:
======================= ========= =============================================
Encoding Register Description
======================= ========= =============================================
0x2030 0003 0000 0000 SSP Shadow Stack Pointer
======================= ========= =============================================
4.69 KVM_GET_ONE_REG
--------------------
@ -3075,6 +3085,12 @@ This IOCTL replaces the obsolete KVM_GET_PIT.
Sets the state of the in-kernel PIT model. Only valid after KVM_CREATE_PIT2.
See KVM_GET_PIT2 for details on struct kvm_pit_state2.
.. Tip::
``KVM_SET_PIT2`` strictly adheres to the spec of Intel 8254 PIT. For example,
a ``count`` value of 0 in ``struct kvm_pit_channel_state`` is interpreted as
65536, which is the maximum count value. Refer to `Intel 8254 programmable
interval timer <https://www.scs.stanford.edu/10wi-cs140/pintos/specs/8254.pdf>`_.
This IOCTL replaces the obsolete KVM_SET_PIT.
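
A hedged illustration of the note above from the userspace side; the ioctls and
struct come from the KVM uapi, and an in-kernel PIT created with
KVM_CREATE_PIT2 plus an open VM file descriptor (vm_fd) are assumed::

   struct kvm_pit_state2 pit;

   ioctl(vm_fd, KVM_GET_PIT2, &pit);
   pit.channels[0].count = 0;      /* interpreted as 65536, the maximum count */
   ioctl(vm_fd, KVM_SET_PIT2, &pit);
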
@ -3582,7 +3598,7 @@ VCPU matching underlying host.
---------------------
:Capability: basic
:Architectures: arm64, mips, riscv
:Architectures: arm64, mips, riscv, x86 (if KVM_CAP_ONE_REG)
:Type: vcpu ioctl
:Parameters: struct kvm_reg_list (in/out)
:Returns: 0 on success; -1 on error
@ -3625,6 +3641,8 @@ Note that s390 does not support KVM_GET_REG_LIST for historical reasons
- KVM_REG_S390_GBEA
Note, for x86, all MSRs enumerated by KVM_GET_MSR_INDEX_LIST are supported as
type KVM_X86_REG_TYPE_MSR, but are NOT enumerated via KVM_GET_REG_LIST.
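
As an illustration of the KVM-defined register class, a minimal sketch of
reading the guest Shadow Stack Pointer with KVM_GET_ONE_REG, using the
KVM_X86_REG_KVM()/KVM_REG_GUEST_SSP encodings added to the uapi header later
on this page (error handling omitted, vcpu_fd assumed to be an open vCPU fd)::

   __u64 ssp;
   struct kvm_one_reg reg = {
           .id   = KVM_X86_REG_KVM(KVM_REG_GUEST_SSP),
           .addr = (__u64)(unsigned long)&ssp,
   };

   ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);  /* ssp now holds the guest's SSP */

MSRs can be accessed the same way via KVM_X86_REG_MSR(<msr index>), per the
note above, even though they are not enumerated by KVM_GET_REG_LIST.
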
4.85 KVM_ARM_SET_DEVICE_ADDR (deprecated)
-----------------------------------------


@ -137,7 +137,7 @@ compute the CLOCK_REALTIME for its clock, at the same instant.
Returns KVM_EOPNOTSUPP if the host does not use TSC clocksource,
or if clock type is different than KVM_CLOCK_PAIRING_WALLCLOCK.
6. KVM_HC_SEND_IPI
7. KVM_HC_SEND_IPI
------------------
:Architecture: x86
@ -158,7 +158,7 @@ corresponds to the APIC ID a2+1, and so on.
Returns the number of CPUs to which the IPIs were delivered successfully.
7. KVM_HC_SCHED_YIELD
8. KVM_HC_SCHED_YIELD
---------------------
:Architecture: x86
@ -170,7 +170,7 @@ a0: destination APIC ID
:Usage example: When sending a call-function IPI-many to vCPUs, yield if
any of the IPI target vCPUs was preempted.
8. KVM_HC_MAP_GPA_RANGE
9. KVM_HC_MAP_GPA_RANGE
-------------------------
:Architecture: x86
:Status: active


@ -3,7 +3,6 @@ generated-y += syscall_table_32.h
generated-y += syscall_table_64.h
generated-y += syscall_table_spu.h
generic-y += agp.h
generic-y += kvm_types.h
generic-y += mcs_spinlock.h
generic-y += qrwlock.h
generic-y += early_ioremap.h


@ -0,0 +1,15 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_PPC_KVM_TYPES_H
#define _ASM_PPC_KVM_TYPES_H
#if IS_MODULE(CONFIG_KVM_BOOK3S_64_PR) && IS_MODULE(CONFIG_KVM_BOOK3S_64_HV)
#define KVM_SUB_MODULES kvm-pr,kvm-hv
#elif IS_MODULE(CONFIG_KVM_BOOK3S_64_PR)
#define KVM_SUB_MODULES kvm-pr
#elif IS_MODULE(CONFIG_KVM_BOOK3S_64_HV)
#define KVM_SUB_MODULES kvm-hv
#else
#undef KVM_SUB_MODULES
#endif
#endif


@ -722,6 +722,8 @@ extern int kvm_s390_enter_exit_sie(struct kvm_s390_sie_block *scb,
extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
bool kvm_s390_is_gpa_in_memslot(struct kvm *kvm, gpa_t gpa);
static inline void kvm_arch_free_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}


@ -605,6 +605,14 @@ static int handle_io_inst(struct kvm_vcpu *vcpu)
}
}
#if IS_ENABLED(CONFIG_VFIO_AP)
bool kvm_s390_is_gpa_in_memslot(struct kvm *kvm, gpa_t gpa)
{
return kvm_is_gpa_in_memslot(kvm, gpa);
}
EXPORT_SYMBOL_FOR_MODULES(kvm_s390_is_gpa_in_memslot, "vfio_ap");
#endif
/*
* handle_pqap: Handling pqap interception
* @vcpu: the vcpu having issue the pqap instruction


@ -444,6 +444,7 @@
#define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* VM Page Flush MSR is supported */
#define X86_FEATURE_SEV_ES (19*32+ 3) /* "sev_es" Secure Encrypted Virtualization - Encrypted State */
#define X86_FEATURE_SEV_SNP (19*32+ 4) /* "sev_snp" Secure Encrypted Virtualization - Secure Nested Paging */
#define X86_FEATURE_SNP_SECURE_TSC (19*32+ 8) /* SEV-SNP Secure TSC */
#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT (19*32+10) /* hardware-enforced cache coherency */
#define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" SEV-ES full debug state swap support */
@ -497,6 +498,7 @@
#define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */
#define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
#define X86_FEATURE_ABMC (21*32+15) /* Assignable Bandwidth Monitoring Counters */
#define X86_FEATURE_MSR_IMM (21*32+16) /* MSR immediate form instructions */
/*
* BUG word(s)


@ -138,7 +138,7 @@ KVM_X86_OP(check_emulate_instruction)
KVM_X86_OP(apic_init_signal_blocked)
KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
KVM_X86_OP_OPTIONAL(migrate_timers)
KVM_X86_OP(recalc_msr_intercepts)
KVM_X86_OP(recalc_intercepts)
KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);


@ -120,7 +120,7 @@
#define KVM_REQ_TLB_FLUSH_GUEST \
KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_APF_READY KVM_ARCH_REQ(28)
#define KVM_REQ_MSR_FILTER_CHANGED KVM_ARCH_REQ(29)
#define KVM_REQ_RECALC_INTERCEPTS KVM_ARCH_REQ(29)
#define KVM_REQ_UPDATE_CPU_DIRTY_LOGGING \
KVM_ARCH_REQ_FLAGS(30, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_MMU_FREE_OBSOLETE_ROOTS \
@ -142,7 +142,7 @@
| X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
| X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
| X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
| X86_CR4_LAM_SUP))
| X86_CR4_LAM_SUP | X86_CR4_CET))
#define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
@ -267,6 +267,7 @@ enum x86_intercept_stage;
#define PFERR_RSVD_MASK BIT(3)
#define PFERR_FETCH_MASK BIT(4)
#define PFERR_PK_MASK BIT(5)
#define PFERR_SS_MASK BIT(6)
#define PFERR_SGX_MASK BIT(15)
#define PFERR_GUEST_RMP_MASK BIT_ULL(31)
#define PFERR_GUEST_FINAL_MASK BIT_ULL(32)
@ -545,10 +546,10 @@ struct kvm_pmc {
#define KVM_MAX_NR_GP_COUNTERS KVM_MAX(KVM_MAX_NR_INTEL_GP_COUNTERS, \
KVM_MAX_NR_AMD_GP_COUNTERS)
#define KVM_MAX_NR_INTEL_FIXED_COUTNERS 3
#define KVM_MAX_NR_AMD_FIXED_COUTNERS 0
#define KVM_MAX_NR_FIXED_COUNTERS KVM_MAX(KVM_MAX_NR_INTEL_FIXED_COUTNERS, \
KVM_MAX_NR_AMD_FIXED_COUTNERS)
#define KVM_MAX_NR_INTEL_FIXED_COUNTERS 3
#define KVM_MAX_NR_AMD_FIXED_COUNTERS 0
#define KVM_MAX_NR_FIXED_COUNTERS KVM_MAX(KVM_MAX_NR_INTEL_FIXED_COUNTERS, \
KVM_MAX_NR_AMD_FIXED_COUNTERS)
struct kvm_pmu {
u8 version;
@ -579,6 +580,9 @@ struct kvm_pmu {
DECLARE_BITMAP(all_valid_pmc_idx, X86_PMC_IDX_MAX);
DECLARE_BITMAP(pmc_in_use, X86_PMC_IDX_MAX);
DECLARE_BITMAP(pmc_counting_instructions, X86_PMC_IDX_MAX);
DECLARE_BITMAP(pmc_counting_branches, X86_PMC_IDX_MAX);
u64 ds_area;
u64 pebs_enable;
u64 pebs_enable_rsvd;
@ -771,6 +775,7 @@ enum kvm_only_cpuid_leafs {
CPUID_7_2_EDX,
CPUID_24_0_EBX,
CPUID_8000_0021_ECX,
CPUID_7_1_ECX,
NR_KVM_CPU_CAPS,
NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
@ -811,7 +816,6 @@ struct kvm_vcpu_arch {
bool at_instruction_boundary;
bool tpr_access_reporting;
bool xfd_no_write_intercept;
u64 ia32_xss;
u64 microcode_version;
u64 arch_capabilities;
u64 perf_capabilities;
@ -872,6 +876,8 @@ struct kvm_vcpu_arch {
u64 xcr0;
u64 guest_supported_xcr0;
u64 ia32_xss;
u64 guest_supported_xss;
struct kvm_pio_request pio;
void *pio_data;
@ -926,6 +932,7 @@ struct kvm_vcpu_arch {
bool emulate_regs_need_sync_from_vcpu;
int (*complete_userspace_io)(struct kvm_vcpu *vcpu);
unsigned long cui_linear_rip;
int cui_rdmsr_imm_reg;
gpa_t time;
s8 pvclock_tsc_shift;
@ -1348,18 +1355,7 @@ enum kvm_apicv_inhibit {
__APICV_INHIBIT_REASON(LOGICAL_ID_ALIASED), \
__APICV_INHIBIT_REASON(PHYSICAL_ID_TOO_BIG)
struct kvm_arch {
unsigned long n_used_mmu_pages;
unsigned long n_requested_mmu_pages;
unsigned long n_max_mmu_pages;
unsigned int indirect_shadow_pages;
u8 mmu_valid_gen;
u8 vm_type;
bool has_private_mem;
bool has_protected_state;
bool pre_fault_allowed;
struct hlist_head *mmu_page_hash;
struct list_head active_mmu_pages;
struct kvm_possible_nx_huge_pages {
/*
* A list of kvm_mmu_page structs that, if zapped, could possibly be
* replaced by an NX huge page. A shadow page is on this list if its
@ -1371,7 +1367,32 @@ struct kvm_arch {
* guest attempts to execute from the region then KVM obviously can't
* create an NX huge page (without hanging the guest).
*/
struct list_head possible_nx_huge_pages;
struct list_head pages;
u64 nr_pages;
};
enum kvm_mmu_type {
KVM_SHADOW_MMU,
#ifdef CONFIG_X86_64
KVM_TDP_MMU,
#endif
KVM_NR_MMU_TYPES,
};
struct kvm_arch {
unsigned long n_used_mmu_pages;
unsigned long n_requested_mmu_pages;
unsigned long n_max_mmu_pages;
unsigned int indirect_shadow_pages;
u8 mmu_valid_gen;
u8 vm_type;
bool has_private_mem;
bool has_protected_state;
bool has_protected_eoi;
bool pre_fault_allowed;
struct hlist_head *mmu_page_hash;
struct list_head active_mmu_pages;
struct kvm_possible_nx_huge_pages possible_nx_huge_pages[KVM_NR_MMU_TYPES];
#ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
struct kvm_page_track_notifier_head track_notifier_head;
#endif
@ -1526,7 +1547,7 @@ struct kvm_arch {
* is held in read mode:
* - tdp_mmu_roots (above)
* - the link field of kvm_mmu_page structs used by the TDP MMU
* - possible_nx_huge_pages;
* - possible_nx_huge_pages[KVM_TDP_MMU];
* - the possible_nx_huge_page_link field of kvm_mmu_page structs used
* by the TDP MMU
* Because the lock is only taken within the MMU lock, strictly
@ -1908,7 +1929,7 @@ struct kvm_x86_ops {
int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);
void (*migrate_timers)(struct kvm_vcpu *vcpu);
void (*recalc_msr_intercepts)(struct kvm_vcpu *vcpu);
void (*recalc_intercepts)(struct kvm_vcpu *vcpu);
int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
@ -2149,13 +2170,16 @@ void kvm_prepare_event_vectoring_exit(struct kvm_vcpu *vcpu, gpa_t gpa);
void kvm_enable_efer_bits(u64);
bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer);
int kvm_get_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_set_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 data);
int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiated);
int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data);
int kvm_emulate_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_emulate_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
int __kvm_emulate_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int __kvm_emulate_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
int kvm_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu);
int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg);
int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu);
int kvm_emulate_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg);
int kvm_emulate_as_nop(struct kvm_vcpu *vcpu);
int kvm_emulate_invd(struct kvm_vcpu *vcpu);
int kvm_emulate_mwait(struct kvm_vcpu *vcpu);
@ -2187,6 +2211,7 @@ int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val);
unsigned long kvm_get_dr(struct kvm_vcpu *vcpu, int dr);
unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu);
void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw);
int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr);
int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu);
int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);
@ -2354,6 +2379,7 @@ int kvm_add_user_return_msr(u32 msr);
int kvm_find_user_return_msr(u32 msr);
int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
void kvm_user_return_msr_update_cache(unsigned int index, u64 val);
u64 kvm_get_user_return_msr(unsigned int slot);
static inline bool kvm_is_supported_user_return_msr(u32 msr)
{
@ -2390,9 +2416,6 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
struct kvm_vcpu **dest_vcpu);
static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
{
/* We can only post Fixed and LowPrio IRQs */


@ -2,6 +2,16 @@
#ifndef _ASM_X86_KVM_TYPES_H
#define _ASM_X86_KVM_TYPES_H
#if IS_MODULE(CONFIG_KVM_AMD) && IS_MODULE(CONFIG_KVM_INTEL)
#define KVM_SUB_MODULES kvm-amd,kvm-intel
#elif IS_MODULE(CONFIG_KVM_AMD)
#define KVM_SUB_MODULES kvm-amd
#elif IS_MODULE(CONFIG_KVM_INTEL)
#define KVM_SUB_MODULES kvm-intel
#else
#undef KVM_SUB_MODULES
#endif
#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
#endif /* _ASM_X86_KVM_TYPES_H */


@ -315,9 +315,12 @@
#define PERF_CAP_PT_IDX 16
#define MSR_PEBS_LD_LAT_THRESHOLD 0x000003f6
#define PERF_CAP_LBR_FMT 0x3f
#define PERF_CAP_PEBS_TRAP BIT_ULL(6)
#define PERF_CAP_ARCH_REG BIT_ULL(7)
#define PERF_CAP_PEBS_FORMAT 0xf00
#define PERF_CAP_FW_WRITES BIT_ULL(13)
#define PERF_CAP_PEBS_BASELINE BIT_ULL(14)
#define PERF_CAP_PEBS_TIMING_INFO BIT_ULL(17)
#define PERF_CAP_PEBS_MASK (PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \
@ -747,6 +750,7 @@
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300
#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET 0xc0000303
/* AMD Hardware Feedback Support MSRs */
#define MSR_AMD_WORKLOAD_CLASS_CONFIG 0xc0000500


@ -299,6 +299,7 @@ static_assert((X2AVIC_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == X2AVIC_
#define SVM_SEV_FEAT_RESTRICTED_INJECTION BIT(3)
#define SVM_SEV_FEAT_ALTERNATE_INJECTION BIT(4)
#define SVM_SEV_FEAT_DEBUG_SWAP BIT(5)
#define SVM_SEV_FEAT_SECURE_TSC BIT(9)
#define VMCB_ALLOWED_SEV_FEATURES_VALID BIT_ULL(63)


@ -106,6 +106,7 @@
#define VM_EXIT_CLEAR_BNDCFGS 0x00800000
#define VM_EXIT_PT_CONCEAL_PIP 0x01000000
#define VM_EXIT_CLEAR_IA32_RTIT_CTL 0x02000000
#define VM_EXIT_LOAD_CET_STATE 0x10000000
#define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR 0x00036dff
@ -119,6 +120,7 @@
#define VM_ENTRY_LOAD_BNDCFGS 0x00010000
#define VM_ENTRY_PT_CONCEAL_PIP 0x00020000
#define VM_ENTRY_LOAD_IA32_RTIT_CTL 0x00040000
#define VM_ENTRY_LOAD_CET_STATE 0x00100000
#define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR 0x000011ff
@ -132,6 +134,7 @@
#define VMX_BASIC_DUAL_MONITOR_TREATMENT BIT_ULL(49)
#define VMX_BASIC_INOUT BIT_ULL(54)
#define VMX_BASIC_TRUE_CTLS BIT_ULL(55)
#define VMX_BASIC_NO_HW_ERROR_CODE_CC BIT_ULL(56)
static inline u32 vmx_basic_vmcs_revision_id(u64 vmx_basic)
{
@ -369,6 +372,9 @@ enum vmcs_field {
GUEST_PENDING_DBG_EXCEPTIONS = 0x00006822,
GUEST_SYSENTER_ESP = 0x00006824,
GUEST_SYSENTER_EIP = 0x00006826,
GUEST_S_CET = 0x00006828,
GUEST_SSP = 0x0000682a,
GUEST_INTR_SSP_TABLE = 0x0000682c,
HOST_CR0 = 0x00006c00,
HOST_CR3 = 0x00006c02,
HOST_CR4 = 0x00006c04,
@ -381,6 +387,9 @@ enum vmcs_field {
HOST_IA32_SYSENTER_EIP = 0x00006c12,
HOST_RSP = 0x00006c14,
HOST_RIP = 0x00006c16,
HOST_S_CET = 0x00006c18,
HOST_SSP = 0x00006c1a,
HOST_INTR_SSP_TABLE = 0x00006c1c
};
/*


@ -35,6 +35,11 @@
#define MC_VECTOR 18
#define XM_VECTOR 19
#define VE_VECTOR 20
#define CP_VECTOR 21
#define HV_VECTOR 28
#define VC_VECTOR 29
#define SX_VECTOR 30
/* Select x86 specific features in <linux/kvm.h> */
#define __KVM_HAVE_PIT
@ -411,6 +416,35 @@ struct kvm_xcrs {
__u64 padding[16];
};
#define KVM_X86_REG_TYPE_MSR 2
#define KVM_X86_REG_TYPE_KVM 3
#define KVM_X86_KVM_REG_SIZE(reg) \
({ \
reg == KVM_REG_GUEST_SSP ? KVM_REG_SIZE_U64 : 0; \
})
#define KVM_X86_REG_TYPE_SIZE(type, reg) \
({ \
__u64 type_size = (__u64)type << 32; \
\
type_size |= type == KVM_X86_REG_TYPE_MSR ? KVM_REG_SIZE_U64 : \
type == KVM_X86_REG_TYPE_KVM ? KVM_X86_KVM_REG_SIZE(reg) : \
0; \
type_size; \
})
#define KVM_X86_REG_ID(type, index) \
(KVM_REG_X86 | KVM_X86_REG_TYPE_SIZE(type, index) | index)
#define KVM_X86_REG_MSR(index) \
KVM_X86_REG_ID(KVM_X86_REG_TYPE_MSR, index)
#define KVM_X86_REG_KVM(index) \
KVM_X86_REG_ID(KVM_X86_REG_TYPE_KVM, index)
/* KVM-defined registers starting from 0 */
#define KVM_REG_GUEST_SSP 0
#define KVM_SYNC_X86_REGS (1UL << 0)
#define KVM_SYNC_X86_SREGS (1UL << 1)
#define KVM_SYNC_X86_EVENTS (1UL << 2)


@ -94,6 +94,8 @@
#define EXIT_REASON_BUS_LOCK 74
#define EXIT_REASON_NOTIFY 75
#define EXIT_REASON_TDCALL 77
#define EXIT_REASON_MSR_READ_IMM 84
#define EXIT_REASON_MSR_WRITE_IMM 85
#define VMX_EXIT_REASONS \
{ EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \
@ -158,7 +160,9 @@
{ EXIT_REASON_TPAUSE, "TPAUSE" }, \
{ EXIT_REASON_BUS_LOCK, "BUS_LOCK" }, \
{ EXIT_REASON_NOTIFY, "NOTIFY" }, \
{ EXIT_REASON_TDCALL, "TDCALL" }
{ EXIT_REASON_TDCALL, "TDCALL" }, \
{ EXIT_REASON_MSR_READ_IMM, "MSR_READ_IMM" }, \
{ EXIT_REASON_MSR_WRITE_IMM, "MSR_WRITE_IMM" }
#define VMX_EXIT_REASON_FLAGS \
{ VMX_EXIT_REASONS_FAILED_VMENTRY, "FAILED_VMENTRY" }


@ -27,6 +27,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 },
{ X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 },
{ X86_FEATURE_INTEL_PPIN, CPUID_EBX, 0, 0x00000007, 1 },
{ X86_FEATURE_MSR_IMM, CPUID_ECX, 5, 0x00000007, 1 },
{ X86_FEATURE_APX, CPUID_EDX, 21, 0x00000007, 1 },
{ X86_FEATURE_RRSBA_CTRL, CPUID_EDX, 2, 0x00000007, 2 },
{ X86_FEATURE_BHI_CTRL, CPUID_EDX, 4, 0x00000007, 2 },


@ -34,7 +34,7 @@
* aligned to sizeof(unsigned long) because it's not accessed via bitops.
*/
u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
EXPORT_SYMBOL_GPL(kvm_cpu_caps);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_caps);
struct cpuid_xstate_sizes {
u32 eax;
@ -131,7 +131,7 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry2(
return NULL;
}
EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry2);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_find_cpuid_entry2);
static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
{
@ -263,6 +263,17 @@ static u64 cpuid_get_supported_xcr0(struct kvm_vcpu *vcpu)
return (best->eax | ((u64)best->edx << 32)) & kvm_caps.supported_xcr0;
}
static u64 cpuid_get_supported_xss(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid_entry2 *best;
best = kvm_find_cpuid_entry_index(vcpu, 0xd, 1);
if (!best)
return 0;
return (best->ecx | ((u64)best->edx << 32)) & kvm_caps.supported_xss;
}
static __always_inline void kvm_update_feature_runtime(struct kvm_vcpu *vcpu,
struct kvm_cpuid_entry2 *entry,
unsigned int x86_feature,
@ -305,7 +316,8 @@ static void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
best = kvm_find_cpuid_entry_index(vcpu, 0xD, 1);
if (best && (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
best->ebx = xstate_required_size(vcpu->arch.xcr0 |
vcpu->arch.ia32_xss, true);
}
static bool kvm_cpuid_has_hyperv(struct kvm_vcpu *vcpu)
@ -424,6 +436,7 @@ void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
}
vcpu->arch.guest_supported_xcr0 = cpuid_get_supported_xcr0(vcpu);
vcpu->arch.guest_supported_xss = cpuid_get_supported_xss(vcpu);
vcpu->arch.pv_cpuid.features = kvm_apply_cpuid_pv_features_quirk(vcpu);
@ -448,6 +461,8 @@ void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
* adjustments to the reserved GPA bits.
*/
kvm_mmu_after_set_cpuid(vcpu);
kvm_make_request(KVM_REQ_RECALC_INTERCEPTS, vcpu);
}
int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu)
@ -931,6 +946,7 @@ void kvm_set_cpu_caps(void)
VENDOR_F(WAITPKG),
F(SGX_LC),
F(BUS_LOCK_DETECT),
X86_64_F(SHSTK),
);
/*
@ -940,6 +956,14 @@ void kvm_set_cpu_caps(void)
if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
kvm_cpu_cap_clear(X86_FEATURE_PKU);
/*
* Shadow Stacks aren't implemented in the Shadow MMU. Shadow Stack
* accesses require "magic" Writable=0,Dirty=1 protection, which KVM
* doesn't know how to emulate or map.
*/
if (!tdp_enabled)
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_init(CPUID_7_EDX,
F(AVX512_4VNNIW),
F(AVX512_4FMAPS),
@ -957,8 +981,19 @@ void kvm_set_cpu_caps(void)
F(AMX_INT8),
F(AMX_BF16),
F(FLUSH_L1D),
F(IBT),
);
/*
* Disable support for IBT and SHSTK if KVM is configured to emulate
* accesses to reserved GPAs, as KVM's emulator doesn't support IBT or
* SHSTK, nor does KVM handle Shadow Stack #PFs (see above).
*/
if (allow_smaller_maxphyaddr) {
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
}
if (boot_cpu_has(X86_FEATURE_AMD_IBPB_RET) &&
boot_cpu_has(X86_FEATURE_AMD_IBPB) &&
boot_cpu_has(X86_FEATURE_AMD_IBRS))
@ -985,6 +1020,10 @@ void kvm_set_cpu_caps(void)
F(LAM),
);
kvm_cpu_cap_init(CPUID_7_1_ECX,
SCATTERED_F(MSR_IMM),
);
kvm_cpu_cap_init(CPUID_7_1_EDX,
F(AVX_VNNI_INT8),
F(AVX_NE_CONVERT),
@ -1222,7 +1261,7 @@ void kvm_set_cpu_caps(void)
kvm_cpu_cap_clear(X86_FEATURE_RDPID);
}
}
EXPORT_SYMBOL_GPL(kvm_set_cpu_caps);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_cpu_caps);
#undef F
#undef SCATTERED_F
@ -1411,9 +1450,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
goto out;
cpuid_entry_override(entry, CPUID_7_1_EAX);
cpuid_entry_override(entry, CPUID_7_1_ECX);
cpuid_entry_override(entry, CPUID_7_1_EDX);
entry->ebx = 0;
entry->ecx = 0;
}
if (max_idx >= 2) {
entry = do_host_cpuid(array, function, 2);
@ -1820,7 +1859,8 @@ static int get_cpuid_func(struct kvm_cpuid_array *array, u32 func,
int r;
if (func == CENTAUR_CPUID_SIGNATURE &&
boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR &&
boot_cpu_data.x86_vendor != X86_VENDOR_ZHAOXIN)
return 0;
r = do_cpuid_func(array, func, type);
@ -2001,7 +2041,7 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
if (function == 7 && index == 0) {
u64 data;
if ((*ebx & (feature_bit(RTM) | feature_bit(HLE))) &&
!__kvm_get_msr(vcpu, MSR_IA32_TSX_CTRL, &data, true) &&
!kvm_msr_read(vcpu, MSR_IA32_TSX_CTRL, &data) &&
(data & TSX_CTRL_CPUID_CLEAR))
*ebx &= ~(feature_bit(RTM) | feature_bit(HLE));
} else if (function == 0x80000007) {
@ -2045,7 +2085,7 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
used_max_basic);
return exact;
}
EXPORT_SYMBOL_GPL(kvm_cpuid);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpuid);
int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
{
@ -2063,4 +2103,4 @@ int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
kvm_rdx_write(vcpu, edx);
return kvm_skip_emulated_instruction(vcpu);
}
EXPORT_SYMBOL_GPL(kvm_emulate_cpuid);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_cpuid);


@ -178,6 +178,7 @@
#define IncSP ((u64)1 << 54) /* SP is incremented before ModRM calc */
#define TwoMemOp ((u64)1 << 55) /* Instruction has two memory operand */
#define IsBranch ((u64)1 << 56) /* Instruction is considered a branch. */
#define ShadowStack ((u64)1 << 57) /* Instruction affects Shadow Stacks. */
#define DstXacc (DstAccLo | SrcAccHi | SrcWrite)
@ -1553,6 +1554,37 @@ static int write_segment_descriptor(struct x86_emulate_ctxt *ctxt,
return linear_write_system(ctxt, addr, desc, sizeof(*desc));
}
static bool emulator_is_ssp_invalid(struct x86_emulate_ctxt *ctxt, u8 cpl)
{
const u32 MSR_IA32_X_CET = cpl == 3 ? MSR_IA32_U_CET : MSR_IA32_S_CET;
u64 efer = 0, cet = 0, ssp = 0;
if (!(ctxt->ops->get_cr(ctxt, 4) & X86_CR4_CET))
return false;
if (ctxt->ops->get_msr(ctxt, MSR_EFER, &efer))
return true;
/* SSP is guaranteed to be valid if the vCPU was already in 32-bit mode. */
if (!(efer & EFER_LMA))
return false;
if (ctxt->ops->get_msr(ctxt, MSR_IA32_X_CET, &cet))
return true;
if (!(cet & CET_SHSTK_EN))
return false;
if (ctxt->ops->get_msr(ctxt, MSR_KVM_INTERNAL_GUEST_SSP, &ssp))
return true;
/*
* On transfer from 64-bit mode to compatibility mode, SSP[63:32] must
* be 0, i.e. SSP must be a 32-bit value outside of 64-bit mode.
*/
return ssp >> 32;
}
static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
u16 selector, int seg, u8 cpl,
enum x86_transfer_type transfer,
@ -1693,6 +1725,10 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
if (efer & EFER_LMA)
goto exception;
}
if (!seg_desc.l && emulator_is_ssp_invalid(ctxt, cpl)) {
err_code = 0;
goto exception;
}
/* CS(RPL) <- CPL */
selector = (selector & 0xfffc) | cpl;
@ -4068,8 +4104,8 @@ static const struct opcode group4[] = {
static const struct opcode group5[] = {
F(DstMem | SrcNone | Lock, em_inc),
F(DstMem | SrcNone | Lock, em_dec),
I(SrcMem | NearBranch | IsBranch, em_call_near_abs),
I(SrcMemFAddr | ImplicitOps | IsBranch, em_call_far),
I(SrcMem | NearBranch | IsBranch | ShadowStack, em_call_near_abs),
I(SrcMemFAddr | ImplicitOps | IsBranch | ShadowStack, em_call_far),
I(SrcMem | NearBranch | IsBranch, em_jmp_abs),
I(SrcMemFAddr | ImplicitOps | IsBranch, em_jmp_far),
I(SrcMem | Stack | TwoMemOp, em_push), D(Undefined),
@ -4304,7 +4340,7 @@ static const struct opcode opcode_table[256] = {
DI(SrcAcc | DstReg, pause), X7(D(SrcAcc | DstReg)),
/* 0x98 - 0x9F */
D(DstAcc | SrcNone), I(ImplicitOps | SrcAcc, em_cwd),
I(SrcImmFAddr | No64 | IsBranch, em_call_far), N,
I(SrcImmFAddr | No64 | IsBranch | ShadowStack, em_call_far), N,
II(ImplicitOps | Stack, em_pushf, pushf),
II(ImplicitOps | Stack, em_popf, popf),
I(ImplicitOps, em_sahf), I(ImplicitOps, em_lahf),
@ -4324,19 +4360,19 @@ static const struct opcode opcode_table[256] = {
X8(I(DstReg | SrcImm64 | Mov, em_mov)),
/* 0xC0 - 0xC7 */
G(ByteOp | Src2ImmByte, group2), G(Src2ImmByte, group2),
I(ImplicitOps | NearBranch | SrcImmU16 | IsBranch, em_ret_near_imm),
I(ImplicitOps | NearBranch | IsBranch, em_ret),
I(ImplicitOps | NearBranch | SrcImmU16 | IsBranch | ShadowStack, em_ret_near_imm),
I(ImplicitOps | NearBranch | IsBranch | ShadowStack, em_ret),
I(DstReg | SrcMemFAddr | ModRM | No64 | Src2ES, em_lseg),
I(DstReg | SrcMemFAddr | ModRM | No64 | Src2DS, em_lseg),
G(ByteOp, group11), G(0, group11),
/* 0xC8 - 0xCF */
I(Stack | SrcImmU16 | Src2ImmByte | IsBranch, em_enter),
I(Stack | IsBranch, em_leave),
I(ImplicitOps | SrcImmU16 | IsBranch, em_ret_far_imm),
I(ImplicitOps | IsBranch, em_ret_far),
D(ImplicitOps | IsBranch), DI(SrcImmByte | IsBranch, intn),
I(Stack | SrcImmU16 | Src2ImmByte, em_enter),
I(Stack, em_leave),
I(ImplicitOps | SrcImmU16 | IsBranch | ShadowStack, em_ret_far_imm),
I(ImplicitOps | IsBranch | ShadowStack, em_ret_far),
D(ImplicitOps | IsBranch), DI(SrcImmByte | IsBranch | ShadowStack, intn),
D(ImplicitOps | No64 | IsBranch),
II(ImplicitOps | IsBranch, em_iret, iret),
II(ImplicitOps | IsBranch | ShadowStack, em_iret, iret),
/* 0xD0 - 0xD7 */
G(Src2One | ByteOp, group2), G(Src2One, group2),
G(Src2CL | ByteOp, group2), G(Src2CL, group2),
@ -4352,7 +4388,7 @@ static const struct opcode opcode_table[256] = {
I2bvIP(SrcImmUByte | DstAcc, em_in, in, check_perm_in),
I2bvIP(SrcAcc | DstImmUByte, em_out, out, check_perm_out),
/* 0xE8 - 0xEF */
I(SrcImm | NearBranch | IsBranch, em_call),
I(SrcImm | NearBranch | IsBranch | ShadowStack, em_call),
D(SrcImm | ImplicitOps | NearBranch | IsBranch),
I(SrcImmFAddr | No64 | IsBranch, em_jmp_far),
D(SrcImmByte | ImplicitOps | NearBranch | IsBranch),
@ -4371,7 +4407,7 @@ static const struct opcode opcode_table[256] = {
static const struct opcode twobyte_table[256] = {
/* 0x00 - 0x0F */
G(0, group6), GD(0, &group7), N, N,
N, I(ImplicitOps | EmulateOnUD | IsBranch, em_syscall),
N, I(ImplicitOps | EmulateOnUD | IsBranch | ShadowStack, em_syscall),
II(ImplicitOps | Priv, em_clts, clts), N,
DI(ImplicitOps | Priv, invd), DI(ImplicitOps | Priv, wbinvd), N, N,
N, D(ImplicitOps | ModRM | SrcMem | NoAccess), N, N,
@ -4402,8 +4438,8 @@ static const struct opcode twobyte_table[256] = {
IIP(ImplicitOps, em_rdtsc, rdtsc, check_rdtsc),
II(ImplicitOps | Priv, em_rdmsr, rdmsr),
IIP(ImplicitOps, em_rdpmc, rdpmc, check_rdpmc),
I(ImplicitOps | EmulateOnUD | IsBranch, em_sysenter),
I(ImplicitOps | Priv | EmulateOnUD | IsBranch, em_sysexit),
I(ImplicitOps | EmulateOnUD | IsBranch | ShadowStack, em_sysenter),
I(ImplicitOps | Priv | EmulateOnUD | IsBranch | ShadowStack, em_sysexit),
N, N,
N, N, N, N, N, N, N, N,
/* 0x40 - 0x4F */
@ -4514,6 +4550,60 @@ static const struct opcode opcode_map_0f_38[256] = {
#undef I2bvIP
#undef I6ALU
static bool is_shstk_instruction(struct x86_emulate_ctxt *ctxt)
{
return ctxt->d & ShadowStack;
}
static bool is_ibt_instruction(struct x86_emulate_ctxt *ctxt)
{
u64 flags = ctxt->d;
if (!(flags & IsBranch))
return false;
/*
* All far JMPs and CALLs (including SYSCALL, SYSENTER, and INTn) are
* indirect and thus affect IBT state. All far RETs (including SYSEXIT
* and IRET) are protected via Shadow Stacks and thus don't affect IBT
* state. IRET #GPs when returning to virtual-8086 and IBT or SHSTK is
* enabled, but that should be handled by IRET emulation (in the very
* unlikely scenario that KVM adds support for fully emulating IRET).
*/
if (!(flags & NearBranch))
return ctxt->execute != em_iret &&
ctxt->execute != em_ret_far &&
ctxt->execute != em_ret_far_imm &&
ctxt->execute != em_sysexit;
switch (flags & SrcMask) {
case SrcReg:
case SrcMem:
case SrcMem16:
case SrcMem32:
return true;
case SrcMemFAddr:
case SrcImmFAddr:
/* Far branches should be handled above. */
WARN_ON_ONCE(1);
return true;
case SrcNone:
case SrcImm:
case SrcImmByte:
/*
* Note, ImmU16 is used only for the stack adjustment operand on ENTER
* and RET instructions. ENTER isn't a branch and RET FAR is handled
* by the NearBranch check above. RET itself isn't an indirect branch.
*/
case SrcImmU16:
return false;
default:
WARN_ONCE(1, "Unexpected Src operand '%llx' on branch",
flags & SrcMask);
return false;
}
}
static unsigned imm_size(struct x86_emulate_ctxt *ctxt)
{
unsigned size;
@ -4943,6 +5033,40 @@ done_prefixes:
ctxt->execute = opcode.u.execute;
/*
* Reject emulation if KVM might need to emulate shadow stack updates
* and/or indirect branch tracking enforcement, which the emulator
* doesn't support.
*/
if ((is_ibt_instruction(ctxt) || is_shstk_instruction(ctxt)) &&
ctxt->ops->get_cr(ctxt, 4) & X86_CR4_CET) {
u64 u_cet = 0, s_cet = 0;
/*
* Check both User and Supervisor on far transfers as inter-
* privilege level transfers are impacted by CET at the target
* privilege level, and that is not known at this time. The
* expectation is that the guest will not require emulation of
* any CET-affected instructions at any privilege level.
*/
if (!(ctxt->d & NearBranch))
u_cet = s_cet = CET_SHSTK_EN | CET_ENDBR_EN;
else if (ctxt->ops->cpl(ctxt) == 3)
u_cet = CET_SHSTK_EN | CET_ENDBR_EN;
else
s_cet = CET_SHSTK_EN | CET_ENDBR_EN;
if ((u_cet && ctxt->ops->get_msr(ctxt, MSR_IA32_U_CET, &u_cet)) ||
(s_cet && ctxt->ops->get_msr(ctxt, MSR_IA32_S_CET, &s_cet)))
return EMULATION_FAILED;
if ((u_cet | s_cet) & CET_SHSTK_EN && is_shstk_instruction(ctxt))
return EMULATION_FAILED;
if ((u_cet | s_cet) & CET_ENDBR_EN && is_ibt_instruction(ctxt))
return EMULATION_FAILED;
}
if (unlikely(emulation_type & EMULTYPE_TRAP_UD) &&
likely(!(ctxt->d & EmulateOnUD)))
return EMULATION_FAILED;
@ -5107,12 +5231,11 @@ void init_decode_cache(struct x86_emulate_ctxt *ctxt)
ctxt->mem_read.end = 0;
}
int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
int x86_emulate_insn(struct x86_emulate_ctxt *ctxt, bool check_intercepts)
{
const struct x86_emulate_ops *ops = ctxt->ops;
int rc = X86EMUL_CONTINUE;
int saved_dst_type = ctxt->dst.type;
bool is_guest_mode = ctxt->ops->is_guest_mode(ctxt);
ctxt->mem_read.pos = 0;
@ -5160,7 +5283,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
fetch_possible_mmx_operand(&ctxt->dst);
}
if (unlikely(is_guest_mode) && ctxt->intercept) {
if (unlikely(check_intercepts) && ctxt->intercept) {
rc = emulator_check_intercept(ctxt, ctxt->intercept,
X86_ICPT_PRE_EXCEPT);
if (rc != X86EMUL_CONTINUE)
@ -5189,7 +5312,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
goto done;
}
if (unlikely(is_guest_mode) && (ctxt->d & Intercept)) {
if (unlikely(check_intercepts) && (ctxt->d & Intercept)) {
rc = emulator_check_intercept(ctxt, ctxt->intercept,
X86_ICPT_POST_EXCEPT);
if (rc != X86EMUL_CONTINUE)
@ -5243,7 +5366,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
special_insn:
if (unlikely(is_guest_mode) && (ctxt->d & Intercept)) {
if (unlikely(check_intercepts) && (ctxt->d & Intercept)) {
rc = emulator_check_intercept(ctxt, ctxt->intercept,
X86_ICPT_POST_MEMACCESS);
if (rc != X86EMUL_CONTINUE)


@ -923,7 +923,7 @@ bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu)
return false;
return vcpu->arch.pv_eoi.msr_val & KVM_MSR_ENABLED;
}
EXPORT_SYMBOL_GPL(kvm_hv_assist_page_enabled);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_hv_assist_page_enabled);
int kvm_hv_get_assist_page(struct kvm_vcpu *vcpu)
{
@ -935,7 +935,7 @@ int kvm_hv_get_assist_page(struct kvm_vcpu *vcpu)
return kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data,
&hv_vcpu->vp_assist_page, sizeof(struct hv_vp_assist_page));
}
EXPORT_SYMBOL_GPL(kvm_hv_get_assist_page);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_hv_get_assist_page);
static void stimer_prepare_msg(struct kvm_vcpu_hv_stimer *stimer)
{
@ -1168,15 +1168,15 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
BUILD_BUG_ON(sizeof(tsc_seq) != sizeof(hv->tsc_ref.tsc_sequence));
BUILD_BUG_ON(offsetof(struct ms_hyperv_tsc_page, tsc_sequence) != 0);
mutex_lock(&hv->hv_lock);
guard(mutex)(&hv->hv_lock);
if (hv->hv_tsc_page_status == HV_TSC_PAGE_BROKEN ||
hv->hv_tsc_page_status == HV_TSC_PAGE_SET ||
hv->hv_tsc_page_status == HV_TSC_PAGE_UNSET)
goto out_unlock;
return;
if (!(hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE))
goto out_unlock;
return;
gfn = hv->hv_tsc_page >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT;
/*
@ -1192,7 +1192,7 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
goto out_err;
hv->hv_tsc_page_status = HV_TSC_PAGE_SET;
goto out_unlock;
return;
}
/*
@ -1228,12 +1228,10 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
goto out_err;
hv->hv_tsc_page_status = HV_TSC_PAGE_SET;
goto out_unlock;
return;
out_err:
hv->hv_tsc_page_status = HV_TSC_PAGE_BROKEN;
out_unlock:
mutex_unlock(&hv->hv_lock);
}
void kvm_hv_request_tsc_page_update(struct kvm *kvm)


@ -1,3 +1,4 @@
// SPDX-License-Identifier: LGPL-2.1-or-later
/*
* Copyright (C) 2001 MandrakeSoft S.A.
* Copyright 2010 Red Hat, Inc. and/or its affiliates.
@ -8,20 +9,6 @@
* http://www.linux-mandrake.com/
* http://www.mandrakesoft.com/
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* Yunhong Jiang <yunhong.jiang@intel.com>
* Yaozu (Eddie) Dong <eddie.dong@intel.com>
* Based on Xen 3.1 code.


@ -103,7 +103,7 @@ int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v)
return kvm_apic_has_interrupt(v) != -1; /* LAPIC */
}
EXPORT_SYMBOL_GPL(kvm_cpu_has_injectable_intr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_has_injectable_intr);
/*
* check if there is pending interrupt without
@ -119,7 +119,7 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *v)
return kvm_apic_has_interrupt(v) != -1; /* LAPIC */
}
EXPORT_SYMBOL_GPL(kvm_cpu_has_interrupt);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_has_interrupt);
/*
* Read pending interrupt(from non-APIC source)
@ -148,7 +148,7 @@ int kvm_cpu_get_extint(struct kvm_vcpu *v)
WARN_ON_ONCE(!irqchip_split(v->kvm));
return get_userspace_extint(v);
}
EXPORT_SYMBOL_GPL(kvm_cpu_get_extint);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_cpu_get_extint);
/*
* Read pending interrupt vector and intack.
@ -195,63 +195,6 @@ bool kvm_arch_irqchip_in_kernel(struct kvm *kvm)
return irqchip_in_kernel(kvm);
}
int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
struct kvm_lapic_irq *irq, struct dest_map *dest_map)
{
int r = -1;
struct kvm_vcpu *vcpu, *lowest = NULL;
unsigned long i, dest_vcpu_bitmap[BITS_TO_LONGS(KVM_MAX_VCPUS)];
unsigned int dest_vcpus = 0;
if (kvm_irq_delivery_to_apic_fast(kvm, src, irq, &r, dest_map))
return r;
if (irq->dest_mode == APIC_DEST_PHYSICAL &&
irq->dest_id == 0xff && kvm_lowest_prio_delivery(irq)) {
pr_info("apic: phys broadcast and lowest prio\n");
irq->delivery_mode = APIC_DM_FIXED;
}
memset(dest_vcpu_bitmap, 0, sizeof(dest_vcpu_bitmap));
kvm_for_each_vcpu(i, vcpu, kvm) {
if (!kvm_apic_present(vcpu))
continue;
if (!kvm_apic_match_dest(vcpu, src, irq->shorthand,
irq->dest_id, irq->dest_mode))
continue;
if (!kvm_lowest_prio_delivery(irq)) {
if (r < 0)
r = 0;
r += kvm_apic_set_irq(vcpu, irq, dest_map);
} else if (kvm_apic_sw_enabled(vcpu->arch.apic)) {
if (!kvm_vector_hashing_enabled()) {
if (!lowest)
lowest = vcpu;
else if (kvm_apic_compare_prio(vcpu, lowest) < 0)
lowest = vcpu;
} else {
__set_bit(i, dest_vcpu_bitmap);
dest_vcpus++;
}
}
}
if (dest_vcpus != 0) {
int idx = kvm_vector_to_index(irq->vector, dest_vcpus,
dest_vcpu_bitmap, KVM_MAX_VCPUS);
lowest = kvm_get_vcpu(kvm, idx);
}
if (lowest)
r = kvm_apic_set_irq(lowest, irq, dest_map);
return r;
}
static void kvm_msi_to_lapic_irq(struct kvm *kvm,
struct kvm_kernel_irq_routing_entry *e,
struct kvm_lapic_irq *irq)
@ -411,34 +354,6 @@ int kvm_set_routing_entry(struct kvm *kvm,
return 0;
}
bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
struct kvm_vcpu **dest_vcpu)
{
int r = 0;
unsigned long i;
struct kvm_vcpu *vcpu;
if (kvm_intr_is_single_vcpu_fast(kvm, irq, dest_vcpu))
return true;
kvm_for_each_vcpu(i, vcpu, kvm) {
if (!kvm_apic_present(vcpu))
continue;
if (!kvm_apic_match_dest(vcpu, NULL, irq->shorthand,
irq->dest_id, irq->dest_mode))
continue;
if (++r == 2)
return false;
*dest_vcpu = vcpu;
}
return r == 1;
}
EXPORT_SYMBOL_GPL(kvm_intr_is_single_vcpu);
void kvm_scan_ioapic_irq(struct kvm_vcpu *vcpu, u32 dest_id, u16 dest_mode,
u8 vector, unsigned long *ioapic_handled_vectors)
{


@ -121,8 +121,4 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu);
int apic_has_pending_timer(struct kvm_vcpu *vcpu);
int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
struct kvm_lapic_irq *irq,
struct dest_map *dest_map);
#endif


@ -7,7 +7,8 @@
#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS | X86_CR0_WP)
#define KVM_POSSIBLE_CR4_GUEST_BITS \
(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
| X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE)
| X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE \
| X86_CR4_CET)
#define X86_CR0_PDPTR_BITS (X86_CR0_CD | X86_CR0_NW | X86_CR0_PG)
#define X86_CR4_TLBFLUSH_BITS (X86_CR4_PGE | X86_CR4_PCIDE | X86_CR4_PAE | X86_CR4_SMEP)


@ -235,7 +235,6 @@ struct x86_emulate_ops {
void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);
bool (*is_smm)(struct x86_emulate_ctxt *ctxt);
bool (*is_guest_mode)(struct x86_emulate_ctxt *ctxt);
int (*leave_smm)(struct x86_emulate_ctxt *ctxt);
void (*triple_fault)(struct x86_emulate_ctxt *ctxt);
int (*set_xcr)(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr);
@ -521,7 +520,7 @@ bool x86_page_table_writing_insn(struct x86_emulate_ctxt *ctxt);
#define EMULATION_RESTART 1
#define EMULATION_INTERCEPTED 2
void init_decode_cache(struct x86_emulate_ctxt *ctxt);
int x86_emulate_insn(struct x86_emulate_ctxt *ctxt);
int x86_emulate_insn(struct x86_emulate_ctxt *ctxt, bool check_intercepts);
int emulator_task_switch(struct x86_emulate_ctxt *ctxt,
u16 tss_selector, int idt_index, int reason,
bool has_error_code, u32 error_code);


@ -101,13 +101,13 @@ int hv_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, gfn_t nr_pages)
return __hv_flush_remote_tlbs_range(kvm, &range);
}
EXPORT_SYMBOL_GPL(hv_flush_remote_tlbs_range);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(hv_flush_remote_tlbs_range);
int hv_flush_remote_tlbs(struct kvm *kvm)
{
return __hv_flush_remote_tlbs_range(kvm, NULL);
}
EXPORT_SYMBOL_GPL(hv_flush_remote_tlbs);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(hv_flush_remote_tlbs);
void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
{
@ -121,4 +121,4 @@ void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
spin_unlock(&kvm_arch->hv_root_tdp_lock);
}
}
EXPORT_SYMBOL_GPL(hv_track_root_tdp);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(hv_track_root_tdp);


@ -74,6 +74,10 @@ module_param(lapic_timer_advance, bool, 0444);
#define LAPIC_TIMER_ADVANCE_NS_MAX 5000
/* step-by-step approximation to mitigate fluctuation */
#define LAPIC_TIMER_ADVANCE_ADJUST_STEP 8
static bool __read_mostly vector_hashing_enabled = true;
module_param_named(vector_hashing, vector_hashing_enabled, bool, 0444);
static int kvm_lapic_msr_read(struct kvm_lapic *apic, u32 reg, u64 *data);
static int kvm_lapic_msr_write(struct kvm_lapic *apic, u32 reg, u64 data);
@ -102,7 +106,7 @@ bool kvm_apic_pending_eoi(struct kvm_vcpu *vcpu, int vector)
}
__read_mostly DEFINE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
EXPORT_SYMBOL_GPL(kvm_has_noapic_vcpu);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_has_noapic_vcpu);
__read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_hw_disabled, HZ);
__read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_sw_disabled, HZ);
@ -130,7 +134,7 @@ static bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu)
(kvm_mwait_in_guest(vcpu->kvm) || kvm_hlt_in_guest(vcpu->kvm));
}
bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu)
static bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu)
{
return kvm_x86_ops.set_hv_timer
&& !(kvm_mwait_in_guest(vcpu->kvm) ||
@ -642,7 +646,7 @@ bool __kvm_apic_update_irr(unsigned long *pir, void *regs, int *max_irr)
return ((max_updated_irr != -1) &&
(max_updated_irr == *max_irr));
}
EXPORT_SYMBOL_GPL(__kvm_apic_update_irr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_apic_update_irr);
bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, unsigned long *pir, int *max_irr)
{
@ -653,7 +657,7 @@ bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, unsigned long *pir, int *max_irr
apic->irr_pending = true;
return irr_updated;
}
EXPORT_SYMBOL_GPL(kvm_apic_update_irr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_update_irr);
static inline int apic_search_irr(struct kvm_lapic *apic)
{
@ -693,7 +697,7 @@ void kvm_apic_clear_irr(struct kvm_vcpu *vcpu, int vec)
{
apic_clear_irr(vec, vcpu->arch.apic);
}
EXPORT_SYMBOL_GPL(kvm_apic_clear_irr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_clear_irr);
static void *apic_vector_to_isr(int vec, struct kvm_lapic *apic)
{
@ -775,7 +779,7 @@ void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu)
kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
}
EXPORT_SYMBOL_GPL(kvm_apic_update_hwapic_isr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_update_hwapic_isr);
int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
{
@ -786,7 +790,7 @@ int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
*/
return apic_find_highest_irr(vcpu->arch.apic);
}
EXPORT_SYMBOL_GPL(kvm_lapic_find_highest_irr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_find_highest_irr);
static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
int vector, int level, int trig_mode,
@ -950,7 +954,7 @@ void kvm_apic_update_ppr(struct kvm_vcpu *vcpu)
{
apic_update_ppr(vcpu->arch.apic);
}
EXPORT_SYMBOL_GPL(kvm_apic_update_ppr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_update_ppr);
static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
{
@ -1061,21 +1065,14 @@ bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
return false;
}
}
EXPORT_SYMBOL_GPL(kvm_apic_match_dest);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_match_dest);
int kvm_vector_to_index(u32 vector, u32 dest_vcpus,
const unsigned long *bitmap, u32 bitmap_size)
static int kvm_vector_to_index(u32 vector, u32 dest_vcpus,
const unsigned long *bitmap, u32 bitmap_size)
{
u32 mod;
int i, idx = -1;
mod = vector % dest_vcpus;
for (i = 0; i <= mod; i++) {
idx = find_next_bit(bitmap, bitmap_size, idx + 1);
BUG_ON(idx == bitmap_size);
}
int idx = find_nth_bit(bitmap, bitmap_size, vector % dest_vcpus);
BUG_ON(idx >= bitmap_size);
return idx;
}
@ -1106,6 +1103,16 @@ static bool kvm_apic_is_broadcast_dest(struct kvm *kvm, struct kvm_lapic **src,
return false;
}
static bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)
{
return (irq->delivery_mode == APIC_DM_LOWEST || irq->msi_redir_hint);
}
static int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2)
{
return vcpu1->arch.apic_arb_prio - vcpu2->arch.apic_arb_prio;
}
/* Return true if the interrupt can be handled by using *bitmap as index mask
* for valid destinations in *dst array.
* Return false if kvm_apic_map_get_dest_lapic did nothing useful.
@ -1149,7 +1156,7 @@ static inline bool kvm_apic_map_get_dest_lapic(struct kvm *kvm,
if (!kvm_lowest_prio_delivery(irq))
return true;
if (!kvm_vector_hashing_enabled()) {
if (!vector_hashing_enabled) {
lowest = -1;
for_each_set_bit(i, bitmap, 16) {
if (!(*dst)[i])
@ -1230,8 +1237,9 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
* interrupt.
* - Otherwise, use remapped mode to inject the interrupt.
*/
bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq,
struct kvm_vcpu **dest_vcpu)
static bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm,
struct kvm_lapic_irq *irq,
struct kvm_vcpu **dest_vcpu)
{
struct kvm_apic_map *map;
unsigned long bitmap;
@ -1258,6 +1266,91 @@ bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq,
return ret;
}
bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
struct kvm_vcpu **dest_vcpu)
{
int r = 0;
unsigned long i;
struct kvm_vcpu *vcpu;
if (kvm_intr_is_single_vcpu_fast(kvm, irq, dest_vcpu))
return true;
kvm_for_each_vcpu(i, vcpu, kvm) {
if (!kvm_apic_present(vcpu))
continue;
if (!kvm_apic_match_dest(vcpu, NULL, irq->shorthand,
irq->dest_id, irq->dest_mode))
continue;
if (++r == 2)
return false;
*dest_vcpu = vcpu;
}
return r == 1;
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_intr_is_single_vcpu);
int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
struct kvm_lapic_irq *irq, struct dest_map *dest_map)
{
int r = -1;
struct kvm_vcpu *vcpu, *lowest = NULL;
unsigned long i, dest_vcpu_bitmap[BITS_TO_LONGS(KVM_MAX_VCPUS)];
unsigned int dest_vcpus = 0;
if (kvm_irq_delivery_to_apic_fast(kvm, src, irq, &r, dest_map))
return r;
if (irq->dest_mode == APIC_DEST_PHYSICAL &&
irq->dest_id == 0xff && kvm_lowest_prio_delivery(irq)) {
pr_info("apic: phys broadcast and lowest prio\n");
irq->delivery_mode = APIC_DM_FIXED;
}
memset(dest_vcpu_bitmap, 0, sizeof(dest_vcpu_bitmap));
kvm_for_each_vcpu(i, vcpu, kvm) {
if (!kvm_apic_present(vcpu))
continue;
if (!kvm_apic_match_dest(vcpu, src, irq->shorthand,
irq->dest_id, irq->dest_mode))
continue;
if (!kvm_lowest_prio_delivery(irq)) {
if (r < 0)
r = 0;
r += kvm_apic_set_irq(vcpu, irq, dest_map);
} else if (kvm_apic_sw_enabled(vcpu->arch.apic)) {
if (!vector_hashing_enabled) {
if (!lowest)
lowest = vcpu;
else if (kvm_apic_compare_prio(vcpu, lowest) < 0)
lowest = vcpu;
} else {
__set_bit(i, dest_vcpu_bitmap);
dest_vcpus++;
}
}
}
if (dest_vcpus != 0) {
int idx = kvm_vector_to_index(irq->vector, dest_vcpus,
dest_vcpu_bitmap, KVM_MAX_VCPUS);
lowest = kvm_get_vcpu(kvm, idx);
}
if (lowest)
r = kvm_apic_set_irq(lowest, irq, dest_map);
return r;
}
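For the lowest-priority delivery path above, here is a hedged user-space sketch of the two selection strategies: without vector hashing the destination with the smallest arbitration priority wins, with hashing the (vector % number-of-destinations)-th matching vCPU wins. The IDs, priorities, and vector are invented for illustration.

#include <stdio.h>

struct dest { int vcpu_id; int apic_arb_prio; };

static int pick_lowest_prio(const struct dest *d, int n)
{
        int lowest = 0;

        for (int i = 1; i < n; i++)
                if (d[i].apic_arb_prio - d[lowest].apic_arb_prio < 0)
                        lowest = i;
        return d[lowest].vcpu_id;
}

static int pick_hashed(const struct dest *d, int n, int vector)
{
        return d[vector % n].vcpu_id;   /* hash over the matching set */
}

int main(void)
{
        struct dest dests[] = { { 0, 7 }, { 1, 3 }, { 2, 9 } };

        printf("no hashing: vCPU %d\n", pick_lowest_prio(dests, 3));
        printf("hashing, vector 0x32: vCPU %d\n", pick_hashed(dests, 3, 0x32));
        return 0;
}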
/*
* Add a pending IRQ into lapic.
* Return 1 if successfully added and 0 if discarded.
@ -1401,11 +1494,6 @@ void kvm_bitmap_or_dest_vcpus(struct kvm *kvm, struct kvm_lapic_irq *irq,
rcu_read_unlock();
}
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2)
{
return vcpu1->arch.apic_arb_prio - vcpu2->arch.apic_arb_prio;
}
static bool kvm_ioapic_handles_vector(struct kvm_lapic *apic, int vector)
{
return test_bit(vector, apic->vcpu->arch.ioapic_handled_vectors);
@ -1481,32 +1569,38 @@ void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector)
kvm_ioapic_send_eoi(apic, vector);
kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
}
EXPORT_SYMBOL_GPL(kvm_apic_set_eoi_accelerated);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_set_eoi_accelerated);
static void kvm_icr_to_lapic_irq(struct kvm_lapic *apic, u32 icr_low,
u32 icr_high, struct kvm_lapic_irq *irq)
{
/* KVM has no delay and should always clear the BUSY/PENDING flag. */
WARN_ON_ONCE(icr_low & APIC_ICR_BUSY);
irq->vector = icr_low & APIC_VECTOR_MASK;
irq->delivery_mode = icr_low & APIC_MODE_MASK;
irq->dest_mode = icr_low & APIC_DEST_MASK;
irq->level = (icr_low & APIC_INT_ASSERT) != 0;
irq->trig_mode = icr_low & APIC_INT_LEVELTRIG;
irq->shorthand = icr_low & APIC_SHORT_MASK;
irq->msi_redir_hint = false;
if (apic_x2apic_mode(apic))
irq->dest_id = icr_high;
else
irq->dest_id = GET_XAPIC_DEST_FIELD(icr_high);
}
void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high)
{
struct kvm_lapic_irq irq;
/* KVM has no delay and should always clear the BUSY/PENDING flag. */
WARN_ON_ONCE(icr_low & APIC_ICR_BUSY);
irq.vector = icr_low & APIC_VECTOR_MASK;
irq.delivery_mode = icr_low & APIC_MODE_MASK;
irq.dest_mode = icr_low & APIC_DEST_MASK;
irq.level = (icr_low & APIC_INT_ASSERT) != 0;
irq.trig_mode = icr_low & APIC_INT_LEVELTRIG;
irq.shorthand = icr_low & APIC_SHORT_MASK;
irq.msi_redir_hint = false;
if (apic_x2apic_mode(apic))
irq.dest_id = icr_high;
else
irq.dest_id = GET_XAPIC_DEST_FIELD(icr_high);
kvm_icr_to_lapic_irq(apic, icr_low, icr_high, &irq);
trace_kvm_apic_ipi(icr_low, irq.dest_id);
kvm_irq_delivery_to_apic(apic->vcpu->kvm, apic, &irq, NULL);
}
EXPORT_SYMBOL_GPL(kvm_apic_send_ipi);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_send_ipi);
static u32 apic_get_tmcct(struct kvm_lapic *apic)
{
@ -1623,7 +1717,7 @@ u64 kvm_lapic_readable_reg_mask(struct kvm_lapic *apic)
return valid_reg_mask;
}
EXPORT_SYMBOL_GPL(kvm_lapic_readable_reg_mask);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_readable_reg_mask);
static int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
void *data)
@ -1864,7 +1958,7 @@ void kvm_wait_lapic_expire(struct kvm_vcpu *vcpu)
lapic_timer_int_injected(vcpu))
__kvm_wait_lapic_expire(vcpu);
}
EXPORT_SYMBOL_GPL(kvm_wait_lapic_expire);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_wait_lapic_expire);
static void kvm_apic_inject_pending_timer_irqs(struct kvm_lapic *apic)
{
@ -2178,7 +2272,7 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
out:
preempt_enable();
}
EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_expired_hv_timer);
void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu)
{
@ -2431,11 +2525,11 @@ void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
{
kvm_lapic_reg_write(vcpu->arch.apic, APIC_EOI, 0);
}
EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lapic_set_eoi);
#define X2APIC_ICR_RESERVED_BITS (GENMASK_ULL(31, 20) | GENMASK_ULL(17, 16) | BIT(13))
int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
static int __kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data, bool fast)
{
if (data & X2APIC_ICR_RESERVED_BITS)
return 1;
@ -2450,7 +2544,20 @@ int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
*/
data &= ~APIC_ICR_BUSY;
kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
if (fast) {
struct kvm_lapic_irq irq;
int ignored;
kvm_icr_to_lapic_irq(apic, (u32)data, (u32)(data >> 32), &irq);
if (!kvm_irq_delivery_to_apic_fast(apic->vcpu->kvm, apic, &irq,
&ignored, NULL))
return -EWOULDBLOCK;
trace_kvm_apic_ipi((u32)data, irq.dest_id);
} else {
kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
}
if (kvm_x86_ops.x2apic_icr_is_split) {
kvm_lapic_set_reg(apic, APIC_ICR, data);
kvm_lapic_set_reg(apic, APIC_ICR2, data >> 32);
@ -2461,6 +2568,16 @@ int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
return 0;
}
static int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
{
return __kvm_x2apic_icr_write(apic, data, false);
}
int kvm_x2apic_icr_write_fast(struct kvm_lapic *apic, u64 data)
{
return __kvm_x2apic_icr_write(apic, data, true);
}
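A small sketch of the fast/slow split introduced here, under the assumption that the fast variant simply reports -EWOULDBLOCK when the optimized delivery map can't be used and the caller then retries on the full path; the function names and ICR value below are illustrative only.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static int icr_write_fast(unsigned long long data, bool fast_map_usable)
{
        (void)data;
        if (!fast_map_usable)
                return -EWOULDBLOCK;    /* caller must fall back */
        /* ... deliver via the optimized destination map ... */
        return 0;
}

static int icr_write_slow(unsigned long long data)
{
        (void)data;
        /* ... full delivery path ... */
        return 0;
}

int main(void)
{
        unsigned long long icr = 0x000000fe000000f1ULL; /* dest 0xfe, vector 0xf1 */
        int r = icr_write_fast(icr, false);

        if (r == -EWOULDBLOCK)
                r = icr_write_slow(icr);
        printf("delivered: %s\n", r ? "no" : "yes");
        return 0;
}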
static u64 kvm_x2apic_icr_read(struct kvm_lapic *apic)
{
if (kvm_x86_ops.x2apic_icr_is_split)
@ -2491,7 +2608,7 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
else
kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset));
}
EXPORT_SYMBOL_GPL(kvm_apic_write_nodecode);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_write_nodecode);
void kvm_free_lapic(struct kvm_vcpu *vcpu)
{
@ -2629,7 +2746,7 @@ int kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value, bool host_initiated)
kvm_recalculate_apic_map(vcpu->kvm);
return 0;
}
EXPORT_SYMBOL_GPL(kvm_apic_set_base);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_set_base);
void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
{
@ -2661,26 +2778,23 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
int kvm_alloc_apic_access_page(struct kvm *kvm)
{
void __user *hva;
int ret = 0;
mutex_lock(&kvm->slots_lock);
guard(mutex)(&kvm->slots_lock);
if (kvm->arch.apic_access_memslot_enabled ||
kvm->arch.apic_access_memslot_inhibited)
goto out;
return 0;
hva = __x86_set_memory_region(kvm, APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
APIC_DEFAULT_PHYS_BASE, PAGE_SIZE);
if (IS_ERR(hva)) {
ret = PTR_ERR(hva);
goto out;
}
if (IS_ERR(hva))
return PTR_ERR(hva);
kvm->arch.apic_access_memslot_enabled = true;
out:
mutex_unlock(&kvm->slots_lock);
return ret;
return 0;
}
EXPORT_SYMBOL_GPL(kvm_alloc_apic_access_page);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_alloc_apic_access_page);
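The conversion above relies on scope-based lock release; as a rough analogue outside the kernel, the sketch below reproduces the idea with GCC/Clang's cleanup attribute and a pthread mutex (the macro and function names are invented, this is not <linux/cleanup.h>).

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t slots_lock = PTHREAD_MUTEX_INITIALIZER;

static void unlock_cleanup(pthread_mutex_t **m)
{
        pthread_mutex_unlock(*m);
}

/* lock now, unlock automatically when the enclosing scope is left */
#define scoped_guard(m) \
        pthread_mutex_t *__guard __attribute__((cleanup(unlock_cleanup))) = \
                (pthread_mutex_lock(m), (m))

static int alloc_resource(int already_enabled)
{
        scoped_guard(&slots_lock);

        if (already_enabled)
                return 0;               /* no "goto out" / explicit unlock */

        /* ... set up the resource while the lock is held ... */
        return 0;
}

int main(void)
{
        printf("%d %d\n", alloc_resource(0), alloc_resource(1));
        return 0;
}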
void kvm_inhibit_apic_access_page(struct kvm_vcpu *vcpu)
{
@ -2944,7 +3058,7 @@ int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu)
__apic_update_ppr(apic, &ppr);
return apic_has_interrupt_for_ppr(apic, ppr);
}
EXPORT_SYMBOL_GPL(kvm_apic_has_interrupt);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_has_interrupt);
int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu)
{
@ -3003,7 +3117,7 @@ void kvm_apic_ack_interrupt(struct kvm_vcpu *vcpu, int vector)
}
}
EXPORT_SYMBOL_GPL(kvm_apic_ack_interrupt);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_apic_ack_interrupt);
static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
struct kvm_lapic_state *s, bool set)


@ -105,7 +105,6 @@ void kvm_apic_set_version(struct kvm_vcpu *vcpu);
void kvm_apic_after_set_mcg_cap(struct kvm_vcpu *vcpu);
bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
int shorthand, unsigned int dest, int dest_mode);
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2);
void kvm_apic_clear_irr(struct kvm_vcpu *vcpu, int vec);
bool __kvm_apic_update_irr(unsigned long *pir, void *regs, int *max_irr);
bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, unsigned long *pir, int *max_irr);
@ -119,6 +118,9 @@ void kvm_inhibit_apic_access_page(struct kvm_vcpu *vcpu);
bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
struct kvm_lapic_irq *irq, int *r, struct dest_map *dest_map);
int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
struct kvm_lapic_irq *irq,
struct dest_map *dest_map);
void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high);
int kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value, bool host_initiated);
@ -137,7 +139,7 @@ int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu);
int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data);
int kvm_x2apic_icr_write_fast(struct kvm_lapic *apic, u64 data);
int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data);
int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
@ -222,12 +224,6 @@ static inline bool kvm_apic_init_sipi_allowed(struct kvm_vcpu *vcpu)
!kvm_x86_call(apic_init_signal_blocked)(vcpu);
}
static inline bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)
{
return (irq->delivery_mode == APIC_DM_LOWEST ||
irq->msi_redir_hint);
}
static inline int kvm_lapic_latched_init(struct kvm_vcpu *vcpu)
{
return lapic_in_kernel(vcpu) && test_bit(KVM_APIC_INIT, &vcpu->arch.apic->pending_events);
@ -240,16 +236,13 @@ void kvm_wait_lapic_expire(struct kvm_vcpu *vcpu);
void kvm_bitmap_or_dest_vcpus(struct kvm *kvm, struct kvm_lapic_irq *irq,
unsigned long *vcpu_bitmap);
bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq,
struct kvm_vcpu **dest_vcpu);
int kvm_vector_to_index(u32 vector, u32 dest_vcpus,
const unsigned long *bitmap, u32 bitmap_size);
bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
struct kvm_vcpu **dest_vcpu);
void kvm_lapic_switch_to_sw_timer(struct kvm_vcpu *vcpu);
void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu);
void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu);
bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu);
void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu);
bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu);
static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
{


@ -212,7 +212,7 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
fault = (mmu->permissions[index] >> pte_access) & 1;
WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
WARN_ON_ONCE(pfec & (PFERR_PK_MASK | PFERR_SS_MASK | PFERR_RSVD_MASK));
if (unlikely(mmu->pkru_mask)) {
u32 pkru_bits, offset;


@ -110,7 +110,7 @@ static bool __ro_after_init tdp_mmu_allowed;
#ifdef CONFIG_X86_64
bool __read_mostly tdp_mmu_enabled = true;
module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444);
EXPORT_SYMBOL_GPL(tdp_mmu_enabled);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(tdp_mmu_enabled);
#endif
static int max_huge_page_level __read_mostly;
@ -776,7 +776,8 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
kvm_flush_remote_tlbs_gfn(kvm, gfn, PG_LEVEL_4K);
}
void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
enum kvm_mmu_type mmu_type)
{
/*
* If it's possible to replace the shadow page with an NX huge page,
@ -790,8 +791,9 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
return;
++kvm->stat.nx_lpage_splits;
++kvm->arch.possible_nx_huge_pages[mmu_type].nr_pages;
list_add_tail(&sp->possible_nx_huge_page_link,
&kvm->arch.possible_nx_huge_pages);
&kvm->arch.possible_nx_huge_pages[mmu_type].pages);
}
static void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
@ -800,7 +802,7 @@ static void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
sp->nx_huge_page_disallowed = true;
if (nx_huge_page_possible)
track_possible_nx_huge_page(kvm, sp);
track_possible_nx_huge_page(kvm, sp, KVM_SHADOW_MMU);
}
static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
@ -819,12 +821,14 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
kvm_mmu_gfn_allow_lpage(slot, gfn);
}
void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
enum kvm_mmu_type mmu_type)
{
if (list_empty(&sp->possible_nx_huge_page_link))
return;
--kvm->stat.nx_lpage_splits;
--kvm->arch.possible_nx_huge_pages[mmu_type].nr_pages;
list_del_init(&sp->possible_nx_huge_page_link);
}
@ -832,7 +836,7 @@ static void unaccount_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
{
sp->nx_huge_page_disallowed = false;
untrack_possible_nx_huge_page(kvm, sp);
untrack_possible_nx_huge_page(kvm, sp, KVM_SHADOW_MMU);
}
static struct kvm_memory_slot *gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu,
@ -3861,7 +3865,7 @@ void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu,
write_unlock(&kvm->mmu_lock);
}
}
EXPORT_SYMBOL_GPL(kvm_mmu_free_roots);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_free_roots);
void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)
{
@ -3888,7 +3892,7 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)
kvm_mmu_free_roots(kvm, mmu, roots_to_free);
}
EXPORT_SYMBOL_GPL(kvm_mmu_free_guest_mode_roots);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_free_guest_mode_roots);
static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
u8 level)
@ -4663,10 +4667,16 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
/*
* Retry the page fault if the gfn hit a memslot that is being deleted
* or moved. This ensures any existing SPTEs for the old memslot will
* be zapped before KVM inserts a new MMIO SPTE for the gfn.
* be zapped before KVM inserts a new MMIO SPTE for the gfn. Punt the
* error to userspace if this is a prefault, as KVM's prefaulting ABI
* doesn't provide the same forward progress guarantees as KVM_RUN.
*/
if (slot->flags & KVM_MEMSLOT_INVALID)
if (slot->flags & KVM_MEMSLOT_INVALID) {
if (fault->prefetch)
return -EAGAIN;
return RET_PF_RETRY;
}
if (slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT) {
/*
@ -4866,7 +4876,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
return r;
}
EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_handle_page_fault);
#ifdef CONFIG_X86_64
static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
@ -4956,7 +4966,7 @@ int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code, u8 *level
return -EIO;
}
}
EXPORT_SYMBOL_GPL(kvm_tdp_map_page);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_tdp_map_page);
long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
struct kvm_pre_fault_memory *range)
@ -5152,7 +5162,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
__clear_sp_write_flooding_count(sp);
}
}
EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_new_pgd);
static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
unsigned int access)
@ -5798,7 +5808,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
shadow_mmu_init_context(vcpu, context, cpu_role, root_role);
kvm_mmu_new_pgd(vcpu, nested_cr3);
}
EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_shadow_npt_mmu);
static union kvm_cpu_role
kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
@ -5852,7 +5862,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
kvm_mmu_new_pgd(vcpu, new_eptp);
}
EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_shadow_ept_mmu);
static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
union kvm_cpu_role cpu_role)
@ -5917,7 +5927,7 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu)
else
init_kvm_softmmu(vcpu, cpu_role);
}
EXPORT_SYMBOL_GPL(kvm_init_mmu);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_mmu);
void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
{
@ -5953,7 +5963,7 @@ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
kvm_mmu_unload(vcpu);
kvm_init_mmu(vcpu);
}
EXPORT_SYMBOL_GPL(kvm_mmu_reset_context);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_reset_context);
int kvm_mmu_load(struct kvm_vcpu *vcpu)
{
@ -5987,7 +5997,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
out:
return r;
}
EXPORT_SYMBOL_GPL(kvm_mmu_load);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_load);
void kvm_mmu_unload(struct kvm_vcpu *vcpu)
{
@ -6049,7 +6059,7 @@ void kvm_mmu_free_obsolete_roots(struct kvm_vcpu *vcpu)
__kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.root_mmu);
__kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.guest_mmu);
}
EXPORT_SYMBOL_GPL(kvm_mmu_free_obsolete_roots);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_free_obsolete_roots);
static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
int *bytes)
@ -6375,7 +6385,7 @@ emulate:
return x86_emulate_instruction(vcpu, cr2_or_gpa, emulation_type, insn,
insn_len);
}
EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_page_fault);
void kvm_mmu_print_sptes(struct kvm_vcpu *vcpu, gpa_t gpa, const char *msg)
{
@ -6391,7 +6401,7 @@ void kvm_mmu_print_sptes(struct kvm_vcpu *vcpu, gpa_t gpa, const char *msg)
pr_cont(", spte[%d] = 0x%llx", level, sptes[level]);
pr_cont("\n");
}
EXPORT_SYMBOL_GPL(kvm_mmu_print_sptes);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_print_sptes);
static void __kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
u64 addr, hpa_t root_hpa)
@ -6457,7 +6467,7 @@ void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
__kvm_mmu_invalidate_addr(vcpu, mmu, addr, mmu->prev_roots[i].hpa);
}
}
EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_addr);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_invalidate_addr);
void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
{
@ -6474,7 +6484,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
kvm_mmu_invalidate_addr(vcpu, vcpu->arch.walk_mmu, gva, KVM_MMU_ROOTS_ALL);
++vcpu->stat.invlpg;
}
EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_invlpg);
void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
@ -6527,7 +6537,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
else
max_huge_page_level = PG_LEVEL_2M;
}
EXPORT_SYMBOL_GPL(kvm_configure_mmu);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_configure_mmu);
static void free_mmu_pages(struct kvm_mmu *mmu)
{
@ -6751,11 +6761,12 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
int kvm_mmu_init_vm(struct kvm *kvm)
{
int r;
int r, i;
kvm->arch.shadow_mmio_value = shadow_mmio_value;
INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
for (i = 0; i < KVM_NR_MMU_TYPES; ++i)
INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages[i].pages);
spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
if (tdp_mmu_enabled) {
@ -7193,7 +7204,7 @@ restart:
return need_tlb_flush;
}
EXPORT_SYMBOL_GPL(kvm_zap_gfn_range);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_zap_gfn_range);
static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
const struct kvm_memory_slot *slot)
@ -7596,19 +7607,64 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
return err;
}
static void kvm_recover_nx_huge_pages(struct kvm *kvm)
static unsigned long nx_huge_pages_to_zap(struct kvm *kvm,
enum kvm_mmu_type mmu_type)
{
unsigned long pages = READ_ONCE(kvm->arch.possible_nx_huge_pages[mmu_type].nr_pages);
unsigned int ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
return ratio ? DIV_ROUND_UP(pages, ratio) : 0;
}
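The helper above boils down to simple ceiling division; a minimal sketch with invented numbers (1000 tracked pages and a ratio of 60, mirroring the usual default of the module parameter) shows how many pages one recovery pass will target.

#include <stdio.h>

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned long possible_nx_pages = 1000;
        unsigned int ratio = 60;        /* nx_huge_pages_recovery_ratio */

        unsigned long to_zap = ratio ? DIV_ROUND_UP(possible_nx_pages, ratio) : 0;

        printf("recover %lu of %lu pages this period\n",
               to_zap, possible_nx_pages);      /* 17 of 1000 */
        return 0;
}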
static bool kvm_mmu_sp_dirty_logging_enabled(struct kvm *kvm,
struct kvm_mmu_page *sp)
{
unsigned long nx_lpage_splits = kvm->stat.nx_lpage_splits;
struct kvm_memory_slot *slot;
int rcu_idx;
/*
* Skip the memslot lookup if dirty tracking can't possibly be enabled,
* as memslot lookups are relatively expensive.
*
* If a memslot update is in progress, reading an incorrect value of
* kvm->nr_memslots_dirty_logging is not a problem: if it is becoming
* zero, KVM will do an unnecessary memslot lookup; if it is becoming
* nonzero, the page will be zapped unnecessarily. Either way, this
* only affects efficiency in racy situations, and not correctness.
*/
if (!atomic_read(&kvm->nr_memslots_dirty_logging))
return false;
slot = __gfn_to_memslot(kvm_memslots_for_spte_role(kvm, sp->role), sp->gfn);
if (WARN_ON_ONCE(!slot))
return false;
return kvm_slot_dirty_track_enabled(slot);
}
static void kvm_recover_nx_huge_pages(struct kvm *kvm,
const enum kvm_mmu_type mmu_type)
{
#ifdef CONFIG_X86_64
const bool is_tdp_mmu = mmu_type == KVM_TDP_MMU;
spinlock_t *tdp_mmu_pages_lock = &kvm->arch.tdp_mmu_pages_lock;
#else
const bool is_tdp_mmu = false;
spinlock_t *tdp_mmu_pages_lock = NULL;
#endif
unsigned long to_zap = nx_huge_pages_to_zap(kvm, mmu_type);
struct list_head *nx_huge_pages;
struct kvm_mmu_page *sp;
unsigned int ratio;
LIST_HEAD(invalid_list);
bool flush = false;
ulong to_zap;
int rcu_idx;
nx_huge_pages = &kvm->arch.possible_nx_huge_pages[mmu_type].pages;
rcu_idx = srcu_read_lock(&kvm->srcu);
write_lock(&kvm->mmu_lock);
if (is_tdp_mmu)
read_lock(&kvm->mmu_lock);
else
write_lock(&kvm->mmu_lock);
/*
* Zapping TDP MMU shadow pages, including the remote TLB flush, must
@ -7617,11 +7673,15 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
*/
rcu_read_lock();
ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
to_zap = ratio ? DIV_ROUND_UP(nx_lpage_splits, ratio) : 0;
for ( ; to_zap; --to_zap) {
if (list_empty(&kvm->arch.possible_nx_huge_pages))
if (is_tdp_mmu)
spin_lock(tdp_mmu_pages_lock);
if (list_empty(nx_huge_pages)) {
if (is_tdp_mmu)
spin_unlock(tdp_mmu_pages_lock);
break;
}
/*
* We use a separate list instead of just using active_mmu_pages
@ -7630,56 +7690,44 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
* the total number of shadow pages. And because the TDP MMU
* doesn't use active_mmu_pages.
*/
sp = list_first_entry(&kvm->arch.possible_nx_huge_pages,
sp = list_first_entry(nx_huge_pages,
struct kvm_mmu_page,
possible_nx_huge_page_link);
WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
WARN_ON_ONCE(!sp->role.direct);
/*
* Unaccount and do not attempt to recover any NX Huge Pages
* that are being dirty tracked, as they would just be faulted
* back in as 4KiB pages. The NX Huge Pages in this slot will be
* recovered, along with all the other huge pages in the slot,
* when dirty logging is disabled.
*
* Since gfn_to_memslot() is relatively expensive, it helps to
* skip it if it the test cannot possibly return true. On the
* other hand, if any memslot has logging enabled, chances are
* good that all of them do, in which case unaccount_nx_huge_page()
* is much cheaper than zapping the page.
*
* If a memslot update is in progress, reading an incorrect value
* of kvm->nr_memslots_dirty_logging is not a problem: if it is
* becoming zero, gfn_to_memslot() will be done unnecessarily; if
* it is becoming nonzero, the page will be zapped unnecessarily.
* Either way, this only affects efficiency in racy situations,
* and not correctness.
*/
slot = NULL;
if (atomic_read(&kvm->nr_memslots_dirty_logging)) {
struct kvm_memslots *slots;
unaccount_nx_huge_page(kvm, sp);
if (is_tdp_mmu)
spin_unlock(tdp_mmu_pages_lock);
/*
* Do not attempt to recover any NX Huge Pages that are being
* dirty tracked, as they would just be faulted back in as 4KiB
* pages. The NX Huge Pages in this slot will be recovered,
* along with all the other huge pages in the slot, when dirty
* logging is disabled.
*/
if (!kvm_mmu_sp_dirty_logging_enabled(kvm, sp)) {
if (is_tdp_mmu)
flush |= kvm_tdp_mmu_zap_possible_nx_huge_page(kvm, sp);
else
kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
slots = kvm_memslots_for_spte_role(kvm, sp->role);
slot = __gfn_to_memslot(slots, sp->gfn);
WARN_ON_ONCE(!slot);
}
if (slot && kvm_slot_dirty_track_enabled(slot))
unaccount_nx_huge_page(kvm, sp);
else if (is_tdp_mmu_page(sp))
flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
else
kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
WARN_ON_ONCE(sp->nx_huge_page_disallowed);
if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
rcu_read_unlock();
cond_resched_rwlock_write(&kvm->mmu_lock);
flush = false;
if (is_tdp_mmu)
cond_resched_rwlock_read(&kvm->mmu_lock);
else
cond_resched_rwlock_write(&kvm->mmu_lock);
flush = false;
rcu_read_lock();
}
}
@ -7687,7 +7735,10 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
rcu_read_unlock();
write_unlock(&kvm->mmu_lock);
if (is_tdp_mmu)
read_unlock(&kvm->mmu_lock);
else
write_unlock(&kvm->mmu_lock);
srcu_read_unlock(&kvm->srcu, rcu_idx);
}
@ -7698,9 +7749,10 @@ static void kvm_nx_huge_page_recovery_worker_kill(void *data)
static bool kvm_nx_huge_page_recovery_worker(void *data)
{
struct kvm *kvm = data;
long remaining_time;
bool enabled;
uint period;
long remaining_time;
int i;
enabled = calc_nx_huge_pages_recovery_period(&period);
if (!enabled)
@ -7715,7 +7767,8 @@ static bool kvm_nx_huge_page_recovery_worker(void *data)
}
__set_current_state(TASK_RUNNING);
kvm_recover_nx_huge_pages(kvm);
for (i = 0; i < KVM_NR_MMU_TYPES; ++i)
kvm_recover_nx_huge_pages(kvm, i);
kvm->arch.nx_huge_page_last = get_jiffies_64();
return true;
}


@ -416,7 +416,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault,
void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
enum kvm_mmu_type mmu_type);
void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
enum kvm_mmu_type mmu_type);
#endif /* __KVM_X86_MMU_INTERNAL_H */


@ -51,6 +51,9 @@
{ PFERR_PRESENT_MASK, "P" }, \
{ PFERR_WRITE_MASK, "W" }, \
{ PFERR_USER_MASK, "U" }, \
{ PFERR_PK_MASK, "PK" }, \
{ PFERR_SS_MASK, "SS" }, \
{ PFERR_SGX_MASK, "SGX" }, \
{ PFERR_RSVD_MASK, "RSVD" }, \
{ PFERR_FETCH_MASK, "F" }


@ -22,7 +22,7 @@
bool __read_mostly enable_mmio_caching = true;
static bool __ro_after_init allow_mmio_caching;
module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
EXPORT_SYMBOL_GPL(enable_mmio_caching);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_mmio_caching);
bool __read_mostly kvm_ad_enabled;
@ -470,13 +470,13 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
shadow_mmio_mask = mmio_mask;
shadow_mmio_access_mask = access_mask;
}
EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_mmio_spte_mask);
void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value)
{
kvm->arch.shadow_mmio_value = mmio_value;
}
EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_value);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_mmio_spte_value);
void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
{
@ -487,7 +487,7 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
shadow_me_value = me_value;
shadow_me_mask = me_mask;
}
EXPORT_SYMBOL_GPL(kvm_mmu_set_me_spte_mask);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_me_spte_mask);
void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
{
@ -513,7 +513,7 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
VMX_EPT_RWX_MASK | VMX_EPT_SUPPRESS_VE_BIT, 0);
}
EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_ept_masks);
void kvm_mmu_reset_all_pte_masks(void)
{


@ -355,7 +355,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
spin_lock(&kvm->arch.tdp_mmu_pages_lock);
sp->nx_huge_page_disallowed = false;
untrack_possible_nx_huge_page(kvm, sp);
untrack_possible_nx_huge_page(kvm, sp, KVM_TDP_MMU);
spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
}
@ -925,23 +925,52 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
rcu_read_unlock();
}
bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
bool kvm_tdp_mmu_zap_possible_nx_huge_page(struct kvm *kvm,
struct kvm_mmu_page *sp)
{
u64 old_spte;
struct tdp_iter iter = {
.old_spte = sp->ptep ? kvm_tdp_mmu_read_spte(sp->ptep) : 0,
.sptep = sp->ptep,
.level = sp->role.level + 1,
.gfn = sp->gfn,
.as_id = kvm_mmu_page_as_id(sp),
};
lockdep_assert_held_read(&kvm->mmu_lock);
if (WARN_ON_ONCE(!is_tdp_mmu_page(sp)))
return false;
/*
* This helper intentionally doesn't allow zapping a root shadow page,
* which doesn't have a parent page table and thus no associated entry.
* Root shadow pages don't have a parent page table and thus no
* associated entry, but they can never be possible NX huge pages.
*/
if (WARN_ON_ONCE(!sp->ptep))
return false;
old_spte = kvm_tdp_mmu_read_spte(sp->ptep);
if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
/*
* Since mmu_lock is held in read mode, it's possible another task has
* already modified the SPTE. Zap the SPTE if and only if the SPTE
* points at the SP's page table, as checking shadow-present isn't
* sufficient, e.g. the SPTE could be replaced by a leaf SPTE, or even
* another SP. Note, spte_to_child_pt() also checks that the SPTE is
* shadow-present, i.e. guards against zapping a frozen SPTE.
*/
if ((tdp_ptep_t)sp->spt != spte_to_child_pt(iter.old_spte, iter.level))
return false;
tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1);
/*
* If a different task modified the SPTE, then it should be impossible
* for the SPTE to still be used for the to-be-zapped SP. Non-leaf
* SPTEs don't have Dirty bits, KVM always sets the Accessed bit when
* creating non-leaf SPTEs, and all other bits are immutable for non-
* leaf SPTEs, i.e. the only legal operations for non-leaf SPTEs are
* zapping and replacement.
*/
if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE)) {
WARN_ON_ONCE((tdp_ptep_t)sp->spt == spte_to_child_pt(iter.old_spte, iter.level));
return false;
}
return true;
}
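Conceptually, the read-lock zap above only succeeds if no other task changed the SPTE first; the sketch below models that as a single C11 compare-and-exchange on a fake SPTE value (the real code goes through tdp_mmu_set_spte_atomic() and additional sanity checks).

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NONPRESENT 0ULL

static bool zap_if_unchanged(_Atomic uint64_t *sptep, uint64_t expected)
{
        /* fails if another task already replaced or zapped the SPTE */
        return atomic_compare_exchange_strong(sptep, &expected, NONPRESENT);
}

int main(void)
{
        _Atomic uint64_t spte = 0xabc000ULL;    /* "points at" our page table */

        printf("first zap:  %d\n", zap_if_unchanged(&spte, 0xabc000ULL));
        printf("second zap: %d\n", zap_if_unchanged(&spte, 0xabc000ULL));
        return 0;
}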
@ -1303,7 +1332,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
fault->req_level >= iter.level) {
spin_lock(&kvm->arch.tdp_mmu_pages_lock);
if (sp->nx_huge_page_disallowed)
track_possible_nx_huge_page(kvm, sp);
track_possible_nx_huge_page(kvm, sp, KVM_TDP_MMU);
spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
}
}
@ -1953,7 +1982,7 @@ bool kvm_tdp_mmu_gpa_is_mapped(struct kvm_vcpu *vcpu, u64 gpa)
spte = sptes[leaf];
return is_shadow_present_pte(spte) && is_last_spte(spte, leaf);
}
EXPORT_SYMBOL_GPL(kvm_tdp_mmu_gpa_is_mapped);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_tdp_mmu_gpa_is_mapped);
/*
* Returns the last level spte pointer of the shadow page walk for the given


@ -64,7 +64,8 @@ static inline struct kvm_mmu_page *tdp_mmu_get_root(struct kvm_vcpu *vcpu,
}
bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
bool kvm_tdp_mmu_zap_possible_nx_huge_page(struct kvm *kvm,
struct kvm_mmu_page *sp);
void kvm_tdp_mmu_zap_all(struct kvm *kvm);
void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm,
enum kvm_tdp_mmu_root_types root_types);


@ -26,11 +26,18 @@
/* This is enough to filter the vast majority of currently defined events. */
#define KVM_PMU_EVENT_FILTER_MAX_EVENTS 300
struct x86_pmu_capability __read_mostly kvm_pmu_cap;
EXPORT_SYMBOL_GPL(kvm_pmu_cap);
/* Unadultered PMU capabilities of the host, i.e. of hardware. */
static struct x86_pmu_capability __read_mostly kvm_host_pmu;
struct kvm_pmu_emulated_event_selectors __read_mostly kvm_pmu_eventsel;
EXPORT_SYMBOL_GPL(kvm_pmu_eventsel);
/* KVM's PMU capabilities, i.e. the intersection of KVM and hardware support. */
struct x86_pmu_capability __read_mostly kvm_pmu_cap;
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_cap);
struct kvm_pmu_emulated_event_selectors {
u64 INSTRUCTIONS_RETIRED;
u64 BRANCH_INSTRUCTIONS_RETIRED;
};
static struct kvm_pmu_emulated_event_selectors __read_mostly kvm_pmu_eventsel;
/* Precise Distribution of Instructions Retired (PDIR) */
static const struct x86_cpu_id vmx_pebs_pdir_cpu[] = {
@ -96,6 +103,54 @@ void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops)
#undef __KVM_X86_PMU_OP
}
void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
{
bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS;
perf_get_x86_pmu_capability(&kvm_host_pmu);
/*
* Hybrid PMUs don't play nice with virtualization without careful
* configuration by userspace, and KVM's APIs for reporting supported
* vPMU features do not account for hybrid PMUs. Disable vPMU support
* for hybrid PMUs until KVM gains a way to let userspace opt-in.
*/
if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))
enable_pmu = false;
if (enable_pmu) {
/*
* WARN if perf did NOT disable hardware PMU if the number of
* architecturally required GP counters aren't present, i.e. if
* there are a non-zero number of counters, but fewer than what
* is architecturally required.
*/
if (!kvm_host_pmu.num_counters_gp ||
WARN_ON_ONCE(kvm_host_pmu.num_counters_gp < min_nr_gp_ctrs))
enable_pmu = false;
else if (is_intel && !kvm_host_pmu.version)
enable_pmu = false;
}
if (!enable_pmu) {
memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
return;
}
memcpy(&kvm_pmu_cap, &kvm_host_pmu, sizeof(kvm_host_pmu));
kvm_pmu_cap.version = min(kvm_pmu_cap.version, 2);
kvm_pmu_cap.num_counters_gp = min(kvm_pmu_cap.num_counters_gp,
pmu_ops->MAX_NR_GP_COUNTERS);
kvm_pmu_cap.num_counters_fixed = min(kvm_pmu_cap.num_counters_fixed,
KVM_MAX_NR_FIXED_COUNTERS);
kvm_pmu_eventsel.INSTRUCTIONS_RETIRED =
perf_get_hw_event_config(PERF_COUNT_HW_INSTRUCTIONS);
kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED =
perf_get_hw_event_config(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
}
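To make the clamping in the helper above concrete, here is a tiny sketch with invented host and KVM limits standing in for kvm_host_pmu, MAX_NR_GP_COUNTERS, and KVM_MAX_NR_FIXED_COUNTERS.

#include <stdio.h>

#define MIN(a, b)       ((a) < (b) ? (a) : (b))

int main(void)
{
        int host_version = 5, host_gp = 8, host_fixed = 4;    /* from hardware */
        int kvm_max_gp = 6, kvm_max_fixed = 3;                /* assumed KVM caps */

        int version = MIN(host_version, 2);     /* KVM advertises at most v2 */
        int gp      = MIN(host_gp, kvm_max_gp);
        int fixed   = MIN(host_fixed, kvm_max_fixed);

        printf("vPMU: version %d, %d GP counters, %d fixed counters\n",
               version, gp, fixed);
        return 0;
}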
static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
{
struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@ -318,7 +373,7 @@ void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
pmc->counter &= pmc_bitmask(pmc);
pmc_update_sample_period(pmc);
}
EXPORT_SYMBOL_GPL(pmc_write_counter);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(pmc_write_counter);
static int filter_cmp(const void *pa, const void *pb, u64 mask)
{
@ -426,7 +481,7 @@ static bool is_fixed_event_allowed(struct kvm_x86_pmu_event_filter *filter,
return true;
}
static bool check_pmu_event_filter(struct kvm_pmc *pmc)
static bool pmc_is_event_allowed(struct kvm_pmc *pmc)
{
struct kvm_x86_pmu_event_filter *filter;
struct kvm *kvm = pmc->vcpu->kvm;
@ -441,12 +496,6 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
return is_fixed_event_allowed(filter, pmc->idx);
}
static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
{
return pmc_is_globally_enabled(pmc) && pmc_speculative_in_use(pmc) &&
check_pmu_event_filter(pmc);
}
static int reprogram_counter(struct kvm_pmc *pmc)
{
struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@ -457,7 +506,8 @@ static int reprogram_counter(struct kvm_pmc *pmc)
emulate_overflow = pmc_pause_counter(pmc);
if (!pmc_event_is_allowed(pmc))
if (!pmc_is_globally_enabled(pmc) || !pmc_is_locally_enabled(pmc) ||
!pmc_is_event_allowed(pmc))
return 0;
if (emulate_overflow)
@ -492,6 +542,47 @@ static int reprogram_counter(struct kvm_pmc *pmc)
eventsel & ARCH_PERFMON_EVENTSEL_INT);
}
static bool pmc_is_event_match(struct kvm_pmc *pmc, u64 eventsel)
{
/*
* Ignore checks for edge detect (all events currently emulated by KVM
* are always rising edges), pin control (unsupported by modern CPUs),
* and counter mask and its invert flag (KVM doesn't emulate multiple
* events in a single clock cycle).
*
* Note, the uppermost nibble of AMD's mask overlaps Intel's IN_TX (bit
* 32) and IN_TXCP (bit 33), as well as two reserved bits (bits 35:34).
* Checking the "in HLE/RTM transaction" flags is correct as the vCPU
* can't be in a transaction if KVM is emulating an instruction.
*
* Checking the reserved bits might be wrong if they are defined in the
* future, but so could ignoring them, so do the simple thing for now.
*/
return !((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB);
}
void kvm_pmu_recalc_pmc_emulation(struct kvm_pmu *pmu, struct kvm_pmc *pmc)
{
bitmap_clear(pmu->pmc_counting_instructions, pmc->idx, 1);
bitmap_clear(pmu->pmc_counting_branches, pmc->idx, 1);
/*
* Do NOT consult the PMU event filters, as the filters must be checked
* at the time of emulation to ensure KVM uses fresh information, e.g.
* omitting a PMC from a bitmap could result in a missed event if the
* filter is changed to allow counting the event.
*/
if (!pmc_is_locally_enabled(pmc))
return;
if (pmc_is_event_match(pmc, kvm_pmu_eventsel.INSTRUCTIONS_RETIRED))
bitmap_set(pmu->pmc_counting_instructions, pmc->idx, 1);
if (pmc_is_event_match(pmc, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED))
bitmap_set(pmu->pmc_counting_branches, pmc->idx, 1);
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_recalc_pmc_emulation);
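The matching rule above compares only the event select and unit mask; the sketch below uses an XOR-and-mask test with an illustrative mask value and event encoding (not copied from the kernel headers), so treat the constants as assumptions.

#include <stdint.h>
#include <stdio.h>

/* assumed stand-in for AMD64_RAW_EVENT_MASK_NB: event select + unit mask */
#define EVENTSEL_MATCH_MASK     0x0f0000ffffULL

static int events_match(uint64_t a, uint64_t b)
{
        /* bits outside the mask (enable, edge detect, cmask, ...) are ignored */
        return !((a ^ b) & EVENTSEL_MATCH_MASK);
}

int main(void)
{
        uint64_t arch_event   = 0xc0;                 /* e.g. instructions retired */
        uint64_t pmc_eventsel = (1ULL << 22) | 0xc0;  /* same event, enable bit set */

        printf("counts the emulated event: %d\n",
               events_match(pmc_eventsel, arch_event));
        return 0;
}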
void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
{
DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
@ -527,6 +618,9 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
*/
if (unlikely(pmu->need_cleanup))
kvm_pmu_cleanup(vcpu);
kvm_for_each_pmc(pmu, pmc, bit, bitmap)
kvm_pmu_recalc_pmc_emulation(pmu, pmc);
}
int kvm_pmu_check_rdpmc_early(struct kvm_vcpu *vcpu, unsigned int idx)
@ -650,6 +744,7 @@ int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
msr_info->data = pmu->global_ctrl;
break;
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET:
case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
msr_info->data = 0;
break;
@ -711,6 +806,10 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (!msr_info->host_initiated)
pmu->global_status &= ~data;
break;
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET:
if (!msr_info->host_initiated)
pmu->global_status |= data & ~pmu->global_status_rsvd;
break;
default:
kvm_pmu_mark_pmc_in_use(vcpu, msr_info->index);
return kvm_pmu_call(set_msr)(vcpu, msr_info);
@ -789,6 +888,10 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
*/
if (kvm_pmu_has_perf_global_ctrl(pmu) && pmu->nr_arch_gp_counters)
pmu->global_ctrl = GENMASK_ULL(pmu->nr_arch_gp_counters - 1, 0);
bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
bitmap_set(pmu->all_valid_pmc_idx, KVM_FIXED_PMC_BASE_IDX,
pmu->nr_arch_fixed_counters);
}
void kvm_pmu_init(struct kvm_vcpu *vcpu)
@ -813,7 +916,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
pmu->pmc_in_use, X86_PMC_IDX_MAX);
kvm_for_each_pmc(pmu, pmc, i, bitmask) {
if (pmc->perf_event && !pmc_speculative_in_use(pmc))
if (pmc->perf_event && !pmc_is_locally_enabled(pmc))
pmc_stop_counter(pmc);
}
@ -860,44 +963,46 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
select_user;
}
void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
static void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu,
const unsigned long *event_pmcs)
{
DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
struct kvm_pmc *pmc;
int i;
int i, idx;
BUILD_BUG_ON(sizeof(pmu->global_ctrl) * BITS_PER_BYTE != X86_PMC_IDX_MAX);
if (bitmap_empty(event_pmcs, X86_PMC_IDX_MAX))
return;
if (!kvm_pmu_has_perf_global_ctrl(pmu))
bitmap_copy(bitmap, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
else if (!bitmap_and(bitmap, pmu->all_valid_pmc_idx,
bitmap_copy(bitmap, event_pmcs, X86_PMC_IDX_MAX);
else if (!bitmap_and(bitmap, event_pmcs,
(unsigned long *)&pmu->global_ctrl, X86_PMC_IDX_MAX))
return;
idx = srcu_read_lock(&vcpu->kvm->srcu);
kvm_for_each_pmc(pmu, pmc, i, bitmap) {
/*
* Ignore checks for edge detect (all events currently emulated
* but KVM are always rising edges), pin control (unsupported
* by modern CPUs), and counter mask and its invert flag (KVM
* doesn't emulate multiple events in a single clock cycle).
*
* Note, the uppermost nibble of AMD's mask overlaps Intel's
* IN_TX (bit 32) and IN_TXCP (bit 33), as well as two reserved
* bits (bits 35:34). Checking the "in HLE/RTM transaction"
* flags is correct as the vCPU can't be in a transaction if
* KVM is emulating an instruction. Checking the reserved bits
* might be wrong if they are defined in the future, but so
* could ignoring them, so do the simple thing for now.
*/
if (((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB) ||
!pmc_event_is_allowed(pmc) || !cpl_is_matched(pmc))
if (!pmc_is_event_allowed(pmc) || !cpl_is_matched(pmc))
continue;
kvm_pmu_incr_counter(pmc);
}
srcu_read_unlock(&vcpu->kvm->srcu, idx);
}
EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
void kvm_pmu_instruction_retired(struct kvm_vcpu *vcpu)
{
kvm_pmu_trigger_event(vcpu, vcpu_to_pmu(vcpu)->pmc_counting_instructions);
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_instruction_retired);
void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
{
kvm_pmu_trigger_event(vcpu, vcpu_to_pmu(vcpu)->pmc_counting_branches);
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
static bool is_masked_filter_valid(const struct kvm_x86_pmu_event_filter *filter)
{


@ -23,11 +23,6 @@
#define KVM_FIXED_PMC_BASE_IDX INTEL_PMC_IDX_FIXED
struct kvm_pmu_emulated_event_selectors {
u64 INSTRUCTIONS_RETIRED;
u64 BRANCH_INSTRUCTIONS_RETIRED;
};
struct kvm_pmu_ops {
struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
unsigned int idx, u64 *mask);
@ -165,7 +160,7 @@ static inline struct kvm_pmc *get_fixed_pmc(struct kvm_pmu *pmu, u32 msr)
return NULL;
}
static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
static inline bool pmc_is_locally_enabled(struct kvm_pmc *pmc)
{
struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@ -178,57 +173,15 @@ static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
}
extern struct x86_pmu_capability kvm_pmu_cap;
extern struct kvm_pmu_emulated_event_selectors kvm_pmu_eventsel;
static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
{
bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS;
void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops);
/*
* Hybrid PMUs don't play nice with virtualization without careful
* configuration by userspace, and KVM's APIs for reporting supported
* vPMU features do not account for hybrid PMUs. Disable vPMU support
* for hybrid PMUs until KVM gains a way to let userspace opt-in.
*/
if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))
enable_pmu = false;
if (enable_pmu) {
perf_get_x86_pmu_capability(&kvm_pmu_cap);
/*
* WARN if perf did NOT disable hardware PMU if the number of
* architecturally required GP counters aren't present, i.e. if
* there are a non-zero number of counters, but fewer than what
* is architecturally required.
*/
if (!kvm_pmu_cap.num_counters_gp ||
WARN_ON_ONCE(kvm_pmu_cap.num_counters_gp < min_nr_gp_ctrs))
enable_pmu = false;
else if (is_intel && !kvm_pmu_cap.version)
enable_pmu = false;
}
if (!enable_pmu) {
memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
return;
}
kvm_pmu_cap.version = min(kvm_pmu_cap.version, 2);
kvm_pmu_cap.num_counters_gp = min(kvm_pmu_cap.num_counters_gp,
pmu_ops->MAX_NR_GP_COUNTERS);
kvm_pmu_cap.num_counters_fixed = min(kvm_pmu_cap.num_counters_fixed,
KVM_MAX_NR_FIXED_COUNTERS);
kvm_pmu_eventsel.INSTRUCTIONS_RETIRED =
perf_get_hw_event_config(PERF_COUNT_HW_INSTRUCTIONS);
kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED =
perf_get_hw_event_config(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
}
void kvm_pmu_recalc_pmc_emulation(struct kvm_pmu *pmu, struct kvm_pmc *pmc);
static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
{
kvm_pmu_recalc_pmc_emulation(pmc_to_pmu(pmc), pmc);
set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
}
@ -272,7 +225,8 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu);
void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
void kvm_pmu_instruction_retired(struct kvm_vcpu *vcpu);
void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu);
bool is_vmware_backdoor_pmc(u32 pmc_idx);


@ -25,6 +25,9 @@
#define KVM_X86_FEATURE_SGX2 KVM_X86_FEATURE(CPUID_12_EAX, 1)
#define KVM_X86_FEATURE_SGX_EDECCSSA KVM_X86_FEATURE(CPUID_12_EAX, 11)
/* Intel-defined sub-features, CPUID level 0x00000007:1 (ECX) */
#define KVM_X86_FEATURE_MSR_IMM KVM_X86_FEATURE(CPUID_7_1_ECX, 5)
/* Intel-defined sub-features, CPUID level 0x00000007:1 (EDX) */
#define X86_FEATURE_AVX_VNNI_INT8 KVM_X86_FEATURE(CPUID_7_1_EDX, 4)
#define X86_FEATURE_AVX_NE_CONVERT KVM_X86_FEATURE(CPUID_7_1_EDX, 5)
@ -87,6 +90,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
[CPUID_7_2_EDX] = { 7, 2, CPUID_EDX},
[CPUID_24_0_EBX] = { 0x24, 0, CPUID_EBX},
[CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX},
[CPUID_7_1_ECX] = { 7, 1, CPUID_ECX},
};
/*
@ -128,6 +132,7 @@ static __always_inline u32 __feature_translate(int x86_feature)
KVM_X86_TRANSLATE_FEATURE(BHI_CTRL);
KVM_X86_TRANSLATE_FEATURE(TSA_SQ_NO);
KVM_X86_TRANSLATE_FEATURE(TSA_L1_NO);
KVM_X86_TRANSLATE_FEATURE(MSR_IMM);
default:
return x86_feature;
}


@ -131,7 +131,7 @@ void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
kvm_mmu_reset_context(vcpu);
}
EXPORT_SYMBOL_GPL(kvm_smm_changed);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_smm_changed);
void process_smi(struct kvm_vcpu *vcpu)
{
@ -269,6 +269,10 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
smram->int_shadow = kvm_x86_call(get_interrupt_shadow)(vcpu);
if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
kvm_msr_read(vcpu, MSR_KVM_INTERNAL_GUEST_SSP, &smram->ssp))
kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
}
#endif
@ -529,7 +533,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
vcpu->arch.smbase = smstate->smbase;
if (kvm_set_msr(vcpu, MSR_EFER, smstate->efer & ~EFER_LMA))
if (__kvm_emulate_msr_write(vcpu, MSR_EFER, smstate->efer & ~EFER_LMA))
return X86EMUL_UNHANDLEABLE;
rsm_load_seg_64(vcpu, &smstate->tr, VCPU_SREG_TR);
@ -558,6 +562,10 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
kvm_x86_call(set_interrupt_shadow)(vcpu, 0);
ctxt->interruptibility = (u8)smstate->int_shadow;
if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
kvm_msr_write(vcpu, MSR_KVM_INTERNAL_GUEST_SSP, smstate->ssp))
return X86EMUL_UNHANDLEABLE;
return X86EMUL_CONTINUE;
}
#endif
@ -620,7 +628,7 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
/* And finally go back to 32-bit mode. */
efer = 0;
kvm_set_msr(vcpu, MSR_EFER, efer);
__kvm_emulate_msr_write(vcpu, MSR_EFER, efer);
}
#endif


@ -116,8 +116,8 @@ struct kvm_smram_state_64 {
u32 smbase;
u32 reserved4[5];
/* ssp and svm_* fields below are not implemented by KVM */
u64 ssp;
/* svm_* fields below are not implemented by KVM */
u64 svm_guest_pat;
u64 svm_host_efer;
u64 svm_host_cr4;


@ -64,6 +64,34 @@
static_assert(__AVIC_GATAG(AVIC_VM_ID_MASK, AVIC_VCPU_IDX_MASK) == -1u);
#define AVIC_AUTO_MODE -1
static int avic_param_set(const char *val, const struct kernel_param *kp)
{
if (val && sysfs_streq(val, "auto")) {
*(int *)kp->arg = AVIC_AUTO_MODE;
return 0;
}
return param_set_bint(val, kp);
}
static const struct kernel_param_ops avic_ops = {
.flags = KERNEL_PARAM_OPS_FL_NOARG,
.set = avic_param_set,
.get = param_get_bool,
};
/*
* Enable / disable AVIC. In "auto" mode (default behavior), AVIC is enabled
* for Zen4+ CPUs with x2AVIC (and all other criteria for enablement are met).
*/
static int avic = AVIC_AUTO_MODE;
module_param_cb(avic, &avic_ops, &avic, 0444);
__MODULE_PARM_TYPE(avic, "bool");
module_param(enable_ipiv, bool, 0444);
static bool force_avic;
module_param_unsafe(force_avic, bool, 0444);
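The new parameter accepts "auto" on top of the usual boolean values; a simplified user-space sketch of that tri-state parsing might look like the code below (the kernel uses sysfs_streq() and param_set_bint(), so this is only an approximation).

#include <stdio.h>
#include <string.h>

#define AVIC_AUTO -1

static int parse_avic(const char *val, int *out)
{
        if (!strcmp(val, "auto")) {
                *out = AVIC_AUTO;
                return 0;
        }
        if (!strcmp(val, "1") || !strcmp(val, "y")) { *out = 1; return 0; }
        if (!strcmp(val, "0") || !strcmp(val, "n")) { *out = 0; return 0; }
        return -1;                      /* invalid value */
}

int main(void)
{
        const char *inputs[] = { "auto", "1", "n" };

        for (int i = 0; i < 3; i++) {
                int avic;

                if (!parse_avic(inputs[i], &avic))
                        printf("%s -> %d\n", inputs[i], avic);
        }
        return 0;
}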
@ -77,7 +105,58 @@ static DEFINE_HASHTABLE(svm_vm_data_hash, SVM_VM_DATA_HASH_BITS);
static u32 next_vm_id = 0;
static bool next_vm_id_wrapped = 0;
static DEFINE_SPINLOCK(svm_vm_data_hash_lock);
bool x2avic_enabled;
static bool x2avic_enabled;
static void avic_set_x2apic_msr_interception(struct vcpu_svm *svm,
bool intercept)
{
static const u32 x2avic_passthrough_msrs[] = {
X2APIC_MSR(APIC_ID),
X2APIC_MSR(APIC_LVR),
X2APIC_MSR(APIC_TASKPRI),
X2APIC_MSR(APIC_ARBPRI),
X2APIC_MSR(APIC_PROCPRI),
X2APIC_MSR(APIC_EOI),
X2APIC_MSR(APIC_RRR),
X2APIC_MSR(APIC_LDR),
X2APIC_MSR(APIC_DFR),
X2APIC_MSR(APIC_SPIV),
X2APIC_MSR(APIC_ISR),
X2APIC_MSR(APIC_TMR),
X2APIC_MSR(APIC_IRR),
X2APIC_MSR(APIC_ESR),
X2APIC_MSR(APIC_ICR),
X2APIC_MSR(APIC_ICR2),
/*
* Note! Always intercept LVTT, as TSC-deadline timer mode
* isn't virtualized by hardware, and the CPU will generate a
* #GP instead of a #VMEXIT.
*/
X2APIC_MSR(APIC_LVTTHMR),
X2APIC_MSR(APIC_LVTPC),
X2APIC_MSR(APIC_LVT0),
X2APIC_MSR(APIC_LVT1),
X2APIC_MSR(APIC_LVTERR),
X2APIC_MSR(APIC_TMICT),
X2APIC_MSR(APIC_TMCCT),
X2APIC_MSR(APIC_TDCR),
};
int i;
if (intercept == svm->x2avic_msrs_intercepted)
return;
if (!x2avic_enabled)
return;
for (i = 0; i < ARRAY_SIZE(x2avic_passthrough_msrs); i++)
svm_set_intercept_for_msr(&svm->vcpu, x2avic_passthrough_msrs[i],
MSR_TYPE_RW, intercept);
svm->x2avic_msrs_intercepted = intercept;
}
static void avic_activate_vmcb(struct vcpu_svm *svm)
{
@ -99,7 +178,7 @@ static void avic_activate_vmcb(struct vcpu_svm *svm)
vmcb->control.int_ctl |= X2APIC_MODE_MASK;
vmcb->control.avic_physical_id |= X2AVIC_MAX_PHYSICAL_ID;
/* Disabling MSR intercept for x2APIC registers */
svm_set_x2apic_msr_interception(svm, false);
avic_set_x2apic_msr_interception(svm, false);
} else {
/*
* Flush the TLB, the guest may have inserted a non-APIC
@ -110,7 +189,7 @@ static void avic_activate_vmcb(struct vcpu_svm *svm)
/* For xAVIC and hybrid-xAVIC modes */
vmcb->control.avic_physical_id |= AVIC_MAX_PHYSICAL_ID;
/* Enabling MSR intercept for x2APIC registers */
svm_set_x2apic_msr_interception(svm, true);
avic_set_x2apic_msr_interception(svm, true);
}
}
@ -130,7 +209,7 @@ static void avic_deactivate_vmcb(struct vcpu_svm *svm)
return;
/* Enabling MSR intercept for x2APIC registers */
svm_set_x2apic_msr_interception(svm, true);
avic_set_x2apic_msr_interception(svm, true);
}
/* Note:
@ -1090,23 +1169,27 @@ void avic_vcpu_unblocking(struct kvm_vcpu *vcpu)
avic_vcpu_load(vcpu, vcpu->cpu);
}
/*
* Note:
* - The module param avic enable both xAPIC and x2APIC mode.
* - Hypervisor can support both xAVIC and x2AVIC in the same guest.
* - The mode can be switched at run-time.
*/
bool avic_hardware_setup(void)
static bool __init avic_want_avic_enabled(void)
{
if (!npt_enabled)
/*
* In "auto" mode, enable AVIC by default for Zen4+ if x2AVIC is
* supported (to avoid enabling partial support by default, and because
* x2AVIC should be supported by all Zen4+ CPUs). Explicitly check for
* family 0x19 and later (Zen5+), as the kernel's synthetic ZenX flags
* aren't inclusive of previous generations, i.e. the kernel will set
* at most one ZenX feature flag.
*/
if (avic == AVIC_AUTO_MODE)
avic = boot_cpu_has(X86_FEATURE_X2AVIC) &&
(boot_cpu_data.x86 > 0x19 || cpu_feature_enabled(X86_FEATURE_ZEN4));
if (!avic || !npt_enabled)
return false;
/* AVIC is a prerequisite for x2AVIC. */
if (!boot_cpu_has(X86_FEATURE_AVIC) && !force_avic) {
if (boot_cpu_has(X86_FEATURE_X2AVIC)) {
pr_warn(FW_BUG "Cannot support x2AVIC due to AVIC is disabled");
pr_warn(FW_BUG "Try enable AVIC using force_avic option");
}
if (boot_cpu_has(X86_FEATURE_X2AVIC))
pr_warn(FW_BUG "Cannot enable x2AVIC, AVIC is unsupported\n");
return false;
}
@ -1116,21 +1199,37 @@ bool avic_hardware_setup(void)
return false;
}
if (boot_cpu_has(X86_FEATURE_AVIC)) {
pr_info("AVIC enabled\n");
} else if (force_avic) {
/*
* Some older systems does not advertise AVIC support.
* See Revision Guide for specific AMD processor for more detail.
*/
pr_warn("AVIC is not supported in CPUID but force enabled");
pr_warn("Your system might crash and burn");
}
/*
* Print a scary message if AVIC is force enabled to make it abundantly
* clear that ignoring CPUID could have repercussions. See Revision
* Guide for specific AMD processor for more details.
*/
if (!boot_cpu_has(X86_FEATURE_AVIC))
pr_warn("AVIC unsupported in CPUID but force enabled, your system might crash and burn\n");
return true;
}
/*
* Note:
* - The module param avic enable both xAPIC and x2APIC mode.
* - Hypervisor can support both xAVIC and x2AVIC in the same guest.
* - The mode can be switched at run-time.
*/
bool __init avic_hardware_setup(void)
{
avic = avic_want_avic_enabled();
if (!avic)
return false;
pr_info("AVIC enabled\n");
/* AVIC is a prerequisite for x2AVIC. */
x2avic_enabled = boot_cpu_has(X86_FEATURE_X2AVIC);
if (x2avic_enabled)
pr_info("x2AVIC enabled\n");
else
svm_x86_ops.allow_apicv_in_x2apic_without_x2apic_virtualization = true;
/*
* Disable IPI virtualization for AMD Family 17h CPUs (Zen1 and Zen2)


@ -636,6 +636,14 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
vmcb_mark_dirty(vmcb02, VMCB_DT);
}
if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
(unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_CET)))) {
vmcb02->save.s_cet = vmcb12->save.s_cet;
vmcb02->save.isst_addr = vmcb12->save.isst_addr;
vmcb02->save.ssp = vmcb12->save.ssp;
vmcb_mark_dirty(vmcb02, VMCB_CET);
}
kvm_set_rflags(vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
svm_set_efer(vcpu, svm->nested.save.efer);
@ -1044,6 +1052,12 @@ void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
to_save->rsp = from_save->rsp;
to_save->rip = from_save->rip;
to_save->cpl = 0;
if (kvm_cpu_cap_has(X86_FEATURE_SHSTK)) {
to_save->s_cet = from_save->s_cet;
to_save->isst_addr = from_save->isst_addr;
to_save->ssp = from_save->ssp;
}
}
void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
@ -1111,6 +1125,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
vmcb12->save.dr6 = svm->vcpu.arch.dr6;
vmcb12->save.cpl = vmcb02->save.cpl;
if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
vmcb12->save.s_cet = vmcb02->save.s_cet;
vmcb12->save.isst_addr = vmcb02->save.isst_addr;
vmcb12->save.ssp = vmcb02->save.ssp;
}
vmcb12->control.int_state = vmcb02->control.int_state;
vmcb12->control.exit_code = vmcb02->control.exit_code;
vmcb12->control.exit_code_hi = vmcb02->control.exit_code_hi;
@ -1798,17 +1818,15 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
if (kvm_state->size < sizeof(*kvm_state) + KVM_STATE_NESTED_SVM_VMCB_SIZE)
return -EINVAL;
ret = -ENOMEM;
ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
save = kzalloc(sizeof(*save), GFP_KERNEL);
if (!ctl || !save)
goto out_free;
ctl = memdup_user(&user_vmcb->control, sizeof(*ctl));
if (IS_ERR(ctl))
return PTR_ERR(ctl);
ret = -EFAULT;
if (copy_from_user(ctl, &user_vmcb->control, sizeof(*ctl)))
goto out_free;
if (copy_from_user(save, &user_vmcb->save, sizeof(*save)))
goto out_free;
save = memdup_user(&user_vmcb->save, sizeof(*save));
if (IS_ERR(save)) {
kfree(ctl);
return PTR_ERR(save);
}
ret = -EINVAL;
__nested_copy_vmcb_control_to_cache(vcpu, &ctl_cached, ctl);


@ -41,7 +41,7 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
unsigned int idx;
if (!vcpu->kvm->arch.enable_pmu)
if (!pmu->version)
return NULL;
switch (msr) {
@ -113,6 +113,7 @@ static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET:
return pmu->version > 1;
default:
if (msr > MSR_F15H_PERF_CTR5 &&
@ -199,17 +200,16 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
kvm_pmu_cap.num_counters_gp);
if (pmu->version > 1) {
pmu->global_ctrl_rsvd = ~((1ull << pmu->nr_arch_gp_counters) - 1);
pmu->global_ctrl_rsvd = ~(BIT_ULL(pmu->nr_arch_gp_counters) - 1);
pmu->global_status_rsvd = pmu->global_ctrl_rsvd;
}
pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(48) - 1;
pmu->reserved_bits = 0xfffffff000280000ull;
pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
/* not applicable to AMD; but clean them to prevent any fall out */
pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
pmu->nr_arch_fixed_counters = 0;
bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
}
static void amd_pmu_init(struct kvm_vcpu *vcpu)


@ -37,7 +37,6 @@
#include "trace.h"
#define GHCB_VERSION_MAX 2ULL
#define GHCB_VERSION_DEFAULT 2ULL
#define GHCB_VERSION_MIN 1ULL
#define GHCB_HV_FT_SUPPORTED (GHCB_HV_FT_SNP | GHCB_HV_FT_SNP_AP_CREATION)
@ -59,6 +58,9 @@ static bool sev_es_debug_swap_enabled = true;
module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
static u64 sev_supported_vmsa_features;
static unsigned int nr_ciphertext_hiding_asids;
module_param_named(ciphertext_hiding_asids, nr_ciphertext_hiding_asids, uint, 0444);
#define AP_RESET_HOLD_NONE 0
#define AP_RESET_HOLD_NAE_EVENT 1
#define AP_RESET_HOLD_MSR_PROTO 2
@ -85,6 +87,10 @@ static DECLARE_RWSEM(sev_deactivate_lock);
static DEFINE_MUTEX(sev_bitmap_lock);
unsigned int max_sev_asid;
static unsigned int min_sev_asid;
static unsigned int max_sev_es_asid;
static unsigned int min_sev_es_asid;
static unsigned int max_snp_asid;
static unsigned int min_snp_asid;
static unsigned long sev_me_mask;
static unsigned int nr_asids;
static unsigned long *sev_asid_bitmap;
@ -147,6 +153,14 @@ static bool sev_vcpu_has_debug_swap(struct vcpu_svm *svm)
return sev->vmsa_features & SVM_SEV_FEAT_DEBUG_SWAP;
}
static bool snp_is_secure_tsc_enabled(struct kvm *kvm)
{
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
return (sev->vmsa_features & SVM_SEV_FEAT_SECURE_TSC) &&
!WARN_ON_ONCE(!sev_snp_guest(kvm));
}
/* Must be called with the sev_bitmap_lock held */
static bool __sev_recycle_asids(unsigned int min_asid, unsigned int max_asid)
{
@ -173,20 +187,34 @@ static void sev_misc_cg_uncharge(struct kvm_sev_info *sev)
misc_cg_uncharge(type, sev->misc_cg, 1);
}
static int sev_asid_new(struct kvm_sev_info *sev)
static int sev_asid_new(struct kvm_sev_info *sev, unsigned long vm_type)
{
/*
* SEV-enabled guests must use asid from min_sev_asid to max_sev_asid.
* SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
* Note: min ASID can end up larger than the max if basic SEV support is
* effectively disabled by disallowing use of ASIDs for SEV guests.
*/
unsigned int min_asid = sev->es_active ? 1 : min_sev_asid;
unsigned int max_asid = sev->es_active ? min_sev_asid - 1 : max_sev_asid;
unsigned int asid;
unsigned int min_asid, max_asid, asid;
bool retry = true;
int ret;
if (vm_type == KVM_X86_SNP_VM) {
min_asid = min_snp_asid;
max_asid = max_snp_asid;
} else if (sev->es_active) {
min_asid = min_sev_es_asid;
max_asid = max_sev_es_asid;
} else {
min_asid = min_sev_asid;
max_asid = max_sev_asid;
}
/*
* The min ASID can end up larger than the max if basic SEV support is
* effectively disabled by disallowing use of ASIDs for SEV guests.
* Similarly for SEV-ES guests the min ASID can end up larger than the
* max when ciphertext hiding is enabled, effectively disabling SEV-ES
* support.
*/
if (min_asid > max_asid)
return -ENOTTY;
@ -406,6 +434,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
struct sev_platform_init_args init_args = {0};
bool es_active = vm_type != KVM_X86_SEV_VM;
bool snp_active = vm_type == KVM_X86_SNP_VM;
u64 valid_vmsa_features = es_active ? sev_supported_vmsa_features : 0;
int ret;
@ -415,12 +444,26 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
if (data->flags)
return -EINVAL;
if (!snp_active)
valid_vmsa_features &= ~SVM_SEV_FEAT_SECURE_TSC;
if (data->vmsa_features & ~valid_vmsa_features)
return -EINVAL;
if (data->ghcb_version > GHCB_VERSION_MAX || (!es_active && data->ghcb_version))
return -EINVAL;
/*
* KVM supports the full range of mandatory features defined by version
* 2 of the GHCB protocol, so default to that for SEV-ES guests created
* via KVM_SEV_INIT2 (KVM_SEV_INIT forces version 1).
*/
if (es_active && !data->ghcb_version)
data->ghcb_version = 2;
if (snp_active && data->ghcb_version < 2)
return -EINVAL;
if (unlikely(sev->active))
return -EINVAL;
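The comment above covers the defaulting: SEV-ES guests created through KVM_SEV_INIT2 get GHCB protocol version 2 when userspace passes 0, and SEV-SNP guests must request at least version 2. A hedged sketch of the userspace side, assuming the struct layouts from the current uapi headers (vm_fd and sev_fd are assumed to be an already-open VM fd and /dev/sev fd; error handling trimmed):

#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <err.h>

/* Sketch: issue KVM_SEV_INIT2 and let KVM pick the GHCB version. */
static void sev_init2_sketch(int vm_fd, int sev_fd)
{
        struct kvm_sev_init init = {
                .ghcb_version = 0,      /* 0 => KVM defaults SEV-ES/SNP to 2 */
        };
        struct kvm_sev_cmd cmd = {
                .id     = KVM_SEV_INIT2,
                .data   = (unsigned long)&init,
                .sev_fd = sev_fd,
        };

        if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
                err(1, "KVM_SEV_INIT2");
}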
@ -429,18 +472,10 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
sev->vmsa_features = data->vmsa_features;
sev->ghcb_version = data->ghcb_version;
/*
* Currently KVM supports the full range of mandatory features defined
* by version 2 of the GHCB protocol, so default to that for SEV-ES
* guests created via KVM_SEV_INIT2.
*/
if (sev->es_active && !sev->ghcb_version)
sev->ghcb_version = GHCB_VERSION_DEFAULT;
if (vm_type == KVM_X86_SNP_VM)
if (snp_active)
sev->vmsa_features |= SVM_SEV_FEAT_SNP_ACTIVE;
ret = sev_asid_new(sev);
ret = sev_asid_new(sev, vm_type);
if (ret)
goto e_no_asid;
@ -455,7 +490,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
}
/* This needs to happen after SEV/SNP firmware initialization. */
if (vm_type == KVM_X86_SNP_VM) {
if (snp_active) {
ret = snp_guest_req_init(kvm);
if (ret)
goto e_free;
@ -569,8 +604,6 @@ static int sev_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
return -EFAULT;
sev->policy = params.policy;
memset(&start, 0, sizeof(start));
dh_blob = NULL;
@ -618,6 +651,7 @@ static int sev_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
goto e_free_session;
}
sev->policy = params.policy;
sev->handle = start.handle;
sev->fd = argp->sev_fd;
@ -1968,7 +2002,7 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
kvm_for_each_vcpu(i, dst_vcpu, dst_kvm) {
dst_svm = to_svm(dst_vcpu);
sev_init_vmcb(dst_svm);
sev_init_vmcb(dst_svm, false);
if (!dst->es_active)
continue;
@ -2180,7 +2214,12 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (!(params.policy & SNP_POLICY_MASK_RSVD_MBO))
return -EINVAL;
sev->policy = params.policy;
if (snp_is_secure_tsc_enabled(kvm)) {
if (WARN_ON_ONCE(!kvm->arch.default_tsc_khz))
return -EINVAL;
start.desired_tsc_khz = kvm->arch.default_tsc_khz;
}
sev->snp_context = snp_context_create(kvm, argp);
if (!sev->snp_context)
@ -2188,6 +2227,7 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
start.gctx_paddr = __psp_pa(sev->snp_context);
start.policy = params.policy;
memcpy(start.gosvw, params.gosvw, sizeof(params.gosvw));
rc = __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_LAUNCH_START, &start, &argp->error);
if (rc) {
@ -2196,6 +2236,7 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
goto e_free_context;
}
sev->policy = params.policy;
sev->fd = argp->sev_fd;
rc = snp_bind_asid(kvm, &argp->error);
if (rc) {
@ -2329,7 +2370,7 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
pr_debug("%s: GFN start 0x%llx length 0x%llx type %d flags %d\n", __func__,
params.gfn_start, params.len, params.type, params.flags);
if (!PAGE_ALIGNED(params.len) || params.flags ||
if (!params.len || !PAGE_ALIGNED(params.len) || params.flags ||
(params.type != KVM_SEV_SNP_PAGE_TYPE_NORMAL &&
params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO &&
params.type != KVM_SEV_SNP_PAGE_TYPE_UNMEASURED &&
@ -3038,6 +3079,9 @@ void __init sev_hardware_setup(void)
if (min_sev_asid == 1)
goto out;
min_sev_es_asid = min_snp_asid = 1;
max_sev_es_asid = max_snp_asid = min_sev_asid - 1;
sev_es_asid_count = min_sev_asid - 1;
WARN_ON_ONCE(misc_cg_set_capacity(MISC_CG_RES_SEV_ES, sev_es_asid_count));
sev_es_supported = true;
@ -3046,10 +3090,32 @@ void __init sev_hardware_setup(void)
out:
if (sev_enabled) {
init_args.probe = true;
if (sev_is_snp_ciphertext_hiding_supported())
init_args.max_snp_asid = min(nr_ciphertext_hiding_asids,
min_sev_asid - 1);
if (sev_platform_init(&init_args))
sev_supported = sev_es_supported = sev_snp_supported = false;
else if (sev_snp_supported)
sev_snp_supported = is_sev_snp_initialized();
if (sev_snp_supported)
nr_ciphertext_hiding_asids = init_args.max_snp_asid;
/*
* If ciphertext hiding is enabled, the joint SEV-ES/SEV-SNP
* ASID range is partitioned into separate SEV-ES and SEV-SNP
* ASID ranges, with the SEV-SNP range being [1..max_snp_asid]
* and the SEV-ES range being (max_snp_asid..max_sev_es_asid].
* Note, SEV-ES may effectively be disabled if all ASIDs from
* the joint range are assigned to SEV-SNP.
*/
if (nr_ciphertext_hiding_asids) {
max_snp_asid = nr_ciphertext_hiding_asids;
min_sev_es_asid = max_snp_asid + 1;
pr_info("SEV-SNP ciphertext hiding enabled\n");
}
}
if (boot_cpu_has(X86_FEATURE_SEV))
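The comment in the hunk above describes how the joint SEV-ES/SEV-SNP ASID pool is split when ciphertext hiding is enabled. A small worked example, with illustrative numbers only:

/*
 * Illustrative numbers only.  Suppose the CPU reports min_sev_asid = 100 and
 * max_sev_asid = 500, so ASIDs 1..99 form the joint SEV-ES/SEV-SNP pool and
 * ASIDs 100..500 are plain SEV.  With ciphertext_hiding_asids=40:
 *
 *   max_snp_asid    = 40    ->  SEV-SNP uses ASIDs  1..40
 *   min_sev_es_asid = 41    ->  SEV-ES  uses ASIDs 41..99
 *
 * With ciphertext_hiding_asids=99 the whole joint pool goes to SEV-SNP, so
 * min_sev_es_asid (100) exceeds max_sev_es_asid (99): SEV-ES is effectively
 * disabled, which is what the min > max check in sev_asid_new() catches and
 * what the "unusable" case in the pr_info() below reports.
 */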
@ -3060,12 +3126,14 @@ out:
min_sev_asid, max_sev_asid);
if (boot_cpu_has(X86_FEATURE_SEV_ES))
pr_info("SEV-ES %s (ASIDs %u - %u)\n",
str_enabled_disabled(sev_es_supported),
min_sev_asid > 1 ? 1 : 0, min_sev_asid - 1);
sev_es_supported ? min_sev_es_asid <= max_sev_es_asid ? "enabled" :
"unusable" :
"disabled",
min_sev_es_asid, max_sev_es_asid);
if (boot_cpu_has(X86_FEATURE_SEV_SNP))
pr_info("SEV-SNP %s (ASIDs %u - %u)\n",
str_enabled_disabled(sev_snp_supported),
min_sev_asid > 1 ? 1 : 0, min_sev_asid - 1);
min_snp_asid, max_snp_asid);
sev_enabled = sev_supported;
sev_es_enabled = sev_es_supported;
@ -3078,6 +3146,9 @@ out:
sev_supported_vmsa_features = 0;
if (sev_es_debug_swap_enabled)
sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
if (sev_snp_enabled && tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
sev_supported_vmsa_features |= SVM_SEV_FEAT_SECURE_TSC;
}
void sev_hardware_unsetup(void)
@ -3193,7 +3264,7 @@ skip_vmsa_free:
kvfree(svm->sev_es.ghcb_sa);
}
static u64 kvm_ghcb_get_sw_exit_code(struct vmcb_control_area *control)
static u64 kvm_get_cached_sw_exit_code(struct vmcb_control_area *control)
{
return (((u64)control->exit_code_hi) << 32) | control->exit_code;
}
@ -3219,7 +3290,7 @@ static void dump_ghcb(struct vcpu_svm *svm)
*/
pr_err("GHCB (GPA=%016llx) snapshot:\n", svm->vmcb->control.ghcb_gpa);
pr_err("%-20s%016llx is_valid: %u\n", "sw_exit_code",
kvm_ghcb_get_sw_exit_code(control), kvm_ghcb_sw_exit_code_is_valid(svm));
kvm_get_cached_sw_exit_code(control), kvm_ghcb_sw_exit_code_is_valid(svm));
pr_err("%-20s%016llx is_valid: %u\n", "sw_exit_info_1",
control->exit_info_1, kvm_ghcb_sw_exit_info_1_is_valid(svm));
pr_err("%-20s%016llx is_valid: %u\n", "sw_exit_info_2",
@ -3272,26 +3343,27 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
BUILD_BUG_ON(sizeof(svm->sev_es.valid_bitmap) != sizeof(ghcb->save.valid_bitmap));
memcpy(&svm->sev_es.valid_bitmap, &ghcb->save.valid_bitmap, sizeof(ghcb->save.valid_bitmap));
vcpu->arch.regs[VCPU_REGS_RAX] = kvm_ghcb_get_rax_if_valid(svm, ghcb);
vcpu->arch.regs[VCPU_REGS_RBX] = kvm_ghcb_get_rbx_if_valid(svm, ghcb);
vcpu->arch.regs[VCPU_REGS_RCX] = kvm_ghcb_get_rcx_if_valid(svm, ghcb);
vcpu->arch.regs[VCPU_REGS_RDX] = kvm_ghcb_get_rdx_if_valid(svm, ghcb);
vcpu->arch.regs[VCPU_REGS_RSI] = kvm_ghcb_get_rsi_if_valid(svm, ghcb);
vcpu->arch.regs[VCPU_REGS_RAX] = kvm_ghcb_get_rax_if_valid(svm);
vcpu->arch.regs[VCPU_REGS_RBX] = kvm_ghcb_get_rbx_if_valid(svm);
vcpu->arch.regs[VCPU_REGS_RCX] = kvm_ghcb_get_rcx_if_valid(svm);
vcpu->arch.regs[VCPU_REGS_RDX] = kvm_ghcb_get_rdx_if_valid(svm);
vcpu->arch.regs[VCPU_REGS_RSI] = kvm_ghcb_get_rsi_if_valid(svm);
svm->vmcb->save.cpl = kvm_ghcb_get_cpl_if_valid(svm, ghcb);
svm->vmcb->save.cpl = kvm_ghcb_get_cpl_if_valid(svm);
if (kvm_ghcb_xcr0_is_valid(svm)) {
vcpu->arch.xcr0 = ghcb_get_xcr0(ghcb);
vcpu->arch.cpuid_dynamic_bits_dirty = true;
}
if (kvm_ghcb_xcr0_is_valid(svm))
__kvm_set_xcr(vcpu, 0, kvm_ghcb_get_xcr0(svm));
if (kvm_ghcb_xss_is_valid(svm))
__kvm_emulate_msr_write(vcpu, MSR_IA32_XSS, kvm_ghcb_get_xss(svm));
/* Copy the GHCB exit information into the VMCB fields */
exit_code = ghcb_get_sw_exit_code(ghcb);
exit_code = kvm_ghcb_get_sw_exit_code(svm);
control->exit_code = lower_32_bits(exit_code);
control->exit_code_hi = upper_32_bits(exit_code);
control->exit_info_1 = ghcb_get_sw_exit_info_1(ghcb);
control->exit_info_2 = ghcb_get_sw_exit_info_2(ghcb);
svm->sev_es.sw_scratch = kvm_ghcb_get_sw_scratch_if_valid(svm, ghcb);
control->exit_info_1 = kvm_ghcb_get_sw_exit_info_1(svm);
control->exit_info_2 = kvm_ghcb_get_sw_exit_info_2(svm);
svm->sev_es.sw_scratch = kvm_ghcb_get_sw_scratch_if_valid(svm);
/* Clear the valid entries fields */
memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
@ -3308,7 +3380,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
* Retrieve the exit code now even though it may not be marked valid
* as it could help with debugging.
*/
exit_code = kvm_ghcb_get_sw_exit_code(control);
exit_code = kvm_get_cached_sw_exit_code(control);
/* Only GHCB Usage code 0 is supported */
if (svm->sev_es.ghcb->ghcb_usage) {
@ -3880,7 +3952,7 @@ next_range:
/*
* Invoked as part of svm_vcpu_reset() processing of an init event.
*/
void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
static void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_memory_slot *slot;
@ -3888,9 +3960,6 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
kvm_pfn_t pfn;
gfn_t gfn;
if (!sev_snp_guest(vcpu->kvm))
return;
guard(mutex)(&svm->sev_es.snp_vmsa_mutex);
if (!svm->sev_es.snp_ap_waiting_for_reset)
@ -4316,7 +4385,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
svm_vmgexit_success(svm, 0);
exit_code = kvm_ghcb_get_sw_exit_code(control);
exit_code = kvm_get_cached_sw_exit_code(control);
switch (exit_code) {
case SVM_VMGEXIT_MMIO_READ:
ret = setup_vmgexit_scratch(svm, true, control->exit_info_2);
@ -4448,6 +4517,9 @@ void sev_es_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
!guest_cpu_cap_has(vcpu, X86_FEATURE_RDTSCP) &&
!guest_cpu_cap_has(vcpu, X86_FEATURE_RDPID));
svm_set_intercept_for_msr(vcpu, MSR_AMD64_GUEST_TSC_FREQ, MSR_TYPE_R,
!snp_is_secure_tsc_enabled(vcpu->kvm));
/*
* For SEV-ES, accesses to MSR_IA32_XSS should not be intercepted if
* the host/guest supports its use.
@ -4476,7 +4548,7 @@ void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm)
vcpu->arch.reserved_gpa_bits &= ~(1UL << (best->ebx & 0x3f));
}
static void sev_es_init_vmcb(struct vcpu_svm *svm)
static void sev_es_init_vmcb(struct vcpu_svm *svm, bool init_event)
{
struct kvm_sev_info *sev = to_kvm_sev_info(svm->vcpu.kvm);
struct vmcb *vmcb = svm->vmcb01.ptr;
@ -4537,10 +4609,21 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
/* Can't intercept XSETBV, HV can't modify XCR0 directly */
svm_clr_intercept(svm, INTERCEPT_XSETBV);
/*
* Set the GHCB MSR value as per the GHCB specification when emulating
* vCPU RESET for an SEV-ES guest.
*/
if (!init_event)
set_ghcb_msr(svm, GHCB_MSR_SEV_INFO((__u64)sev->ghcb_version,
GHCB_VERSION_MIN,
sev_enc_bit));
}
void sev_init_vmcb(struct vcpu_svm *svm)
void sev_init_vmcb(struct vcpu_svm *svm, bool init_event)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
clr_exception_intercept(svm, UD_VECTOR);
@ -4550,24 +4633,36 @@ void sev_init_vmcb(struct vcpu_svm *svm)
*/
clr_exception_intercept(svm, GP_VECTOR);
if (sev_es_guest(svm->vcpu.kvm))
sev_es_init_vmcb(svm);
if (init_event && sev_snp_guest(vcpu->kvm))
sev_snp_init_protected_guest_state(vcpu);
if (sev_es_guest(vcpu->kvm))
sev_es_init_vmcb(svm, init_event);
}
void sev_es_vcpu_reset(struct vcpu_svm *svm)
int sev_vcpu_create(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
struct kvm_sev_info *sev = to_kvm_sev_info(vcpu->kvm);
/*
* Set the GHCB MSR value as per the GHCB specification when emulating
* vCPU RESET for an SEV-ES guest.
*/
set_ghcb_msr(svm, GHCB_MSR_SEV_INFO((__u64)sev->ghcb_version,
GHCB_VERSION_MIN,
sev_enc_bit));
struct vcpu_svm *svm = to_svm(vcpu);
struct page *vmsa_page;
mutex_init(&svm->sev_es.snp_vmsa_mutex);
if (!sev_es_guest(vcpu->kvm))
return 0;
/*
* SEV-ES guests require a separate (from the VMCB) VMSA page used to
* contain the encrypted register state of the guest.
*/
vmsa_page = snp_safe_alloc_page();
if (!vmsa_page)
return -ENOMEM;
svm->sev_es.vmsa = page_address(vmsa_page);
vcpu->arch.guest_tsc_protected = snp_is_secure_tsc_enabled(vcpu->kvm);
return 0;
}
void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
@ -4618,6 +4713,16 @@ void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_are
hostsa->dr2_addr_mask = amd_get_dr_addr_mask(2);
hostsa->dr3_addr_mask = amd_get_dr_addr_mask(3);
}
/*
* TSC_AUX is always virtualized for SEV-ES guests when the feature is
* available, i.e. TSC_AUX is loaded on #VMEXIT from the host save area.
* Set the save area to the current hardware value, i.e. the current
* user return value, so that the correct value is restored on #VMEXIT.
*/
if (cpu_feature_enabled(X86_FEATURE_V_TSC_AUX) &&
!WARN_ON_ONCE(tsc_aux_uret_slot < 0))
hostsa->tsc_aux = kvm_get_user_return_msr(tsc_aux_uret_slot);
}
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)


@ -158,14 +158,6 @@ module_param(lbrv, int, 0444);
static int tsc_scaling = true;
module_param(tsc_scaling, int, 0444);
/*
* enable / disable AVIC. Because the defaults differ for APICv
* support between VMX and SVM we cannot use module_param_named.
*/
static bool avic;
module_param(avic, bool, 0444);
module_param(enable_ipiv, bool, 0444);
module_param(enable_device_posted_irqs, bool, 0444);
bool __read_mostly dump_invalid_vmcb;
@ -195,7 +187,7 @@ static DEFINE_MUTEX(vmcb_dump_mutex);
* RDTSCP and RDPID are not used in the kernel, specifically to allow KVM to
* defer the restoration of TSC_AUX until the CPU returns to userspace.
*/
static int tsc_aux_uret_slot __read_mostly = -1;
int tsc_aux_uret_slot __ro_after_init = -1;
static int get_npt_level(void)
{
@ -577,18 +569,6 @@ static int svm_enable_virtualization_cpu(void)
amd_pmu_enable_virt();
/*
* If TSC_AUX virtualization is supported, TSC_AUX becomes a swap type
* "B" field (see sev_es_prepare_switch_to_guest()) for SEV-ES guests.
* Since Linux does not change the value of TSC_AUX once set, prime the
* TSC_AUX field now to avoid a RDMSR on every vCPU run.
*/
if (boot_cpu_has(X86_FEATURE_V_TSC_AUX)) {
u32 __maybe_unused msr_hi;
rdmsr(MSR_TSC_AUX, sev_es_host_save_area(sd)->tsc_aux, msr_hi);
}
return 0;
}
@ -736,55 +716,6 @@ static void svm_recalc_lbr_msr_intercepts(struct kvm_vcpu *vcpu)
svm_set_intercept_for_msr(vcpu, MSR_IA32_DEBUGCTLMSR, MSR_TYPE_RW, intercept);
}
void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
{
static const u32 x2avic_passthrough_msrs[] = {
X2APIC_MSR(APIC_ID),
X2APIC_MSR(APIC_LVR),
X2APIC_MSR(APIC_TASKPRI),
X2APIC_MSR(APIC_ARBPRI),
X2APIC_MSR(APIC_PROCPRI),
X2APIC_MSR(APIC_EOI),
X2APIC_MSR(APIC_RRR),
X2APIC_MSR(APIC_LDR),
X2APIC_MSR(APIC_DFR),
X2APIC_MSR(APIC_SPIV),
X2APIC_MSR(APIC_ISR),
X2APIC_MSR(APIC_TMR),
X2APIC_MSR(APIC_IRR),
X2APIC_MSR(APIC_ESR),
X2APIC_MSR(APIC_ICR),
X2APIC_MSR(APIC_ICR2),
/*
* Note! Always intercept LVTT, as TSC-deadline timer mode
* isn't virtualized by hardware, and the CPU will generate a
* #GP instead of a #VMEXIT.
*/
X2APIC_MSR(APIC_LVTTHMR),
X2APIC_MSR(APIC_LVTPC),
X2APIC_MSR(APIC_LVT0),
X2APIC_MSR(APIC_LVT1),
X2APIC_MSR(APIC_LVTERR),
X2APIC_MSR(APIC_TMICT),
X2APIC_MSR(APIC_TMCCT),
X2APIC_MSR(APIC_TDCR),
};
int i;
if (intercept == svm->x2avic_msrs_intercepted)
return;
if (!x2avic_enabled)
return;
for (i = 0; i < ARRAY_SIZE(x2avic_passthrough_msrs); i++)
svm_set_intercept_for_msr(&svm->vcpu, x2avic_passthrough_msrs[i],
MSR_TYPE_RW, intercept);
svm->x2avic_msrs_intercepted = intercept;
}
void svm_vcpu_free_msrpm(void *msrpm)
{
__free_pages(virt_to_page(msrpm), get_order(MSRPM_SIZE));
@ -844,6 +775,17 @@ static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
svm_disable_intercept_for_msr(vcpu, MSR_IA32_MPERF, MSR_TYPE_R);
}
if (kvm_cpu_cap_has(X86_FEATURE_SHSTK)) {
bool shstk_enabled = guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK);
svm_set_intercept_for_msr(vcpu, MSR_IA32_U_CET, MSR_TYPE_RW, !shstk_enabled);
svm_set_intercept_for_msr(vcpu, MSR_IA32_S_CET, MSR_TYPE_RW, !shstk_enabled);
svm_set_intercept_for_msr(vcpu, MSR_IA32_PL0_SSP, MSR_TYPE_RW, !shstk_enabled);
svm_set_intercept_for_msr(vcpu, MSR_IA32_PL1_SSP, MSR_TYPE_RW, !shstk_enabled);
svm_set_intercept_for_msr(vcpu, MSR_IA32_PL2_SSP, MSR_TYPE_RW, !shstk_enabled);
svm_set_intercept_for_msr(vcpu, MSR_IA32_PL3_SSP, MSR_TYPE_RW, !shstk_enabled);
}
if (sev_es_guest(vcpu->kvm))
sev_es_recalc_msr_intercepts(vcpu);
@ -1077,13 +1019,13 @@ static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu)
}
}
static void svm_recalc_intercepts_after_set_cpuid(struct kvm_vcpu *vcpu)
static void svm_recalc_intercepts(struct kvm_vcpu *vcpu)
{
svm_recalc_instruction_intercepts(vcpu);
svm_recalc_msr_intercepts(vcpu);
}
static void init_vmcb(struct kvm_vcpu *vcpu)
static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct vmcb *vmcb = svm->vmcb01.ptr;
@ -1221,11 +1163,11 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
svm_set_intercept(svm, INTERCEPT_BUSLOCK);
if (sev_guest(vcpu->kvm))
sev_init_vmcb(svm);
sev_init_vmcb(svm, init_event);
svm_hv_init_vmcb(vmcb);
svm_recalc_intercepts_after_set_cpuid(vcpu);
kvm_make_request(KVM_REQ_RECALC_INTERCEPTS, vcpu);
vmcb_mark_all_dirty(vmcb);
@ -1244,9 +1186,6 @@ static void __svm_vcpu_reset(struct kvm_vcpu *vcpu)
svm->nmi_masked = false;
svm->awaiting_iret_completion = false;
if (sev_es_guest(vcpu->kvm))
sev_es_vcpu_reset(svm);
}
static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@ -1256,10 +1195,7 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
svm->spec_ctrl = 0;
svm->virt_spec_ctrl = 0;
if (init_event)
sev_snp_init_protected_guest_state(vcpu);
init_vmcb(vcpu);
init_vmcb(vcpu, init_event);
if (!init_event)
__svm_vcpu_reset(vcpu);
@ -1275,7 +1211,6 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm;
struct page *vmcb01_page;
struct page *vmsa_page = NULL;
int err;
BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
@ -1286,24 +1221,18 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
if (!vmcb01_page)
goto out;
if (sev_es_guest(vcpu->kvm)) {
/*
* SEV-ES guests require a separate VMSA page used to contain
* the encrypted register state of the guest.
*/
vmsa_page = snp_safe_alloc_page();
if (!vmsa_page)
goto error_free_vmcb_page;
}
err = sev_vcpu_create(vcpu);
if (err)
goto error_free_vmcb_page;
err = avic_init_vcpu(svm);
if (err)
goto error_free_vmsa_page;
goto error_free_sev;
svm->msrpm = svm_vcpu_alloc_msrpm();
if (!svm->msrpm) {
err = -ENOMEM;
goto error_free_vmsa_page;
goto error_free_sev;
}
svm->x2avic_msrs_intercepted = true;
@ -1312,16 +1241,12 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT);
svm_switch_vmcb(svm, &svm->vmcb01);
if (vmsa_page)
svm->sev_es.vmsa = page_address(vmsa_page);
svm->guest_state_loaded = false;
return 0;
error_free_vmsa_page:
if (vmsa_page)
__free_page(vmsa_page);
error_free_sev:
sev_free_vcpu(vcpu);
error_free_vmcb_page:
__free_page(vmcb01_page);
out:
@ -1423,10 +1348,10 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
__svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
/*
* TSC_AUX is always virtualized for SEV-ES guests when the feature is
* available. The user return MSR support is not required in this case
* because TSC_AUX is restored on #VMEXIT from the host save area
* (which has been initialized in svm_enable_virtualization_cpu()).
* TSC_AUX is always virtualized (context switched by hardware) for
* SEV-ES guests when the feature is available. For non-SEV-ES guests,
* context switch TSC_AUX via the user_return MSR infrastructure (not
* all CPUs support TSC_AUX virtualization).
*/
if (likely(tsc_aux_uret_slot >= 0) &&
(!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm)))
@ -2727,8 +2652,8 @@ static int svm_get_feature_msr(u32 msr, u64 *data)
static bool sev_es_prevent_msr_access(struct kvm_vcpu *vcpu,
struct msr_data *msr_info)
{
return sev_es_guest(vcpu->kvm) &&
vcpu->arch.guest_state_protected &&
return sev_es_guest(vcpu->kvm) && vcpu->arch.guest_state_protected &&
msr_info->index != MSR_IA32_XSS &&
!msr_write_intercepted(vcpu, msr_info->index);
}
@ -2784,6 +2709,15 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (guest_cpuid_is_intel_compatible(vcpu))
msr_info->data |= (u64)svm->sysenter_esp_hi << 32;
break;
case MSR_IA32_S_CET:
msr_info->data = svm->vmcb->save.s_cet;
break;
case MSR_IA32_INT_SSP_TAB:
msr_info->data = svm->vmcb->save.isst_addr;
break;
case MSR_KVM_INTERNAL_GUEST_SSP:
msr_info->data = svm->vmcb->save.ssp;
break;
case MSR_TSC_AUX:
msr_info->data = svm->tsc_aux;
break;
@ -3016,13 +2950,24 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
svm->vmcb01.ptr->save.sysenter_esp = (u32)data;
svm->sysenter_esp_hi = guest_cpuid_is_intel_compatible(vcpu) ? (data >> 32) : 0;
break;
case MSR_IA32_S_CET:
svm->vmcb->save.s_cet = data;
vmcb_mark_dirty(svm->vmcb01.ptr, VMCB_CET);
break;
case MSR_IA32_INT_SSP_TAB:
svm->vmcb->save.isst_addr = data;
vmcb_mark_dirty(svm->vmcb01.ptr, VMCB_CET);
break;
case MSR_KVM_INTERNAL_GUEST_SSP:
svm->vmcb->save.ssp = data;
vmcb_mark_dirty(svm->vmcb01.ptr, VMCB_CET);
break;
case MSR_TSC_AUX:
/*
* TSC_AUX is always virtualized for SEV-ES guests when the
* feature is available. The user return MSR support is not
* required in this case because TSC_AUX is restored on #VMEXIT
* from the host save area (which has been initialized in
* svm_enable_virtualization_cpu()).
* from the host save area.
*/
if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) && sev_es_guest(vcpu->kvm))
break;
@ -3406,6 +3351,10 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
"rip:", save->rip, "rflags:", save->rflags);
pr_err("%-15s %016llx %-13s %016llx\n",
"rsp:", save->rsp, "rax:", save->rax);
pr_err("%-15s %016llx %-13s %016llx\n",
"s_cet:", save->s_cet, "ssp:", save->ssp);
pr_err("%-15s %016llx\n",
"isst_addr:", save->isst_addr);
pr_err("%-15s %016llx %-13s %016llx\n",
"star:", save01->star, "lstar:", save01->lstar);
pr_err("%-15s %016llx %-13s %016llx\n",
@ -3430,6 +3379,13 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
pr_err("%-15s %016llx\n",
"sev_features", vmsa->sev_features);
pr_err("%-15s %016llx %-13s %016llx\n",
"pl0_ssp:", vmsa->pl0_ssp, "pl1_ssp:", vmsa->pl1_ssp);
pr_err("%-15s %016llx %-13s %016llx\n",
"pl2_ssp:", vmsa->pl2_ssp, "pl3_ssp:", vmsa->pl3_ssp);
pr_err("%-15s %016llx\n",
"u_cet:", vmsa->u_cet);
pr_err("%-15s %016llx %-13s %016llx\n",
"rax:", vmsa->rax, "rbx:", vmsa->rbx);
pr_err("%-15s %016llx %-13s %016llx\n",
@ -4180,17 +4136,27 @@ static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct vmcb_control_area *control = &svm->vmcb->control;
/*
* Next RIP must be provided as IRQs are disabled, and accessing guest
* memory to decode the instruction might fault, i.e. might sleep.
*/
if (!nrips || !control->next_rip)
return EXIT_FASTPATH_NONE;
if (is_guest_mode(vcpu))
return EXIT_FASTPATH_NONE;
switch (svm->vmcb->control.exit_code) {
switch (control->exit_code) {
case SVM_EXIT_MSR:
if (!svm->vmcb->control.exit_info_1)
if (!control->exit_info_1)
break;
return handle_fastpath_set_msr_irqoff(vcpu);
return handle_fastpath_wrmsr(vcpu);
case SVM_EXIT_HLT:
return handle_fastpath_hlt(vcpu);
case SVM_EXIT_INVD:
return handle_fastpath_invd(vcpu);
default:
break;
}
@ -4467,8 +4433,6 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
if (sev_guest(vcpu->kvm))
sev_vcpu_after_set_cpuid(svm);
svm_recalc_intercepts_after_set_cpuid(vcpu);
}
static bool svm_has_wbinvd_exit(void)
@ -5041,7 +5005,7 @@ static void *svm_alloc_apic_backing_page(struct kvm_vcpu *vcpu)
return page_address(page);
}
static struct kvm_x86_ops svm_x86_ops __initdata = {
struct kvm_x86_ops svm_x86_ops __initdata = {
.name = KBUILD_MODNAME,
.check_processor_compatibility = svm_check_processor_compat,
@ -5170,7 +5134,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.apic_init_signal_blocked = svm_apic_init_signal_blocked,
.recalc_msr_intercepts = svm_recalc_msr_intercepts,
.recalc_intercepts = svm_recalc_intercepts,
.complete_emulated_msr = svm_complete_emulated_msr,
.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
@ -5228,7 +5192,8 @@ static __init void svm_set_cpu_caps(void)
kvm_set_cpu_caps();
kvm_caps.supported_perf_cap = 0;
kvm_caps.supported_xss = 0;
kvm_cpu_cap_clear(X86_FEATURE_IBT);
/* CPUID 0x80000001 and 0x8000000A (SVM features) */
if (nested) {
@ -5300,8 +5265,12 @@ static __init void svm_set_cpu_caps(void)
/* CPUID 0x8000001F (SME/SEV features) */
sev_set_cpu_caps();
/* Don't advertise Bus Lock Detect to guest if SVM support is absent */
/*
* Clear capabilities that are automatically configured by common code,
* but that require explicit SVM support (that isn't yet implemented).
*/
kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
}
static __init int svm_hardware_setup(void)
@ -5374,6 +5343,21 @@ static __init int svm_hardware_setup(void)
get_npt_level(), PG_LEVEL_1G);
pr_info("Nested Paging %s\n", str_enabled_disabled(npt_enabled));
/*
* It seems that on AMD processors PTE's accessed bit is
* being set by the CPU hardware before the NPF vmexit.
* This is not expected behaviour and our tests fail because
* of it.
* A workaround here is to disable support for
* GUEST_MAXPHYADDR < HOST_MAXPHYADDR if NPT is enabled.
* In this case userspace can know if there is support using
* KVM_CAP_SMALLER_MAXPHYADDR extension and decide how to handle
* it
* If future AMD CPU models change the behaviour described above,
* this variable can be changed accordingly
*/
allow_smaller_maxphyaddr = !npt_enabled;
/* Setup shadow_me_value and shadow_me_mask */
kvm_mmu_set_me_spte_mask(sme_me_mask, sme_me_mask);
@ -5408,15 +5392,12 @@ static __init int svm_hardware_setup(void)
goto err;
}
enable_apicv = avic = avic && avic_hardware_setup();
enable_apicv = avic_hardware_setup();
if (!enable_apicv) {
enable_ipiv = false;
svm_x86_ops.vcpu_blocking = NULL;
svm_x86_ops.vcpu_unblocking = NULL;
svm_x86_ops.vcpu_get_apicv_inhibit_reasons = NULL;
} else if (!x2avic_enabled) {
svm_x86_ops.allow_apicv_in_x2apic_without_x2apic_virtualization = true;
}
if (vls) {
@ -5453,21 +5434,6 @@ static __init int svm_hardware_setup(void)
svm_set_cpu_caps();
/*
* It seems that on AMD processors PTE's accessed bit is
* being set by the CPU hardware before the NPF vmexit.
* This is not expected behaviour and our tests fail because
* of it.
* A workaround here is to disable support for
* GUEST_MAXPHYADDR < HOST_MAXPHYADDR if NPT is enabled.
* In this case userspace can know if there is support using
* KVM_CAP_SMALLER_MAXPHYADDR extension and decide how to handle
* it
* If future AMD CPU models change the behaviour described above,
* this variable can be changed accordingly
*/
allow_smaller_maxphyaddr = !npt_enabled;
kvm_caps.inapplicable_quirks &= ~KVM_X86_QUIRK_CD_NW_CLEARED;
return 0;


@ -48,10 +48,13 @@ extern bool npt_enabled;
extern int nrips;
extern int vgif;
extern bool intercept_smi;
extern bool x2avic_enabled;
extern bool vnmi;
extern int lbrv;
extern int tsc_aux_uret_slot __ro_after_init;
extern struct kvm_x86_ops svm_x86_ops __initdata;
/*
* Clean bits in VMCB.
* VMCB_ALL_CLEAN_MASK might also need to
@ -74,6 +77,7 @@ enum {
* AVIC PHYSICAL_TABLE pointer,
* AVIC LOGICAL_TABLE pointer
*/
VMCB_CET, /* S_CET, SSP, ISST_ADDR */
VMCB_SW = 31, /* Reserved for hypervisor/software use */
};
@ -82,7 +86,7 @@ enum {
(1U << VMCB_ASID) | (1U << VMCB_INTR) | \
(1U << VMCB_NPT) | (1U << VMCB_CR) | (1U << VMCB_DR) | \
(1U << VMCB_DT) | (1U << VMCB_SEG) | (1U << VMCB_CR2) | \
(1U << VMCB_LBR) | (1U << VMCB_AVIC) | \
(1U << VMCB_LBR) | (1U << VMCB_AVIC) | (1U << VMCB_CET) | \
(1U << VMCB_SW))
/* TPR and CR2 are always written before VMRUN */
@ -699,7 +703,6 @@ void svm_set_gif(struct vcpu_svm *svm, bool value);
int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code);
void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
int read, int write);
void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
int trig_mode, int vec);
@ -801,7 +804,7 @@ extern struct kvm_x86_nested_ops svm_nested_ops;
BIT(APICV_INHIBIT_REASON_PHYSICAL_ID_TOO_BIG) \
)
bool avic_hardware_setup(void);
bool __init avic_hardware_setup(void);
int avic_ga_log_notifier(u32 ga_tag);
void avic_vm_destroy(struct kvm *kvm);
int avic_vm_init(struct kvm *kvm);
@ -826,10 +829,9 @@ void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu);
/* sev.c */
int pre_sev_run(struct vcpu_svm *svm, int cpu);
void sev_init_vmcb(struct vcpu_svm *svm);
void sev_init_vmcb(struct vcpu_svm *svm, bool init_event);
void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
void sev_es_vcpu_reset(struct vcpu_svm *svm);
void sev_es_recalc_msr_intercepts(struct kvm_vcpu *vcpu);
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa);
@ -854,6 +856,7 @@ static inline struct page *snp_safe_alloc_page(void)
return snp_safe_alloc_page_node(numa_node_id(), GFP_KERNEL_ACCOUNT);
}
int sev_vcpu_create(struct kvm_vcpu *vcpu);
void sev_free_vcpu(struct kvm_vcpu *vcpu);
void sev_vm_destroy(struct kvm *kvm);
void __init sev_set_cpu_caps(void);
@ -863,7 +866,6 @@ int sev_cpu_init(struct svm_cpu_data *sd);
int sev_dev_get_attr(u32 group, u64 attr, u64 *val);
extern unsigned int max_sev_asid;
void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code);
void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
int sev_gmem_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn, bool is_private);
@ -880,6 +882,7 @@ static inline struct page *snp_safe_alloc_page(void)
return snp_safe_alloc_page_node(numa_node_id(), GFP_KERNEL_ACCOUNT);
}
static inline int sev_vcpu_create(struct kvm_vcpu *vcpu) { return 0; }
static inline void sev_free_vcpu(struct kvm_vcpu *vcpu) {}
static inline void sev_vm_destroy(struct kvm *kvm) {}
static inline void __init sev_set_cpu_caps(void) {}
@ -889,7 +892,6 @@ static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; }
static inline int sev_dev_get_attr(u32 group, u64 attr, u64 *val) { return -ENXIO; }
#define max_sev_asid 0
static inline void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code) {}
static inline void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu) {}
static inline int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order)
{
return 0;
@ -914,16 +916,21 @@ void __svm_sev_es_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted,
void __svm_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted);
#define DEFINE_KVM_GHCB_ACCESSORS(field) \
static __always_inline bool kvm_ghcb_##field##_is_valid(const struct vcpu_svm *svm) \
{ \
return test_bit(GHCB_BITMAP_IDX(field), \
(unsigned long *)&svm->sev_es.valid_bitmap); \
} \
\
static __always_inline u64 kvm_ghcb_get_##field##_if_valid(struct vcpu_svm *svm, struct ghcb *ghcb) \
{ \
return kvm_ghcb_##field##_is_valid(svm) ? ghcb->save.field : 0; \
} \
static __always_inline u64 kvm_ghcb_get_##field(struct vcpu_svm *svm) \
{ \
return READ_ONCE(svm->sev_es.ghcb->save.field); \
} \
\
static __always_inline bool kvm_ghcb_##field##_is_valid(const struct vcpu_svm *svm) \
{ \
return test_bit(GHCB_BITMAP_IDX(field), \
(unsigned long *)&svm->sev_es.valid_bitmap); \
} \
\
static __always_inline u64 kvm_ghcb_get_##field##_if_valid(struct vcpu_svm *svm) \
{ \
return kvm_ghcb_##field##_is_valid(svm) ? kvm_ghcb_get_##field(svm) : 0; \
}
DEFINE_KVM_GHCB_ACCESSORS(cpl)
DEFINE_KVM_GHCB_ACCESSORS(rax)
@ -936,5 +943,6 @@ DEFINE_KVM_GHCB_ACCESSORS(sw_exit_info_1)
DEFINE_KVM_GHCB_ACCESSORS(sw_exit_info_2)
DEFINE_KVM_GHCB_ACCESSORS(sw_scratch)
DEFINE_KVM_GHCB_ACCESSORS(xcr0)
DEFINE_KVM_GHCB_ACCESSORS(xss)
#endif
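For readers following the accessor rework above, this is roughly what a single invocation such as DEFINE_KVM_GHCB_ACCESSORS(rax) now expands to (a mechanical expansion of the macro as written, nothing beyond it):

static __always_inline u64 kvm_ghcb_get_rax(struct vcpu_svm *svm)
{
        /* Read straight from the mapped GHCB; READ_ONCE guards against tearing. */
        return READ_ONCE(svm->sev_es.ghcb->save.rax);
}

static __always_inline bool kvm_ghcb_rax_is_valid(const struct vcpu_svm *svm)
{
        return test_bit(GHCB_BITMAP_IDX(rax),
                        (unsigned long *)&svm->sev_es.valid_bitmap);
}

static __always_inline u64 kvm_ghcb_get_rax_if_valid(struct vcpu_svm *svm)
{
        return kvm_ghcb_rax_is_valid(svm) ? kvm_ghcb_get_rax(svm) : 0;
}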


@ -15,7 +15,7 @@
#include "kvm_onhyperv.h"
#include "svm_onhyperv.h"
int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
static int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
{
struct hv_vmcb_enlightenments *hve;
hpa_t partition_assist_page = hv_get_partition_assist_page(vcpu);
@ -35,3 +35,29 @@ int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
return 0;
}
__init void svm_hv_hardware_setup(void)
{
if (npt_enabled &&
ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB) {
pr_info(KBUILD_MODNAME ": Hyper-V enlightened NPT TLB flush enabled\n");
svm_x86_ops.flush_remote_tlbs = hv_flush_remote_tlbs;
svm_x86_ops.flush_remote_tlbs_range = hv_flush_remote_tlbs_range;
}
if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH) {
int cpu;
pr_info(KBUILD_MODNAME ": Hyper-V Direct TLB Flush enabled\n");
for_each_online_cpu(cpu) {
struct hv_vp_assist_page *vp_ap =
hv_get_vp_assist_page(cpu);
if (!vp_ap)
continue;
vp_ap->nested_control.features.directhypercall = 1;
}
svm_x86_ops.enable_l2_tlb_flush =
svm_hv_enable_l2_tlb_flush;
}
}


@ -13,9 +13,7 @@
#include "kvm_onhyperv.h"
#include "svm/hyperv.h"
static struct kvm_x86_ops svm_x86_ops;
int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu);
__init void svm_hv_hardware_setup(void);
static inline bool svm_hv_is_enlightened_tlb_enabled(struct kvm_vcpu *vcpu)
{
@ -40,33 +38,6 @@ static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
hve->hv_enlightenments_control.msr_bitmap = 1;
}
static inline __init void svm_hv_hardware_setup(void)
{
if (npt_enabled &&
ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB) {
pr_info(KBUILD_MODNAME ": Hyper-V enlightened NPT TLB flush enabled\n");
svm_x86_ops.flush_remote_tlbs = hv_flush_remote_tlbs;
svm_x86_ops.flush_remote_tlbs_range = hv_flush_remote_tlbs_range;
}
if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH) {
int cpu;
pr_info(KBUILD_MODNAME ": Hyper-V Direct TLB Flush enabled\n");
for_each_online_cpu(cpu) {
struct hv_vp_assist_page *vp_ap =
hv_get_vp_assist_page(cpu);
if (!vp_ap)
continue;
vp_ap->nested_control.features.directhypercall = 1;
}
svm_x86_ops.enable_l2_tlb_flush =
svm_hv_enable_l2_tlb_flush;
}
}
static inline void svm_hv_vmcb_dirty_nested_enlightenments(
struct kvm_vcpu *vcpu)
{


@ -461,8 +461,9 @@ TRACE_EVENT(kvm_inj_virq,
#define kvm_trace_sym_exc \
EXS(DE), EXS(DB), EXS(BP), EXS(OF), EXS(BR), EXS(UD), EXS(NM), \
EXS(DF), EXS(TS), EXS(NP), EXS(SS), EXS(GP), EXS(PF), \
EXS(MF), EXS(AC), EXS(MC)
EXS(DF), EXS(TS), EXS(NP), EXS(SS), EXS(GP), EXS(PF), EXS(MF), \
EXS(AC), EXS(MC), EXS(XM), EXS(VE), EXS(CP), \
EXS(HV), EXS(VC), EXS(SX)
/*
* Tracepoint for kvm interrupt injection:


@ -20,9 +20,6 @@ extern int __read_mostly pt_mode;
#define PT_MODE_SYSTEM 0
#define PT_MODE_HOST_GUEST 1
#define PMU_CAP_FW_WRITES (1ULL << 13)
#define PMU_CAP_LBR_FMT 0x3f
struct nested_vmx_msrs {
/*
* We only store the "true" versions of the VMX capability MSRs. We
@ -76,6 +73,11 @@ static inline bool cpu_has_vmx_basic_inout(void)
return vmcs_config.basic & VMX_BASIC_INOUT;
}
static inline bool cpu_has_vmx_basic_no_hw_errcode_cc(void)
{
return vmcs_config.basic & VMX_BASIC_NO_HW_ERROR_CODE_CC;
}
static inline bool cpu_has_virtual_nmis(void)
{
return vmcs_config.pin_based_exec_ctrl & PIN_BASED_VIRTUAL_NMIS &&
@ -103,6 +105,10 @@ static inline bool cpu_has_load_perf_global_ctrl(void)
return vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
}
static inline bool cpu_has_load_cet_ctrl(void)
{
return (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_CET_STATE);
}
static inline bool cpu_has_vmx_mpx(void)
{
return vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS;


@ -188,18 +188,18 @@ static int vt_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
return vmx_get_msr(vcpu, msr_info);
}
static void vt_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
static void vt_recalc_intercepts(struct kvm_vcpu *vcpu)
{
/*
* TDX doesn't allow VMM to configure interception of MSR accesses.
* TDX guest requests MSR accesses by calling TDVMCALL. The MSR
* filters will be applied when handling the TDVMCALL for RDMSR/WRMSR
* if the userspace has set any.
* TDX doesn't allow VMM to configure interception of instructions or
* MSR accesses. TDX guest requests MSR accesses by calling TDVMCALL.
* The MSR filters will be applied when handling the TDVMCALL for
* RDMSR/WRMSR if the userspace has set any.
*/
if (is_td_vcpu(vcpu))
return;
vmx_recalc_msr_intercepts(vcpu);
vmx_recalc_intercepts(vcpu);
}
static int vt_complete_emulated_msr(struct kvm_vcpu *vcpu, int err)
@ -996,7 +996,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.apic_init_signal_blocked = vt_op(apic_init_signal_blocked),
.migrate_timers = vmx_migrate_timers,
.recalc_msr_intercepts = vt_op(recalc_msr_intercepts),
.recalc_intercepts = vt_op(recalc_intercepts),
.complete_emulated_msr = vt_op(complete_emulated_msr),
.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,


@ -721,6 +721,24 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_MPERF, MSR_TYPE_R);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_U_CET, MSR_TYPE_RW);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_S_CET, MSR_TYPE_RW);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_PL0_SSP, MSR_TYPE_RW);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_PL1_SSP, MSR_TYPE_RW);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_PL2_SSP, MSR_TYPE_RW);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_PL3_SSP, MSR_TYPE_RW);
kvm_vcpu_unmap(vcpu, &map);
vmx->nested.force_msr_bitmap_recalc = false;
@ -997,7 +1015,7 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
__func__, i, e.index, e.reserved);
goto fail;
}
if (kvm_set_msr_with_filter(vcpu, e.index, e.value)) {
if (kvm_emulate_msr_write(vcpu, e.index, e.value)) {
pr_debug_ratelimited(
"%s cannot write MSR (%u, 0x%x, 0x%llx)\n",
__func__, i, e.index, e.value);
@ -1033,7 +1051,7 @@ static bool nested_vmx_get_vmexit_msr_value(struct kvm_vcpu *vcpu,
}
}
if (kvm_get_msr_with_filter(vcpu, msr_index, data)) {
if (kvm_emulate_msr_read(vcpu, msr_index, data)) {
pr_debug_ratelimited("%s cannot read MSR (0x%x)\n", __func__,
msr_index);
return false;
@ -1272,9 +1290,10 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
{
const u64 feature_bits = VMX_BASIC_DUAL_MONITOR_TREATMENT |
VMX_BASIC_INOUT |
VMX_BASIC_TRUE_CTLS;
VMX_BASIC_TRUE_CTLS |
VMX_BASIC_NO_HW_ERROR_CODE_CC;
const u64 reserved_bits = GENMASK_ULL(63, 56) |
const u64 reserved_bits = GENMASK_ULL(63, 57) |
GENMASK_ULL(47, 45) |
BIT_ULL(31);
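The restore path above now tolerates VMX_BASIC_NO_HW_ERROR_CODE_CC and shrinks the reserved mask from bits 63:56 to 63:57, since bit 56 is the newly recognized feature bit. For orientation, the relevant IA32_VMX_BASIC bits (bit positions per the SDM; the macro values other than the name used in the patch are stated here as assumptions):

/* IA32_VMX_BASIC feature bits referenced above. */
#define VMX_BASIC_DUAL_MONITOR_TREATMENT        BIT_ULL(49)     /* dual-monitor SMM */
#define VMX_BASIC_INOUT                         BIT_ULL(54)     /* INS/OUTS info in exit qual */
#define VMX_BASIC_TRUE_CTLS                     BIT_ULL(55)     /* IA32_VMX_TRUE_*_CTLS exist */
#define VMX_BASIC_NO_HW_ERROR_CODE_CC           BIT_ULL(56)     /* VM entry may inject a HW
                                                                 * exception without an error
                                                                 * code even where one is
                                                                 * normally required */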
@ -2520,6 +2539,32 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
}
}
static void vmcs_read_cet_state(struct kvm_vcpu *vcpu, u64 *s_cet,
u64 *ssp, u64 *ssp_tbl)
{
if (guest_cpu_cap_has(vcpu, X86_FEATURE_IBT) ||
guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK))
*s_cet = vmcs_readl(GUEST_S_CET);
if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
*ssp = vmcs_readl(GUEST_SSP);
*ssp_tbl = vmcs_readl(GUEST_INTR_SSP_TABLE);
}
}
static void vmcs_write_cet_state(struct kvm_vcpu *vcpu, u64 s_cet,
u64 ssp, u64 ssp_tbl)
{
if (guest_cpu_cap_has(vcpu, X86_FEATURE_IBT) ||
guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK))
vmcs_writel(GUEST_S_CET, s_cet);
if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
vmcs_writel(GUEST_SSP, ssp);
vmcs_writel(GUEST_INTR_SSP_TABLE, ssp_tbl);
}
}
static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
{
struct hv_enlightened_vmcs *hv_evmcs = nested_vmx_evmcs(vmx);
@ -2636,6 +2681,10 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE)
vmcs_write_cet_state(&vmx->vcpu, vmcs12->guest_s_cet,
vmcs12->guest_ssp, vmcs12->guest_ssp_tbl);
set_cr4_guest_host_mask(vmx);
}
@ -2675,6 +2724,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
vmx_guest_debugctl_write(vcpu, vmx->nested.pre_vmenter_debugctl);
}
if (!vmx->nested.nested_run_pending ||
!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE))
vmcs_write_cet_state(vcpu, vmx->nested.pre_vmenter_s_cet,
vmx->nested.pre_vmenter_ssp,
vmx->nested.pre_vmenter_ssp_tbl);
if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
vmcs_write64(GUEST_BNDCFGS, vmx->nested.pre_vmenter_bndcfgs);
@ -2770,8 +2826,8 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
kvm_pmu_has_perf_global_ctrl(vcpu_to_pmu(vcpu)) &&
WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
vmcs12->guest_ia32_perf_global_ctrl))) {
WARN_ON_ONCE(__kvm_emulate_msr_write(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
vmcs12->guest_ia32_perf_global_ctrl))) {
*entry_failure_code = ENTRY_FAIL_DEFAULT;
return -EINVAL;
}
@ -2949,7 +3005,6 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
u8 vector = intr_info & INTR_INFO_VECTOR_MASK;
u32 intr_type = intr_info & INTR_INFO_INTR_TYPE_MASK;
bool has_error_code = intr_info & INTR_INFO_DELIVER_CODE_MASK;
bool should_have_error_code;
bool urg = nested_cpu_has2(vmcs12,
SECONDARY_EXEC_UNRESTRICTED_GUEST);
bool prot_mode = !urg || vmcs12->guest_cr0 & X86_CR0_PE;
@ -2966,12 +3021,19 @@ static int nested_check_vm_entry_controls(struct kvm_vcpu *vcpu,
CC(intr_type == INTR_TYPE_OTHER_EVENT && vector != 0))
return -EINVAL;
/* VM-entry interruption-info field: deliver error code */
should_have_error_code =
intr_type == INTR_TYPE_HARD_EXCEPTION && prot_mode &&
x86_exception_has_error_code(vector);
if (CC(has_error_code != should_have_error_code))
return -EINVAL;
/*
* Cannot deliver error code in real mode or if the interrupt
* type is not hardware exception. For other cases, do the
* consistency check only if the vCPU doesn't enumerate
* VMX_BASIC_NO_HW_ERROR_CODE_CC.
*/
if (!prot_mode || intr_type != INTR_TYPE_HARD_EXCEPTION) {
if (CC(has_error_code))
return -EINVAL;
} else if (!nested_cpu_has_no_hw_errcode_cc(vcpu)) {
if (CC(has_error_code != x86_exception_has_error_code(vector)))
return -EINVAL;
}
/* VM-entry exception error code */
if (CC(has_error_code &&
@ -3038,6 +3100,16 @@ static bool is_l1_noncanonical_address_on_vmexit(u64 la, struct vmcs12 *vmcs12)
return !__is_canonical_address(la, l1_address_bits_on_exit);
}
static int nested_vmx_check_cet_state_common(struct kvm_vcpu *vcpu, u64 s_cet,
u64 ssp, u64 ssp_tbl)
{
if (CC(!kvm_is_valid_u_s_cet(vcpu, s_cet)) || CC(!IS_ALIGNED(ssp, 4)) ||
CC(is_noncanonical_msr_address(ssp_tbl, vcpu)))
return -EINVAL;
return 0;
}
static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
@ -3048,6 +3120,9 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
CC(!kvm_vcpu_is_legal_cr3(vcpu, vmcs12->host_cr3)))
return -EINVAL;
if (CC(vmcs12->host_cr4 & X86_CR4_CET && !(vmcs12->host_cr0 & X86_CR0_WP)))
return -EINVAL;
if (CC(is_noncanonical_msr_address(vmcs12->host_ia32_sysenter_esp, vcpu)) ||
CC(is_noncanonical_msr_address(vmcs12->host_ia32_sysenter_eip, vcpu)))
return -EINVAL;
@ -3104,6 +3179,27 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
return -EINVAL;
}
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_CET_STATE) {
if (nested_vmx_check_cet_state_common(vcpu, vmcs12->host_s_cet,
vmcs12->host_ssp,
vmcs12->host_ssp_tbl))
return -EINVAL;
/*
* IA32_S_CET and SSP must be canonical if the host will
* enter 64-bit mode after VM-exit; otherwise, higher
* 32-bits must be all 0s.
*/
if (ia32e) {
if (CC(is_noncanonical_msr_address(vmcs12->host_s_cet, vcpu)) ||
CC(is_noncanonical_msr_address(vmcs12->host_ssp, vcpu)))
return -EINVAL;
} else {
if (CC(vmcs12->host_s_cet >> 32) || CC(vmcs12->host_ssp >> 32))
return -EINVAL;
}
}
return 0;
}
@ -3162,6 +3258,9 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
CC(!nested_guest_cr4_valid(vcpu, vmcs12->guest_cr4)))
return -EINVAL;
if (CC(vmcs12->guest_cr4 & X86_CR4_CET && !(vmcs12->guest_cr0 & X86_CR0_WP)))
return -EINVAL;
if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
(CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false))))
@ -3211,6 +3310,23 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
CC((vmcs12->guest_bndcfgs & MSR_IA32_BNDCFGS_RSVD))))
return -EINVAL;
if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE) {
if (nested_vmx_check_cet_state_common(vcpu, vmcs12->guest_s_cet,
vmcs12->guest_ssp,
vmcs12->guest_ssp_tbl))
return -EINVAL;
/*
* Guest SSP must have 63:N bits identical, rather than
* be canonical (i.e., 63:N-1 bits identical), where N is
* the CPU's maximum linear-address width. Similar to
* is_noncanonical_msr_address(), use the host's
* linear-address width.
*/
if (CC(!__is_canonical_address(vmcs12->guest_ssp, max_host_virt_addr_bits() + 1)))
return -EINVAL;
}
if (nested_check_guest_non_reg_state(vmcs12))
return -EINVAL;
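The guest-SSP comment above distinguishes "bits 63:N identical" from the usual canonical check ("bits 63:N-1 identical"), which is why the code passes max_host_virt_addr_bits() + 1. A short sketch of the arithmetic, under the assumption that __is_canonical_address() is the usual sign-extension test:

/*
 * Sketch: __is_canonical_address(la, bits) is assumed to check that bits
 * 63:bits-1 of 'la' are a sign-extension of bit bits-1, i.e. all identical.
 */
static inline bool is_canonical_sketch(u64 la, unsigned int bits)
{
        return ((s64)la << (64 - bits)) >> (64 - bits) == (s64)la;
}

/*
 * With bits = N + 1 (N = maximum linear-address width), the shift keeps bit N
 * as the "sign" bit, so the test requires bits 63:N to be identical, which is
 * exactly the architectural requirement on GUEST_SSP quoted above.
 */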
@ -3544,6 +3660,12 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
vmx->nested.pre_vmenter_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
if (!vmx->nested.nested_run_pending ||
!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE))
vmcs_read_cet_state(vcpu, &vmx->nested.pre_vmenter_s_cet,
&vmx->nested.pre_vmenter_ssp,
&vmx->nested.pre_vmenter_ssp_tbl);
/*
* Overwrite vmcs01.GUEST_CR3 with L1's CR3 if EPT is disabled *and*
* nested early checks are disabled. In the event of a "late" VM-Fail,
@ -3690,7 +3812,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
return 1;
}
kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED);
kvm_pmu_branch_retired(vcpu);
if (CC(evmptrld_status == EVMPTRLD_VMFAIL))
return nested_vmx_failInvalid(vcpu);
@ -4627,6 +4749,10 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
if (vmcs12->vm_exit_controls & VM_EXIT_SAVE_IA32_EFER)
vmcs12->guest_ia32_efer = vcpu->arch.efer;
vmcs_read_cet_state(&vmx->vcpu, &vmcs12->guest_s_cet,
&vmcs12->guest_ssp,
&vmcs12->guest_ssp_tbl);
}
/*
@ -4752,14 +4878,26 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
if (vmcs12->vm_exit_controls & VM_EXIT_CLEAR_BNDCFGS)
vmcs_write64(GUEST_BNDCFGS, 0);
/*
* Load CET state from host state if VM_EXIT_LOAD_CET_STATE is set;
* otherwise CET state should be retained across VM-exit, i.e.,
* guest values should be propagated from vmcs12 to vmcs01.
*/
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_CET_STATE)
vmcs_write_cet_state(vcpu, vmcs12->host_s_cet, vmcs12->host_ssp,
vmcs12->host_ssp_tbl);
else
vmcs_write_cet_state(vcpu, vmcs12->guest_s_cet, vmcs12->guest_ssp,
vmcs12->guest_ssp_tbl);
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_PAT) {
vmcs_write64(GUEST_IA32_PAT, vmcs12->host_ia32_pat);
vcpu->arch.pat = vmcs12->host_ia32_pat;
}
if ((vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL) &&
kvm_pmu_has_perf_global_ctrl(vcpu_to_pmu(vcpu)))
WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
vmcs12->host_ia32_perf_global_ctrl));
WARN_ON_ONCE(__kvm_emulate_msr_write(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
vmcs12->host_ia32_perf_global_ctrl));
/* Set L1 segment info according to Intel SDM
27.5.2 Loading Host Segment and Descriptor-Table Registers */
@ -4937,7 +5075,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
goto vmabort;
}
if (kvm_set_msr_with_filter(vcpu, h.index, h.value)) {
if (kvm_emulate_msr_write(vcpu, h.index, h.value)) {
pr_debug_ratelimited(
"%s WRMSR failed (%u, 0x%x, 0x%llx)\n",
__func__, j, h.index, h.value);
@ -6216,19 +6354,26 @@ static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12,
union vmx_exit_reason exit_reason)
{
u32 msr_index = kvm_rcx_read(vcpu);
u32 msr_index;
gpa_t bitmap;
if (!nested_cpu_has(vmcs12, CPU_BASED_USE_MSR_BITMAPS))
return true;
if (exit_reason.basic == EXIT_REASON_MSR_READ_IMM ||
exit_reason.basic == EXIT_REASON_MSR_WRITE_IMM)
msr_index = vmx_get_exit_qual(vcpu);
else
msr_index = kvm_rcx_read(vcpu);
/*
* The MSR_BITMAP page is divided into four 1024-byte bitmaps,
* for the four combinations of read/write and low/high MSR numbers.
* First we need to figure out which of the four to use:
*/
bitmap = vmcs12->msr_bitmap;
if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
if (exit_reason.basic == EXIT_REASON_MSR_WRITE ||
exit_reason.basic == EXIT_REASON_MSR_WRITE_IMM)
bitmap += 2048;
if (msr_index >= 0xc0000000) {
msr_index -= 0xc0000000;
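The hunk above teaches nested_vmx_exit_handled_msr() to pull the MSR index from the exit qualification for the immediate-form exits, and the comment spells out the four 1024-byte sub-bitmaps. A rough sketch of how the final bit is located (layout per the SDM; the guest-memory read is condensed into an illustrative read_guest_byte() helper):

/*
 * vmcs12->msr_bitmap layout (one 4 KiB page):
 *   bytes    0..1023  read  bitmap for MSRs 0x00000000..0x00001fff
 *   bytes 1024..2047  read  bitmap for MSRs 0xc0000000..0xc0001fff
 *   bytes 2048..3071  write bitmap for the low MSR range
 *   bytes 3072..4095  write bitmap for the high MSR range
 */
static bool l1_wants_msr_exit_sketch(gpa_t bitmap, u32 msr, bool write)
{
        u8 byte;

        if (write)
                bitmap += 2048;
        if (msr >= 0xc0000000) {
                msr -= 0xc0000000;
                bitmap += 1024;
        }

        /* One guest-memory read of the byte holding this MSR's bit. */
        byte = read_guest_byte(bitmap + msr / 8);       /* illustrative helper */
        return byte & (1 << (msr & 7));
}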
@ -6527,6 +6672,8 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu,
return nested_cpu_has2(vmcs12, SECONDARY_EXEC_DESC);
case EXIT_REASON_MSR_READ:
case EXIT_REASON_MSR_WRITE:
case EXIT_REASON_MSR_READ_IMM:
case EXIT_REASON_MSR_WRITE_IMM:
return nested_vmx_exit_handled_msr(vcpu, vmcs12, exit_reason);
case EXIT_REASON_INVALID_STATE:
return true;
@ -6561,14 +6708,17 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu,
return nested_cpu_has2(vmcs12, SECONDARY_EXEC_WBINVD_EXITING);
case EXIT_REASON_XSETBV:
return true;
case EXIT_REASON_XSAVES: case EXIT_REASON_XRSTORS:
case EXIT_REASON_XSAVES:
case EXIT_REASON_XRSTORS:
/*
* This should never happen, since it is not possible to
* set XSS to a non-zero value---neither in L1 nor in L2.
* If if it were, XSS would have to be checked against
* the XSS exit bitmap in vmcs12.
* Always forward XSAVES/XRSTORS to L1 as KVM doesn't utilize
* XSS-bitmap, and always loads vmcs02 with vmcs12's XSS-bitmap
* verbatim, i.e. any exit is due to L1's bitmap. WARN if
* XSAVES isn't enabled, as the CPU is supposed to inject #UD
* in that case, before consulting the XSS-bitmap.
*/
return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_XSAVES);
WARN_ON_ONCE(!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_XSAVES));
return true;
case EXIT_REASON_UMWAIT:
case EXIT_REASON_TPAUSE:
return nested_cpu_has2(vmcs12,
@ -7029,13 +7179,17 @@ static void nested_vmx_setup_exit_ctls(struct vmcs_config *vmcs_conf,
VM_EXIT_HOST_ADDR_SPACE_SIZE |
#endif
VM_EXIT_LOAD_IA32_PAT | VM_EXIT_SAVE_IA32_PAT |
VM_EXIT_CLEAR_BNDCFGS;
VM_EXIT_CLEAR_BNDCFGS | VM_EXIT_LOAD_CET_STATE;
msrs->exit_ctls_high |=
VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR |
VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER |
VM_EXIT_SAVE_VMX_PREEMPTION_TIMER | VM_EXIT_ACK_INTR_ON_EXIT |
VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
!kvm_cpu_cap_has(X86_FEATURE_IBT))
msrs->exit_ctls_high &= ~VM_EXIT_LOAD_CET_STATE;
/* We support free control of debug control saving. */
msrs->exit_ctls_low &= ~VM_EXIT_SAVE_DEBUG_CONTROLS;
}
@ -7051,11 +7205,16 @@ static void nested_vmx_setup_entry_ctls(struct vmcs_config *vmcs_conf,
#ifdef CONFIG_X86_64
VM_ENTRY_IA32E_MODE |
#endif
VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_BNDCFGS;
VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_BNDCFGS |
VM_ENTRY_LOAD_CET_STATE;
msrs->entry_ctls_high |=
(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | VM_ENTRY_LOAD_IA32_EFER |
VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL);
if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
!kvm_cpu_cap_has(X86_FEATURE_IBT))
msrs->entry_ctls_high &= ~VM_ENTRY_LOAD_CET_STATE;
/* We support free control of debug control loading. */
msrs->entry_ctls_low &= ~VM_ENTRY_LOAD_DEBUG_CONTROLS;
}
@ -7205,6 +7364,8 @@ static void nested_vmx_setup_basic(struct nested_vmx_msrs *msrs)
msrs->basic |= VMX_BASIC_TRUE_CTLS;
if (cpu_has_vmx_basic_inout())
msrs->basic |= VMX_BASIC_INOUT;
if (cpu_has_vmx_basic_no_hw_errcode_cc())
msrs->basic |= VMX_BASIC_NO_HW_ERROR_CODE_CC;
}
static void nested_vmx_setup_cr_fixed(struct nested_vmx_msrs *msrs)


@ -309,6 +309,11 @@ static inline bool nested_cr4_valid(struct kvm_vcpu *vcpu, unsigned long val)
__kvm_is_valid_cr4(vcpu, val);
}
static inline bool nested_cpu_has_no_hw_errcode_cc(struct kvm_vcpu *vcpu)
{
return to_vmx(vcpu)->nested.msrs.basic & VMX_BASIC_NO_HW_ERROR_CODE_CC;
}
/* No difference in the restrictions on guest and host CR4 in VMX operation. */
#define nested_guest_cr4_valid nested_cr4_valid
#define nested_host_cr4_valid nested_cr4_valid


@ -138,7 +138,7 @@ static inline u64 vcpu_get_perf_capabilities(struct kvm_vcpu *vcpu)
static inline bool fw_writes_is_enabled(struct kvm_vcpu *vcpu)
{
return (vcpu_get_perf_capabilities(vcpu) & PMU_CAP_FW_WRITES) != 0;
return (vcpu_get_perf_capabilities(vcpu) & PERF_CAP_FW_WRITES) != 0;
}
static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
@ -478,8 +478,8 @@ static __always_inline u64 intel_get_fixed_pmc_eventsel(unsigned int index)
};
u64 eventsel;
BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_perf_ids) != KVM_MAX_NR_INTEL_FIXED_COUTNERS);
BUILD_BUG_ON(index >= KVM_MAX_NR_INTEL_FIXED_COUTNERS);
BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_perf_ids) != KVM_MAX_NR_INTEL_FIXED_COUNTERS);
BUILD_BUG_ON(index >= KVM_MAX_NR_INTEL_FIXED_COUNTERS);
/*
* Yell if perf reports support for a fixed counter but perf doesn't
@ -536,29 +536,44 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
kvm_pmu_cap.num_counters_gp);
eax.split.bit_width = min_t(int, eax.split.bit_width,
kvm_pmu_cap.bit_width_gp);
pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1;
pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(eax.split.bit_width) - 1;
eax.split.mask_length = min_t(int, eax.split.mask_length,
kvm_pmu_cap.events_mask_len);
pmu->available_event_types = ~entry->ebx &
((1ull << eax.split.mask_length) - 1);
pmu->available_event_types = ~entry->ebx & (BIT_ULL(eax.split.mask_length) - 1);
if (pmu->version == 1) {
pmu->nr_arch_fixed_counters = 0;
} else {
pmu->nr_arch_fixed_counters = min_t(int, edx.split.num_counters_fixed,
kvm_pmu_cap.num_counters_fixed);
edx.split.bit_width_fixed = min_t(int, edx.split.bit_width_fixed,
kvm_pmu_cap.bit_width_fixed);
pmu->counter_bitmask[KVM_PMC_FIXED] =
((u64)1 << edx.split.bit_width_fixed) - 1;
entry = kvm_find_cpuid_entry_index(vcpu, 7, 0);
if (entry &&
(boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
(entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM))) {
pmu->reserved_bits ^= HSW_IN_TX;
pmu->raw_event_mask |= (HSW_IN_TX|HSW_IN_TX_CHECKPOINTED);
}
perf_capabilities = vcpu_get_perf_capabilities(vcpu);
if (intel_pmu_lbr_is_compatible(vcpu) &&
(perf_capabilities & PERF_CAP_LBR_FMT))
memcpy(&lbr_desc->records, &vmx_lbr_caps, sizeof(vmx_lbr_caps));
else
lbr_desc->records.nr = 0;
if (lbr_desc->records.nr)
bitmap_set(pmu->all_valid_pmc_idx, INTEL_PMC_IDX_FIXED_VLBR, 1);
if (pmu->version == 1)
return;
pmu->nr_arch_fixed_counters = min_t(int, edx.split.num_counters_fixed,
kvm_pmu_cap.num_counters_fixed);
edx.split.bit_width_fixed = min_t(int, edx.split.bit_width_fixed,
kvm_pmu_cap.bit_width_fixed);
pmu->counter_bitmask[KVM_PMC_FIXED] = BIT_ULL(edx.split.bit_width_fixed) - 1;
intel_pmu_enable_fixed_counter_bits(pmu, INTEL_FIXED_0_KERNEL |
INTEL_FIXED_0_USER |
INTEL_FIXED_0_ENABLE_PMI);
counter_rsvd = ~(((1ull << pmu->nr_arch_gp_counters) - 1) |
(((1ull << pmu->nr_arch_fixed_counters) - 1) << KVM_FIXED_PMC_BASE_IDX));
counter_rsvd = ~((BIT_ULL(pmu->nr_arch_gp_counters) - 1) |
((BIT_ULL(pmu->nr_arch_fixed_counters) - 1) << KVM_FIXED_PMC_BASE_IDX));
pmu->global_ctrl_rsvd = counter_rsvd;
/*
@ -573,29 +588,6 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
pmu->global_status_rsvd &=
~MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI;
entry = kvm_find_cpuid_entry_index(vcpu, 7, 0);
if (entry &&
(boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
(entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM))) {
pmu->reserved_bits ^= HSW_IN_TX;
pmu->raw_event_mask |= (HSW_IN_TX|HSW_IN_TX_CHECKPOINTED);
}
bitmap_set(pmu->all_valid_pmc_idx,
0, pmu->nr_arch_gp_counters);
bitmap_set(pmu->all_valid_pmc_idx,
INTEL_PMC_MAX_GENERIC, pmu->nr_arch_fixed_counters);
perf_capabilities = vcpu_get_perf_capabilities(vcpu);
if (intel_pmu_lbr_is_compatible(vcpu) &&
(perf_capabilities & PMU_CAP_LBR_FMT))
memcpy(&lbr_desc->records, &vmx_lbr_caps, sizeof(vmx_lbr_caps));
else
lbr_desc->records.nr = 0;
if (lbr_desc->records.nr)
bitmap_set(pmu->all_valid_pmc_idx, INTEL_PMC_IDX_FIXED_VLBR, 1);
if (perf_capabilities & PERF_CAP_PEBS_FORMAT) {
if (perf_capabilities & PERF_CAP_PEBS_BASELINE) {
pmu->pebs_enable_rsvd = counter_rsvd;
@ -603,8 +595,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
pmu->pebs_data_cfg_rsvd = ~0xff00000full;
intel_pmu_enable_fixed_counter_bits(pmu, ICL_FIXED_0_ADAPTIVE);
} else {
pmu->pebs_enable_rsvd =
~((1ull << pmu->nr_arch_gp_counters) - 1);
pmu->pebs_enable_rsvd = ~(BIT_ULL(pmu->nr_arch_gp_counters) - 1);
}
}
}
@ -625,7 +616,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
pmu->gp_counters[i].current_config = 0;
}
for (i = 0; i < KVM_MAX_NR_INTEL_FIXED_COUTNERS; i++) {
for (i = 0; i < KVM_MAX_NR_INTEL_FIXED_COUNTERS; i++) {
pmu->fixed_counters[i].type = KVM_PMC_FIXED;
pmu->fixed_counters[i].vcpu = vcpu;
pmu->fixed_counters[i].idx = i + KVM_FIXED_PMC_BASE_IDX;
@ -762,7 +753,7 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
int bit, hw_idx;
kvm_for_each_pmc(pmu, pmc, bit, (unsigned long *)&pmu->global_ctrl) {
if (!pmc_speculative_in_use(pmc) ||
if (!pmc_is_locally_enabled(pmc) ||
!pmc_is_globally_enabled(pmc) || !pmc->perf_event)
continue;


@ -620,6 +620,11 @@ int tdx_vm_init(struct kvm *kvm)
struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
kvm->arch.has_protected_state = true;
/*
* TDX Module doesn't allow the hypervisor to modify the EOI-bitmap,
* i.e. all EOIs are accelerated and never trigger exits.
*/
kvm->arch.has_protected_eoi = true;
kvm->arch.has_private_mem = true;
kvm->arch.disabled_quirks |= KVM_X86_QUIRK_IGNORE_GUEST_PAT;
@ -1994,6 +1999,8 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
* handle retries locally in their EPT violation handlers.
*/
while (1) {
struct kvm_memory_slot *slot;
ret = __vmx_handle_ept_violation(vcpu, gpa, exit_qual);
if (ret != RET_PF_RETRY || !local_retry)
@ -2007,6 +2014,15 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
break;
}
/*
* Bail if the memslot is invalid, i.e. is being deleted, as
* faulting in will never succeed and this task needs to drop
* SRCU in order to let memslot deletion complete.
*/
slot = kvm_vcpu_gfn_to_memslot(vcpu, gpa_to_gfn(gpa));
if (slot && slot->flags & KVM_MEMSLOT_INVALID)
break;
cond_resched();
}
return ret;
@ -2472,7 +2488,7 @@ static int __tdx_td_init(struct kvm *kvm, struct td_params *td_params,
/* TDVPS = TDVPR(4K page) + TDCX(multiple 4K pages), -1 for TDVPR. */
kvm_tdx->td.tdcx_nr_pages = tdx_sysinfo->td_ctrl.tdvps_base_size / PAGE_SIZE - 1;
tdcs_pages = kcalloc(kvm_tdx->td.tdcs_nr_pages, sizeof(*kvm_tdx->td.tdcs_pages),
GFP_KERNEL | __GFP_ZERO);
GFP_KERNEL);
if (!tdcs_pages)
goto free_tdr;
@ -3460,12 +3476,11 @@ static int __init __tdx_bringup(void)
if (r)
goto tdx_bringup_err;
r = -EINVAL;
/* Get TDX global information for later use */
tdx_sysinfo = tdx_get_sysinfo();
if (WARN_ON_ONCE(!tdx_sysinfo)) {
r = -EINVAL;
if (WARN_ON_ONCE(!tdx_sysinfo))
goto get_sysinfo_err;
}
/* Check TDX module and KVM capabilities */
if (!tdx_get_supported_attrs(&tdx_sysinfo->td_conf) ||
@ -3508,14 +3523,11 @@ static int __init __tdx_bringup(void)
if (td_conf->max_vcpus_per_td < num_present_cpus()) {
pr_err("Disable TDX: MAX_VCPU_PER_TD (%u) smaller than number of logical CPUs (%u).\n",
td_conf->max_vcpus_per_td, num_present_cpus());
r = -EINVAL;
goto get_sysinfo_err;
}
if (misc_cg_set_capacity(MISC_CG_RES_TDX, tdx_get_nr_guest_keyids())) {
r = -EINVAL;
if (misc_cg_set_capacity(MISC_CG_RES_TDX, tdx_get_nr_guest_keyids()))
goto get_sysinfo_err;
}
/*
* Leave hardware virtualization enabled after TDX is enabled


@ -139,6 +139,9 @@ const unsigned short vmcs12_field_offsets[] = {
FIELD(GUEST_PENDING_DBG_EXCEPTIONS, guest_pending_dbg_exceptions),
FIELD(GUEST_SYSENTER_ESP, guest_sysenter_esp),
FIELD(GUEST_SYSENTER_EIP, guest_sysenter_eip),
FIELD(GUEST_S_CET, guest_s_cet),
FIELD(GUEST_SSP, guest_ssp),
FIELD(GUEST_INTR_SSP_TABLE, guest_ssp_tbl),
FIELD(HOST_CR0, host_cr0),
FIELD(HOST_CR3, host_cr3),
FIELD(HOST_CR4, host_cr4),
@ -151,5 +154,8 @@ const unsigned short vmcs12_field_offsets[] = {
FIELD(HOST_IA32_SYSENTER_EIP, host_ia32_sysenter_eip),
FIELD(HOST_RSP, host_rsp),
FIELD(HOST_RIP, host_rip),
FIELD(HOST_S_CET, host_s_cet),
FIELD(HOST_SSP, host_ssp),
FIELD(HOST_INTR_SSP_TABLE, host_ssp_tbl),
};
const unsigned int nr_vmcs12_fields = ARRAY_SIZE(vmcs12_field_offsets);


@ -117,7 +117,13 @@ struct __packed vmcs12 {
natural_width host_ia32_sysenter_eip;
natural_width host_rsp;
natural_width host_rip;
natural_width paddingl[8]; /* room for future expansion */
natural_width host_s_cet;
natural_width host_ssp;
natural_width host_ssp_tbl;
natural_width guest_s_cet;
natural_width guest_ssp;
natural_width guest_ssp_tbl;
natural_width paddingl[2]; /* room for future expansion */
u32 pin_based_vm_exec_control;
u32 cpu_based_vm_exec_control;
u32 exception_bitmap;
@ -294,6 +300,12 @@ static inline void vmx_check_vmcs12_offsets(void)
CHECK_OFFSET(host_ia32_sysenter_eip, 656);
CHECK_OFFSET(host_rsp, 664);
CHECK_OFFSET(host_rip, 672);
CHECK_OFFSET(host_s_cet, 680);
CHECK_OFFSET(host_ssp, 688);
CHECK_OFFSET(host_ssp_tbl, 696);
CHECK_OFFSET(guest_s_cet, 704);
CHECK_OFFSET(guest_ssp, 712);
CHECK_OFFSET(guest_ssp_tbl, 720);
CHECK_OFFSET(pin_based_vm_exec_control, 744);
CHECK_OFFSET(cpu_based_vm_exec_control, 748);
CHECK_OFFSET(exception_bitmap, 752);


@ -1344,22 +1344,35 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
}
#ifdef CONFIG_X86_64
static u64 vmx_read_guest_kernel_gs_base(struct vcpu_vmx *vmx)
static u64 vmx_read_guest_host_msr(struct vcpu_vmx *vmx, u32 msr, u64 *cache)
{
preempt_disable();
if (vmx->vt.guest_state_loaded)
rdmsrq(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
*cache = read_msr(msr);
preempt_enable();
return vmx->msr_guest_kernel_gs_base;
return *cache;
}
static void vmx_write_guest_host_msr(struct vcpu_vmx *vmx, u32 msr, u64 data,
u64 *cache)
{
preempt_disable();
if (vmx->vt.guest_state_loaded)
wrmsrns(msr, data);
preempt_enable();
*cache = data;
}
static u64 vmx_read_guest_kernel_gs_base(struct vcpu_vmx *vmx)
{
return vmx_read_guest_host_msr(vmx, MSR_KERNEL_GS_BASE,
&vmx->msr_guest_kernel_gs_base);
}
static void vmx_write_guest_kernel_gs_base(struct vcpu_vmx *vmx, u64 data)
{
preempt_disable();
if (vmx->vt.guest_state_loaded)
wrmsrq(MSR_KERNEL_GS_BASE, data);
preempt_enable();
vmx->msr_guest_kernel_gs_base = data;
vmx_write_guest_host_msr(vmx, MSR_KERNEL_GS_BASE, data,
&vmx->msr_guest_kernel_gs_base);
}
#endif
@ -2093,6 +2106,15 @@ int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
else
msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
break;
case MSR_IA32_S_CET:
msr_info->data = vmcs_readl(GUEST_S_CET);
break;
case MSR_KVM_INTERNAL_GUEST_SSP:
msr_info->data = vmcs_readl(GUEST_SSP);
break;
case MSR_IA32_INT_SSP_TAB:
msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
break;
case MSR_IA32_DEBUGCTLMSR:
msr_info->data = vmx_guest_debugctl_read();
break;
@ -2127,7 +2149,7 @@ u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
(host_initiated || guest_cpu_cap_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)))
debugctl |= DEBUGCTLMSR_BUS_LOCK_DETECT;
if ((kvm_caps.supported_perf_cap & PMU_CAP_LBR_FMT) &&
if ((kvm_caps.supported_perf_cap & PERF_CAP_LBR_FMT) &&
(host_initiated || intel_pmu_lbr_is_enabled(vcpu)))
debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
@ -2411,10 +2433,19 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
else
vmx->pt_desc.guest.addr_a[index / 2] = data;
break;
case MSR_IA32_S_CET:
vmcs_writel(GUEST_S_CET, data);
break;
case MSR_KVM_INTERNAL_GUEST_SSP:
vmcs_writel(GUEST_SSP, data);
break;
case MSR_IA32_INT_SSP_TAB:
vmcs_writel(GUEST_INTR_SSP_TABLE, data);
break;
case MSR_IA32_PERF_CAPABILITIES:
if (data & PMU_CAP_LBR_FMT) {
if ((data & PMU_CAP_LBR_FMT) !=
(kvm_caps.supported_perf_cap & PMU_CAP_LBR_FMT))
if (data & PERF_CAP_LBR_FMT) {
if ((data & PERF_CAP_LBR_FMT) !=
(kvm_caps.supported_perf_cap & PERF_CAP_LBR_FMT))
return 1;
if (!cpuid_model_is_consistent(vcpu))
return 1;
@ -2584,6 +2615,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
{ VM_ENTRY_LOAD_IA32_EFER, VM_EXIT_LOAD_IA32_EFER },
{ VM_ENTRY_LOAD_BNDCFGS, VM_EXIT_CLEAR_BNDCFGS },
{ VM_ENTRY_LOAD_IA32_RTIT_CTL, VM_EXIT_CLEAR_IA32_RTIT_CTL },
{ VM_ENTRY_LOAD_CET_STATE, VM_EXIT_LOAD_CET_STATE },
};
memset(vmcs_conf, 0, sizeof(*vmcs_conf));
@ -4068,8 +4100,10 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
}
}
void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
static void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
{
bool intercept;
if (!cpu_has_vmx_msr_bitmap())
return;
@ -4115,12 +4149,34 @@ void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
vmx_set_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W,
!guest_cpu_cap_has(vcpu, X86_FEATURE_FLUSH_L1D));
if (kvm_cpu_cap_has(X86_FEATURE_SHSTK)) {
intercept = !guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK);
vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL0_SSP, MSR_TYPE_RW, intercept);
vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL1_SSP, MSR_TYPE_RW, intercept);
vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL2_SSP, MSR_TYPE_RW, intercept);
vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL3_SSP, MSR_TYPE_RW, intercept);
}
if (kvm_cpu_cap_has(X86_FEATURE_SHSTK) || kvm_cpu_cap_has(X86_FEATURE_IBT)) {
intercept = !guest_cpu_cap_has(vcpu, X86_FEATURE_IBT) &&
!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK);
vmx_set_intercept_for_msr(vcpu, MSR_IA32_U_CET, MSR_TYPE_RW, intercept);
vmx_set_intercept_for_msr(vcpu, MSR_IA32_S_CET, MSR_TYPE_RW, intercept);
}
/*
* x2APIC and LBR MSR intercepts are modified on-demand and cannot be
* filtered by userspace.
*/
}
void vmx_recalc_intercepts(struct kvm_vcpu *vcpu)
{
vmx_recalc_msr_intercepts(vcpu);
}
static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
int vector)
{
@ -4270,6 +4326,21 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
if (cpu_has_load_ia32_efer())
vmcs_write64(HOST_IA32_EFER, kvm_host.efer);
/*
* Supervisor shadow stack is not enabled on the host side, i.e. the
* host's IA32_S_CET.SHSTK_EN bit is guaranteed to be 0 here. Per the
* SDM's description of RDSSP, SSP is not readable in CPL0, so
* resetting the two registers to 0 at VM-Exit does no harm to kernel
* execution. When execution returns to userspace, SSP is reloaded
* from IA32_PL3_SSP. See SDM Vol.2A/B Chapters 3 and 4 for details.
*/
if (cpu_has_load_cet_ctrl()) {
vmcs_writel(HOST_S_CET, kvm_host.s_cet);
vmcs_writel(HOST_SSP, 0);
vmcs_writel(HOST_INTR_SSP_TABLE, 0);
}
}
void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
@ -4304,7 +4375,7 @@ static u32 vmx_pin_based_exec_ctrl(struct vcpu_vmx *vmx)
return pin_based_exec_ctrl;
}
static u32 vmx_vmentry_ctrl(void)
static u32 vmx_get_initial_vmentry_ctrl(void)
{
u32 vmentry_ctrl = vmcs_config.vmentry_ctrl;
@ -4321,7 +4392,7 @@ static u32 vmx_vmentry_ctrl(void)
return vmentry_ctrl;
}
static u32 vmx_vmexit_ctrl(void)
static u32 vmx_get_initial_vmexit_ctrl(void)
{
u32 vmexit_ctrl = vmcs_config.vmexit_ctrl;
@ -4351,19 +4422,13 @@ void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
pin_controls_set(vmx, vmx_pin_based_exec_ctrl(vmx));
if (kvm_vcpu_apicv_active(vcpu)) {
secondary_exec_controls_setbit(vmx,
SECONDARY_EXEC_APIC_REGISTER_VIRT |
SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
if (enable_ipiv)
tertiary_exec_controls_setbit(vmx, TERTIARY_EXEC_IPI_VIRT);
} else {
secondary_exec_controls_clearbit(vmx,
SECONDARY_EXEC_APIC_REGISTER_VIRT |
SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
if (enable_ipiv)
tertiary_exec_controls_clearbit(vmx, TERTIARY_EXEC_IPI_VIRT);
}
secondary_exec_controls_changebit(vmx,
SECONDARY_EXEC_APIC_REGISTER_VIRT |
SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY,
kvm_vcpu_apicv_active(vcpu));
if (enable_ipiv)
tertiary_exec_controls_changebit(vmx, TERTIARY_EXEC_IPI_VIRT,
kvm_vcpu_apicv_active(vcpu));
vmx_update_msr_bitmap_x2apic(vcpu);
}
@ -4686,10 +4751,10 @@ static void init_vmcs(struct vcpu_vmx *vmx)
if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
vm_exit_controls_set(vmx, vmx_vmexit_ctrl());
vm_exit_controls_set(vmx, vmx_get_initial_vmexit_ctrl());
/* 22.2.1, 20.8.1 */
vm_entry_controls_set(vmx, vmx_vmentry_ctrl());
vm_entry_controls_set(vmx, vmx_get_initial_vmentry_ctrl());
vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);
@ -4817,6 +4882,14 @@ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0); /* 22.2.1 */
if (kvm_cpu_cap_has(X86_FEATURE_SHSTK)) {
vmcs_writel(GUEST_SSP, 0);
vmcs_writel(GUEST_INTR_SSP_TABLE, 0);
}
if (kvm_cpu_cap_has(X86_FEATURE_IBT) ||
kvm_cpu_cap_has(X86_FEATURE_SHSTK))
vmcs_writel(GUEST_S_CET, 0);
kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
vpid_sync_context(vmx->vpid);
@ -6010,6 +6083,23 @@ static int handle_notify(struct kvm_vcpu *vcpu)
return 1;
}
static int vmx_get_msr_imm_reg(struct kvm_vcpu *vcpu)
{
return vmx_get_instr_info_reg(vmcs_read32(VMX_INSTRUCTION_INFO));
}
static int handle_rdmsr_imm(struct kvm_vcpu *vcpu)
{
return kvm_emulate_rdmsr_imm(vcpu, vmx_get_exit_qual(vcpu),
vmx_get_msr_imm_reg(vcpu));
}
static int handle_wrmsr_imm(struct kvm_vcpu *vcpu)
{
return kvm_emulate_wrmsr_imm(vcpu, vmx_get_exit_qual(vcpu),
vmx_get_msr_imm_reg(vcpu));
}
/*
* The exit handlers return 1 if the exit was handled fully and guest execution
* may resume. Otherwise they set the kvm_run parameter to indicate what needs
@ -6068,6 +6158,8 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
[EXIT_REASON_ENCLS] = handle_encls,
[EXIT_REASON_BUS_LOCK] = handle_bus_lock_vmexit,
[EXIT_REASON_NOTIFY] = handle_notify,
[EXIT_REASON_MSR_READ_IMM] = handle_rdmsr_imm,
[EXIT_REASON_MSR_WRITE_IMM] = handle_wrmsr_imm,
};
static const int kvm_vmx_max_exit_handlers =
@ -6272,6 +6364,10 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
if (vmcs_read32(VM_EXIT_MSR_STORE_COUNT) > 0)
vmx_dump_msrs("guest autostore", &vmx->msr_autostore.guest);
if (vmentry_ctl & VM_ENTRY_LOAD_CET_STATE)
pr_err("S_CET = 0x%016lx, SSP = 0x%016lx, SSP TABLE = 0x%016lx\n",
vmcs_readl(GUEST_S_CET), vmcs_readl(GUEST_SSP),
vmcs_readl(GUEST_INTR_SSP_TABLE));
pr_err("*** Host State ***\n");
pr_err("RIP = 0x%016lx RSP = 0x%016lx\n",
vmcs_readl(HOST_RIP), vmcs_readl(HOST_RSP));
@ -6302,6 +6398,10 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
vmcs_read64(HOST_IA32_PERF_GLOBAL_CTRL));
if (vmcs_read32(VM_EXIT_MSR_LOAD_COUNT) > 0)
vmx_dump_msrs("host autoload", &vmx->msr_autoload.host);
if (vmexit_ctl & VM_EXIT_LOAD_CET_STATE)
pr_err("S_CET = 0x%016lx, SSP = 0x%016lx, SSP TABLE = 0x%016lx\n",
vmcs_readl(HOST_S_CET), vmcs_readl(HOST_SSP),
vmcs_readl(HOST_INTR_SSP_TABLE));
pr_err("*** Control State ***\n");
pr_err("CPUBased=0x%08x SecondaryExec=0x%08x TertiaryExec=0x%016llx\n",
@ -6502,6 +6602,8 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
#ifdef CONFIG_MITIGATION_RETPOLINE
if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
return kvm_emulate_wrmsr(vcpu);
else if (exit_reason.basic == EXIT_REASON_MSR_WRITE_IMM)
return handle_wrmsr_imm(vcpu);
else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER)
return handle_preemption_timer(vcpu);
else if (exit_reason.basic == EXIT_REASON_INTERRUPT_WINDOW)
@ -7177,11 +7279,16 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu,
switch (vmx_get_exit_reason(vcpu).basic) {
case EXIT_REASON_MSR_WRITE:
return handle_fastpath_set_msr_irqoff(vcpu);
return handle_fastpath_wrmsr(vcpu);
case EXIT_REASON_MSR_WRITE_IMM:
return handle_fastpath_wrmsr_imm(vcpu, vmx_get_exit_qual(vcpu),
vmx_get_msr_imm_reg(vcpu));
case EXIT_REASON_PREEMPTION_TIMER:
return handle_fastpath_preemption_timer(vcpu, force_immediate_exit);
case EXIT_REASON_HLT:
return handle_fastpath_hlt(vcpu);
case EXIT_REASON_INVD:
return handle_fastpath_invd(vcpu);
default:
return EXIT_FASTPATH_NONE;
}
@ -7648,6 +7755,8 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
cr4_fixed1_update(X86_CR4_PKE, ecx, feature_bit(PKU));
cr4_fixed1_update(X86_CR4_UMIP, ecx, feature_bit(UMIP));
cr4_fixed1_update(X86_CR4_LA57, ecx, feature_bit(LA57));
cr4_fixed1_update(X86_CR4_CET, ecx, feature_bit(SHSTK));
cr4_fixed1_update(X86_CR4_CET, edx, feature_bit(IBT));
entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
cr4_fixed1_update(X86_CR4_LAM_SUP, eax, feature_bit(LAM));
@ -7782,16 +7891,13 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
vmx->msr_ia32_feature_control_valid_bits &=
~FEAT_CTL_SGX_LC_ENABLED;
/* Recalc MSR interception to account for feature changes. */
vmx_recalc_msr_intercepts(vcpu);
/* Refresh #PF interception to account for MAXPHYADDR changes. */
vmx_update_exception_bitmap(vcpu);
}
static __init u64 vmx_get_perf_capabilities(void)
{
u64 perf_cap = PMU_CAP_FW_WRITES;
u64 perf_cap = PERF_CAP_FW_WRITES;
u64 host_perf_cap = 0;
if (!enable_pmu)
@ -7811,7 +7917,7 @@ static __init u64 vmx_get_perf_capabilities(void)
if (!vmx_lbr_caps.has_callstack)
memset(&vmx_lbr_caps, 0, sizeof(vmx_lbr_caps));
else if (vmx_lbr_caps.nr)
perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
perf_cap |= host_perf_cap & PERF_CAP_LBR_FMT;
}
if (vmx_pebs_supported()) {
@ -7879,7 +7985,6 @@ static __init void vmx_set_cpu_caps(void)
kvm_cpu_cap_set(X86_FEATURE_UMIP);
/* CPUID 0xD.1 */
kvm_caps.supported_xss = 0;
if (!cpu_has_vmx_xsaves())
kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
@ -7891,6 +7996,18 @@ static __init void vmx_set_cpu_caps(void)
if (cpu_has_vmx_waitpkg())
kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
/*
* Disable CET if unrestricted_guest is unsupported, as KVM doesn't
* enforce CET HW behaviors in the emulator. Also disable CET on
* platforms with VMX_BASIC[bit56] == 0, where injecting #CP with an
* error code at VM-Entry fails.
*/
if (!cpu_has_load_cet_ctrl() || !enable_unrestricted_guest ||
!cpu_has_vmx_basic_no_hw_errcode_cc()) {
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
}
}
static bool vmx_is_io_intercepted(struct kvm_vcpu *vcpu,
@ -8340,8 +8457,6 @@ __init int vmx_hardware_setup(void)
vmx_setup_user_return_msrs();
if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0)
return -EIO;
if (boot_cpu_has(X86_FEATURE_NX))
kvm_enable_efer_bits(EFER_NX);
@ -8371,6 +8486,14 @@ __init int vmx_hardware_setup(void)
return -EOPNOTSUPP;
}
/*
* Shadow paging doesn't have a (further) performance penalty
* from GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable it
* by default
*/
if (!enable_ept)
allow_smaller_maxphyaddr = true;
if (!cpu_has_vmx_ept_ad_bits() || !enable_ept)
enable_ept_ad_bits = 0;
@ -8496,6 +8619,13 @@ __init int vmx_hardware_setup(void)
setup_default_sgx_lepubkeyhash();
vmx_set_cpu_caps();
/*
* Configure nested capabilities after core CPU capabilities so that
* nested support can be conditional on base support, e.g. so that KVM
* can hide/show features based on kvm_cpu_cap_has().
*/
if (nested) {
nested_vmx_setup_ctls_msrs(&vmcs_config, vmx_capability.ept);
@ -8504,8 +8634,6 @@ __init int vmx_hardware_setup(void)
return r;
}
vmx_set_cpu_caps();
r = alloc_kvm_area();
if (r && nested)
nested_vmx_hardware_unsetup();
@ -8532,7 +8660,9 @@ __init int vmx_hardware_setup(void)
*/
if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
kvm_caps.inapplicable_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
kvm_caps.inapplicable_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
return r;
}
@ -8565,11 +8695,18 @@ int __init vmx_init(void)
return -EOPNOTSUPP;
/*
* Note, hv_init_evmcs() touches only VMX knobs, i.e. there's nothing
* to unwind if a later step fails.
* Note, VMCS and eVMCS configuration only touch VMX knobs/variables,
* i.e. there's nothing to unwind if a later step fails.
*/
hv_init_evmcs();
/*
* Parse the VMCS config and VMX capabilities before anything else, so
* that the information is available to all setup flows.
*/
if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0)
return -EIO;
r = kvm_x86_vendor_init(&vt_init_ops);
if (r)
return r;
@ -8593,14 +8730,6 @@ int __init vmx_init(void)
vmx_check_vmcs12_offsets();
/*
* Shadow paging doesn't have a (further) performance penalty
* from GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable it
* by default
*/
if (!enable_ept)
allow_smaller_maxphyaddr = true;
return 0;
err_l1d_flush:


@ -181,6 +181,9 @@ struct nested_vmx {
*/
u64 pre_vmenter_debugctl;
u64 pre_vmenter_bndcfgs;
u64 pre_vmenter_s_cet;
u64 pre_vmenter_ssp;
u64 pre_vmenter_ssp_tbl;
/* to migrate it to L1 if L2 writes to L1's CR8 directly */
int l1_tpr_threshold;
@ -484,7 +487,8 @@ static inline u8 vmx_get_rvi(void)
VM_ENTRY_LOAD_IA32_EFER | \
VM_ENTRY_LOAD_BNDCFGS | \
VM_ENTRY_PT_CONCEAL_PIP | \
VM_ENTRY_LOAD_IA32_RTIT_CTL)
VM_ENTRY_LOAD_IA32_RTIT_CTL | \
VM_ENTRY_LOAD_CET_STATE)
#define __KVM_REQUIRED_VMX_VM_EXIT_CONTROLS \
(VM_EXIT_SAVE_DEBUG_CONTROLS | \
@ -506,7 +510,8 @@ static inline u8 vmx_get_rvi(void)
VM_EXIT_LOAD_IA32_EFER | \
VM_EXIT_CLEAR_BNDCFGS | \
VM_EXIT_PT_CONCEAL_PIP | \
VM_EXIT_CLEAR_IA32_RTIT_CTL)
VM_EXIT_CLEAR_IA32_RTIT_CTL | \
VM_EXIT_LOAD_CET_STATE)
#define KVM_REQUIRED_VMX_PIN_BASED_VM_EXEC_CONTROL \
(PIN_BASED_EXT_INTR_MASK | \
@ -608,6 +613,14 @@ static __always_inline void lname##_controls_clearbit(struct vcpu_vmx *vmx, u##b
{ \
BUILD_BUG_ON(!(val & (KVM_REQUIRED_VMX_##uname | KVM_OPTIONAL_VMX_##uname))); \
lname##_controls_set(vmx, lname##_controls_get(vmx) & ~val); \
} \
static __always_inline void lname##_controls_changebit(struct vcpu_vmx *vmx, u##bits val, \
bool set) \
{ \
if (set) \
lname##_controls_setbit(vmx, val); \
else \
lname##_controls_clearbit(vmx, val); \
}
BUILD_CONTROLS_SHADOW(vm_entry, VM_ENTRY_CONTROLS, 32)
BUILD_CONTROLS_SHADOW(vm_exit, VM_EXIT_CONTROLS, 32)
@ -706,6 +719,11 @@ static inline bool vmx_guest_state_valid(struct kvm_vcpu *vcpu)
void dump_vmcs(struct kvm_vcpu *vcpu);
static inline int vmx_get_instr_info_reg(u32 vmx_instr_info)
{
return (vmx_instr_info >> 3) & 0xf;
}
static inline int vmx_get_instr_info_reg2(u32 vmx_instr_info)
{
return (vmx_instr_info >> 28) & 0xf;


@ -52,7 +52,7 @@ void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
int trig_mode, int vector);
void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu);
bool vmx_has_emulated_msr(struct kvm *kvm, u32 index);
void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu);
void vmx_recalc_intercepts(struct kvm_vcpu *vcpu);
void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
int vmx_get_feature_msr(u32 msr, u64 *data);

File diff suppressed because it is too large.


@ -50,6 +50,7 @@ struct kvm_host_values {
u64 efer;
u64 xcr0;
u64 xss;
u64 s_cet;
u64 arch_capabilities;
};
@ -101,6 +102,16 @@ do { \
#define KVM_SVM_DEFAULT_PLE_WINDOW_MAX USHRT_MAX
#define KVM_SVM_DEFAULT_PLE_WINDOW 3000
/*
* KVM's internal, non-ABI indices for synthetic MSRs. The values themselves
* are arbitrary and have no meaning, the only requirement is that they don't
* conflict with "real" MSRs that KVM supports. Use values at the upper end
* of KVM's reserved paravirtual MSR range to minimize churn, i.e. these values
* will be usable until KVM exhausts its supply of paravirtual MSR indices.
*/
#define MSR_KVM_INTERNAL_GUEST_SSP 0x4b564dff
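Since the synthetic index is not a real MSR, userspace reaches the guest SSP through KVM_{G,S}ET_ONE_REG rather than KVM_{G,S}ET_MSRS. Below is a minimal illustrative sketch, not part of the diff, using the selftest helpers exercised by x86/msrs_test.c later in this series; KVM_REG_GUEST_SSP and KVM_X86_REG_KVM() are assumed to match the names used by that test.

/*
 * Illustrative sketch only, not part of the diff: save and restore the
 * synthetic GUEST_SSP register from userspace via KVM_{G,S}ET_ONE_REG,
 * using the selftest helpers shown in x86/msrs_test.c below.
 */
#include "kvm_util.h"
#include "processor.h"

static void save_restore_guest_ssp(struct kvm_vcpu *vcpu)
{
	u64 ssp;

	/* GUEST_SSP only exists if the vCPU has shadow stacks in CPUID. */
	if (!vcpu_cpuid_has(vcpu, X86_FEATURE_SHSTK))
		return;

	ssp = vcpu_get_reg(vcpu, KVM_X86_REG_KVM(KVM_REG_GUEST_SSP));
	vcpu_set_reg(vcpu, KVM_X86_REG_KVM(KVM_REG_GUEST_SSP), ssp);
}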
static inline unsigned int __grow_ple_window(unsigned int val,
unsigned int base, unsigned int modifier, unsigned int max)
{
@ -431,14 +442,15 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu,
int kvm_mtrr_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data);
int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
bool kvm_vector_hashing_enabled(void);
void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code);
int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
void *insn, int insn_len);
int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
int emulation_type, void *insn, int insn_len);
fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
fastpath_t handle_fastpath_wrmsr(struct kvm_vcpu *vcpu);
fastpath_t handle_fastpath_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg);
fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu);
fastpath_t handle_fastpath_invd(struct kvm_vcpu *vcpu);
extern struct kvm_caps kvm_caps;
extern struct kvm_host_values kvm_host;
@ -668,6 +680,9 @@ static inline bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
__reserved_bits |= X86_CR4_PCIDE; \
if (!__cpu_has(__c, X86_FEATURE_LAM)) \
__reserved_bits |= X86_CR4_LAM_SUP; \
if (!__cpu_has(__c, X86_FEATURE_SHSTK) && \
!__cpu_has(__c, X86_FEATURE_IBT)) \
__reserved_bits |= X86_CR4_CET; \
__reserved_bits; \
})
@ -699,4 +714,27 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl,
int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
#define CET_US_RESERVED_BITS GENMASK(9, 6)
#define CET_US_SHSTK_MASK_BITS GENMASK(1, 0)
#define CET_US_IBT_MASK_BITS (GENMASK_ULL(5, 2) | GENMASK_ULL(63, 10))
#define CET_US_LEGACY_BITMAP_BASE(data) ((data) >> 12)
static inline bool kvm_is_valid_u_s_cet(struct kvm_vcpu *vcpu, u64 data)
{
if (data & CET_US_RESERVED_BITS)
return false;
if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
(data & CET_US_SHSTK_MASK_BITS))
return false;
if (!guest_cpu_cap_has(vcpu, X86_FEATURE_IBT) &&
(data & CET_US_IBT_MASK_BITS))
return false;
if (!IS_ALIGNED(CET_US_LEGACY_BITMAP_BASE(data), 4))
return false;
/* IBT can be suppressed iff the TRACKER isn't WAIT_ENDBR. */
if ((data & CET_SUPPRESS) && (data & CET_WAIT_ENDBR))
return false;
return true;
}
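A couple of concrete data points may help; the sketch below is illustrative only, not part of the diff, and spells out two values the helper rejects independent of guest CPUID.

/*
 * Illustrative sketch only, not part of the diff: values that
 * kvm_is_valid_u_s_cet() rejects regardless of what CPUID advertises.
 */
static void example_u_s_cet_rejects(struct kvm_vcpu *vcpu)
{
	/* Bits 9:6 are always reserved. */
	WARN_ON_ONCE(kvm_is_valid_u_s_cet(vcpu, BIT_ULL(6)));

	/* SUPPRESS may not be set together with the WAIT_ENDBR tracker state. */
	WARN_ON_ONCE(kvm_is_valid_u_s_cet(vcpu, CET_SUPPRESS | CET_WAIT_ENDBR));
}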
#endif


@ -354,7 +354,7 @@ static int vfio_ap_validate_nib(struct kvm_vcpu *vcpu, dma_addr_t *nib)
if (!*nib)
return -EINVAL;
if (kvm_is_error_hva(gfn_to_hva(vcpu->kvm, *nib >> PAGE_SHIFT)))
if (!kvm_s390_is_gpa_in_memslot(vcpu->kvm, *nib))
return -EINVAL;
return 0;


@ -3,6 +3,23 @@
#ifndef __KVM_TYPES_H__
#define __KVM_TYPES_H__
#include <linux/bits.h>
#include <linux/export.h>
#include <linux/types.h>
#include <asm/kvm_types.h>
#ifdef KVM_SUB_MODULES
#define EXPORT_SYMBOL_FOR_KVM_INTERNAL(symbol) \
EXPORT_SYMBOL_FOR_MODULES(symbol, __stringify(KVM_SUB_MODULES))
#else
#define EXPORT_SYMBOL_FOR_KVM_INTERNAL(symbol)
#endif
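For context, an architecture opts in by defining KVM_SUB_MODULES in its asm/kvm_types.h. The snippet below is an assumption-based sketch, not taken from this diff; the exact x86 definition isn't shown in this hunk, but the __stringify() usage implies an unquoted, comma-separated module list.

/*
 * Hypothetical arch/x86/include/asm/kvm_types.h snippet (assumption, not
 * part of this diff): name the vendor modules allowed to use KVM-internal
 * exports.  __stringify(KVM_SUB_MODULES) turns this into the
 * "kvm-intel,kvm-amd" string that EXPORT_SYMBOL_FOR_MODULES() expects.
 */
#define KVM_SUB_MODULES kvm-intel,kvm-amd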
#ifndef __ASSEMBLER__
#include <linux/mutex.h>
#include <linux/spinlock_types.h>
struct kvm;
struct kvm_async_pf;
struct kvm_device_ops;
@ -19,13 +36,6 @@ struct kvm_memslots;
enum kvm_mr_change;
#include <linux/bits.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/spinlock_types.h>
#include <asm/kvm_types.h>
/*
* Address types:
*
@ -116,5 +126,6 @@ struct kvm_vcpu_stat_generic {
};
#define KVM_STATS_NAME_SIZE 48
#endif /* !__ASSEMBLER__ */
#endif /* __KVM_TYPES_H__ */


@ -87,6 +87,7 @@ TEST_GEN_PROGS_x86 += x86/kvm_clock_test
TEST_GEN_PROGS_x86 += x86/kvm_pv_test
TEST_GEN_PROGS_x86 += x86/kvm_buslock_test
TEST_GEN_PROGS_x86 += x86/monitor_mwait_test
TEST_GEN_PROGS_x86 += x86/msrs_test
TEST_GEN_PROGS_x86 += x86/nested_emulation_test
TEST_GEN_PROGS_x86 += x86/nested_exceptions_test
TEST_GEN_PROGS_x86 += x86/platform_info_test


@ -1362,6 +1362,11 @@ static inline bool kvm_is_unrestricted_guest_enabled(void)
return get_kvm_intel_param_bool("unrestricted_guest");
}
static inline bool kvm_is_ignore_msrs(void)
{
return get_kvm_param_bool("ignore_msrs");
}
uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
int *level);
uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr);


@ -0,0 +1,489 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <asm/msr-index.h>
#include <stdint.h>
#include "kvm_util.h"
#include "processor.h"
/* Use HYPERVISOR for MSRs that are emulated unconditionally (as is HYPERVISOR). */
#define X86_FEATURE_NONE X86_FEATURE_HYPERVISOR
struct kvm_msr {
const struct kvm_x86_cpu_feature feature;
const struct kvm_x86_cpu_feature feature2;
const char *name;
const u64 reset_val;
const u64 write_val;
const u64 rsvd_val;
const u32 index;
const bool is_kvm_defined;
};
#define ____MSR_TEST(msr, str, val, rsvd, reset, feat, f2, is_kvm) \
{ \
.index = msr, \
.name = str, \
.write_val = val, \
.rsvd_val = rsvd, \
.reset_val = reset, \
.feature = X86_FEATURE_ ##feat, \
.feature2 = X86_FEATURE_ ##f2, \
.is_kvm_defined = is_kvm, \
}
#define __MSR_TEST(msr, str, val, rsvd, reset, feat) \
____MSR_TEST(msr, str, val, rsvd, reset, feat, feat, false)
#define MSR_TEST_NON_ZERO(msr, val, rsvd, reset, feat) \
__MSR_TEST(msr, #msr, val, rsvd, reset, feat)
#define MSR_TEST(msr, val, rsvd, feat) \
__MSR_TEST(msr, #msr, val, rsvd, 0, feat)
#define MSR_TEST2(msr, val, rsvd, feat, f2) \
____MSR_TEST(msr, #msr, val, rsvd, 0, feat, f2, false)
/*
* Note, use a page aligned value for the canonical value so that the value
* is compatible with MSRs that use bits 11:0 for things other than addresses.
*/
static const u64 canonical_val = 0x123456789000ull;
/*
* Arbitrary value with bits set in every byte, but not all bits set. This is
* also a non-canonical value, but that's coincidental (any 64-bit value with
* an alternating 0s/1s pattern will be non-canonical).
*/
static const u64 u64_val = 0xaaaa5555aaaa5555ull;
#define MSR_TEST_CANONICAL(msr, feat) \
__MSR_TEST(msr, #msr, canonical_val, NONCANONICAL, 0, feat)
#define MSR_TEST_KVM(msr, val, rsvd, feat) \
____MSR_TEST(KVM_REG_ ##msr, #msr, val, rsvd, 0, feat, feat, true)
/*
* The main struct must be scoped to a function due to the use of structures to
* define features. For the global structure, allocate enough space for the
* foreseeable future without getting too ridiculous, to minimize maintenance
* costs (bumping the array size every time an MSR is added is really annoying).
*/
static struct kvm_msr msrs[128];
static int idx;
static bool ignore_unsupported_msrs;
static u64 fixup_rdmsr_val(u32 msr, u64 want)
{
/*
* AMD CPUs drop bits 63:32 on some MSRs that Intel CPUs support. KVM
* is supposed to emulate that behavior based on guest vendor model
* (which is the same as the host vendor model for this test).
*/
if (!host_cpu_is_amd)
return want;
switch (msr) {
case MSR_IA32_SYSENTER_ESP:
case MSR_IA32_SYSENTER_EIP:
case MSR_TSC_AUX:
return want & GENMASK_ULL(31, 0);
default:
return want;
}
}
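To make the fixup concrete, here is a small illustrative check, not part of the diff, of what the helper returns for the 64-bit test pattern on AMD vs. Intel hosts.

/*
 * Illustrative sketch only, not part of the diff: on an AMD host the
 * 64-bit pattern written to TSC_AUX reads back with bits 63:32 dropped,
 * so the test compares RDMSR results against the truncated value.
 */
static void example_tsc_aux_fixup(void)
{
	const u64 pattern = 0xaaaa5555aaaa5555ull;

	if (host_cpu_is_amd)
		GUEST_ASSERT(fixup_rdmsr_val(MSR_TSC_AUX, pattern) == 0xaaaa5555ull);
	else
		GUEST_ASSERT(fixup_rdmsr_val(MSR_TSC_AUX, pattern) == pattern);
}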
static void __rdmsr(u32 msr, u64 want)
{
u64 val;
u8 vec;
vec = rdmsr_safe(msr, &val);
__GUEST_ASSERT(!vec, "Unexpected %s on RDMSR(0x%x)", ex_str(vec), msr);
__GUEST_ASSERT(val == want, "Wanted 0x%lx from RDMSR(0x%x), got 0x%lx",
want, msr, val);
}
static void __wrmsr(u32 msr, u64 val)
{
u8 vec;
vec = wrmsr_safe(msr, val);
__GUEST_ASSERT(!vec, "Unexpected %s on WRMSR(0x%x, 0x%lx)",
ex_str(vec), msr, val);
__rdmsr(msr, fixup_rdmsr_val(msr, val));
}
static void guest_test_supported_msr(const struct kvm_msr *msr)
{
__rdmsr(msr->index, msr->reset_val);
__wrmsr(msr->index, msr->write_val);
GUEST_SYNC(fixup_rdmsr_val(msr->index, msr->write_val));
__rdmsr(msr->index, msr->reset_val);
}
static void guest_test_unsupported_msr(const struct kvm_msr *msr)
{
u64 val;
u8 vec;
/*
* KVM's ABI with respect to ignore_msrs is a mess and largely beyond
* repair, just skip the unsupported MSR tests.
*/
if (ignore_unsupported_msrs)
goto skip_wrmsr_gp;
/*
* {S,U}_CET exist if IBT or SHSTK is supported, but with bits that are
* writable only if their associated feature is supported. Skip the
* RDMSR #GP test if the secondary feature is supported, but perform
* the WRMSR #GP test as the to-be-written value is tied to the primary
* feature. For all other MSRs, simply do nothing.
*/
if (this_cpu_has(msr->feature2)) {
if (msr->index != MSR_IA32_U_CET &&
msr->index != MSR_IA32_S_CET)
goto skip_wrmsr_gp;
goto skip_rdmsr_gp;
}
vec = rdmsr_safe(msr->index, &val);
__GUEST_ASSERT(vec == GP_VECTOR, "Wanted #GP on RDMSR(0x%x), got %s",
msr->index, ex_str(vec));
skip_rdmsr_gp:
vec = wrmsr_safe(msr->index, msr->write_val);
__GUEST_ASSERT(vec == GP_VECTOR, "Wanted #GP on WRMSR(0x%x, 0x%lx), got %s",
msr->index, msr->write_val, ex_str(vec));
skip_wrmsr_gp:
GUEST_SYNC(0);
}
void guest_test_reserved_val(const struct kvm_msr *msr)
{
/* Skip reserved value checks as well; ignore_msrs is truly a mess. */
if (ignore_unsupported_msrs)
return;
/*
* If the CPU will truncate the written value (e.g. SYSENTER on AMD),
* expect success and a truncated value, not #GP.
*/
if (!this_cpu_has(msr->feature) ||
msr->rsvd_val == fixup_rdmsr_val(msr->index, msr->rsvd_val)) {
u8 vec = wrmsr_safe(msr->index, msr->rsvd_val);
__GUEST_ASSERT(vec == GP_VECTOR,
"Wanted #GP on WRMSR(0x%x, 0x%lx), got %s",
msr->index, msr->rsvd_val, ex_str(vec));
} else {
__wrmsr(msr->index, msr->rsvd_val);
__wrmsr(msr->index, msr->reset_val);
}
}
static void guest_main(void)
{
for (;;) {
const struct kvm_msr *msr = &msrs[READ_ONCE(idx)];
if (this_cpu_has(msr->feature))
guest_test_supported_msr(msr);
else
guest_test_unsupported_msr(msr);
if (msr->rsvd_val)
guest_test_reserved_val(msr);
GUEST_SYNC(msr->reset_val);
}
}
static bool has_one_reg;
static bool use_one_reg;
#define KVM_X86_MAX_NR_REGS 1
static bool vcpu_has_reg(struct kvm_vcpu *vcpu, u64 reg)
{
struct {
struct kvm_reg_list list;
u64 regs[KVM_X86_MAX_NR_REGS];
} regs = {};
int r, i;
/*
* If KVM_GET_REG_LIST succeeds with n=0, i.e. there are no supported
* regs, then the vCPU obviously doesn't support the reg.
*/
r = __vcpu_ioctl(vcpu, KVM_GET_REG_LIST, &regs.list);
if (!r)
return false;
TEST_ASSERT_EQ(errno, E2BIG);
/*
* KVM x86 is expected to support enumerating a relatively small number
* of regs. The majority of registers supported by KVM_{G,S}ET_ONE_REG
* are enumerated via other ioctls, e.g. KVM_GET_MSR_INDEX_LIST. For
* simplicity, hardcode the maximum number of regs and manually update
* the test as necessary.
*/
TEST_ASSERT(regs.list.n <= KVM_X86_MAX_NR_REGS,
"KVM reports %llu regs, test expects at most %u regs, stale test?",
regs.list.n, KVM_X86_MAX_NR_REGS);
vcpu_ioctl(vcpu, KVM_GET_REG_LIST, &regs.list);
for (i = 0; i < regs.list.n; i++) {
if (regs.regs[i] == reg)
return true;
}
return false;
}
static void host_test_kvm_reg(struct kvm_vcpu *vcpu)
{
bool has_reg = vcpu_cpuid_has(vcpu, msrs[idx].feature);
u64 reset_val = msrs[idx].reset_val;
u64 write_val = msrs[idx].write_val;
u64 rsvd_val = msrs[idx].rsvd_val;
u32 reg = msrs[idx].index;
u64 val;
int r;
if (!use_one_reg)
return;
TEST_ASSERT_EQ(vcpu_has_reg(vcpu, KVM_X86_REG_KVM(reg)), has_reg);
if (!has_reg) {
r = __vcpu_get_reg(vcpu, KVM_X86_REG_KVM(reg), &val);
TEST_ASSERT(r && errno == EINVAL,
"Expected failure on get_reg(0x%x)", reg);
rsvd_val = 0;
goto out;
}
val = vcpu_get_reg(vcpu, KVM_X86_REG_KVM(reg));
TEST_ASSERT(val == reset_val, "Wanted 0x%lx from get_reg(0x%x), got 0x%lx",
reset_val, reg, val);
vcpu_set_reg(vcpu, KVM_X86_REG_KVM(reg), write_val);
val = vcpu_get_reg(vcpu, KVM_X86_REG_KVM(reg));
TEST_ASSERT(val == write_val, "Wanted 0x%lx from get_reg(0x%x), got 0x%lx",
write_val, reg, val);
out:
r = __vcpu_set_reg(vcpu, KVM_X86_REG_KVM(reg), rsvd_val);
TEST_ASSERT(r, "Expected failure on set_reg(0x%x, 0x%lx)", reg, rsvd_val);
}
static void host_test_msr(struct kvm_vcpu *vcpu, u64 guest_val)
{
u64 reset_val = msrs[idx].reset_val;
u32 msr = msrs[idx].index;
u64 val;
if (!kvm_cpu_has(msrs[idx].feature))
return;
val = vcpu_get_msr(vcpu, msr);
TEST_ASSERT(val == guest_val, "Wanted 0x%lx from get_msr(0x%x), got 0x%lx",
guest_val, msr, val);
if (use_one_reg)
vcpu_set_reg(vcpu, KVM_X86_REG_MSR(msr), reset_val);
else
vcpu_set_msr(vcpu, msr, reset_val);
val = vcpu_get_msr(vcpu, msr);
TEST_ASSERT(val == reset_val, "Wanted 0x%lx from get_msr(0x%x), got 0x%lx",
reset_val, msr, val);
if (!has_one_reg)
return;
val = vcpu_get_reg(vcpu, KVM_X86_REG_MSR(msr));
TEST_ASSERT(val == reset_val, "Wanted 0x%lx from get_reg(0x%x), got 0x%lx",
reset_val, msr, val);
}
static void do_vcpu_run(struct kvm_vcpu *vcpu)
{
struct ucall uc;
for (;;) {
vcpu_run(vcpu);
switch (get_ucall(vcpu, &uc)) {
case UCALL_SYNC:
host_test_msr(vcpu, uc.args[1]);
return;
case UCALL_PRINTF:
pr_info("%s", uc.buffer);
break;
case UCALL_ABORT:
REPORT_GUEST_ASSERT(uc);
case UCALL_DONE:
TEST_FAIL("Unexpected UCALL_DONE");
default:
TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
}
}
}
static void vcpus_run(struct kvm_vcpu **vcpus, const int NR_VCPUS)
{
int i;
for (i = 0; i < NR_VCPUS; i++)
do_vcpu_run(vcpus[i]);
}
#define MISC_ENABLES_RESET_VAL (MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL | MSR_IA32_MISC_ENABLE_BTS_UNAVAIL)
static void test_msrs(void)
{
const struct kvm_msr __msrs[] = {
MSR_TEST_NON_ZERO(MSR_IA32_MISC_ENABLE,
MISC_ENABLES_RESET_VAL | MSR_IA32_MISC_ENABLE_FAST_STRING,
MSR_IA32_MISC_ENABLE_FAST_STRING, MISC_ENABLES_RESET_VAL, NONE),
MSR_TEST_NON_ZERO(MSR_IA32_CR_PAT, 0x07070707, 0, 0x7040600070406, NONE),
/*
* TSC_AUX is supported if RDTSCP *or* RDPID is supported. Add
* entries for each feature so that TSC_AUX doesn't exist for
* the "unsupported" vCPU, and obviously to test both cases.
*/
MSR_TEST2(MSR_TSC_AUX, 0x12345678, u64_val, RDTSCP, RDPID),
MSR_TEST2(MSR_TSC_AUX, 0x12345678, u64_val, RDPID, RDTSCP),
MSR_TEST(MSR_IA32_SYSENTER_CS, 0x1234, 0, NONE),
/*
* SYSENTER_{ESP,EIP} are technically non-canonical on Intel,
* but KVM doesn't emulate that behavior on emulated writes,
* i.e. this test will observe different behavior if the MSR
* writes are handled by hardware vs. KVM. KVM's behavior is
* intended (though far from ideal), so don't bother testing
* non-canonical values.
*/
MSR_TEST(MSR_IA32_SYSENTER_ESP, canonical_val, 0, NONE),
MSR_TEST(MSR_IA32_SYSENTER_EIP, canonical_val, 0, NONE),
MSR_TEST_CANONICAL(MSR_FS_BASE, LM),
MSR_TEST_CANONICAL(MSR_GS_BASE, LM),
MSR_TEST_CANONICAL(MSR_KERNEL_GS_BASE, LM),
MSR_TEST_CANONICAL(MSR_LSTAR, LM),
MSR_TEST_CANONICAL(MSR_CSTAR, LM),
MSR_TEST(MSR_SYSCALL_MASK, 0xffffffff, 0, LM),
MSR_TEST2(MSR_IA32_S_CET, CET_SHSTK_EN, CET_RESERVED, SHSTK, IBT),
MSR_TEST2(MSR_IA32_S_CET, CET_ENDBR_EN, CET_RESERVED, IBT, SHSTK),
MSR_TEST2(MSR_IA32_U_CET, CET_SHSTK_EN, CET_RESERVED, SHSTK, IBT),
MSR_TEST2(MSR_IA32_U_CET, CET_ENDBR_EN, CET_RESERVED, IBT, SHSTK),
MSR_TEST_CANONICAL(MSR_IA32_PL0_SSP, SHSTK),
MSR_TEST(MSR_IA32_PL0_SSP, canonical_val, canonical_val | 1, SHSTK),
MSR_TEST_CANONICAL(MSR_IA32_PL1_SSP, SHSTK),
MSR_TEST(MSR_IA32_PL1_SSP, canonical_val, canonical_val | 1, SHSTK),
MSR_TEST_CANONICAL(MSR_IA32_PL2_SSP, SHSTK),
MSR_TEST(MSR_IA32_PL2_SSP, canonical_val, canonical_val | 1, SHSTK),
MSR_TEST_CANONICAL(MSR_IA32_PL3_SSP, SHSTK),
MSR_TEST(MSR_IA32_PL3_SSP, canonical_val, canonical_val | 1, SHSTK),
MSR_TEST_KVM(GUEST_SSP, canonical_val, NONCANONICAL, SHSTK),
};
const struct kvm_x86_cpu_feature feat_none = X86_FEATURE_NONE;
const struct kvm_x86_cpu_feature feat_lm = X86_FEATURE_LM;
/*
* Create three vCPUs, but run them on the same task, to validate KVM's
* context switching of MSR state. Don't pin the task to a pCPU to
* also validate KVM's handling of cross-pCPU migration. Use the full
* set of features for the first two vCPUs, but clear all features in
* third vCPU in order to test both positive and negative paths.
*/
const int NR_VCPUS = 3;
struct kvm_vcpu *vcpus[NR_VCPUS];
struct kvm_vm *vm;
int i;
kvm_static_assert(sizeof(__msrs) <= sizeof(msrs));
kvm_static_assert(ARRAY_SIZE(__msrs) <= ARRAY_SIZE(msrs));
memcpy(msrs, __msrs, sizeof(__msrs));
ignore_unsupported_msrs = kvm_is_ignore_msrs();
vm = vm_create_with_vcpus(NR_VCPUS, guest_main, vcpus);
sync_global_to_guest(vm, msrs);
sync_global_to_guest(vm, ignore_unsupported_msrs);
/*
* Clear features in the "unsupported features" vCPU. This needs to be
* done before the first vCPU run as KVM's ABI is that guest CPUID is
* immutable once the vCPU has been run.
*/
for (idx = 0; idx < ARRAY_SIZE(__msrs); idx++) {
/*
* Don't clear LM; selftests are 64-bit only, and KVM doesn't
* honor LM=0 for MSRs that are supposed to exist if and only
* if the vCPU is a 64-bit model. Ditto for NONE; clearing a
* fake feature flag will result in false failures.
*/
if (memcmp(&msrs[idx].feature, &feat_lm, sizeof(feat_lm)) &&
memcmp(&msrs[idx].feature, &feat_none, sizeof(feat_none)))
vcpu_clear_cpuid_feature(vcpus[2], msrs[idx].feature);
}
for (idx = 0; idx < ARRAY_SIZE(__msrs); idx++) {
struct kvm_msr *msr = &msrs[idx];
if (msr->is_kvm_defined) {
for (i = 0; i < NR_VCPUS; i++)
host_test_kvm_reg(vcpus[i]);
continue;
}
/*
* Verify KVM_GET_SUPPORTED_CPUID and KVM_GET_MSR_INDEX_LIST
* are consistent with respect to MSRs whose existence is
* enumerated via CPUID. Skip the check for FS/GS.base MSRs,
* as they aren't reported in the save/restore list since their
* state is managed via SREGS.
*/
TEST_ASSERT(msr->index == MSR_FS_BASE || msr->index == MSR_GS_BASE ||
kvm_msr_is_in_save_restore_list(msr->index) ==
(kvm_cpu_has(msr->feature) || kvm_cpu_has(msr->feature2)),
"%s %s in save/restore list, but %s according to CPUID", msr->name,
kvm_msr_is_in_save_restore_list(msr->index) ? "is" : "isn't",
(kvm_cpu_has(msr->feature) || kvm_cpu_has(msr->feature2)) ?
"supported" : "unsupported");
sync_global_to_guest(vm, idx);
vcpus_run(vcpus, NR_VCPUS);
vcpus_run(vcpus, NR_VCPUS);
}
kvm_vm_free(vm);
}
int main(void)
{
has_one_reg = kvm_has_cap(KVM_CAP_ONE_REG);
test_msrs();
if (has_one_reg) {
use_one_reg = true;
test_msrs();
}
}


@ -14,10 +14,10 @@
#define NUM_BRANCH_INSNS_RETIRED (NUM_LOOPS)
/*
* Number of instructions in each loop. 1 CLFLUSH/CLFLUSHOPT/NOP, 1 MFENCE,
* 1 LOOP.
* Number of instructions in each loop. 1 ENTER, 1 CLFLUSH/CLFLUSHOPT/NOP,
* 1 MFENCE, 1 MOV, 1 LEAVE, 1 LOOP.
*/
#define NUM_INSNS_PER_LOOP 4
#define NUM_INSNS_PER_LOOP 6
/*
* Number of "extra" instructions that will be counted, i.e. the number of
@ -226,9 +226,11 @@ do { \
__asm__ __volatile__("wrmsr\n\t" \
" mov $" __stringify(NUM_LOOPS) ", %%ecx\n\t" \
"1:\n\t" \
FEP "enter $0, $0\n\t" \
clflush "\n\t" \
"mfence\n\t" \
"mov %[m], %%eax\n\t" \
FEP "leave\n\t" \
FEP "loop 1b\n\t" \
FEP "mov %%edi, %%ecx\n\t" \
FEP "xor %%eax, %%eax\n\t" \


@ -525,7 +525,7 @@ bool kvm_irq_has_notifier(struct kvm *kvm, unsigned irqchip, unsigned pin)
return false;
}
EXPORT_SYMBOL_GPL(kvm_irq_has_notifier);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_irq_has_notifier);
void kvm_notify_acked_gsi(struct kvm *kvm, int gsi)
{


@ -702,7 +702,7 @@ out:
fput(file);
return r;
}
EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gmem_get_pfn);
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_POPULATE
long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
@ -716,7 +716,8 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
long i;
lockdep_assert_held(&kvm->slots_lock);
if (npages < 0)
if (WARN_ON_ONCE(npages <= 0))
return -EINVAL;
slot = gfn_to_memslot(kvm, start_gfn);
@ -784,5 +785,5 @@ put_folio_and_exit:
fput(file);
return ret && !i ? ret : i;
}
EXPORT_SYMBOL_GPL(kvm_gmem_populate);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gmem_populate);
#endif


@ -77,22 +77,22 @@ MODULE_LICENSE("GPL");
/* Architectures should define their poll value according to the halt latency */
unsigned int halt_poll_ns = KVM_HALT_POLL_NS_DEFAULT;
module_param(halt_poll_ns, uint, 0644);
EXPORT_SYMBOL_GPL(halt_poll_ns);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns);
/* Default doubles per-vcpu halt_poll_ns. */
unsigned int halt_poll_ns_grow = 2;
module_param(halt_poll_ns_grow, uint, 0644);
EXPORT_SYMBOL_GPL(halt_poll_ns_grow);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns_grow);
/* The start value to grow halt_poll_ns from */
unsigned int halt_poll_ns_grow_start = 10000; /* 10us */
module_param(halt_poll_ns_grow_start, uint, 0644);
EXPORT_SYMBOL_GPL(halt_poll_ns_grow_start);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns_grow_start);
/* Default halves per-vcpu halt_poll_ns. */
unsigned int halt_poll_ns_shrink = 2;
module_param(halt_poll_ns_shrink, uint, 0644);
EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(halt_poll_ns_shrink);
/*
* Allow direct access (from KVM or the CPU) without MMU notifier protection
@ -170,7 +170,7 @@ void vcpu_load(struct kvm_vcpu *vcpu)
kvm_arch_vcpu_load(vcpu, cpu);
put_cpu();
}
EXPORT_SYMBOL_GPL(vcpu_load);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(vcpu_load);
void vcpu_put(struct kvm_vcpu *vcpu)
{
@ -180,7 +180,7 @@ void vcpu_put(struct kvm_vcpu *vcpu)
__this_cpu_write(kvm_running_vcpu, NULL);
preempt_enable();
}
EXPORT_SYMBOL_GPL(vcpu_put);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(vcpu_put);
/* TODO: merge with kvm_arch_vcpu_should_kick */
static bool kvm_request_needs_ipi(struct kvm_vcpu *vcpu, unsigned req)
@ -288,7 +288,7 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
return called;
}
EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_make_all_cpus_request);
void kvm_flush_remote_tlbs(struct kvm *kvm)
{
@ -309,7 +309,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
|| kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
++kvm->stat.generic.remote_tlb_flush;
}
EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_flush_remote_tlbs);
void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
@ -499,7 +499,7 @@ void kvm_destroy_vcpus(struct kvm *kvm)
atomic_set(&kvm->online_vcpus, 0);
}
EXPORT_SYMBOL_GPL(kvm_destroy_vcpus);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_destroy_vcpus);
#ifdef CONFIG_KVM_GENERIC_MMU_NOTIFIER
static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
@ -1365,7 +1365,7 @@ void kvm_put_kvm_no_destroy(struct kvm *kvm)
{
WARN_ON(refcount_dec_and_test(&kvm->users_count));
}
EXPORT_SYMBOL_GPL(kvm_put_kvm_no_destroy);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_put_kvm_no_destroy);
static int kvm_vm_release(struct inode *inode, struct file *filp)
{
@ -1397,7 +1397,7 @@ out_unlock:
}
return -EINTR;
}
EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_trylock_all_vcpus);
int kvm_lock_all_vcpus(struct kvm *kvm)
{
@ -1422,7 +1422,7 @@ out_unlock:
}
return r;
}
EXPORT_SYMBOL_GPL(kvm_lock_all_vcpus);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_lock_all_vcpus);
void kvm_unlock_all_vcpus(struct kvm *kvm)
{
@ -1434,7 +1434,7 @@ void kvm_unlock_all_vcpus(struct kvm *kvm)
kvm_for_each_vcpu(i, vcpu, kvm)
mutex_unlock(&vcpu->mutex);
}
EXPORT_SYMBOL_GPL(kvm_unlock_all_vcpus);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_unlock_all_vcpus);
/*
* Allocation size is twice as large as the actual dirty bitmap size.
@ -2142,7 +2142,7 @@ int kvm_set_internal_memslot(struct kvm *kvm,
return kvm_set_memory_region(kvm, mem);
}
EXPORT_SYMBOL_GPL(kvm_set_internal_memslot);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_set_internal_memslot);
static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region2 *mem)
@ -2201,7 +2201,7 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
*is_dirty = 1;
return 0;
}
EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_dirty_log);
#else /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
/**
@ -2636,7 +2636,7 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
{
return __gfn_to_memslot(kvm_memslots(kvm), gfn);
}
EXPORT_SYMBOL_GPL(gfn_to_memslot);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(gfn_to_memslot);
struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
{
@ -2670,6 +2670,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
return NULL;
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_gfn_to_memslot);
bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
{
@ -2677,7 +2678,7 @@ bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
return kvm_is_visible_memslot(memslot);
}
EXPORT_SYMBOL_GPL(kvm_is_visible_gfn);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_is_visible_gfn);
bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
{
@ -2685,7 +2686,7 @@ bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
return kvm_is_visible_memslot(memslot);
}
EXPORT_SYMBOL_GPL(kvm_vcpu_is_visible_gfn);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_is_visible_gfn);
unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
{
@ -2742,19 +2743,19 @@ unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot,
{
return gfn_to_hva_many(slot, gfn, NULL);
}
EXPORT_SYMBOL_GPL(gfn_to_hva_memslot);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(gfn_to_hva_memslot);
unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
{
return gfn_to_hva_many(gfn_to_memslot(kvm, gfn), gfn, NULL);
}
EXPORT_SYMBOL_GPL(gfn_to_hva);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(gfn_to_hva);
unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn)
{
return gfn_to_hva_many(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, NULL);
}
EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_hva);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_gfn_to_hva);
/*
* Return the hva of a @gfn and the R/W attribute if possible.
@ -2818,7 +2819,7 @@ void kvm_release_page_clean(struct page *page)
kvm_set_page_accessed(page);
put_page(page);
}
EXPORT_SYMBOL_GPL(kvm_release_page_clean);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_release_page_clean);
void kvm_release_page_dirty(struct page *page)
{
@ -2828,7 +2829,7 @@ void kvm_release_page_dirty(struct page *page)
kvm_set_page_dirty(page);
kvm_release_page_clean(page);
}
EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_release_page_dirty);
static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
struct follow_pfnmap_args *map, bool writable)
@ -3072,7 +3073,7 @@ kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
return kvm_follow_pfn(&kfp);
}
EXPORT_SYMBOL_GPL(__kvm_faultin_pfn);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_faultin_pfn);
int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
struct page **pages, int nr_pages)
@ -3089,7 +3090,7 @@ int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
}
EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_prefetch_pages);
/*
* Don't use this API unless you are absolutely, positively certain that KVM
@ -3111,7 +3112,7 @@ struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write)
(void)kvm_follow_pfn(&kfp);
return refcounted_page;
}
EXPORT_SYMBOL_GPL(__gfn_to_page);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(__gfn_to_page);
int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
bool writable)
@ -3145,7 +3146,7 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
return map->hva ? 0 : -EFAULT;
}
EXPORT_SYMBOL_GPL(__kvm_vcpu_map);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_vcpu_map);
void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
{
@ -3173,7 +3174,7 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
map->page = NULL;
map->pinned_page = NULL;
}
EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_unmap);
static int next_segment(unsigned long len, int offset)
{
@ -3209,7 +3210,7 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
return __kvm_read_guest_page(slot, gfn, data, offset, len);
}
EXPORT_SYMBOL_GPL(kvm_read_guest_page);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest_page);
int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
int offset, int len)
@ -3218,7 +3219,7 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
return __kvm_read_guest_page(slot, gfn, data, offset, len);
}
EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_read_guest_page);
int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len)
{
@ -3238,7 +3239,7 @@ int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len)
}
return 0;
}
EXPORT_SYMBOL_GPL(kvm_read_guest);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest);
int kvm_vcpu_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa, void *data, unsigned long len)
{
@ -3258,7 +3259,7 @@ int kvm_vcpu_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa, void *data, unsigned l
}
return 0;
}
EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_read_guest);
static int __kvm_read_guest_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
void *data, int offset, unsigned long len)
@ -3289,7 +3290,7 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
return __kvm_read_guest_atomic(slot, gfn, data, offset, len);
}
EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_read_guest_atomic);
/* Copy @len bytes from @data into guest memory at '(@gfn * PAGE_SIZE) + @offset' */
static int __kvm_write_guest_page(struct kvm *kvm,
@@ -3319,7 +3320,7 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
         return __kvm_write_guest_page(kvm, slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest_page);
 
 int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
                               const void *data, int offset, int len)
@@ -3328,7 +3329,7 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
         return __kvm_write_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_write_guest_page);
 
 int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
                     unsigned long len)
@@ -3349,7 +3350,7 @@ int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
         }
         return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest);
 
 int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
                          unsigned long len)
@@ -3370,7 +3371,7 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
         }
         return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_write_guest);
 
 static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
                                        struct gfn_to_hva_cache *ghc,
@@ -3419,7 +3420,7 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
         struct kvm_memslots *slots = kvm_memslots(kvm);
         return __kvm_gfn_to_hva_cache_init(slots, ghc, gpa, len);
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_hva_cache_init);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_gfn_to_hva_cache_init);
 
 int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                                   void *data, unsigned int offset,
@@ -3450,14 +3451,14 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
         return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_offset_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest_offset_cached);
 
 int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                            void *data, unsigned long len)
 {
         return kvm_write_guest_offset_cached(kvm, ghc, data, 0, len);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_write_guest_cached);
 
 int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                                  void *data, unsigned int offset,
@@ -3487,14 +3488,14 @@ int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
         return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_offset_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest_offset_cached);
 
 int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                           void *data, unsigned long len)
 {
         return kvm_read_guest_offset_cached(kvm, ghc, data, 0, len);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_cached);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_read_guest_cached);
 
 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 {
@@ -3514,7 +3515,7 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
         }
         return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_clear_guest);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_clear_guest);
 
 void mark_page_dirty_in_slot(struct kvm *kvm,
                              const struct kvm_memory_slot *memslot,
@@ -3539,7 +3540,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
                         set_bit_le(rel_gfn, memslot->dirty_bitmap);
         }
 }
-EXPORT_SYMBOL_GPL(mark_page_dirty_in_slot);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(mark_page_dirty_in_slot);
 
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 {
@@ -3548,7 +3549,7 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
         memslot = gfn_to_memslot(kvm, gfn);
         mark_page_dirty_in_slot(kvm, memslot, gfn);
 }
-EXPORT_SYMBOL_GPL(mark_page_dirty);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(mark_page_dirty);
 
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
@@ -3557,7 +3558,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
         memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
         mark_page_dirty_in_slot(vcpu->kvm, memslot, gfn);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_mark_page_dirty);
 
 void kvm_sigset_activate(struct kvm_vcpu *vcpu)
 {
@@ -3794,7 +3795,7 @@ out:
         trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu));
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_halt);
 
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 {
@@ -3806,7 +3807,7 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
         return false;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_wake_up);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_wake_up);
 
 #ifndef CONFIG_S390
 /*
@@ -3858,7 +3859,7 @@ void __kvm_vcpu_kick(struct kvm_vcpu *vcpu, bool wait)
 out:
         put_cpu();
 }
-EXPORT_SYMBOL_GPL(__kvm_vcpu_kick);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */
 
 int kvm_vcpu_yield_to(struct kvm_vcpu *target)
@@ -3881,7 +3882,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target)
         return ret;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_yield_to);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_yield_to);
 
 /*
  * Helper that checks whether a VCPU is eligible for directed yield.
@@ -4036,7 +4037,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
         /* Ensure vcpu is not eligible during next spinloop */
         kvm_vcpu_set_dy_eligible(me, false);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_vcpu_on_spin);
 
 static bool kvm_page_in_dirty_ring(struct kvm *kvm, unsigned long pgoff)
 {
@@ -5018,7 +5019,7 @@ bool kvm_are_all_memslots_empty(struct kvm *kvm)
         return true;
 }
-EXPORT_SYMBOL_GPL(kvm_are_all_memslots_empty);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_are_all_memslots_empty);
 
 static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
                                            struct kvm_enable_cap *cap)
@@ -5473,7 +5474,7 @@ bool file_is_kvm(struct file *file)
 {
         return file && file->f_op == &kvm_vm_fops;
 }
-EXPORT_SYMBOL_GPL(file_is_kvm);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(file_is_kvm);
 
 static int kvm_dev_ioctl_create_vm(unsigned long type)
 {
@@ -5568,10 +5569,10 @@ static struct miscdevice kvm_dev = {
 #ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
 bool enable_virt_at_load = true;
 module_param(enable_virt_at_load, bool, 0444);
-EXPORT_SYMBOL_GPL(enable_virt_at_load);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_virt_at_load);
 
 __visible bool kvm_rebooting;
-EXPORT_SYMBOL_GPL(kvm_rebooting);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_rebooting);
 
 static DEFINE_PER_CPU(bool, virtualization_enabled);
 static DEFINE_MUTEX(kvm_usage_lock);
@@ -5722,7 +5723,7 @@ err_cpuhp:
         --kvm_usage_count;
         return r;
 }
-EXPORT_SYMBOL_GPL(kvm_enable_virtualization);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_enable_virtualization);
 
 void kvm_disable_virtualization(void)
 {
@@ -5735,7 +5736,7 @@ void kvm_disable_virtualization(void)
         cpuhp_remove_state(CPUHP_AP_KVM_ONLINE);
         kvm_arch_disable_virtualization();
 }
-EXPORT_SYMBOL_GPL(kvm_disable_virtualization);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_disable_virtualization);
 
 static int kvm_init_virtualization(void)
 {
@@ -5884,7 +5885,7 @@ int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
         r = __kvm_io_bus_write(vcpu, bus, &range, val);
         return r < 0 ? r : 0;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_write);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_io_bus_write);
 
 int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
                             gpa_t addr, int len, const void *val, long cookie)
@@ -5953,7 +5954,7 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
         r = __kvm_io_bus_read(vcpu, bus, &range, val);
         return r < 0 ? r : 0;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_read);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_io_bus_read);
 
 static void __free_bus(struct rcu_head *rcu)
 {
@@ -6077,7 +6078,7 @@ out_unlock:
         return iodev;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_get_dev);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_io_bus_get_dev);
 
 static int kvm_debugfs_open(struct inode *inode, struct file *file,
                             int (*get)(void *, u64 *), int (*set)(void *, u64),
@@ -6414,7 +6415,7 @@ struct kvm_vcpu *kvm_get_running_vcpu(void)
         return vcpu;
 }
-EXPORT_SYMBOL_GPL(kvm_get_running_vcpu);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_running_vcpu);
 
 /**
  * kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.
@@ -6549,7 +6550,7 @@ err_cpu_kick_mask:
         kmem_cache_destroy(kvm_vcpu_cache);
         return r;
 }
-EXPORT_SYMBOL_GPL(kvm_init);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init);
 
 void kvm_exit(void)
 {
@@ -6572,4 +6573,4 @@ void kvm_exit(void)
         kvm_async_pf_deinit();
         kvm_irqfd_exit();
 }
-EXPORT_SYMBOL_GPL(kvm_exit);
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_exit);