path: root/arch/x86
2024-10-30  KVM: x86/mmu: Flush remote TLBs iff MMU-writable flag is cleared from RO SPTE  (Sean Christopherson)
Don't force a remote TLB flush if KVM happens to effectively "refresh" a read-only SPTE that is still MMU-Writable, as KVM allows MMU-Writable SPTEs to have Writable TLB entries, even if the SPTE is !Writable. Remote TLBs need to be flushed only when creating a read-only SPTE for write-tracking, i.e. when installing a !MMU-Writable SPTE. In practice, especially now that KVM doesn't overwrite existing SPTEs when prefetching, KVM will rarely "refresh" a read-only, MMU-Writable SPTE, i.e. this is unlikely to eliminate many, if any, TLB flushes. But, more precisely flushing makes it easier to understand exactly when KVM does and doesn't need to flush. Note, x86 architecturally requires relevant TLB entries to be invalidated on a page fault, i.e. there is no risk of putting a vCPU into an infinite loop of read-only page faults. Cc: Yan Zhao <yan.y.zhao@intel.com> Link: https://lore.kernel.org/r/20241011021051.1557902-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
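For illustration, the tightened condition amounts to a sketch like the following (is_mmu_writable_spte() is KVM's existing SPTE helper; this is not the literal diff):

  /* Writable TLB entries may exist as long as MMU-Writable is set, so a
   * remote flush is needed only when that bit transitions to cleared. */
  if (is_mmu_writable_spte(old_spte) && !is_mmu_writable_spte(new_spte))
          flush = true;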
2024-10-30  perf/x86/rapl: Clean up cpumask and hotplug  (Kan Liang)
The rapl PMU is die-scoped, which is now supported by the generic perf_event subsystem. Set the scope for the rapl PMU and remove all the cpumask and hotplug code. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Oliver Sang <oliver.sang@intel.com> Tested-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com> Link: https://lore.kernel.org/r/20241010142604.770192-2-kan.liang@linux.intel.com
2024-10-30  perf/x86/rapl: Move the pmu allocation out of CPU hotplug  (Kan Liang)
There is extra code in the CPU hotplug function to allocate rapl PMUs, which makes the generic PMU hotplug support hard to apply. As long as the rapl PMUs can be allocated upfront for each die/socket, the code doesn't need to be implemented in the CPU hotplug function. Move the code to init_rapl_pmus(), and allocate a PMU for each possible die/socket. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Oliver Sang <oliver.sang@intel.com> Link: https://lore.kernel.org/r/20241010142604.770192-1-kan.liang@linux.intel.com
2024-10-30  KVM: VMX: Remove the unused variable "gpa" in __invept()  (Yan Zhao)
Remove the unused variable "gpa" in __invept(). The INVEPT instruction only supports two types: VMX_EPT_EXTENT_CONTEXT (1) and VMX_EPT_EXTENT_GLOBAL (2). Neither of these types requires a third variable "gpa". The "gpa" variable for __invept() is always set to 0 and was originally introduced for the old non-existent type VMX_EPT_EXTENT_INDIVIDUAL_ADDR (0). This type was removed by commit 2b3c5cbc0d81 ("kvm: don't use bit24 for detecting address-specific invalidation capability") and commit 63f3ac48133a ("KVM: VMX: clean up declaration of VPID/EPT invalidation types"). Since this variable is not useful for error handling either, remove it to avoid confusion. No functional changes expected. Cc: Yuan Yao <yuan.yao@intel.com> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Link: https://lore.kernel.org/r/20241014045931.1061-1-yan.y.zhao@intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
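A sketch of the simplified helper (VMX error handling and the kernel's asm wrappers omitted; the INVEPT descriptor keeps a second, now always-zero, quadword):

  static inline void __invept(unsigned long ext, u64 eptp)
  {
          struct {
                  u64 eptp, reserved;
          } operand = { eptp, 0 };

          asm volatile("invept %0, %1"
                       : : "m"(operand), "r"(ext) : "cc", "memory");
  }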
2024-10-30  x86/mce: Add wrapper for struct mce to export vendor specific info  (Avadhut Naik)
Currently, exporting new additional machine check error information involves adding new fields for the same at the end of the struct mce. This additional information can then be consumed through mcelog or tracepoint. However, as new MSRs are being added (and will be added in the future) by CPU vendors on their newer CPUs with additional machine check error information to be exported, the size of struct mce will balloon on some CPUs, unnecessarily, since those fields are vendor-specific. Moreover, different CPU vendors may export the additional information in varying sizes. The problem particularly intensifies since struct mce is exposed to userspace as part of UAPI. Its bloating through vendor-specific data should be avoided to limit the information being sent out to userspace. Add a new structure mce_hw_err to wrap the existing struct mce. This will prevent its ballooning since vendor-specific data, if any, can now be exported through a union within the wrapper structure and through __dynamic_array in the mce_record tracepoint. Furthermore, new internal kernel fields can be added to the wrapper struct without impacting the user space API. [ bp: Restore reverse x-mas tree order of function vars declarations. ] Suggested-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Avadhut Naik <avadhut.naik@amd.com> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Link: https://lore.kernel.org/r/20241022194158.110073-2-avadhut.naik@amd.com
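Conceptually, the wrapper looks like the sketch below; the vendor union members are illustrative, and only the embedded struct mce remains UAPI-visible:

  struct mce_hw_err {
          struct mce m;                   /* existing, userspace-visible record */

          union vendor_info {
                  struct {
                          u64 synd1;      /* e.g. an extended AMD syndrome MSR */
                          u64 synd2;
                  } amd;
          } vendor;                       /* kernel-internal, vendor-specific */
  };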
2024-10-29  of/fdt: add dt_phys arg to early_init_dt_scan and early_init_dt_verify  (Usama Arif)
__pa() is only intended to be used for linear map addresses and using it for initial_boot_params which is in fixmap for arm64 will give an incorrect value. Hence save the physical address when it is known at boot time when calling early_init_dt_scan for arm64 and use it at kexec time instead of converting the virtual address using __pa(). Note that arm64 doesn't need the FDT region reserved in the DT as the kernel explicitly reserves the passed in FDT. Therefore, only a debug warning is fixed with this change. Reported-by: Breno Leitao <leitao@debian.org> Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Usama Arif <usamaarif642@gmail.com> Fixes: ac10be5cdbfa ("arm64: Use common of_kexec_alloc_and_setup_fdt()") Link: https://lore.kernel.org/r/20241023171426.452688-1-usamaarif642@gmail.com Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
2024-10-29  x86/amd_nb: Fix compile-testing without CONFIG_AMD_NB  (Arnd Bergmann)
node_to_amd_nb() is defined to NULL in non-AMD configs:
  drivers/platform/x86/amd/hsmp/plat.c: In function 'init_platform_device':
  drivers/platform/x86/amd/hsmp/plat.c:165:68: error: dereferencing 'void *' pointer [-Werror]
    165 |                 sock->root = node_to_amd_nb(i)->root;
        |                                              ^~
  drivers/platform/x86/amd/hsmp/plat.c:165:68: error: request for member 'root' in something not a structure or union
Users of the interface who also allow COMPILE_TEST will hit the above build error, so provide an inline stub to fix that. [ bp: Massage commit message. ] Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Link: https://lore.kernel.org/r/20241029092329.3857004-1-arnd@kernel.org
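A sketch of such a stub for the !CONFIG_AMD_NB case, returning a typed NULL pointer so the (never-executed) dereference still compiles under COMPILE_TEST:

  static inline struct amd_northbridge *node_to_amd_nb(int node)
  {
          return NULL;
  }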
2024-10-29  x86/pvh: Avoid absolute symbol references in .head.text  (Ard Biesheuvel)
The .head.text section contains code that may execute from a different address than it was linked at. This is fragile, given that the x86 ABI can refer to global symbols via absolute or relative references, and the toolchain assumes that these are interchangeable, which they are not in this particular case. For this reason, all absolute symbol references are being removed from code that is emitted into .head.text. Subsequently, build time validation may be added that ensures that no absolute ELF relocations exist at all in that ELF section. In the case of the PVH code, the absolute references are in 32-bit code, which gets emitted with R_X86_64_32 relocations, and these are even more problematic going forward, as it prevents running the linker in PIE mode. So update the 64-bit code to avoid _pa(), and to only rely on relative symbol references: these are always 32-bits wide, even in 64-bit code, and are resolved by the linker at build time. Reviewed-by: Jason Andryuk <jason.andryuk@amd.com> Tested-by: Jason Andryuk <jason.andryuk@amd.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Message-ID: <20241009160438.3884381-12-ardb+git@google.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2024-10-29  x86/xen: Avoid relocatable quantities in Xen ELF notes  (Ard Biesheuvel)
Xen puts virtual and physical addresses into ELF notes that are treated by the linker as relocatable by default. Doing so is not only pointless, given that the ELF notes are only intended for consumption by Xen before the kernel boots. It is also a KASLR leak, given that the kernel's ELF notes are exposed via the world readable /sys/kernel/notes. So emit these constants in a way that prevents the linker from marking them as relocatable. This involves place-relative relocations (which subtract their own virtual address from the symbol value) and linker provided absolute symbols that add the address of the place to the desired value. Tested-by: Jason Andryuk <jason.andryuk@amd.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com> Message-ID: <20241009160438.3884381-11-ardb+git@google.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2024-10-29  x86/pvh: Omit needless clearing of phys_base  (Ard Biesheuvel)
Since commit d9ec1158056b ("x86/boot/64: Use RIP_REL_REF() to assign 'phys_base'") phys_base is assigned directly rather than added to, so it is no longer necessary to clear it after use. Reviewed-by: Jason Andryuk <jason.andryuk@amd.com> Tested-by: Jason Andryuk <jason.andryuk@amd.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Message-ID: <20241009160438.3884381-10-ardb+git@google.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2024-10-29  x86/pvh: Use correct size value in GDT descriptor  (Ard Biesheuvel)
The limit field in a GDT descriptor is an inclusive bound, and therefore one less than the size of the covered range. Reviewed-by: Jason Andryuk <jason.andryuk@amd.com> Tested-by: Jason Andryuk <jason.andryuk@amd.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Message-ID: <20241009160438.3884381-9-ardb+git@google.com> Signed-off-by: Juergen Gross <jgross@suse.com>
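In C terms the rule is the pattern below, using the kernel's struct desc_ptr; 'gdt' is an illustrative symbol:

  struct desc_ptr gdt_descr = {
          .size    = sizeof(gdt) - 1,     /* inclusive limit: last valid byte offset */
          .address = (unsigned long)gdt,
  };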
2024-10-29  x86/pvh: Call C code via the kernel virtual mapping  (Ard Biesheuvel)
Calling C code via a different mapping than it was linked at is problematic, because the compiler assumes that RIP-relative and absolute symbol references are interchangeable. GCC in particular may use RIP-relative per-CPU variable references even when not using -fpic. So call xen_prepare_pvh() via its kernel virtual mapping on x86_64, so that those RIP-relative references produce the correct values. This matches the pre-existing behavior for i386, which also invokes xen_prepare_pvh() via the kernel virtual mapping before invoking startup_32 with paging disabled again. Fixes: 7243b93345f7 ("xen/pvh: Bootstrap PVH guest") Tested-by: Jason Andryuk <jason.andryuk@amd.com> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Message-ID: <20241009160438.3884381-8-ardb+git@google.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2024-10-29  drm/intel/pciids: rename i915_pciids.h to just pciids.h  (Jani Nikula)
In preparation of sharing the PCI ID macros between i915 and xe, rename i915_pciids.h to pciids.h. Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Tvrtko Ursulin <tursulin@ursulin.net> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/835143845faa5310e4bb58405a8a0848392bbf06.1729590029.git.jani.nikula@intel.com Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2024-10-28  x86/traps: move kmsan check after instrumentation_begin  (Sabyrzhan Tasbolatov)
During an x86_64 kernel build with CONFIG_KMSAN, objtool emits the following warning:
    AR      built-in.a
    AR      vmlinux.a
    LD      vmlinux.o
  vmlinux.o: warning: objtool: handle_bug+0x4: call to kmsan_unpoison_entry_regs() leaves .noinstr.text section
    OBJCOPY modules.builtin.modinfo
    GEN     modules.builtin
    MODPOST Module.symvers
    CC      .vmlinux.export.o
Moving kmsan_unpoison_entry_regs() _after_ instrumentation_begin() fixes the warning. The decode_bug(regs->ip, &imm) call is left before the KMSAN unpoisoning: it contains an early return, and placing it after instrumentation_begin() would produce the warning "return with instrumentation enabled". Hence, I'm concerned that regs will not be KMSAN-unpoisoned if `ud_type == BUG_NONE` is true. Link: https://lkml.kernel.org/r/20241016152407.3149001-1-snovitoll@gmail.com Fixes: ba54d194f8da ("x86/traps: avoid KMSAN bugs originating from handle_bug()") Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com> Reviewed-by: Alexander Potapenko <glider@google.com> Cc: Borislav Petkov (AMD) <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-10-28  asm-generic: provide generic page_to_phys and phys_to_page implementations  (Christoph Hellwig)
page_to_phys is duplicated by all architectures, and for some strange reason placed in <asm/io.h> where it doesn't fit at all. phys_to_page is only provided by a few architectures despite having a lot of open coded users. Provide generic versions in <asm-generic/memory_model.h> to make these helpers more easily usable. Note with this patch powerpc loses the CONFIG_DEBUG_VIRTUAL pfn_valid check. It will be added back in a generic version later. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
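A sketch of what the generic helpers boil down to, expressed with the existing pfn conversion macros (the exact form in <asm-generic/memory_model.h> may differ):

  static inline phys_addr_t page_to_phys(const struct page *page)
  {
          return PFN_PHYS(page_to_pfn(page));
  }

  static inline struct page *phys_to_page(phys_addr_t phys)
  {
          return pfn_to_page(PFN_DOWN(phys));
  }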
2024-10-28  x86/sev: Convert shared memory back to private on kexec  (Ashish Kalra)
SNP guests allocate shared buffers to perform I/O. It is done by allocating pages normally from the buddy allocator and converting them to shared with set_memory_decrypted(). The second, kexec-ed, kernel has no idea what memory is converted this way. It only sees E820_TYPE_RAM. Accessing shared memory via a private mapping will cause unrecoverable RMP page faults. On kexec, walk the direct mapping and convert all shared memory back to private. This makes all RAM private again and the second kernel may use it normally. Additionally, for SNP guests, convert all bss decrypted section pages back to private. The conversion occurs in two steps: stopping new conversions and unsharing all memory. In the case of normal kexec, the stopping of conversions takes place while scheduling is still functioning. This allows for waiting until any ongoing conversions are finished. The second step is carried out when all CPUs except one are inactive and interrupts are disabled. This prevents any conflicts with code that may access shared memory. Co-developed-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/05a8c15fb665dbb062b04a8cb3d592a63f235937.1722520012.git.ashish.kalra@amd.com
2024-10-28  x86/mm: Refactor __set_clr_pte_enc()  (Ashish Kalra)
Refactor __set_clr_pte_enc() and add two new helper functions to set/clear PTE C-bit from early SEV/SNP initialization code and later during shutdown/kexec especially when all CPUs are stopped and interrupts are disabled and set_memory_xx() interfaces can't be used. Co-developed-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/5df4aa450447f28294d1c5a890e27b63ed4ded36.1722520012.git.ashish.kalra@amd.com
2024-10-28  x86/boot: Skip video memory access in the decompressor for SEV-ES/SNP  (Ashish Kalra)
Accessing guest video memory/RAM in the decompressor causes guest termination as the boot stage2 #VC handler for SEV-ES/SNP systems does not support MMIO handling. This issue is observed during a SEV-ES/SNP guest kexec as kexec -c adds screen_info to the boot parameters passed to the second kernel, which causes console output to be dumped to both video and serial. As the decompressor output gets cleared really fast, it is preferable to get the console output only on serial, hence, skip accessing the video RAM during decompressor stage to prevent guest termination. Serial console output during decompressor stage works as boot stage2 #VC handler already supports handling port I/O. [ bp: Massage. ] Suggested-by: Borislav Petkov (AMD) <bp@alien8.de> Suggested-by: Thomas Lendacky <thomas.lendacky@amd.com> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/8a55ea86524c686e575d273311acbe57ce8cee23.1722520012.git.ashish.kalra@amd.com
2024-10-28  x86/mce/intel: Use MCG_BANKCNT_MASK instead of 0xff  (Qiuxu Zhuo)
Use the predefined MCG_BANKCNT_MASK macro instead of the hardcoded 0xff to mask the bank number bits. No functional changes intended. Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20241025024602.24318-3-qiuxu.zhuo@intel.com
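The pattern being cleaned up, sketched:

  u64 cap;
  u8 num_banks;

  rdmsrl(MSR_IA32_MCG_CAP, cap);
  num_banks = cap & MCG_BANKCNT_MASK;     /* previously: cap & 0xff */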
2024-10-28  x86/mce/mcelog: Use xchg() to get and clear the flags  (Qiuxu Zhuo)
Using xchg() to atomically get and clear the MCE log buffer flags streamlines the code and reduces the text size by 20 bytes:
  $ size dev-mcelog.o.*
     text    data     bss     dec     hex filename
     3013     360     160    3533     dcd dev-mcelog.o.old
     2993     360     160    3513     db9 dev-mcelog.o.new
No functional changes intended. Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20241025024602.24318-2-qiuxu.zhuo@intel.com
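Sketch of the before/after pattern (variable names illustrative; the exact pre-existing loop may differ):

  /* before: retry loop around cmpxchg() */
  do {
          flags = mcelog->flags;
  } while (cmpxchg(&mcelog->flags, flags, 0) != flags);

  /* after: single atomic get-and-clear */
  flags = xchg(&mcelog->flags, 0);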
2024-10-28  x86/cpu: Fix formatting of cpuid_bits[] in scattered.c  (Borislav Petkov (AMD))
Realign initializers to accommodate longer X86_FEATURE define names. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
2024-10-28  x86/cpufeatures: Add X86_FEATURE_AMD_WORKLOAD_CLASS feature bit  (Perry Yuan)
Add a new feature bit that indicates support for workload-based heuristic feedback to the OS for scheduling decisions. When the bit is set, threads are classified during runtime into enumerated classes. The classes represent thread performance/power characteristics that may benefit from special scheduling behaviors. Signed-off-by: Perry Yuan <perry.yuan@amd.com> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Link: https://lore.kernel.org/r/20241028020251.8085-4-mario.limonciello@amd.com
2024-10-28  crypto: x86/aegis128 - remove unneeded RETs  (Eric Biggers)
Remove returns that are immediately followed by another return. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: x86/aegis128 - remove unneeded FRAME_BEGIN and FRAME_END  (Eric Biggers)
Stop using FRAME_BEGIN and FRAME_END in the AEGIS assembly functions, since all these functions are now leaf functions. This eliminates some unnecessary instructions. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: x86/aegis128 - take advantage of block-aligned len  (Eric Biggers)
Update a caller of aegis128_aesni_ad() to round down the length to a block boundary. After that, aegis128_aesni_ad(), aegis128_aesni_enc(), and aegis128_aesni_dec() are only passed whole blocks. Update the assembly code to take advantage of that, which eliminates some unneeded instructions. For aegis128_aesni_enc() and aegis128_aesni_dec(), the length is also always nonzero, so stop checking for zero length. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
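Caller-side, the rounding looks roughly like this (AEGIS128_BLOCK_SIZE is the 16-byte block size; argument order follows the updated prototypes):

  /* Hand only whole blocks to the assembly AD helper; the sub-block
   * tail is buffered and folded in separately in C. */
  unsigned int full = assoclen & ~(AEGIS128_BLOCK_SIZE - 1);

  if (full)
          aegis128_aesni_ad(state, src, full);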
2024-10-28  crypto: x86/aegis128 - optimize partial block handling using SSE4.1  (Eric Biggers)
Optimize the code that loads and stores partial blocks, taking advantage of SSE4.1. The code is adapted from that in aes-gcm-aesni-x86_64.S. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: x86/aegis128 - improve assembly function prototypes  (Eric Biggers)
Adjust the prototypes of the AEGIS assembly functions:
- Use proper types instead of 'void *', when applicable.
- Move the length parameter to after the buffers it describes rather than before, to match the usual convention. Also shorten its name to just len (which is the name used in the assembly code).
- Declare register aliases at the beginning of each function rather than once per file. This was necessary because len was moved, but it also allows adding some aliases where raw registers were used before.
- Put assoclen and cryptlen in the correct order when declaring the finalization function in the .c file.
- Remove the unnecessary "crypto_" prefix.
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: x86/aegis128 - optimize length block preparation using SSE4.1  (Eric Biggers)
Start using SSE4.1 instructions in the AES-NI AEGIS code, with the first use case being preparing the length block in fewer instructions. In practice this does not reduce the set of CPUs on which the code can run, because all Intel and AMD CPUs with AES-NI also have SSE4.1. Upgrade the existing SSE2 feature check to SSE4.1, though it seems this check is not strictly necessary; the aesni-intel module has been getting away with using SSE4.1 despite checking for AES-NI only. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
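Sketch of the upgraded module-init feature test (X86_FEATURE_XMM4_1 is the kernel's name for SSE4.1):

  if (!boot_cpu_has(X86_FEATURE_XMM4_1) ||        /* was X86_FEATURE_XMM2 (SSE2) */
      !boot_cpu_has(X86_FEATURE_AES))
          return -ENODEV;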
2024-10-28  crypto: x86/aegis128 - don't bother with special code for aligned data  (Eric Biggers)
Remove the AEGIS assembly code paths that were "optimized" to operate on 16-byte aligned data using movdqa, and instead just use the code paths that use movdqu and can handle data with any alignment. This does not reduce performance. movdqa is basically a historical artifact; on aligned data, movdqu and movdqa have had the same performance since Intel Nehalem (2008) and AMD Bulldozer (2011). And code that requires AES-NI cannot run on CPUs older than those anyway. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: x86/aegis128 - eliminate some indirect calls  (Eric Biggers)
Instead of using a struct of function pointers to decide whether to call the encryption or decryption assembly functions, use a conditional branch on a bool. Force-inline the functions to avoid actually generating the branch. This improves performance slightly since indirect calls are slow. Remove the now-unnecessary CFI stubs. Note that just force-inlining the existing functions might cause the compiler to optimize out the indirect branches, but that would not be a reliable way to do it and the CFI stubs would still be required. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
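Sketch of the dispatch pattern; helper name and parameter order are illustrative:

  static __always_inline void
  aegis128_aesni_crypt(struct aegis_state *state, const u8 *src, u8 *dst,
                       unsigned int len, bool enc)
  {
          /* 'enc' is a compile-time constant at every call site, so the
           * branch folds away and no indirect call is emitted. */
          if (enc)
                  aegis128_aesni_enc(state, src, dst, len);
          else
                  aegis128_aesni_dec(state, src, dst, len);
  }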
2024-10-28  crypto: x86/aegis128 - remove no-op init and exit functions  (Eric Biggers)
Don't bother providing empty stubs for the init and exit methods in struct aead_alg, since they are optional anyway. Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: x86/aegis128 - access 32-bit arguments as 32-bit  (Eric Biggers)
Fix the AEGIS assembly code to access 'unsigned int' arguments as 32-bit values instead of 64-bit, since the upper bits of the corresponding 64-bit registers are not guaranteed to be zero. Note: there haven't been any reports of this bug actually causing incorrect behavior. Neither gcc nor clang guarantee zero-extension to 64 bits, but zero-extension is likely to happen in practice because most instructions that operate on 32-bit registers zero-extend to 64 bits. Fixes: 1d373d4e8e15 ("crypto: x86 - Add optimized AEGIS implementations") Cc: stable@vger.kernel.org Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-27  Merge tag 'x86_urgent_for_v6.12_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 fixes from Borislav Petkov:
- Prevent a certain range of pages which get marked as hypervisor-only from being allocated to a CoCo (SNP) guest, which cannot use them and would thus fail to boot
- Fix the microcode loader on AMD to pay attention to the stepping of a patch and to handle the case where a BIOS config option splits the machine into logical NUMA nodes per L3 cache slice
- Disable LAM from being built by default due to security concerns
* tag 'x86_urgent_for_v6.12_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sev: Ensure that RMP table fixups are reserved
  x86/microcode/AMD: Split load_microcode_amd()
  x86/microcode/AMD: Pay attention to the stepping dynamically
  x86/lam: Disable ADDRESS_MASKING in most cases
2024-10-26  x86/cpu: Use str_yes_no() helper in show_cpuinfo_misc()  (Thorsten Blum)
Remove hard-coded strings by using the str_yes_no() helper function. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20241026110808.78074-1-thorsten.blum@linux.dev
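str_yes_no() comes from <linux/string_choices.h> and is essentially:

  static inline const char *str_yes_no(bool v)
  {
          return v ? "yes" : "no";
  }

so open-coded "yes"/"no" literals and ternaries in show_cpuinfo_misc() collapse into str_yes_no(flag).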
2024-10-26  crypto: x86/crc32c - eliminate jump table and excessive unrolling  (Eric Biggers)
crc32c-pcl-intel-asm_64.S has a loop with 1 to 127 iterations fully unrolled and uses a jump table to jump into the correct location. This optimization is misguided, as it bloats the binary code size and introduces an indirect call. x86_64 CPUs can predict loops well, so it is fine to just use a loop instead. Loop bookkeeping instructions can compete with the crc instructions for the ALUs, but this is easily mitigated by unrolling the loop by a smaller amount, such as 4 times. Therefore, re-roll the loop and make related tweaks to the code. This reduces the binary code size of crc_pclmul() from 4546 bytes to 418 bytes, a 91% reduction. In general it also makes the code faster, with some large improvements seen when retpoline is enabled. More detailed performance results are shown below. They are given as percent improvement in throughput (negative means regressed) for CPU microarchitecture vs. input length in bytes. E.g. an improvement from 40 GB/s to 50 GB/s would be listed as 25%.
Table 1: Results with retpoline enabled (the default):
                         |  512  |  833  | 1024  | 2000  | 3173  | 4096  |
    ---------------------+-------+-------+-------+-------+-------+-------+
    Intel Haswell        | 35.0% | 20.7% | 17.8% |  9.7% | -0.2% |  4.4% |
    Intel Emerald Rapids | 66.8% | 45.2% | 36.3% | 19.3% |  0.0% |  5.4% |
    AMD Zen 2            | 29.5% | 17.2% | 13.5% |  8.6% | -0.5% |  2.8% |
Table 2: Results with retpoline disabled:
                         |  512  |  833  | 1024  | 2000  | 3173  | 4096  |
    ---------------------+-------+-------+-------+-------+-------+-------+
    Intel Haswell        |  3.3% |  4.8% |  4.5% |  0.9% | -2.9% |  0.3% |
    Intel Emerald Rapids |  7.5% |  6.4% |  5.2% |  2.3% | -0.0% |  0.6% |
    AMD Zen 2            | 11.8% |  1.4% |  0.2% |  1.3% | -0.9% | -0.2% |
Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-26  crypto: x86/crc32c - access 32-bit arguments as 32-bit  (Eric Biggers)
Fix crc32c-pcl-intel-asm_64.S to access 32-bit arguments as 32-bit values instead of 64-bit, since the upper bits of the corresponding 64-bit registers are not guaranteed to be zero. Also update the type of the length argument to be unsigned int rather than int, as the assembly code treats it as unsigned. Note: there haven't been any reports of this bug actually causing incorrect behavior. Neither gcc nor clang guarantee zero-extension to 64 bits, but zero-extension is likely to happen in practice because most instructions that operate on 32-bit registers zero-extend to 64 bits. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-26  crypto: x86/crc32c - simplify code for handling fewer than 200 bytes  (Eric Biggers)
The assembly code in crc32c-pcl-intel-asm_64.S is invoked only for lengths >= 512, due to the overhead of saving and restoring FPU state. Therefore, it is unnecessary for this code to be excessively "optimized" for lengths < 200. Eliminate the excessive unrolling of this part of the code and use a more straightforward qword-at-a-time loop. Note: the part of the code in question is not entirely redundant, as it is still used to process any remainder mod 24, as well as any remaining data when fewer than 200 bytes remain after at least one 3072-byte chunk. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
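A C-level equivalent of the simplified tail handling (the real code is assembly using the crc32 instruction; the SSE4.2 intrinsics from <nmmintrin.h> are used here only to show the shape of the loop):

  /* 'crc' holds the running CRC32C value, 'buf'/'len' the remaining data. */
  while (len >= 8) {
          crc = _mm_crc32_u64(crc, *(const u64 *)buf);
          buf += 8;
          len -= 8;
  }
  while (len--)
          crc = _mm_crc32_u8(crc, *buf++);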
2024-10-25  x86/amd: Use heterogeneous core topology for identifying boost numerator  (Mario Limonciello)
AMD heterogeneous designs include two types of cores:
  * Performance
  * Efficiency
Each core type has different highest performance values configured by the platform. Drivers such as amd_pstate need to identify the type of core to correctly set an appropriate boost numerator to calculate the maximum frequency. X86_FEATURE_AMD_HETEROGENEOUS_CORES is used to identify whether the SoC supports heterogeneous core types by reading CPUID leaf Fn_0x80000026. On performance cores the scaling factor of 196 is used. On efficiency cores the scaling factor is the value reported as the highest perf. Efficiency cores have the same preferred core rankings. Suggested-by: Perry Yuan <perry.yuan@amd.com> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20241025171459.1093-6-mario.limonciello@amd.com
2024-10-25  x86/cpu: Add CPU type to struct cpuinfo_topology  (Pawan Gupta)
Sometimes it is required to take actions based on whether a CPU is a performance or an efficiency core. As an example, the intel_pstate driver uses the Intel core type to determine CPU scaling. Also, some CPU vulnerabilities only affect a specific CPU type, like RFDS, which only affects Intel Atom. On hybrid systems that have variants P+E, P-only (Core) and E-only (Atom), it is not straightforward to identify which variant is affected by a type-specific vulnerability. Such processors do have a CPUID field that can uniquely identify them. For example, P+E, P-only and E-only all enumerate the CPUID.1A.CORE_TYPE identification, while P+E additionally enumerates CPUID.7.HYBRID. Based on this information, it is possible for the boot CPU to identify if a system has mixed CPU types. Add a new field hw_cpu_type to struct cpuinfo_topology that stores the hardware specific CPU type. This saves the overhead of IPIs to get the CPU type of a different CPU. CPU type is populated early in the boot process, before vulnerabilities are enumerated. Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Co-developed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20241025171459.1093-5-mario.limonciello@amd.com
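A sketch of how the boot code could populate the new field on Intel hybrid parts (the core type is reported in bits 31:24 of CPUID.1A EAX; an AMD path would read leaf 0x80000026 instead):

  if (cpu_has(c, X86_FEATURE_HYBRID_CPU))
          c->topo.hw_cpu_type = cpuid_eax(0x1a) >> 24;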
2024-10-25  x86/cpu: Enable SD_ASYM_PACKING for PKG domain on AMD  (Perry Yuan)
Enable the SD_ASYM_PACKING domain flag for the PKG domain on AMD heterogeneous processors. This flag is beneficial for processors with one or more CCDs and relies on x86_sched_itmt_flags(). Signed-off-by: Perry Yuan <perry.yuan@amd.com> Co-developed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Link: https://lore.kernel.org/r/20241025171459.1093-4-mario.limonciello@amd.com
2024-10-25  x86/cpufeatures: Add X86_FEATURE_AMD_HETEROGENEOUS_CORES  (Perry Yuan)
CPUID leaf 0x80000026 advertises core types with different efficiency rankings. Bit 30 indicates the heterogeneous core topology feature: if the bit is set, not all instances at the current hierarchical level have the same core topology. This is described in the AMD64 Architecture Programmer's Manual Volumes 2 and 3, doc ID #25493 and #25494. Signed-off-by: Perry Yuan <perry.yuan@amd.com> Co-developed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20241025171459.1093-3-mario.limonciello@amd.com
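In practice the bit is picked up via the scattered-feature table, but the detection amounts to the following sketch (register/sub-leaf details per the PPR):

  if (c->extended_cpuid_level >= 0x80000026 &&
      (cpuid_eax(0x80000026) & BIT(30)))
          set_cpu_cap(c, X86_FEATURE_AMD_HETEROGENEOUS_CORES);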
2024-10-25  x86/cpufeatures: Rename X86_FEATURE_FAST_CPPC to have AMD prefix  (Mario Limonciello)
This feature is an AMD unique feature of some processors, so put AMD into the name. Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20241025171459.1093-2-mario.limonciello@amd.com
2024-10-25KVM: x86/mmu: Don't mark "struct page" accessed when zapping SPTEsSean Christopherson
Don't mark pages/folios as accessed in the primary MMU when zapping SPTEs, as doing so relies on kvm_pfn_to_refcounted_page(), and generally speaking is unnecessary and wasteful. KVM participates in page aging via mmu_notifiers, so there's no need to push "accessed" updates to the primary MMU. And if KVM zaps a SPTe in response to an mmu_notifier, marking it accessed _after_ the primary MMU has decided to zap the page is likely to go unnoticed, i.e. odds are good that, if the page is being zapped for reclaim, the page will be swapped out regardless of whether or not KVM marks the page accessed. Dropping x86's use of kvm_set_pfn_accessed() also paves the way for removing kvm_pfn_to_refcounted_page() and all its users. Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-83-seanjc@google.com>
2024-10-25  KVM: VMX: Use __kvm_faultin_page() to get APIC access page/pfn  (Sean Christopherson)
Use __kvm_faultin_page() to get the APIC access page so that KVM can precisely release the refcounted page, i.e. to remove yet another user of kvm_pfn_to_refcounted_page(). While the path isn't handling a guest page fault, the semantics are effectively the same; KVM just happens to be mapping the pfn into a VMCS field instead of a secondary MMU. Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-52-seanjc@google.com>
2024-10-25  KVM: VMX: Hold mmu_lock until page is released when updating APIC access page  (Sean Christopherson)
Hold mmu_lock across kvm_release_pfn_clean() when refreshing the APIC access page address to ensure that KVM doesn't mark a page/folio as accessed after it has been unmapped. Practically speaking, marking a folio accessed is benign in this scenario, as KVM does hold a reference (it's really just marking folios dirty that is problematic), but there's no reason not to be paranoid (moving the APIC access page isn't a hot path), and no reason to be different from other mmu_notifier-protected flows in KVM. Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-51-seanjc@google.com>
2024-10-25  KVM: Move x86's API to release a faultin page to common KVM  (Sean Christopherson)
Move KVM x86's helper that "finishes" the faultin process to common KVM so that the logic can be shared across all architectures. Note, not all architectures implement a fast page fault path, but the gist of the comment applies to all architectures. Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-50-seanjc@google.com>
2024-10-25  KVM: x86/mmu: Don't mark unused faultin pages as accessed  (Sean Christopherson)
When finishing guest page faults, don't mark pages as accessed if KVM is resuming the guest _without_ installing a mapping, i.e. if the page isn't being used. While it's possible that marking the page accessed could avoid minor thrashing due to reclaiming a page that the guest is about to access, it's far more likely that the gfn=>pfn mapping was invalidated, e.g. due to a memslot change, or because the corresponding VMA is being modified. Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-49-seanjc@google.com>
2024-10-25  KVM: x86/mmu: Put refcounted pages instead of blindly releasing pfns  (Sean Christopherson)
Now that all x86 page fault paths precisely track refcounted pages, use kvm_page_fault.refcounted_page to put references to struct page memory when finishing page faults. This is a baby step towards eliminating kvm_pfn_to_refcounted_page(). Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-48-seanjc@google.com>
2024-10-25  KVM: guest_memfd: Provide "struct page" as output from kvm_gmem_get_pfn()  (Sean Christopherson)
Provide the "struct page" associated with a guest_memfd pfn as an output from __kvm_gmem_get_pfn() so that KVM guest page fault handlers can directly put the page instead of having to rely on kvm_pfn_to_refcounted_page(). Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-47-seanjc@google.com>
2024-10-25  KVM: x86/mmu: Convert page fault paths to kvm_faultin_pfn()  (Sean Christopherson)
Convert KVM x86 to use the recently introduced __kvm_faultin_pfn(). Opportunistically capture the refcounted_page grabbed by KVM for use in future changes. No functional change intended. Tested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-ID: <20241010182427.1434605-45-seanjc@google.com>