path: root/arch
2025-03-19  x86/cpu: Fix the description of X86_MATCH_VFM_STEPS()  [Pawan Gupta]

The comment needs to reflect an implementation change. No functional change.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250311-add-cpu-type-v8-1-e8514dcaaff2@linux.intel.com
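For context, a hedged sketch of how such a match macro is typically used in a
cpu_device_id table; the VFM constant, stepping bounds, and table name below
are illustrative, not taken from the patch:

  /* Match Skylake-X parts with steppings 0x0 through 0x5 (made-up range). */
  static const struct x86_cpu_id affected_cpus[] = {
          X86_MATCH_VFM_STEPS(INTEL_SKYLAKE_X, 0x0, 0x5, NULL),
          {}
  };

  if (x86_match_cpu(affected_cpus))
          pr_info("CPU is in the affected stepping range\n");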
2025-03-19  x86/cpufeatures: Use AWK to generate {REQUIRED|DISABLED}_MASK_BIT_SET in <asm/cpufeaturemasks.h>  [Xin Li (Intel)]

Generate the {REQUIRED|DISABLED}_MASK_BIT_SET macros in the newly added AWK
script that generates <asm/cpufeaturemasks.h>.

Suggested-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Brian Gerst <brgerst@gmail.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250228082338.73859-6-xin@zytor.com
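The rough shape of the generated header, as a hedged illustration (the macro
names follow the pre-existing REQUIRED_MASK/DISABLED_MASK convention; the bit
values are made up, and only feature word 0 is shown):

  /* Illustrative excerpt of an auto-generated <asm/cpufeaturemasks.h>. */
  #define REQUIRED_MASK0  0x00000009U     /* e.g. FPU + PSE required */
  #define DISABLED_MASK0  0x00000000U
  /* ... one REQUIRED_MASKn/DISABLED_MASKn pair per 32-bit feature word ... */

  #define REQUIRED_MASK_BIT_SET(bit) \
          (((bit) >> 5) == 0 && (REQUIRED_MASK0 & (1U << ((bit) & 31))))
  #define DISABLED_MASK_BIT_SET(bit) \
          (((bit) >> 5) == 0 && (DISABLED_MASK0 & (1U << ((bit) & 31))))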
2025-03-19  x86/cpufeatures: Remove {disabled,required}-features.h  [Xin Li (Intel)]

The functionality of {disabled,required}-features.h has been replaced by the
auto-generated <asm/cpufeaturemasks.h> header, so the two files are no longer
needed and can be removed. None of the macros defined in them are used in
tools, so delete those copies too.

Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250305184725.3341760-4-xin@zytor.com
2025-03-19  x86/cpufeatures: Generate the <asm/cpufeaturemasks.h> header based on build config  [H. Peter Anvin (Intel)]

Introduce an AWK script to auto-generate the <asm/cpufeaturemasks.h> header
with required and disabled feature masks, based on <asm/cpufeatures.h> and the
current build config. Thus, for any CPU feature with a build config, e.g.
X86_FRED, simply add:

  config X86_DISABLED_FEATURE_FRED
          def_bool y
          depends on !X86_FRED

to arch/x86/Kconfig.cpufeatures, instead of adding a conditional CPU feature
disable flag such as DISABLE_FRED. The generated required and disabled
feature masks are then folded into the corresponding feature masks for this
particular compile-time configuration.

[ Xin: build integration improvements ]
[ mingo: Improved changelog and comments ]

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250305184725.3341760-3-xin@zytor.com
2025-03-19  x86/cpufeatures: Add {REQUIRED,DISABLED} feature configs  [H. Peter Anvin (Intel)]

Required and disabled feature masks depend entirely on build configs, i.e.,
once a build config is fixed, so are the feature masks. To prepare for
auto-generating the <asm/cpufeaturemasks.h> header with required and disabled
feature masks based on a build config, add feature Kconfig items:

  - X86_REQUIRED_FEATURE_x
  - X86_DISABLED_FEATURE_x

each of which may be set to "y" if and only if its preconditions from the
current build config are met.

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250228082338.73859-3-xin@zytor.com
2025-03-19  x86/mm/ident_map: Fix theoretical virtual address overflow to zero  [Kirill A. Shutemov]

The current calculation of the 'next' virtual address in the page table
initialization functions in arch/x86/mm/ident_map.c doesn't protect against
wrapping to zero.

This is a theoretical issue that cannot happen currently: the problematic
case is possible only if the user sets a high enough x86_mapping_info::offset
value, which no current code in the upstream kernel does. ( The wrapping to
zero only occurs if the top PGD entry is accessed. There are no such users
upstream. Only hibernate_64.c uses x86_mapping_info::offset, and it operates
on the direct mapping range, which is not the top PGD entry. )

Should such an overflow happen, it can result in page table corruption and a
hang. To future-proof this code, replace the manual 'next' calculation with
p?d_addr_end(), which handles wrapping correctly.

[ Backporter's note: there's no need to backport this patch. ]

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
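The wrap-safe pattern, roughly as the generic p?d_addr_end() helpers define
it in include/linux/pgtable.h: the "- 1" on both sides keeps the comparison
correct even when the boundary computation wraps to zero at the very top of
the address space:

  #define pgd_addr_end(addr, end)                                         \
  ({      unsigned long __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;  \
          (__boundary - 1 < (end) - 1) ? __boundary : (end);              \
  })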
2025-03-19  x86/acpi: Replace manual page table initialization with kernel_ident_mapping_init()  [Kirill A. Shutemov]

The init_transition_pgtable() function maps the page with
asm_acpi_mp_play_dead() into an identity mapping. Replace this open-coded
manual page table initialization with kernel_ident_mapping_init() to avoid
code duplication, and use x86_mapping_info::offset to get the page mapped at
the correct location.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20241016111458.846228-3-kirill.shutemov@linux.intel.com
2025-03-19  x86/mm: Always set the ASID valid bit for the INVLPGB instruction  [Tom Lendacky]

When executing the INVLPGB instruction on a bare-metal host or hypervisor, if
the ASID valid bit is not set, the instruction will flush the TLB entries
that match the specified criteria for any ASID, not just those of the host.
If virtual machines are running on the system, this may result in inadvertent
flushes of guest TLB entries.

When executing the INVLPGB instruction in a guest and the INVLPGB instruction
is not intercepted by the hypervisor, the hardware will replace the requested
ASID with the guest ASID and set the ASID valid bit before doing the
broadcast invalidation. Thus a guest is only able to flush its own TLB
entries.

So, to limit the host TLB flushing reach, always set the ASID valid bit using
an ASID value of 0 (which represents the host/hypervisor). This results in
the desired effect in both host and guest.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250304120449.GHZ8bsYYyEBOKQIxBm@fat_crate.local
2025-03-19  x86/mm: Enable AMD translation cache extensions  [Rik van Riel]

With AMD TCE (translation cache extensions) only the intermediate mappings
that cover the address range zapped by INVLPG / INVLPGB get invalidated,
rather than all intermediate mappings getting zapped at every TLB
invalidation. This can help reduce the TLB miss rate by keeping more
intermediate mappings in the cache.

From the AMD manual:

  Translation Cache Extension (TCE) Bit. Bit 15, read/write. Setting this
  bit to 1 changes how the INVLPG, INVLPGB, and INVPCID instructions operate
  on TLB entries. When this bit is 0, these instructions remove the target
  PTE from the TLB as well as all upper-level table entries that are cached
  in the TLB, whether or not they are associated with the target PTE. When
  this bit is set, these instructions will remove the target PTE and only
  those upper-level entries that lead to the target PTE in the page table
  hierarchy, leaving unrelated upper-level entries intact.

[ bp: use cpu_has()... I know, it is a mess. ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-13-riel@surriel.com
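As a hedged sketch (not the verbatim patch), enabling the feature amounts to
setting EFER bit 15 during AMD CPU init; the function name is illustrative,
while MSR_EFER/_EFER_TCE are the usual <asm/msr-index.h> names:

  static void init_amd_tce(struct cpuinfo_x86 *c)
  {
          /* EFER[15] is the Translation Cache Extension enable bit. */
          if (cpu_has(c, X86_FEATURE_TCE))
                  msr_set_bit(MSR_EFER, _EFER_TCE);
  }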
2025-03-19  x86/mm: Enable broadcast TLB invalidation for multi-threaded processes  [Rik van Riel]

There is not enough room in the 12-bit ASID address space to hand out
broadcast ASIDs to every process. Only hand out broadcast ASIDs to processes
when they are observed to be simultaneously running on 4 or more CPUs. This
also allows single-threaded processes to keep using the cheaper, local TLB
invalidation instructions instead of INVLPGB.

Due to the structure of flush_tlb_mm_range(), the INVLPGB flushing is done in
a generically named broadcast_tlb_flush() function which can later also be
used for Intel RAR.

Combined with the removal of unnecessary lru_add_drain() calls (see
https://lore.kernel.org/r/20241219153253.3da9e8aa@fangorn) this results in a
nice performance boost for the will-it-scale tlb_flush2_threads test on an
AMD Milan system with 36 cores:

  - vanilla kernel:           527k loops/second
  - lru_add_drain removal:    731k loops/second
  - only INVLPGB:             527k loops/second
  - lru_add_drain + INVLPGB: 1157k loops/second

Profiling with only the INVLPGB changes showed that while TLB invalidation
went down from 40% of the total CPU time to only around 4%, the contention
simply moved to the LRU lock. Fixing both at the same time about doubles the
number of iterations per second in this case.

Comparing will-it-scale tlb_flush2_threads with several different numbers of
threads on a 72 CPU AMD Milan shows similar results. The number represents
the total number of loops per second across all the threads:

  threads     tip    INVLPGB
        1    315k       304k
        2    423k       424k
        4    644k      1032k
        8    652k      1267k
       16    737k      1368k
       32    759k      1199k
       64    636k      1094k
       72    609k       993k

1 and 2 thread performance is similar with and without INVLPGB, because
INVLPGB is only used on processes using 4 or more CPUs simultaneously. The
number is the median across 5 runs.

Some numbers closer to real world performance can be found at Phoronix,
thanks to Michael: https://www.phoronix.com/news/AMD-INVLPGB-Linux-Benefits

[ bp:
  - Massage
  - :%s/\<static_cpu_has\>/cpu_feature_enabled/cgi
  - :%s/\<clear_asid_transition\>/mm_clear_asid_transition/cgi
  - Fold in a 0day bot fix:
    https://lore.kernel.org/oe-kbuild-all/202503040000.GtiWUsBm-lkp@intel.com ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Nadav Amit <nadav.amit@gmail.com>
Link: https://lore.kernel.org/r/20250226030129.530345-11-riel@surriel.com
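A minimal sketch of the hand-out heuristic described above; mm_active_cpus()
is a placeholder for however the implementation counts the CPUs a process is
running on, and use_global_asid() follows the naming mentioned later in this
series:

  static void consider_global_asid(struct mm_struct *mm)
  {
          if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
                  return;

          /* Broadcast ASIDs only pay off for heavily threaded processes. */
          if (mm_active_cpus(mm) >= 4)
                  use_global_asid(mm);
  }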
2025-03-19  x86/mm: Add global ASID process exit helpers  [Rik van Riel]

A global ASID is allocated for the lifetime of a process. Free the global
ASID at process exit time.

[ bp: Massage, create helpers, hide details inside them. ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-10-riel@surriel.com
2025-03-19  x86/mm: Handle global ASID context switch and TLB flush  [Rik van Riel]

Add context switch and TLB flush support for processes that use a global ASID
and PCID across all CPUs. At both context switch time and TLB flush time,
check whether a task is switching to a global ASID and, if so, reload the TLB
with the new ASID as appropriate.

In both code paths, the TLB flush is avoided if a global ASID is used,
because the global ASIDs are always kept up to date across CPUs, even when
the process is not running on a CPU.

[ bp:
  - Massage
  - :%s/\<static_cpu_has\>/cpu_feature_enabled/cgi ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-9-riel@surriel.com
2025-03-19  x86/mm: Add global ASID allocation helper functions  [Rik van Riel]

Add functions to manage global ASID space. Multithreaded processes that are
simultaneously active on 4 or more CPUs can get a global ASID, resulting in
the same PCID being used for that process on every CPU. This in turn will
allow the kernel to use hardware-assisted TLB flushing through AMD INVLPGB or
Intel RAR for these processes.

[ bp:
  - Extend use_global_asid() comment
  - s/X86_BROADCAST_TLB_FLUSH/BROADCAST_TLB_FLUSH/g
  - other touchups ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-8-riel@surriel.com
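A minimal sketch of such an allocator, assuming a lock-protected bitmap over
the PCID space; the symbol names mirror common x86 tlbflush ones but should
be treated as illustrative:

  static DEFINE_RAW_SPINLOCK(global_asid_lock);
  static DECLARE_BITMAP(global_asid_used, MAX_ASID_AVAILABLE);

  static u16 allocate_global_asid(void)
  {
          u16 asid;

          raw_spin_lock(&global_asid_lock);
          /* Skip the low ASIDs, which stay reserved for per-CPU use. */
          asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE,
                                    TLB_NR_DYN_ASIDS);
          if (asid < MAX_ASID_AVAILABLE)
                  __set_bit(asid, global_asid_used);
          else
                  asid = 0;       /* exhausted: fall back to local ASIDs */
          raw_spin_unlock(&global_asid_lock);

          return asid;
  }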
2025-03-19  x86/mm: Use broadcast TLB flushing in page reclaim  [Rik van Riel]

Page reclaim tracks only the CPU(s) where the TLB needs to be flushed, rather
than all the individual mappings that may be getting invalidated. Use
broadcast TLB flushing when it is available.

[ bp: Massage commit message. ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-7-riel@surriel.com
2025-03-19  x86/mm: Use INVLPGB for kernel TLB flushes  [Rik van Riel]

Use broadcast TLB invalidation for kernel addresses when available. This
removes the need to send IPIs for kernel TLB flushes.

[ bp: Integrate dhansen's comments additions, merge the flush_tlb_all()
  change into this one too. ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-5-riel@surriel.com
2025-03-19  x86/mm: Add INVLPGB support code  [Rik van Riel]

Add the helper functions and definitions needed to use broadcast TLB
invalidation on AMD CPUs.

[ bp:
  - Cleanup commit message
  - Improve and expand comments
  - Push the preemption guards inside the invlpgb* helpers
  - Merge improvements from dhansen
  - Add !CONFIG_BROADCAST_TLB_FLUSH function stubs because Clang can't do
    DCE properly yet and looks at the inline asm and complains about it
    getting a u64 argument on 32-bit code ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-4-riel@surriel.com
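For illustration, a hedged sketch of what the core wrapper plausibly looks
like: INVLPGB takes its operands in rax (virtual address plus flag bits), ecx
(page count and stride) and edx (PCID/ASID pair), and is emitted as raw bytes
because older assemblers do not know the mnemonic. Field layout per the AMD
APM; do not take this as the exact kernel helper:

  static inline void __invlpgb(unsigned long asid, unsigned long pcid,
                               unsigned long addr, u16 nr_pages,
                               bool pmd_stride, u8 flags)
  {
          u32 edx = (pcid << 16) | asid;
          u32 ecx = ((u32)pmd_stride << 31) | (nr_pages - 1);
          u64 rax = addr | flags;

          /* 0f 01 fe is the INVLPGB opcode. */
          asm volatile(".byte 0x0f, 0x01, 0xfe"
                       :: "a" (rax), "c" (ecx), "d" (edx) : "memory");
  }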
2025-03-19  x86/mm: Add INVLPGB feature and Kconfig entry  [Rik van Riel]

The CPU advertises in CPUID the maximum number of pages that can be shot down
with one INVLPGB instruction. Save that information for later use.

[ bp: use cpu_has(), typos, massage. ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226030129.530345-3-riel@surriel.com
2025-03-19  x86/mm: Consolidate full flush threshold decision  [Rik van Riel]

Reduce code duplication by consolidating the decision point for whether to do
individual invalidations or a full flush inside get_flush_tlb_info().

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lore.kernel.org/r/20250226030129.530345-2-riel@surriel.com
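The consolidated decision plausibly reduces to one range check against the
existing tlb_single_page_flush_ceiling tunable, along these lines (a sketch,
not the verbatim patch):

  /* Inside get_flush_tlb_info(): degrade to a full flush for big ranges. */
  if ((end - start) >> stride_shift > tlb_single_page_flush_ceiling) {
          start = 0;
          end   = TLB_FLUSH_ALL;
  }
  info->start = start;
  info->end   = end;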
2025-03-19  x86/mm: Check return value from memblock_phys_alloc_range()  [Philip Redkin]

At least with CONFIG_PHYSICAL_START=0x100000, if there is less than 4 MiB of
contiguous free memory available at this point, the kernel will crash and
burn, because memblock_phys_alloc_range() returns 0 on failure, which leads
memblock_phys_free() to throw the first 4 MiB of physical memory to the
wolves.

At a minimum it should fail gracefully with a meaningful diagnostic, but in
fact everything seems to work fine without the weird reserve allocation.

Signed-off-by: Philip Redkin <me@rarity.fan>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/94b3e98f-96a7-3560-1f76-349eb95ccf7f@rarity.fan
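The graceful-failure pattern being asked for; a minimal sketch, assuming the
usual memblock calling convention where a returned 0 signals failure:

  phys_addr_t addr;

  addr = memblock_phys_alloc_range(size, PAGE_SIZE, 0, limit);
  if (!addr)
          panic("%s: cannot allocate %llu bytes\n",
                __func__, (unsigned long long)size);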
2025-03-19Merge tag 'v6.14-rc7' into x86/core, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-03-19  MIPS: cm: Fix warning if MIPS_CM is disabled  [Thomas Bogendoerfer]

Commit e27fbe16af5c ("MIPS: cm: Detect CM quirks from device tree")
introduced:

  arch/mips/include/asm/mips-cm.h:119:13: error: ‘mips_cm_update_property’
  defined but not used [-Werror=unused-function]

Fix this by making the empty function implementation inline.

Fixes: e27fbe16af5c ("MIPS: cm: Detect CM quirks from device tree")
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
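The underlying pattern, distilled into a hypothetical header stub (the
signature here is illustrative): a plain static function that ends up unused
in some translation unit triggers -Wunused-function, while a static inline is
exempt:

  #ifdef CONFIG_MIPS_CM
  extern void mips_cm_update_property(void);
  #else
  /* "static inline" (not plain "static") avoids -Wunused-function here. */
  static inline void mips_cm_update_property(void) {}
  #endif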
2025-03-19  MIPS: Fix macro name  [Abhishek Tamboli]

Add the missing underscore in the PCI_CFGA_BUS macro definition for
consistency.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=219744
Signed-off-by: Abhishek Tamboli <abhishektamboli9@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2025-03-18  mips: fix PCI_IOBASE definition  [Arnd Bergmann]

After my previous patch, the ioport_map() function changed from the
lib/iomap.c version to the asm-generic/io.h version, which requires a correct
PCI_IOBASE definition. Unfortunately the types are also different, so add the
correct definition for ioport_map() in asm/io.h and change the
machine-specific ones to have the correct type.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2025-03-18  s390: Use inline qualifier for all EX_TABLE and ALTERNATIVE inline assemblies  [Heiko Carstens]

Use asm_inline for all inline assemblies which make use of the EX_TABLE or
ALTERNATIVE macros.

These macros expand to many lines, and the compiler assumes the number of
lines within an inline assembly is the same as the number of instructions
within it. This has an effect on inlining and loop unrolling decisions. In
order to avoid incorrect assumptions, use asm_inline, which tells the
compiler that an inline assembly has the smallest possible size.

In order to avoid confusion about when asm_inline should be used or not,
since a couple of inline assemblies are quite large, the rule is to always
use asm_inline whenever the EX_TABLE or ALTERNATIVE macro is used. In
specific cases there may be reasons to not follow this guideline, but that
should be documented with the corresponding code.

Using the inline qualifier everywhere has only a small effect on the kernel
image size:

  add/remove: 0/10 grow/shrink: 19/8 up/down: 1492/-1858 (-366)

The only location where this seems to matter is load_unaligned_zeropad()
from word-at-a-time.h, where the compiler inlines more functions within the
dcache code, which is indeed code where performance matters.

Suggested-by: Juergen Christ <jchrist@linux.ibm.com>
Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
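A hedged sketch of the pattern being described: the EX_TABLE expansion spans
several lines of assembler directives, so without asm_inline the compiler
would overestimate the code size of what is really a one-instruction
sequence. The helper name and constraints below are illustrative:

  static inline unsigned long try_load(unsigned long *ptr)
  {
          unsigned long val = 0;

          asm_inline volatile(
                  "0:     lg      %[val],%[addr]\n"
                  "1:\n"
                  EX_TABLE(0b, 1b)        /* fault at 0: resumes at 1: */
                  : [val] "+d" (val)
                  : [addr] "T" (*ptr));
          return val;
  }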
2025-03-18  s390/kfence: Split kfence pool into 4k mappings in arch_kfence_init_pool()  [Vasily Gorbik]

Since commit d08d4e7cd6bf ("s390/mm: use full 4KB page for 2KB PTE"), there
is no longer any reason to avoid splitting the kfence pool into 4k mappings
in arch_kfence_init_pool(). Remove the architecture-specific
kfence_split_mapping().

Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/ptrace: Avoid KASAN false positives in regs_get_kernel_stack_nth()  [Vasily Gorbik]

With recent ftrace changes, argument tracing has been added to the function
tracer. As a result, ftrace opportunistically reads the first
FTRACE_REGS_MAX_ARGS (i.e., 6) arguments. On s390, only five arguments are
passed in registers, and the sixth is read from the stack. If a function has
fewer than 6 arguments, the following KASAN report may be observed:

  BUG: KASAN: stack-out-of-bounds in regs_get_kernel_stack_nth+0xa8/0xb0
  Read of size 8 at addr 00007f7fe066fdb8 by task swapper/31/0

  CPU: 31 UID: 0 PID: 0 Comm: swapper/31 Not tainted 6.14.0-rc4-00006-g76fe0337c219 #16
  Hardware name: IBM 3931 A01 704 (KVM/Linux)
  Call Trace:
   [<00007fffe0147224>] dump_stack_lvl+0x104/0x168
   [<00007fffe011381c>] print_address_description.constprop.0+0x34/0x338
   [<00007fffe0113b64>] print_report+0x44/0x138
   [<00007fffe0ad9422>] kasan_report+0xc2/0x180
   [<00007fffe0159ff8>] regs_get_kernel_stack_nth+0xa8/0xb0
   [<00007fffe05ebeda>] trace_function+0x23a/0x4d0
   [<00007fffe0615d32>] irqsoff_tracer_call+0xd2/0x110
   [<00007fffe2b4e34c>] ftrace_common+0x1c/0x40
   [<00007fffe0150826>] arch_cpu_idle_enter+0x6/0x10
   [<00007fffe035a1c8>] do_idle+0x168/0x2e0
   [<00007fffe035a9d0>] cpu_startup_entry+0x90/0xb0
   [<00007fffe017d25a>] smp_start_secondary+0x3da/0x4e0
   [<00007fffe2b4e20a>] restart_int_handler+0x72/0x88
  no locks held by swapper/31/0.

  The buggy address belongs to stack of task swapper/31/0
   and is located at offset 0 in frame:
   do_idle+0x0/0x2e0

  This frame has 1 object:
   [32, 40) '__mask'

  The buggy address belongs to the virtual mapping at
   [00007f7fe0660000, 00007f7fe0671000) created by:
   dup_task_struct+0x66/0x4e0

  The buggy address belongs to the physical page:
  page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x80f23
  flags: 0x3ffff00000000000(node=0|zone=1|lastcpupid=0x1ffff)
  raw: 3ffff00000000000 0000000000000000 0000000000000122 0000000000000000
  raw: 0000000000000000 0000000000000000 ffffffff00000001 0000000000000000
  page dumped because: kasan: bad access detected

  Memory state around the buggy address:
   00007f7fe066fc80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   00007f7fe066fd00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  >00007f7fe066fd80: 00 00 00 00 00 00 00 f1 f1 f1 f1 00 f3 f3 f3 00
                                          ^
   00007f7fe066fe00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   00007f7fe066fe80: 00 f1 f1 f1 f1 00 f2 f2 f2 00 00 f3 f3 00 00 00

The function regs_get_kernel_stack_nth() verifies that the requested argument
is located on the stack, making it safe to read even if it is not actually
present. Make use of the READ_ONCE_NOCHECK() helper to silence KASAN reports
in this case.

Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
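A sketch of the resulting helper, close to but not necessarily verbatim the
s390 code; the point is that the bounds check makes the subsequent
out-of-object read benign, so it is done with READ_ONCE_NOCHECK() to keep
KASAN quiet:

  unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
  {
          unsigned long addr;

          addr = kernel_stack_pointer(regs) + n * sizeof(long);
          if (!regs_within_kernel_stack(regs, addr))
                  return 0;

          /* Verified to be on the stack; may still overlap a dead frame. */
          return READ_ONCE_NOCHECK(*(unsigned long *)addr);
  }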
2025-03-18  s390/boot: Ignore vmlinux.map  [WangYuli]

When building with CONFIG_VMLINUX_MAP=y, a decompressor vmlinux.map file is
generated in the boot directory. Add this file to .gitignore to ensure Git
does not track it.

Signed-off-by: WangYuli <wangyuli@uniontech.com>
Link: https://lore.kernel.org/r/F884C733016D6715+20250311030824.675683-1-wangyuli@uniontech.com
Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/sysctl: Remove "vm/allocate_pgste" sysctl  [Heiko Carstens]

Remove the unneeded "vm/allocate_pgste" sysctl. It has no effect anymore.
This is a user space visible change; it shouldn't cause any problems, but if
it does, this needs to be partially reverted. Note that some distributions
set vm/allocate_pgste=1 in one of the various sysctl configuration files.
Besides a warning about the (now) non-existent procfs file, this doesn't
cause any problems.

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390: Remove 2k vs 4k page table leftovers  [Heiko Carstens]

Since commit d08d4e7cd6bf ("s390/mm: use full 4KB page for 2KB PTE") 4k page
tables are always allocated; however, there is still some (now) obsolete code
left which deals with switching from 2k to 4k page tables for qemu/kvm
processes. Remove the no longer needed code.

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/tlb: Use mm_has_pgste() instead of mm_alloc_pgste()  [Heiko Carstens]

An mm has pgstes only after s390_enable_sie() has been called, while
mm_alloc_pgste() may always be true (e.g. via sysctl setting). Limit the
calls to gmap_unlink() in pte_free_tlb() to those cases where there might be
something to unlink.

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
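The change boils down to one predicate swap in pte_free_tlb(); a hedged
sketch of the guard:

  /* Was: if (mm_alloc_pgste(tlb->mm)) -- can be true even without pgstes. */
  if (mm_has_pgste(tlb->mm))
          gmap_unlink(tlb->mm, (unsigned long *)pte, address);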
2025-03-18  s390/lowcore: Use lghi instead of llilh to clear register  [Heiko Carstens]

lghi is the fastest way to clear a register. Use that instead of llilh.

Suggested-by: Juergen Christ <jchrist@linux.ibm.com>
Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/syscall: Merge __do_syscall() and do_syscall()  [Heiko Carstens]

The compiler inlines do_syscall() into __do_syscall(). Therefore do this in
the C code as well, since it makes the code easier to understand. Also adjust
and add various unlikely() and likely() annotations. Furthermore, this allows
replacing the separate exit_to_user_mode() and
syscall_exit_to_user_mode_work() calls with a combined
syscall_exit_to_user_mode() call, which results in slightly better code.

Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/spinlock: Implement SPINLOCK_LOCKVAL with inline assembly  [Heiko Carstens]

Implement SPINLOCK_LOCKVAL with an inline assembly which makes use of the
ALTERNATIVE macro to read spinlock_lockval from lowcore. Provide an
alternative instruction with a different offset in case lowcore is relocated.
This replaces sequences of two instructions with one instruction.

Before:

  10602a: a7 78 00 00        lhi   %r7,0
  10602e: a5 8e 00 00        llilh %r8,0
  106032: 58 d0 83 ac        l     %r13,940(%r8)
  106036: ba 7d b5 80        cs    %r7,%r13,1408(%r11)

After:

  10602a: a7 88 00 00        lhi   %r8,0
  10602e: e3 70 03 ac 00 58  ly    %r7,940
  106034: ba 87 b5 80        cs    %r8,%r7,1408(%r11)

Kernel image size change:

  add/remove: 756/750 grow/shrink: 646/3435 up/down: 30778/-46326 (-15548)

Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/smp: Implement raw_smp_processor_id() with inline assembly  [Heiko Carstens]

Implement raw_smp_processor_id() with an inline assembly which makes use of
the ALTERNATIVE macro to read cpu_nr from lowcore. Provide an alternative
instruction with a different offset in case lowcore is relocated. This
replaces sequences of two instructions with one instruction.

Before:

  1000b6: a5 1e 00 00        llilh %r1,0
  1000ba: 58 20 13 a0        l     %r2,928(%r1)

After:

  1000b6: e3 20 03 a0 00 58  ly    %r2,928

Kernel image size change:

  add/remove: 753/755 grow/shrink: 230/1510 up/down: 30538/-35832 (-5294)

Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/current: Implement current with inline assembly  [Heiko Carstens]

Implement current with an inline assembly which makes use of the ALTERNATIVE
macro to read current from lowcore. Provide an alternative instruction with a
different offset in case lowcore is relocated. This replaces sequences of two
instructions with one instruction.

Before:

  100076: a5 1e 00 00        llilh %r1,0
  10007a: e3 40 13 40 00 04  lg    %r4,832(%r1)

After:

  100076: e3 10 03 40 00 04  lg    %r1,832

Kernel image size change:

  add/remove: 3/17 grow/shrink: 166/2204 up/down: 7122/-24594 (-17472)

Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390/lowcore: Use inline qualifier for get_lowcore() inline assembly  [Heiko Carstens]

Use asm_inline to let the compiler know that the get_lowcore() inline
assembly has the smallest possible size. The ALTERNATIVE construct is used to
generate a single instruction; however, the macro expands to multiple lines.
GCC uses the number of lines of an inline assembly to estimate the number of
instructions within it, which then has an effect on inlining decisions. In
order to avoid incorrect assumptions, use asm_inline. The result is that more
functions are inlined, which results in a small growth of the kernel image:

  add/remove: 59/480 grow/shrink: 854/647 up/down: 168780/-162394 (6386)

Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  s390: Move s390 sysctls into their own file under arch/s390  [joel granados]

Move the s390 sysctls (spin_retry and userprocess_debug) into their own files
under arch/s390. Create two new sysctl tables
(s390_{fault,spin}_sysctl_table) which will be initialized with
arch_initcall, placing them after their original place in proc_root_init.

This is part of a greater effort to move ctl tables into their respective
subsystems, which will reduce the merge conflicts in kernel/sysctl.c.

Signed-off-by: joel granados <joel.granados@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Link: https://lore.kernel.org/r/20250306-jag-mv_ctltables-v2-6-71b243c8d3f8@kernel.org
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2025-03-18  riscv: fgraph: Select HAVE_FUNCTION_GRAPH_TRACER depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS  [Pu Lehui]

Currently, fgraph on riscv relies on the DYNAMIC_FTRACE_WITH_ARGS
infrastructure. However, DYNAMIC_FTRACE_WITH_ARGS may be turned off on riscv,
which causes an enabled fgraph to misbehave. Therefore, make
HAVE_FUNCTION_GRAPH_TRACER depend on HAVE_DYNAMIC_FTRACE_WITH_ARGS.

Fixes: a3ed4157b7d8 ("fgraph: Replace fgraph_ret_regs with ftrace_regs")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202503160820.dvqMpH0g-lkp@intel.com/
Signed-off-by: Pu Lehui <pulehui@huawei.com>
Reviewed-by: Björn Töpel <bjorn@rivosinc.com>
Link: https://lore.kernel.org/r/20250317031214.4138436-1-pulehui@huaweicloud.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  um: x86: clean up elf specific definitions  [Hajime Tazaki]

The file arch/x86/um/asm/module.h is equivalent to the asm-generic
definition. Clean it up to use the asm-generic version instead.

Signed-off-by: Hajime Tazaki <thehajime@gmail.com>
Link: https://patch.msgid.link/2d70a0ed79ee0a0bef80ad4790063f4833dd9bed.1737348399.git.thehajime@gmail.com
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2025-03-18  riscv: Fix missing __free_pages() in check_vector_unaligned_access()  [Alexandre Ghiti]

The locally allocated pages are never freed up, so add the corresponding
__free_pages() call.

Fixes: e7c9d66e313b ("RISC-V: Report vector unaligned access speed hwprobe")
Link: https://lore.kernel.org/r/20250228090613.345309-1-alexghiti@rivosinc.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  riscv: Fix the __riscv_copy_vec_words_unaligned implementation  [Tingbo Liao]

Correct the VEC_S macro definition to fix the implementation of unaligned
vector word copies on RISC-V.

Fixes: e7c9d66e313b ("RISC-V: Report vector unaligned access speed hwprobe")
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Signed-off-by: Tingbo Liao <tingbo.liao@starfivetech.com>
Link: https://lore.kernel.org/r/20250228090801.8334-1-tingbo.liao@starfivetech.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  riscv: mm: Don't use %pK through printk  [Thomas Weißschuh]

Restricted pointers ("%pK") are not meant to be used through printk(). They
can unintentionally expose security-sensitive, raw pointer values. Use
regular pointer formatting instead.

Link: https://lore.kernel.org/lkml/20250113171731-dc10e3c1-da64-4af0-b767-7c7070468023@linutronix.de/
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20250217-restricted-pointers-riscv-v1-1-72a078076a76@linutronix.de
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  riscv: remove redundant CMDLINE_FORCE check  [Zixian Zeng]

Drop the redundant CMDLINE_FORCE check, as it is already done in
early_init_dt_scan_chosen().

Signed-off-by: Zixian Zeng <sycamoremoon376@gmail.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20250114-rebund-v1-1-5632b2d54d6c@gmail.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  riscv: ftrace: Add parentheses in macro definitions of make_call_t0 and make_call_ra  [Juhan Jin]

This patch adds parentheses to the caller and callee parameters of the macros
make_call_t0 and make_call_ra. Every existing invocation of these two macros
uses a single variable for each argument, so the absence of the parentheses
seems okay. However, future invocations might use more complex expressions as
arguments. For example, a future invocation might look like this:

  make_call_t0(a - b, c, call)

Without parentheses in the macro definition, the macro invocation expands to:

  ... unsigned int offset = (unsigned long) c - (unsigned long) a - b; ...

which is clearly wrong. The use of parentheses ensures arguments are
correctly evaluated and potentially saves future users of make_call_t0 and
make_call_ra debugging trouble.

Fixes: 6724a76cff85 ("riscv: ftrace: Reduce the detour code size to half")
Signed-off-by: Juhan Jin <juhan.jin@foxmail.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/tencent_AE90AA59903A628E87E9F80E563DA5BA5508@qq.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
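The pitfall generalizes beyond these two macros; a self-contained
illustration (macro names here are hypothetical):

  #define OFFSET_BAD(caller, callee) \
          ((unsigned long)callee - (unsigned long)caller)
  #define OFFSET_OK(caller, callee)  \
          ((unsigned long)(callee) - (unsigned long)(caller))

  /*
   * OFFSET_BAD(a - b, c) expands to
   *         (unsigned long)c - (unsigned long)a - b      <- b is subtracted
   * OFFSET_OK(a - b, c) expands to
   *         (unsigned long)(c) - (unsigned long)(a - b)  <- as intended
   */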
2025-03-18  riscv: migrate to the generic rule for built-in DTB  [Masahiro Yamada]

Commit 654102df2ac2 ("kbuild: add generic support for built-in boot DTBs")
introduced generic support for built-in DTBs. Select GENERIC_BUILTIN_DTB when
built-in DTB support is enabled. To keep consistency across architectures,
this commit also renames CONFIG_BUILTIN_DTB_SOURCE to
CONFIG_BUILTIN_DTB_NAME.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20241222000836.2578171-1-masahiroy@kernel.org
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  riscv: tracing: Fix __write_overflow_field in ftrace_partial_regs()  [Charlie Jenkins]

The size of &regs->a0 is unknown, causing the error:

  ../include/linux/fortify-string.h:571:25: warning: call to
  '__write_overflow_field' declared with attribute warning: detected write
  beyond size of field (1st parameter); maybe use struct_group()?
  [-Wattribute-warning]

Fix this by wrapping the required registers in pt_regs with struct_group()
and referencing the group when doing the offending memcpy().

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20250224-fix_ftrace_partial_regs-v1-1-54b906417e86@rivosinc.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
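A hedged sketch of the struct_group() shape, with the riscv register list
abbreviated and the group name (a_regs) illustrative; struct_group() lets the
memcpy() destination name the whole span with a well-defined size:

  struct pt_regs {
          unsigned long epc;
          /* ... */
          struct_group(a_regs,
                  unsigned long a0, a1, a2, a3, a4, a5, a6, a7;
          );
          /* ... */
  };

  /* The offending memcpy(&regs->a0, src, ...) then becomes: */
  memcpy(&regs->a_regs, src, sizeof(regs->a_regs));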
2025-03-18  riscv: Remove duplicate CLINT_TIMER selections  [Geert Uytterhoeven]

Since commit f862bbf4cdca696e ("riscv: Allow NOMMU kernels to run in S-mode")
in v6.10, CLINT_TIMER is selected by the main RISCV symbol when RISCV_M_MODE
is enabled.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/ce55529a42fa232cacd580e38866c60701f91095.1738764474.git.geert+renesas@glider.be
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  riscv: defconfig: Disable Renesas SoC support  [Geert Uytterhoeven]

Follow-up to commit e36ddf3226864e09 ("riscv: defconfig: Disable RZ/Five
peripheral support") in v6.12-rc1:

  - Disable ARCH_RENESAS, too, as currently RZ/Five is the sole Renesas
    RISC-V SoC,
  - Drop the no longer needed explicit disable of USB_XHCI_RCAR, which
    depends on ARCH_RENESAS.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Link: https://lore.kernel.org/r/e8a2fb273c8c68bd6d526b924b4212f397195b28.1738764211.git.geert+renesas@glider.be
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  riscv: Fix a comment typo in set_mm_asid()  [Chin Yik Ming]

s/verion/version

Signed-off-by: Chin Yik Ming <yikming2222@gmail.com>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20241114212725.4172401-1-yikming2222@gmail.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
2025-03-18  Merge patch series "Support SSTC while PM operations"  [Alexandre Ghiti]

Nick Hu <nick.hu@sifive.com> says:

When a cpu is going to be hotplugged, stop the stimecmp to prevent a pending
interrupt. When the cpu is going to be suspended, save the stimecmp before
entering the suspend state and restore it in the resume path.

* patches from https://lore.kernel.org/r/20250219114135.27764-1-nick.hu@sifive.com:
  clocksource/drivers/timer-riscv: Stop stimecmp when cpu hotplug
  riscv: Add stimecmp save and restore

Link: https://lore.kernel.org/r/20250219114135.27764-1-nick.hu@sifive.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>