path: root/kernel
2025-03-11  cgroup/cpuset-v1: Add deprecation messages to memory_spread_page and memory_spread_slab  (Michal Koutný)
There is MPOL_INTERLEAVE for explicit user allocations. Deprecate the spreading of allocations that users carry out unwittingly. Use the plain warning level for slab spreading, since such a knob is unnecessarily intertwined with the slab allocator.

Signed-off-by: Michal Koutný <mkoutny@suse.com>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2025-03-11  cgroup/cpuset-v1: Add deprecation messages to sched_load_balance and memory_pressure_enabled  (Michal Koutný)
These two v1 features have analogues in cgroup v2.

Signed-off-by: Michal Koutný <mkoutny@suse.com>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2025-03-11  stop-machine: Add comment for rcu_momentary_eqs()  (Paul E. McKenney)
Add a comment to explain the purpose of the rcu_momentary_eqs() call from multi_cpu_stop(), which is to suppress false-positive RCU CPU stall warnings. Reported-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/87wmeuanti.ffs@tglx/ Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Valentin Schneider <vschneid@redhat.com> Cc: Neeraj Upadhyay <neeraj.upadhyay@kernel.org> Cc: Frederic Weisbecker <frederic@kernel.org>
2025-03-11  printk: Check CON_SUSPEND when unblanking a console  (Marcos Paulo de Souza)
Commit 9e70a5e109a4 ("printk: Add per-console suspended state") introduced the CON_SUSPENDED flag for consoles. Suspended consoles stop receiving messages, so don't unblank a suspended console: it won't be showing anything either way.

Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Link: https://lore.kernel.org/r/20250226-printk-renaming-v1-5-0b878577f2e6@suse.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
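A rough sketch of the resulting skip, using the existing printk console flags (the loop shape here is simplified, not the literal patch):

	for_each_console_srcu(con) {
		short flags = console_srcu_read_flags(con);

		if (flags & CON_SUSPENDED)	/* not receiving messages */
			continue;
		if ((flags & CON_ENABLED) && con->unblank)
			con->unblank();
	}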
2025-03-11  printk: Rename console_start to console_resume  (Marcos Paulo de Souza)
The intent of console_start was to resume a previously suspended console, so rename it accordingly. Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Reviewed-by: John Ogness <john.ogness@linutronix.de> Link: https://lore.kernel.org/r/20250226-printk-renaming-v1-4-0b878577f2e6@suse.com [pmladek@suse.com: Fixed typo in the commit message. Updated also new drm_log.c.] Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-03-11  printk: Rename console_stop to console_suspend  (Marcos Paulo de Souza)
The intent of console_stop was in fact to suspend it, so rename the function accordingly. Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Reviewed-by: John Ogness <john.ogness@linutronix.de> Link: https://lore.kernel.org/r/20250226-printk-renaming-v1-3-0b878577f2e6@suse.com [pmladek@suse.com: Fixed typo in the commit message. Updated also new drm_log.c] Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-03-11  printk: Rename resume_console to console_resume_all  (Marcos Paulo de Souza)
The function resume_console has a misleading name, since it resumes all consoles, so rename it accordingly. Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Reviewed-by: John Ogness <john.ogness@linutronix.de> Link: https://lore.kernel.org/r/20250226-printk-renaming-v1-2-0b878577f2e6@suse.com [pmladek@suse.com: Fixed typo in the commit message.] Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-03-11  printk: Rename suspend_console to console_suspend_all  (Marcos Paulo de Souza)
The function suspend_console has a misleading name, since it suspends all consoles, so rename it accordingly. Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Reviewed-by: John Ogness <john.ogness@linutronix.de> Link: https://lore.kernel.org/r/20250226-printk-renaming-v1-1-0b878577f2e6@suse.com [pmladek@suse.com: Fixed typo in the commit message.] Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-03-10  perf/core: Remove optional 'size' arguments from strscpy() calls  (Thorsten Blum)
The 'size' parameter is optional and strscpy() automatically determines the length of the destination buffer using sizeof() if the argument is omitted. This makes the explicit sizeof() calls unnecessary. Furthermore, KSYM_NAME_LEN is equal to sizeof(name) and can also be removed. Remove them to shorten and simplify the code. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250310192336.442994-1-thorsten.blum@linux.dev
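For illustration, the simplification looks roughly like this (buffer and source names are placeholders, not the exact perf code):

	char name[KSYM_NAME_LEN];

	/* Before: the size argument merely restates sizeof(name) */
	strscpy(name, sym_name, KSYM_NAME_LEN);

	/* After: the two-argument form derives the bound via sizeof(name) */
	strscpy(name, sym_name);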
2025-03-10  Merge 6.14-rc6 into driver-core-next  (Greg Kroah-Hartman)
We need the driver core fix in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-03-10  sched/clock: Don't define sched_clock_irqtime as static key  (Yafang Shao)
sched_clock_irqtime was defined as a static key in:

  8722903cbb8f ("sched: Define sched_clock_irqtime as static key")

However, this change introduces a 'sleeping in atomic context' warning:

  arch/x86/kernel/tsc.c:1214 mark_tsc_unstable() warn: sleeping in atomic context

As analyzed by Dan, the affected code path is as follows:

  vcpu_load() <- disables preempt
  -> kvm_arch_vcpu_load()
     -> mark_tsc_unstable() <- sleeps

virt/kvm/kvm_main.c
   166  void vcpu_load(struct kvm_vcpu *vcpu)
   167  {
   168          int cpu = get_cpu();
                          ^^^^^^^^^^ This get_cpu() disables preemption.
   169
   170          __this_cpu_write(kvm_running_vcpu, vcpu);
   171          preempt_notifier_register(&vcpu->preempt_notifier);
   172          kvm_arch_vcpu_load(vcpu, cpu);
   173          put_cpu();
   174  }

arch/x86/kvm/x86.c
  4979          if (unlikely(vcpu->cpu != cpu) || kvm_check_tsc_unstable()) {
  4980                  s64 tsc_delta = !vcpu->arch.last_host_tsc ? 0 :
  4981                          rdtsc() - vcpu->arch.last_host_tsc;
  4982                  if (tsc_delta < 0)
  4983                          mark_tsc_unstable("KVM discovered backwards TSC");

arch/x86/kernel/tsc.c
  1206  void mark_tsc_unstable(char *reason)
  1207  {
  1208          if (tsc_unstable)
  1209                  return;
  1210
  1211          tsc_unstable = 1;
  1212          if (using_native_sched_clock())
  1213                  clear_sched_clock_stable();
  --> 1214      disable_sched_clock_irqtime();
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

kernel/jump_label.c
   245  void static_key_disable(struct static_key *key)
   246  {
   247          cpus_read_lock();
                ^^^^^^^^^^^^^^^^ This lock has a might_sleep() in it which triggers the static checker warning.
   248          static_key_disable_cpuslocked(key);
   249          cpus_read_unlock();
   250  }

Let's revert this change for now, as {disable,enable}_sched_clock_irqtime are used in many places, as pointed out by Sean, including the following:

The code path in clocksource_watchdog():

  clocksource_watchdog()
  |
  -> spin_lock(&watchdog_lock);
     |
     -> __clocksource_unstable()
        |
        -> clocksource.mark_unstable() == tsc_cs_mark_unstable()
           |
           -> disable_sched_clock_irqtime()

And the code path in sched_clock_register():

  /* Cannot register a sched_clock with interrupts on */
  local_irq_save(flags);
  ...
  /* Enable IRQ time accounting if we have a fast enough sched_clock() */
  if (irqtime > 0 || (irqtime == -1 && rate >= 1000000))
          enable_sched_clock_irqtime();
  local_irq_restore(flags);

[ lkp@intel.com: reported a build error in the prev version ]
[ mingo: cherry-picked it over into sched/urgent ]

Closes: https://lore.kernel.org/kvm/37a79ba3-9ce0-479c-a5b0-2bd75d573ed3@stanley.mountain/
Fixes: 8722903cbb8f ("sched: Define sched_clock_irqtime as static key")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Debugged-by: Dan Carpenter <dan.carpenter@linaro.org>
Debugged-by: Sean Christopherson <seanjc@google.com>
Debugged-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20250205032438.14668-1-laoar.shao@gmail.com
2025-03-10  module: Remove unnecessary size argument when calling strscpy()  (Thorsten Blum)
The size parameter is optional and strscpy() automatically determines the length of the destination buffer using sizeof() if the argument is omitted. This makes the explicit sizeof() unnecessary. Remove it to shorten and simplify the code. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Link: https://lore.kernel.org/r/20250308194631.191670-2-thorsten.blum@linux.dev Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Replace deprecated strncpy() with strscpy()  (Thorsten Blum)
strncpy() is deprecated for NUL-terminated destination buffers; use strscpy() instead. The destination buffer 'ownername' is only used with "%s" format strings and must therefore be NUL-terminated, but not NUL-padded. No functional changes intended.

Link: https://github.com/KSPP/linux/issues/90
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Link: https://lore.kernel.org/r/20250307113546.112237-2-thorsten.blum@linux.dev
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
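A sketch of the behavioral difference (the destination buffer name is from the commit; the source expression is illustrative):

	char ownername[MODULE_NAME_LEN];

	/* strncpy() does not NUL-terminate if the source fills the whole
	 * buffer, and NUL-pads the remainder otherwise. */
	strncpy(ownername, module_name(owner), MODULE_NAME_LEN);

	/* strscpy() always NUL-terminates and never NUL-pads, which is
	 * all a "%s" format user needs. */
	strscpy(ownername, module_name(owner));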
2025-03-10  params: Annotate struct module_param_attrs with __counted_by()  (Thorsten Blum)
Add the __counted_by compiler attribute to the flexible array member attrs to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and CONFIG_FORTIFY_SOURCE. Increment num before adding a new param_attribute to the attrs array and adjust the array index accordingly. Increment num immediately after the first reallocation such that the reallocation for the NULL terminator only needs to add 1 (instead of 2) to mk->mp->num. Use struct_size() instead of manually calculating the size for the reallocation. Use krealloc_array() for the additional NULL terminator. Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Link: https://lore.kernel.org/r/20250213221352.2625-3-thorsten.blum@linux.dev Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
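The resulting shape, heavily abridged (see kernel/params.c for the real structure):

	struct module_param_attrs {
		unsigned int num;		/* counts valid entries in attrs[] */
		struct attribute_group grp;
		struct param_attribute attrs[] __counted_by(num);
	};

	/* num is bumped first so the __counted_by(num) bound already
	 * covers the slot being initialized. */
	mk->mp->num++;
	pattr = &mk->mp->attrs[mk->mp->num - 1];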
2025-03-10  static_call: Use RCU in all users of __module_text_address().  (Sebastian Andrzej Siewior)
__module_text_address() can be invoked within a RCU section; there is no requirement to have preemption disabled. Replace the preempt_disable() section around __module_text_address() with RCU.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-28-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
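The same before/after shape recurs across this series; a minimal self-contained sketch (the wrapper function is hypothetical):

	static bool is_module_text(unsigned long addr)
	{
		bool ret;

		/* Previously bracketed by preempt_disable()/preempt_enable();
		 * a plain RCU read-side section gives the same lifetime
		 * guarantee for the module. */
		rcu_read_lock();
		ret = __module_text_address(addr) != NULL;
		rcu_read_unlock();

		return ret;
	}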
2025-03-10  kprobes: Use RCU in all users of __module_text_address().  (Sebastian Andrzej Siewior)
__module_text_address() can be invoked within a RCU section; there is no requirement to have preemption disabled. Replace the preempt_disable() section around __module_text_address() with RCU.

Cc: David S. Miller <davem@davemloft.net>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: linux-trace-kernel@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250129084925.9ppBjGLC@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  bpf: Use RCU in all users of __module_text_address().  (Sebastian Andrzej Siewior)
__module_address() can be invoked within a RCU section; there is no requirement to have preemption disabled. Replace the preempt_disable() section around __module_address() with RCU.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Matt Bobrowski <mattbobrowski@google.com>
Cc: Song Liu <song@kernel.org>
Cc: Stanislav Fomichev <sdf@fomichev.me>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Yonghong Song <yonghong.song@linux.dev>
Cc: bpf@vger.kernel.org
Cc: linux-trace-kernel@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20250129084751.tH6iidUO@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  jump_label: Use RCU in all users of __module_text_address().  (Sebastian Andrzej Siewior)
__module_text_address() can be invoked within a RCU section; there is no requirement to have preemption disabled. Replace the preempt_disable() section around __module_text_address() with RCU.

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Jason Baron <jbaron@akamai.com>
Cc: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-25-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  jump_label: Use RCU in all users of __module_address().  (Sebastian Andrzej Siewior)
__module_address() can be invoked within a RCU section; there is no requirement to have preemption disabled. Replace the preempt_disable() section around __module_address() with RCU.

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Jason Baron <jbaron@akamai.com>
Cc: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-24-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  cfi: Use RCU while invoking __module_address().  (Sebastian Andrzej Siewior)
__module_address() can be invoked within a RCU section; there is no requirement to have preemption disabled.

The _notrace() variant was introduced in commit 14c4c8e41511a ("cfi: Use rcu_read_{un}lock_sched_notrace"). The recursive case where __cfi_slowpath_diag() could end up calling itself is no longer present, as all that logic is gone since commit 89245600941e ("cfi: Switch to -fsanitize=kcfi"). Sami Tolvanen said that KCFI checks don't perform function calls. Elliot Berman verified it with

 | modprobe -a dummy_stm stm_ftrace stm_p_basic
 | mkdir -p /sys/kernel/config/stp-policy/dummy_stm.0.my-policy/default
 | echo function > /sys/kernel/tracing/current_tracer
 | echo 1 > /sys/kernel/tracing/tracing_on
 | echo dummy_stm.0 > /sys/class/stm_source/ftrace/stm_source_link

Replace the rcu_read_lock_sched_notrace() section around __module_address() with RCU.

Cc: Elliot Berman <quic_eberman@quicinc.com>
Cc: Kees Cook <kees@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: llvm@lists.linux.dev
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Elliot Berman <elliot.berman@oss.qualcomm.com> # sm8650-qrd [1]
Link: https://lore.kernel.org/all/20241230185812429-0800.eberman@hu-eberman-lv.qualcomm.com [1]
Link: https://lore.kernel.org/r/20250108090457.512198-22-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in all users of __module_text_address().  (Sebastian Andrzej Siewior)
__module_text_address() can be invoked within a RCU section; there is no requirement to have preemption disabled. Replace the preempt_disable() section around __module_text_address() with RCU.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-16-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in all users of __module_address().  (Sebastian Andrzej Siewior)
__module_address() can be invoked within a RCU section; there is no requirement to have preemption disabled. Replace the preempt_disable() section around __module_address() with RCU.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-15-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in search_module_extables().  (Sebastian Andrzej Siewior)
search_module_extables() returns an exception_table_entry belonging to a module. The lookup via __module_address() can be performed with RCU protection. The returned exception_table_entry remains valid because the passed address usually belongs to a module that is currently executing, so "something else" holds a reference to that module, ensuring that it cannot be removed. Exceptions here are:

- kprobes, which acquire a reference on the module beforehand

- MCE, which invokes the function from within a timer; the RCU lifetime guarantees (of the timer) are sufficient

Therefore it is safe to return the exception_table_entry outside the RCU section which provided the module. Use RCU for the lookup in search_module_extables() and update the comment.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-14-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Allow __module_address() to be called from RCU section.  (Sebastian Andrzej Siewior)
mod_find() uses either the modules list or a tree lookup (CONFIG_MODULES_TREE_LOOKUP) to find a module. The list and the tree can both be iterated under RCU assumption (as well as RCU-sched). Remove module_assert_mutex_or_preempt() from __module_address(), and remove the helper entirely, since __module_address() was its last user. Update comments.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-13-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in __is_module_percpu_address().  (Sebastian Andrzej Siewior)
The modules list can be accessed under RCU assumption. Use RCU protection instead of preempt_disable().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-12-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in find_symbol().  (Sebastian Andrzej Siewior)
module_assert_mutex_or_preempt() is not needed in find_symbol(). The function checks for RCU-sched or the module_mutex to be acquired; the list_for_each_entry_rcu() below does the same check. Remove module_assert_mutex_or_preempt() from find_symbol(). Use RCU protection to invoke find_symbol() and update callers.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-11-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Remove module_assert_mutex_or_preempt() from try_add_tainted_module().  (Sebastian Andrzej Siewior)
module_assert_mutex_or_preempt() is not needed in try_add_tainted_module(). The function checks for RCU-sched or the module_mutex to be acquired. The list_for_each_entry_rcu() below does the same check. Remove module_assert_mutex_or_preempt() from try_add_tainted_module(). Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250108090457.512198-10-bigeasy@linutronix.de Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in module_kallsyms_on_each_symbol().  (Sebastian Andrzej Siewior)
module::kallsyms can be accessed under RCU assumption. Use rcu_dereference() to access module::kallsyms. Update callers. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250108090457.512198-9-bigeasy@linutronix.de Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in __find_kallsyms_symbol_value().  (Sebastian Andrzej Siewior)
module::kallsyms can be accessed under RCU assumption. Use rcu_dereference() to access module::kallsyms. Update callers. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250108090457.512198-8-bigeasy@linutronix.de Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in find_module_all().  (Sebastian Andrzej Siewior)
The modules list and module::kallsyms can be accessed under RCU assumption. Remove module_assert_mutex_or_preempt() from find_module_all() so it can be used under RCU protection without warnings. Update its callers to use RCU protection instead of preempt_disable(). Cc: Jiri Kosina <jikos@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: linux-trace-kernel@vger.kernel.org Cc: live-patching@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20250108090457.512198-7-bigeasy@linutronix.de Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in module_get_kallsym().  (Sebastian Andrzej Siewior)
The modules list and module::kallsyms can be accessed under RCU assumption. Iterate the modules with RCU protection, use rcu_dereference() to access the kallsyms pointer. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250108090457.512198-6-bigeasy@linutronix.de Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use RCU in find_kallsyms_symbol().  (Sebastian Andrzej Siewior)
The modules list and module::kallsyms can be accessed under RCU assumption. Use rcu_dereference() to reference the kallsyms pointer in find_kallsyms_symbol(). Use an RCU read-side section instead of preempt_disable() in callers of find_kallsyms_symbol(). Keep the preempt-disable in module_address_lookup() due to __module_address().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-5-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-03-10  module: Use proper RCU assignment in add_kallsyms().  (Sebastian Andrzej Siewior)
add_kallsyms() assigns the RCU pointer module::kallsyms and sets up the structures behind it, which point to init-data. The module has not been published yet, so nothing can see the kallsyms pointer or the data behind it, and the module's init function has not been invoked yet either. There is no need to use rcu_dereference() here; it is done only to keep checkers quiet. The whole RCU read section is also not needed.

Use a local kallsyms pointer and set up the data structures. Assign that pointer to the data structure at the end via rcu_assign_pointer().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-4-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
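A condensed sketch of the flow (helper and parameter names hypothetical; fields as in struct mod_kallsyms):

	static void publish_kallsyms(struct module *mod, struct mod_kallsyms *ks,
				     Elf_Sym *symtab, unsigned int num_symtab)
	{
		/* Nothing can see ks yet, so plain stores suffice. */
		ks->symtab = symtab;
		ks->num_symtab = num_symtab;

		/* Publish once; readers pair this with rcu_dereference(). */
		rcu_assign_pointer(mod->kallsyms, ks);
	}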
2025-03-10  module: Begin to move from RCU-sched to RCU.  (Sebastian Andrzej Siewior)
The RCU usage in module was introduced in commit d72b37513cdfb ("Remove stop_machine during module load v2"), which claimed not to be RCU but similar. Then there was another improvement in commit e91defa26c527 ("module: don't use stop_machine on module load"). It became a mix of RCU and RCU-sched and was eventually fixed in 0be964be0d450 ("module: Sanitize RCU usage and locking"). Later, RCU and RCU-sched were merged in commit cb2f55369d3a9 ("modules: Replace synchronize_sched() and call_rcu_sched()") so that was aligned.

Looking at it today, there are still leftovers: preempt_disable() was used instead of rcu_read_lock_sched(), and the RCU/RCU-sched merge was not complete, as there is still rcu_dereference_sched() for module::kallsyms.

The RCU-lists modules and unloaded_tainted_modules are always accessed under RCU protection or the module_mutex. The modules list iteration can always happen safely because the module will not disappear: once the module is removed (free_module()), after removing the module from the list there is a synchronize_rcu() which waits until every RCU reader has left the section. That means iterating over the list within an RCU read section is enough; there is no need to disable preemption.

module::kallsyms is first assigned in add_kallsyms() before the module is added to the list. At this point, it points to init data. This pointer is later updated, and before the init code is removed there is also a synchronize_rcu() in do_free_init(). That means an RCU read lock is enough for protection and rcu_dereference() can be safely used.

Convert module code and its users step by step. Update comments and convert print_modules() to use RCU.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250108090457.512198-3-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
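The guarantee described above boils down to this reader/remover pairing (a sketch; the modules list itself is private to kernel/module/main.c):

	/* Reader: a plain RCU section is enough to walk the list. */
	rcu_read_lock();
	list_for_each_entry_rcu(mod, &modules, list)
		pr_info("%s\n", mod->name);
	rcu_read_unlock();

	/* Remover, as in free_module(): unlink, then wait out all readers. */
	list_del_rcu(&mod->list);
	synchronize_rcu();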
2025-03-10  genirq: Make a few functions static  (Thomas Gleixner)
None of these functions are used outside of their source files. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/878qpe2gnx.ffs@tglx
2025-03-10  kernel/watch_queue: Use pipe_buf() to retrieve the pipe buffer  (K Prateek Nayak)
Use the pipe_buf() helper to retrieve the pipe buffer in post_one_notification(), replacing the open-coded logic.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Link: https://lore.kernel.org/r/20250307052919.34542-3-kprateek.nayak@amd.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
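The replaced pattern, roughly (pipe_buf() lives in include/linux/pipe_fs_i.h; the slot expression here is illustrative):

	struct pipe_buffer *buf;

	/* Before: open-coded ring indexing */
	buf = &pipe->bufs[pipe->head & (pipe->ring_size - 1)];

	/* After: the helper hides the mask arithmetic */
	buf = pipe_buf(pipe, pipe->head);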
2025-03-08  net: move misc netdev_lock flavors to a separate header  (Jakub Kicinski)
Move the more esoteric helpers for netdev instance lock to a dedicated header. This avoids growing netdevice.h to infinity and makes rebuilding the kernel much faster (after touching the header with the helpers). The main netdev_lock() / netdev_unlock() functions are used in static inlines in netdevice.h and will probably be used most commonly, so keep them in netdevice.h. Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250307183006.2312761-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-08  vdso: Rework struct vdso_time_data and introduce struct vdso_clock  (Anna-Maria Behnsen)
To support multiple PTP clocks, the VDSO data structure needs to be reworked. All clock specific data will end up in struct vdso_clock, and struct vdso_time_data will contain an array of VDSO clocks.

Now that all preparatory changes are in place, split the clock related struct members into a separate struct vdso_clock. Make sure all users are aware that vdso_time_data is no longer initialized as an array and that vdso_clock is now the array inside vdso_data. Remove the vdso_clock define, which mapped it to vdso_time_data for the transition.

Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
Signed-off-by: Nam Cao <namcao@linutronix.de>
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250303-vdso-clock-v1-19-c1b5c69a166f@linutronix.de
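The end state looks roughly like this (member lists heavily abridged and partly assumed; see include/vdso/datapage.h for the real definitions):

	struct vdso_clock {
		u32			seq;
		s32			clock_mode;
		u64			cycle_last;
		struct vdso_timestamp	basetime[VDSO_BASES];
		/* ... further clock specific fields ... */
	};

	struct vdso_time_data {
		struct vdso_clock	clock_data[CS_BASES];
		s32			tz_minuteswest;
		s32			tz_dsttime;
		/* ... globally shared fields ... */
	};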
2025-03-08  time/namespace: Prepare introduction of struct vdso_clock  (Anna-Maria Behnsen)
To support multiple PTP clocks, the VDSO data structure needs to be reworked. All clock specific data will end up in struct vdso_clock, and struct vdso_time_data will contain an array of VDSO clocks. At the moment, vdso_clock is simply a define which maps vdso_clock to vdso_time_data.

To prepare for the rework of the data structures, replace the struct vdso_time_data pointer with a struct vdso_clock pointer where applicable. No functional change.

Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
Signed-off-by: Nam Cao <namcao@linutronix.de>
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250303-vdso-clock-v1-14-c1b5c69a166f@linutronix.de
2025-03-08  vdso/namespace: Rename timens_setup_vdso_data() to reflect new vdso_clock struct  (Anna-Maria Behnsen)
To support multiple PTP clocks, the VDSO data structure needs to be reworked. All clock specific data will end up in struct vdso_clock, and struct vdso_time_data will contain an array of VDSO clocks. At the moment, vdso_clock is simply a define which maps vdso_clock to vdso_time_data.

For time namespaces, vdso_time_data needs to be set up, but only the clock related part of the vdso_data requires this setup. To reflect the future struct vdso_clock, rename timens_setup_vdso_data() to timens_setup_vdso_clock_data(). No functional change.

Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
Signed-off-by: Nam Cao <namcao@linutronix.de>
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250303-vdso-clock-v1-13-c1b5c69a166f@linutronix.de
2025-03-08  vdso/vsyscall: Prepare introduction of struct vdso_clock  (Anna-Maria Behnsen)
To support multiple PTP clocks, the VDSO data structure needs to be reworked. All clock specific data will end up in struct vdso_clock, and struct vdso_time_data will contain an array of VDSO clocks. At the moment, vdso_clock is simply a define which maps vdso_clock to vdso_time_data.

To prepare for the rework of the data structures, replace the struct vdso_time_data pointer with a struct vdso_clock pointer where applicable. No functional change.

Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
Signed-off-by: Nam Cao <namcao@linutronix.de>
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250303-vdso-clock-v1-12-c1b5c69a166f@linutronix.de
2025-03-07  capability: Remove unused has_capability  (Dr. David Alan Gilbert)
The vanilla has_capability() function has been unused since 2018's commit dcb569cf6ac9 ("Smack: ptrace capability use fixes"). Remove it, and fix up a comment in security/commoncap.c that referenced it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Serge Hallyn <sergeh@kernel.org>
2025-03-07  ubsan/overflow: Rework integer overflow sanitizer option to turn on everything  (Kees Cook)
Since we're going to approach integer overflow mitigation a type at a time, we need to enable all of the associated sanitizers, and then opt into types one at a time. Rename the existing "signed wrap" sanitizer to just the entire topic area: "integer wrap". Enable the implicit integer truncation sanitizers, with required callbacks and tests. Notably, this requires features (currently) only available in Clang, so we can depend on the cc-option tests to determine availability instead of doing version tests. Link: https://lore.kernel.org/r/20250307041914.937329-1-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
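The newly enabled truncation sanitizers flag implicit narrowing conversions like these (illustrative):

	u32 big = 0x12345678;
	u8 small = big;		/* implicit truncation: small becomes 0x78 */

	int wide = 100000;
	short narrow = wide;	/* 100000 does not fit into 16 bits */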
2025-03-07  bpf: fix a possible NULL deref in bpf_map_offload_map_alloc()  (Eric Dumazet)
Call bpf_dev_offload_check() before netdev_lock_ops(). This is needed if attr->map_ifindex is not valid.

Oops: general protection fault, probably for non-canonical address 0xdffffc0000000197: 0000 [#1] PREEMPT SMP KASAN PTI
KASAN: null-ptr-deref in range [0x0000000000000cb8-0x0000000000000cbf]
RIP: 0010:netdev_need_ops_lock include/linux/netdevice.h:2792 [inline]
RIP: 0010:netdev_lock_ops include/linux/netdevice.h:2803 [inline]
RIP: 0010:bpf_map_offload_map_alloc+0x19a/0x910 kernel/bpf/offload.c:533
Call Trace:
 <TASK>
 map_create+0x946/0x11c0 kernel/bpf/syscall.c:1455
 __sys_bpf+0x6d3/0x820 kernel/bpf/syscall.c:5777
 __do_sys_bpf kernel/bpf/syscall.c:5902 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5900 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5900
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83

Fixes: 97246d6d21c2 ("net: hold netdev instance lock during ndo_bpf")
Reported-by: syzbot+0c7bfd8cf3aecec92708@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/67caa2b1.050a0220.15b4b9.0077.GAE@google.com/T/#u
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250307074303.1497911-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
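A simplified shape of the fix (error handling abridged):

	netdev = dev_get_by_index(net, attr->map_ifindex);

	/* Validate first: bpf_dev_offload_check() rejects a NULL netdev, so
	 * a bogus map_ifindex can no longer reach netdev_lock_ops(). */
	err = bpf_dev_offload_check(netdev);
	if (err)
		goto err_put;

	netdev_lock_ops(netdev);	/* now safe to dereference netdev */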
2025-03-08  locking/lockdep: Add kasan_check_byte() check in lock_acquire()  (Waiman Long)
KASAN instrumentation of lockdep has been disabled, as we don't need KASAN to check the validity of lockdep internal data structures and incur unnecessary performance overhead. However, the lockdep_map pointer passed in externally may not be valid (e.g. use-after-free), and we run the risk of using garbage data, resulting in false lockdep reports.

Add a kasan_check_byte() call in lock_acquire() for non-kernel-core data objects to catch an invalid lockdep_map and print out a KASAN report before any lockdep splat, if any.

Suggested-by: Marco Elver <elver@google.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20250214195242.2480920-1-longman@redhat.com
Link: https://lore.kernel.org/r/20250307232717.1759087-7-boqun.feng@gmail.com
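Roughly what the added check amounts to (the exact guard condition is an assumption of this sketch):

	/* Probe one byte of the caller-supplied map: on a use-after-free,
	 * KASAN reports here before lockdep reads garbage out of it. */
	if (!static_obj(lock))
		kasan_check_byte(lock);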
2025-03-08  locking/lockdep: Disable KASAN instrumentation of lockdep.c  (Waiman Long)
Both KASAN and LOCKDEP are commonly enabled in building a debug kernel. Each of them can significantly slow down a debug kernel, and enabling KASAN instrumentation of the LOCKDEP code slows things down further. Since LOCKDEP is a high overhead debugging tool, it will never get enabled in a production kernel. The LOCKDEP code is also pretty mature and is unlikely to get major changes. There is also a possibility of recursion similar to KCSAN.

To evaluate the performance impact of disabling KASAN instrumentation of lockdep.c, the time to do a parallel build of the Linux defconfig kernel was used as the benchmark. Two x86-64 systems (Skylake & Zen 2) and an arm64 system were used as test beds. Two sets of non-RT and RT kernels with similar configurations, except mainly CONFIG_PREEMPT_RT, were used for evaluation.

For the Skylake system:

  Kernel                                       Run time          Sys time
  ------                                       --------          --------
  Non-debug kernel (baseline)                  0m47.642s         4m19.811s
  [CONFIG_KASAN_INLINE=y]
  Debug kernel                                 2m11.108s (x2.8)  38m20.467s (x8.9)
  Debug kernel (patched)                       1m49.602s (x2.3)  31m28.501s (x7.3)
  Debug kernel (patched + mitigations=off)     1m30.988s (x1.9)  26m41.993s (x6.2)

  RT kernel (baseline)                         0m54.871s         7m15.340s
  [CONFIG_KASAN_INLINE=n]
  RT debug kernel                              6m07.151s (x6.7)  135m47.428s (x18.7)
  RT debug kernel (patched)                    3m42.434s (x4.1)  74m51.636s (x10.3)
  RT debug kernel (patched + mitigations=off)  2m40.383s (x2.9)  57m54.369s (x8.0)
  [CONFIG_KASAN_INLINE=y]
  RT debug kernel                              3m22.155s (x3.7)  77m53.018s (x10.7)
  RT debug kernel (patched)                    2m36.700s (x2.9)  54m31.195s (x7.5)
  RT debug kernel (patched + mitigations=off)  2m06.110s (x2.3)  45m49.493s (x6.3)

For the Zen 2 system:

  Kernel                                       Run time          Sys time
  ------                                       --------          --------
  Non-debug kernel (baseline)                  1m42.806s         39m48.714s
  [CONFIG_KASAN_INLINE=y]
  Debug kernel                                 4m04.524s (x2.4)  125m35.904s (x3.2)
  Debug kernel (patched)                       3m56.241s (x2.3)  127m22.378s (x3.2)
  Debug kernel (patched + mitigations=off)     2m38.157s (x1.5)  92m35.680s (x2.3)

  RT kernel (baseline)                         1m51.500s         14m56.322s
  [CONFIG_KASAN_INLINE=n]
  RT debug kernel                              16m04.962s (x8.7) 244m36.463s (x16.4)
  RT debug kernel (patched)                    9m09.073s (x4.9)  129m28.439s (x8.7)
  RT debug kernel (patched + mitigations=off)  3m31.662s (x1.9)  51m01.391s (x3.4)

For the arm64 system:

  Kernel                                       Run time          Sys time
  ------                                       --------          --------
  Non-debug kernel (baseline)                  1m56.844s         8m47.150s
  Debug kernel                                 3m54.774s (x2.0)  92m30.098s (x10.5)
  Debug kernel (patched)                       3m32.429s (x1.8)  77m40.779s (x8.8)

  RT kernel (baseline)                         4m01.641s         18m16.777s
  [CONFIG_KASAN_INLINE=n]
  RT debug kernel                              19m32.977s (x4.9) 304m23.965s (x16.7)
  RT debug kernel (patched)                    16m28.354s (x4.1) 234m18.149s (x12.8)

Turning the mitigations off doesn't seem to have any noticeable impact on the performance of the arm64 system, so the mitigations=off entries aren't included.

For the x86 CPUs, CPU mitigations have a much bigger impact on performance, especially for the RT debug kernel with CONFIG_KASAN_INLINE=n. The SRSO mitigation in Zen 2 has an especially big impact on the debug kernel, and it accounts for the majority of the slowdown with mitigations on, because the patched RET instruction slows down function returns. A lot of helper functions that are normally compiled out or inlined may become real function calls in the debug kernel.

With !CONFIG_KASAN_INLINE, the KASAN instrumentation inserts a lot of __asan_loadX*() and __kasan_check_read() function calls into the memory access portions of the code. The lockdep __lock_acquire() function, for instance, has 66 __asan_loadX*() and 6 __kasan_check_read() calls added with KASAN instrumentation. Of course, the actual numbers may vary depending on the compiler used and the exact version of the lockdep code.

With the Skylake test system, the parallel kernel build time reductions of the RT debug kernel with this patch are:

  CONFIG_KASAN_INLINE=n: -37%
  CONFIG_KASAN_INLINE=y: -22%

The time reduction is less with CONFIG_KASAN_INLINE=y, but it is still significant. Setting CONFIG_KASAN_INLINE=y can result in a significant performance improvement; the major drawback is a significant increase in the size of kernel text. In the case of vmlinux, its text size increases from 45997948 to 67606807 bytes. That is a 47% size increase (about 21 Mbytes). The size increase of other kernel modules should be similar.

With the newly added rtmutex and lockdep lock events, the relevant event counts for the test runs with the Skylake system were:

  Event type          Debug kernel    RT debug kernel
  ----------          ------------    ---------------
  lockdep_acquire     1,968,663,277   5,425,313,953
  rtlock_slowlock     -               401,701,156
  rtmutex_slowlock    -               139,672

The __lock_acquire() calls in the RT debug kernel are 2.8 times those of the non-RT debug kernel with the same workload. Since the __lock_acquire() function is a big hitter in terms of performance slowdown, this makes the RT debug kernel much slower than the non-RT one. The average lock nesting depth is likely to be higher in the RT debug kernel too, leading to longer execution time in the __lock_acquire() function.

As the small advantage of enabling KASAN instrumentation to catch potential memory access errors in the lockdep debugging tool is probably not worth the drawback of further slowing down a debug kernel, disable KASAN instrumentation in the lockdep code to allow the debug kernels to regain some performance, especially the RT debug kernels.

Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250307232717.1759087-6-boqun.feng@gmail.com
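The usual kbuild mechanism for such an exemption is a per-object opt-out, presumably along these lines in kernel/locking/Makefile:

	# Disable KASAN instrumentation of the lockdep hot paths.
	KASAN_SANITIZE_lockdep.o := n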
2025-03-08  locking/lock_events: Add locking events for lockdep  (Waiman Long)
Add some lock events to lockdep to profile its behavior. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250307232717.1759087-5-boqun.feng@gmail.com
2025-03-08  locking/lock_events: Add locking events for rtmutex slow paths  (Waiman Long)
Add locking events for rtlock_slowlock() and rt_mutex_slowlock() for profiling the slow path behavior of rt_spin_lock() and rt_mutex_lock(). Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250307232717.1759087-4-boqun.feng@gmail.com
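Lock events are declared in kernel/locking/lock_events_list.h and bumped with lockevent_inc(); a sketch consistent with the counters quoted in the lockdep KASAN patch above:

	/* Declaration style (lock_events_list.h): */
	LOCK_EVENT(rtlock_slowlock)	/* # of rt_spin_lock() slowpath calls */
	LOCK_EVENT(rtmutex_slowlock)	/* # of rt_mutex_lock() slowpath calls */

	/* In the slow path itself: */
	lockevent_inc(rtlock_slowlock);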
2025-03-08  Merge branch 'locking/urgent' into locking/core, to pick up locking fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-03-08  locking/semaphore: Use wake_q to wake up processes outside lock critical section  (Waiman Long)
A circular lock dependency splat has been seen involving down_trylock():

  ======================================================
  WARNING: possible circular locking dependency detected
  6.12.0-41.el10.s390x+debug
  ------------------------------------------------------
  dd/32479 is trying to acquire lock:
  0015a20accd0d4f8 ((console_sem).lock){-.-.}-{2:2}, at: down_trylock+0x26/0x90

  but task is already holding lock:
  000000017e461698 (&zone->lock){-.-.}-{2:2}, at: rmqueue_bulk+0xac/0x8f0

  the existing dependency chain (in reverse order) is:
  -> #4 (&zone->lock){-.-.}-{2:2}:
  -> #3 (hrtimer_bases.lock){-.-.}-{2:2}:
  -> #2 (&rq->__lock){-.-.}-{2:2}:
  -> #1 (&p->pi_lock){-.-.}-{2:2}:
  -> #0 ((console_sem).lock){-.-.}-{2:2}:

The console_sem -> pi_lock dependency is due to calling try_to_wake_up() while holding the console_sem raw_spinlock. This dependency can be broken by using wake_q to do the wakeup instead of calling try_to_wake_up() under the console_sem lock. This will also make the semaphore's raw_spinlock become a terminal lock without taking any further locks underneath it.

The hrtimer_bases.lock is a raw_spinlock while zone->lock is a spinlock. The hrtimer_bases.lock -> zone->lock dependency happens via the debug_objects_fill_pool() helper function in the debugobjects code.

  -> #4 (&zone->lock){-.-.}-{2:2}:
         __lock_acquire+0xe86/0x1cc0
         lock_acquire.part.0+0x258/0x630
         lock_acquire+0xb8/0xe0
         _raw_spin_lock_irqsave+0xb4/0x120
         rmqueue_bulk+0xac/0x8f0
         __rmqueue_pcplist+0x580/0x830
         rmqueue_pcplist+0xfc/0x470
         rmqueue.isra.0+0xdec/0x11b0
         get_page_from_freelist+0x2ee/0xeb0
         __alloc_pages_noprof+0x2c2/0x520
         alloc_pages_mpol_noprof+0x1fc/0x4d0
         alloc_pages_noprof+0x8c/0xe0
         allocate_slab+0x320/0x460
         ___slab_alloc+0xa58/0x12b0
         __slab_alloc.isra.0+0x42/0x60
         kmem_cache_alloc_noprof+0x304/0x350
         fill_pool+0xf6/0x450
         debug_object_activate+0xfe/0x360
         enqueue_hrtimer+0x34/0x190
         __run_hrtimer+0x3c8/0x4c0
         __hrtimer_run_queues+0x1b2/0x260
         hrtimer_interrupt+0x316/0x760
         do_IRQ+0x9a/0xe0
         do_irq_async+0xf6/0x160

Normally a raw_spinlock to spinlock dependency is not legitimate and will be warned about if CONFIG_PROVE_RAW_LOCK_NESTING is enabled, but debug_objects_fill_pool() is an exception as it explicitly allows this dependency for non-PREEMPT_RT kernels without causing a PROVE_RAW_LOCK_NESTING lockdep splat. As a result, this dependency is legitimate and not a bug.

Anyway, the semaphore is the only locking primitive left that still uses try_to_wake_up() to do a wakeup inside the critical section; all the other locking primitives have been migrated to use wake_q to do the wakeup outside of the critical section. It is also possible that other circular locking dependencies involving printk/console_sem or other existing/new semaphores are lurking somewhere and may show up in the future. Let's just do the migration to wake_q now to avoid headaches like this.

Reported-by: syzbot+ed801a886dfdbfe7136d@syzkaller.appspotmail.com
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250307232717.1759087-3-boqun.feng@gmail.com
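In sketch form, the wake_q pattern the semaphore code migrates to (the two-argument __up() is an assumption of this sketch):

	DEFINE_WAKE_Q(wake_q);

	raw_spin_lock_irqsave(&sem->lock, flags);
	if (likely(list_empty(&sem->wait_list)))
		sem->count++;
	else
		__up(sem, &wake_q);	/* wake_q_add() instead of wake_up_process() */
	raw_spin_unlock_irqrestore(&sem->lock, flags);

	/* The actual try_to_wake_up() now happens with sem->lock dropped. */
	wake_up_q(&wake_q);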