path: root/lib
Age  Commit message  Author
2025-09-24kcfi: Rename CONFIG_CFI_CLANG to CONFIG_CFIKees Cook
The kernel's CFI implementation uses the KCFI ABI specifically, and is not strictly tied to a particular compiler. In preparation for GCC supporting KCFI, rename CONFIG_CFI_CLANG to CONFIG_CFI (along with associated options). Use new "transitional" Kconfig option for old CONFIG_CFI_CLANG that will enable CONFIG_CFI during olddefconfig. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20250923213422.1105654-3-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
2025-09-22lib/decompress: use designated initializers for struct compress_formatThorsten Blum
Switch 'compressed_formats[]' to the more modern and flexible designated initializers. This improves readability and allows struct fields to be reordered. Also use a more concise sentinel marker. Remove the curly braces around the for loop while we're at it. No functional changes intended. Link: https://lkml.kernel.org/r/20250910232350.1308206-2-thorsten.blum@linux.dev Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Reviewed-by: Kuan-Wei Chiu <visitorckw@gmail.com> Cc: David Sterba <dsterba@suse.com> Cc: Nick Terrell <terrelln@fb.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
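As a rough illustration of the pattern this commit describes (the struct layout and the gzip entry are assumed for the example, not the actual lib/decompress.c hunk), designated initializers name each field and an empty braced entry serves as a concise sentinel:

```c
#include <linux/init.h>
#include <linux/decompress/generic.h>	/* decompress_fn */
#include <linux/decompress/unzip.h>	/* gunzip() */

/* Assumed, simplified layout for illustration only. */
struct compress_format {
	unsigned char magic[2];
	const char *name;
	decompress_fn decompressor;
};

/* Before (positional): { {0x1f, 0x8b}, "gzip", gunzip }, ... { {0, 0}, NULL, NULL } */
static const struct compress_format compressed_formats[] __initconst = {
	{ .magic = { 0x1f, 0x8b }, .name = "gzip", .decompressor = gunzip },
	{ /* sentinel */ }
};
```

Because every field is named, the struct members can be reordered without touching the initializer list.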
2025-09-22rust: add find_bit_benchmark_rust module.Burak Emir
Microbenchmark protected by a config FIND_BIT_BENCHMARK_RUST, following `find_bit_benchmark.c` but testing the Rust Bitmap API. We add a fill_random() method protected by the config in order to maintain the abstraction.

The sample output from the benchmark, both C and Rust version:

find_bit_benchmark.c output:
```
Start testing find_bit() with random-filled bitmap
[ 438.101937] find_next_bit: 860188 ns, 163419 iterations
[ 438.109471] find_next_zero_bit: 912342 ns, 164262 iterations
[ 438.116820] find_last_bit: 726003 ns, 163419 iterations
[ 438.130509] find_nth_bit: 7056993 ns, 16269 iterations
[ 438.139099] find_first_bit: 1963272 ns, 16270 iterations
[ 438.173043] find_first_and_bit: 27314224 ns, 32654 iterations
[ 438.180065] find_next_and_bit: 398752 ns, 73705 iterations
[ 438.186689] Start testing find_bit() with sparse bitmap
[ 438.193375] find_next_bit: 9675 ns, 656 iterations
[ 438.201765] find_next_zero_bit: 1766136 ns, 327025 iterations
[ 438.208429] find_last_bit: 9017 ns, 656 iterations
[ 438.217816] find_nth_bit: 2749742 ns, 655 iterations
[ 438.225168] find_first_bit: 721799 ns, 656 iterations
[ 438.231797] find_first_and_bit: 2819 ns, 1 iterations
[ 438.238441] find_next_and_bit: 3159 ns, 1 iterations
```

find_bit_benchmark_rust.rs output:
```
[ 451.182459] find_bit_benchmark_rust:
[ 451.186688] Start testing find_bit() Rust with random-filled bitmap
[ 451.194450] next_bit: 777950 ns, 163644 iterations
[ 451.201997] next_zero_bit: 918889 ns, 164036 iterations
[ 451.208642] Start testing find_bit() Rust with sparse bitmap
[ 451.214300] next_bit: 9181 ns, 654 iterations
[ 451.222806] next_zero_bit: 1855504 ns, 327026 iterations
```

Here are the results from 32 samples, with 95% confidence interval. The microbenchmark was built with RUST_BITMAP_HARDENED=n and run on a machine that did not execute other processes.

Random-filled bitmap:
+-----------+-------+-----------+--------------+-----------+-----------+
| Benchmark | Lang  | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_bit/ | C     | 825.07    | 53.89        | 806.40    | 843.74    |
| next_bit  | Rust  | 870.91    | 46.29        | 854.88    | 886.95    |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_zero/| C     | 933.56    | 56.34        | 914.04    | 953.08    |
| next_zero | Rust  | 945.85    | 60.44        | 924.91    | 966.79    |
+-----------+-------+-----------+--------------+-----------+-----------+

Rust appears 5.5% slower for next_bit, 1.3% slower for next_zero.

Sparse bitmap:
+-----------+-------+-----------+--------------+-----------+-----------+
| Benchmark | Lang  | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_bit/ | C     | 13.17     | 6.21         | 11.01     | 15.32     |
| next_bit  | Rust  | 14.30     | 8.27         | 11.43     | 17.17     |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_zero/| C     | 1859.31   | 82.30        | 1830.80   | 1887.83   |
| next_zero | Rust  | 1908.09   | 139.82       | 1859.65   | 1956.54   |
+-----------+-------+-----------+--------------+-----------+-----------+

Rust appears 8.5% slower for next_bit, 2.6% slower for next_zero.

In summary, taking the arithmetic mean of all slow-downs, we can say the Rust API has a 4.5% slowdown.
Suggested-by: Alice Ryhl <aliceryhl@google.com> Suggested-by: Yury Norov (NVIDIA) <yury.norov@gmail.com> Reviewed-by: Yury Norov (NVIDIA) <yury.norov@gmail.com> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Signed-off-by: Burak Emir <bqe@google.com> Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
2025-09-21alloc_tag: mark inaccurate allocation counters in /proc/allocinfo outputSuren Baghdasaryan
While rare, memory allocation profiling can contain inaccurate counters if slab object extension vector allocation fails. That allocation might succeed later but prior to that, slab allocations that would have used that object extension vector will not be accounted for. To indicate incorrect counters, "accurate:no" marker is appended to the call site line in the /proc/allocinfo output. Bump up /proc/allocinfo version to reflect the change in the file format and update documentation.

Example output with invalid counters:

allocinfo - version: 2.0
0 0 arch/x86/kernel/kdebugfs.c:105 func:create_setup_data_nodes
0 0 arch/x86/kernel/alternative.c:2090 func:alternatives_smp_module_add
0 0 arch/x86/kernel/alternative.c:127 func:__its_alloc accurate:no
0 0 arch/x86/kernel/fpu/regset.c:160 func:xstateregs_set
0 0 arch/x86/kernel/fpu/xstate.c:1590 func:fpstate_realloc
0 0 arch/x86/kernel/cpu/aperfmperf.c:379 func:arch_enable_hybrid_capacity_scale
0 0 arch/x86/kernel/cpu/amd_cache_disable.c:258 func:init_amd_l3_attrs
49152 48 arch/x86/kernel/cpu/mce/core.c:2709 func:mce_device_create accurate:no
32768 1 arch/x86/kernel/cpu/mce/genpool.c:132 func:mce_gen_pool_create
0 0 arch/x86/kernel/cpu/mce/amd.c:1341 func:mce_threshold_create_device

[surenb@google.com: document new "accurate:no" marker] Fixes: 39d117e04d15 ("alloc_tag: mark inaccurate allocation counters in /proc/allocinfo output") [akpm@linux-foundation.org: simplification per Usama, reflow text] [akpm@linux-foundation.org: add newline to prevent docs warning, per Randy] Link: https://lkml.kernel.org/r/20250915230224.4115531-1-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Suggested-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Usama Arif <usamaarif642@gmail.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: David Rientjes <rientjes@google.com> Cc: David Wang <00107082@163.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Sourav Panda <souravpanda@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21alloc_tag: prevent enabling memory profiling if it was shut downSuren Baghdasaryan
Memory profiling can be shut down due to reasons like a failure during initialization. When this happens, the user should not be able to re-enable it. The current sysctl interface does not handle this properly and will allow re-enabling memory profiling. Fix this by checking for this condition during the sysctl write operation. Link: https://lkml.kernel.org/r/20250915212756.3998938-3-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Usama Arif <usamaarif642@gmail.com> Cc: David Wang <00107082@163.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Cc: Sourav Panda <souravpanda@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21alloc_tag: use release_pages() in the cleanup pathSuren Baghdasaryan
Patch series "Minor fixes for memory allocation profiling", v2. Over the last couple months I gathered a few reports of minor issues in memory allocation profiling which are addressed in this patchset. This patch (of 2): When bulk-freeing an array of pages use release_pages() instead of freeing them page-by-page. Link: https://lkml.kernel.org/r/20250915212756.3998938-1-surenb@google.com Link: https://lkml.kernel.org/r/20250915212756.3998938-2-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Suggested-by: Andrew Morton <akpm@linux-foundation.org> Suggested-by: Usama Arif <usamaarif642@gmail.com> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Usama Arif <usamaarif642@gmail.com> Cc: David Wang <00107082@163.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Cc: Sourav Panda <souravpanda@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-21kasan: introduce ARCH_DEFER_KASAN and unify static key across modesSabyrzhan Tasbolatov
Patch series "kasan: unify kasan_enabled() and remove arch-specific implementations", v6. This patch series addresses the fragmentation in KASAN initialization across architectures by introducing a unified approach that eliminates duplicate static keys and arch-specific kasan_arch_is_ready() implementations. The core issue is that different architectures have inconsistent approaches to KASAN readiness tracking: - PowerPC, LoongArch, and UML arch, each implement own kasan_arch_is_ready() - Only HW_TAGS mode had a unified static key (kasan_flag_enabled) - Generic and SW_TAGS modes relied on arch-specific solutions or always-on behavior This patch (of 2): Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures [1] that need to defer KASAN initialization until shadow memory is properly set up, and unify the static key infrastructure across all KASAN modes. [1] PowerPC, UML, LoongArch selects ARCH_DEFER_KASAN. The core issue is that different architectures haveinconsistent approaches to KASAN readiness tracking: - PowerPC, LoongArch, and UML arch, each implement own kasan_arch_is_ready() - Only HW_TAGS mode had a unified static key (kasan_flag_enabled) - Generic and SW_TAGS modes relied on arch-specific solutions or always-on behavior This patch addresses the fragmentation in KASAN initialization across architectures by introducing a unified approach that eliminates duplicate static keys and arch-specific kasan_arch_is_ready() implementations. Let's replace kasan_arch_is_ready() with existing kasan_enabled() check, which examines the static key being enabled if arch selects ARCH_DEFER_KASAN or has HW_TAGS mode support. For other arch, kasan_enabled() checks the enablement during compile time. Now KASAN users can use a single kasan_enabled() check everywhere. Link: https://lkml.kernel.org/r/20250810125746.1105476-1-snovitoll@gmail.com Link: https://lkml.kernel.org/r/20250810125746.1105476-2-snovitoll@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049 Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> #powerpc Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Alexandre Ghiti <alex@ghiti.fr> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Baoquan He <bhe@redhat.com> Cc: David Gow <davidgow@google.com> Cc: Dmitriy Vyukov <dvyukov@google.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Huacai Chen <chenhuacai@loongson.cn> Cc: Marco Elver <elver@google.com> Cc: Qing Zhang <zhangqing@loongson.cn> Cc: Sabyrzhan Tasbolatov <snovitoll@gmail.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-19Merge 6.17-rc6 into kbuild-nextNathan Chancellor
Commit bd7c2312128e ("pinctrl: meson: Fix typo in device table macro") is needed in kbuild-next to avoid a build error with a future change. While at it, address the conflict between commit 41f9049cff32 ("riscv: Only allow LTO with CMODEL_MEDANY") and commit 6578a1ff6aa4 ("riscv: Remove version check for LTO_CLANG selects"), as reported by Stephen Rothwell [1]. Link: https://lore.kernel.org/20250908134913.68778b7b@canb.auug.org.au/ [1] Signed-off-by: Nathan Chancellor <nathan@kernel.org>
2025-09-17lib/crypto: tests: Add tests and benchmark for sha256_finup_2x()Eric Biggers
Update sha256_kunit to include test cases and a benchmark for the new sha256_finup_2x() function. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250915160819.140019-5-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-17lib/crypto: x86/sha256: Add support for 2-way interleaved hashingEric Biggers
Add an implementation of sha256_finup_2x_arch() for x86_64. It interleaves the computation of two SHA-256 hashes using the x86 SHA-NI instructions. dm-verity and fs-verity will take advantage of this for greatly improved performance on capable CPUs. This increases the throughput of SHA-256 hashing 4096-byte messages by the following amounts on the following CPUs: Intel Ice Lake (server): 4% Intel Sapphire Rapids: 38% Intel Emerald Rapids: 38% AMD Zen 1 (Threadripper 1950X): 84% AMD Zen 4 (EPYC 9B14): 98% AMD Zen 5 (Ryzen 9 9950X): 64% For now, this seems to benefit AMD more than Intel. This seems to be because current AMD CPUs support concurrent execution of the SHA-NI instructions, but unfortunately current Intel CPUs don't, except for the sha256msg2 instruction. Hopefully future Intel CPUs will support SHA-NI on more execution ports. Zen 1 supports 2 concurrent sha256rnds2, and Zen 4 supports 4 concurrent sha256rnds2, which suggests that even better performance may be achievable on Zen 4 by interleaving more than two hashes. However, doing so poses a number of trade-offs, and furthermore Zen 5 goes back to supporting "only" 2 concurrent sha256rnds2. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250915160819.140019-4-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-17lib/crypto: arm64/sha256: Add support for 2-way interleaved hashingEric Biggers
Add an implementation of sha256_finup_2x_arch() for arm64. It interleaves the computation of two SHA-256 hashes using the ARMv8 SHA-256 instructions. dm-verity and fs-verity will take advantage of this for greatly improved performance on capable CPUs. This increases the throughput of SHA-256 hashing 4096-byte messages by the following amounts on the following CPUs: ARM Cortex-X1: 70% ARM Cortex-X3: 68% ARM Cortex-A76: 65% ARM Cortex-A715: 43% ARM Cortex-A510: 25% ARM Cortex-A55: 8% Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250915160819.140019-3-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-17lib/crypto: sha256: Add support for 2-way interleaved hashingEric Biggers
Many arm64 and x86_64 CPUs can compute two SHA-256 hashes in nearly the same speed as one, if the instructions are interleaved. This is because SHA-256 is serialized block-by-block, and two interleaved hashes take much better advantage of the CPU's instruction-level parallelism. Meanwhile, a very common use case for SHA-256 hashing in the Linux kernel is dm-verity and fs-verity. Both use a Merkle tree that has a fixed block size, usually 4096 bytes with an empty or 32-byte salt prepended. Usually, many blocks need to be hashed at a time. This is an ideal scenario for 2-way interleaved hashing. To enable this optimization, add a new function sha256_finup_2x() to the SHA-256 library API. It computes the hash of two equal-length messages, starting from a common initial context. For now it always falls back to sequential processing. Later patches will wire up arm64 and x86_64 optimized implementations. Note that the interleaving factor could in principle be higher than 2x. However, that runs into many practical difficulties and CPU throughput limitations. Thus, both the implementations I'm adding are 2x. In the interest of using the simplest solution, the API matches that. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250915160819.140019-2-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
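A usage sketch of the new library call as described above; the prototype shown is inferred from this log entry (two equal-length messages hashed from a common initial context into two digests), so treat the exact signature as an assumption:

```c
#include <linux/types.h>
#include <crypto/sha2.h>

/* Hypothetical Merkle-tree helper: hash two equal-length blocks at once. */
static void hash_two_blocks(const struct sha256_ctx *salted_ctx,
			    const u8 *block_a, const u8 *block_b, size_t len,
			    u8 digest_a[SHA256_DIGEST_SIZE],
			    u8 digest_b[SHA256_DIGEST_SIZE])
{
	/*
	 * Both hashes start from the same (e.g. salt-prepended) context.
	 * On capable arm64/x86_64 CPUs the two computations are interleaved;
	 * otherwise this falls back to hashing the blocks sequentially.
	 */
	sha256_finup_2x(salted_ctx, block_a, block_b, len, digest_a, digest_b);
}
```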
2025-09-16raid6: riscv: replace one load with a move to speed up the calculationChunyan Zhang
Since wp$$ == wq$$, there is no need to load the same data twice; use a move instruction to replace one of the loads so the program runs faster. Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn> Link: https://lore.kernel.org/r/20250718072711.3865118-3-zhangchunyan@iscas.ac.cn Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-09-16raid6: riscv: Clean up unused header file inclusionChunyan Zhang
These two C files don't reference things defined in simd.h or types.h so remove these redundant #inclusions. Fixes: 6093faaf9593 ("raid6: Add RISC-V SIMD syndrome and recovery calculations") Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn> Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com> Link: https://lore.kernel.org/r/20250718072711.3865118-2-zhangchunyan@iscas.ac.cn Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-09-16kunit: Extend kconfig help text for KUNIT_UML_PCIThomas Weißschuh
Checkpatch.pl expects at least 4 lines of help text. Extend the help text to make checkpatch.pl happy. Link: https://lore.kernel.org/r/20250916-kunit-pci-kconfig-v1-1-6d1369f06f2a@linutronix.de Fixes: 031cdd3bc3f3 ("kunit: Enable PCI on UML without triggering WARN()") Suggested-by: Shuah Khan <skhan@linuxfoundation.org> Link: https://lore.kernel.org/lkml/3dc95227-2be9-48a0-bdea-3f283d9b2a38@linuxfoundation.org/ Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2025-09-15kunit: Enable PCI on UML without triggering WARN()Thomas Weißschuh
Various KUnit tests require PCI infrastructure to work. All normal platforms enable PCI by default, but UML does not. Enabling PCI from .kunitconfig files is problematic as it would not be portable. So in commit 6fc3a8636a7b ("kunit: tool: Enable virtio/PCI by default on UML") PCI was enabled by way of CONFIG_UML_PCI_OVER_VIRTIO=y. However CONFIG_UML_PCI_OVER_VIRTIO requires additional configuration of CONFIG_UML_PCI_OVER_VIRTIO_DEVICE_ID or will otherwise trigger a WARN() in virtio_pcidev_init(). However there is no one correct value for UML_PCI_OVER_VIRTIO_DEVICE_ID which could be used by default. This warning is confusing when debugging test failures. On the other hand, the functionality of CONFIG_UML_PCI_OVER_VIRTIO is not used at all, given that it is completely non-functional as indicated by the WARN() in question. Instead it is only used as a way to enable CONFIG_UML_PCI which itself is not directly configurable. Instead of going through CONFIG_UML_PCI_OVER_VIRTIO, introduce a custom configuration option which enables CONFIG_UML_PCI without triggering warnings or building dead code. Link: https://lore.kernel.org/r/20250908-kunit-uml-pci-v2-1-d8eba5f73c9d@linutronix.de Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Reviewed-by: Johannes Berg <johannes@sipsolutions.net> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2025-09-13btree: simplify merge logic by using btree_last() return valueGuan-Chun Wu
Previously btree_merge() called btree_last() only to test existence, then performed an extra btree_lookup() to fetch the value. This patch changes it to directly use the value returned by btree_last(), avoiding redundant lookups and simplifying the merge loop. Link: https://lkml.kernel.org/r/20250826161741.686704-1-409411716@gms.tku.edu.tw Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw> Cc: Kuan-Wei Chiu <visitorckw@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
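A paraphrased sketch of the simplified merge loop (not the verbatim lib/btree.c code): the value returned by btree_last() is used directly, so the follow-up btree_lookup() disappears:

```c
#include <linux/btree.h>

/* Sketch: move every entry from 'victim' into 'target'. */
static int merge_sketch(struct btree_head *target, struct btree_head *victim,
			struct btree_geo *geo, unsigned long *key, gfp_t gfp)
{
	void *val;
	int err;

	for (;;) {
		val = btree_last(victim, geo, key);	/* value, or NULL when empty */
		if (!val)
			break;				/* no extra btree_lookup() */
		err = btree_insert(target, geo, key, val, gfp);
		if (err)
			return err;
		btree_remove(victim, geo, key);
	}
	return 0;
}
```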
2025-09-13panic/printk: replace this_cpu_in_panic() with panic_on_this_cpu()Jinchao Wang
The helper this_cpu_in_panic() duplicated logic already provided by panic_on_this_cpu(). Remove this_cpu_in_panic() and switch all users to panic_on_this_cpu(). This simplifies the code and avoids having two helpers for the same check. Link: https://lkml.kernel.org/r/20250825022947.1596226-8-wangjinchao600@gmail.com Signed-off-by: Jinchao Wang <wangjinchao600@gmail.com> Cc: Anna Schumaker <anna.schumaker@oracle.com> Cc: Baoquan He <bhe@redhat.com> Cc: "Darrick J. Wong" <djwong@kernel.org> Cc: Dave Young <dyoung@redhat.com> Cc: Doug Anderson <dianders@chromium.org> Cc: "Guilherme G. Piccoli" <gpiccoli@igalia.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Joanthan Cameron <Jonathan.Cameron@huawei.com> Cc: Joel Granados <joel.granados@kernel.org> Cc: John Ogness <john.ogness@linutronix.de> Cc: Kees Cook <kees@kernel.org> Cc: Li Huafei <lihuafei1@huawei.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Luo Gengkun <luogengkun@huaweicloud.com> Cc: Max Kellermann <max.kellermann@ionos.com> Cc: Nam Cao <namcao@linutronix.de> Cc: oushixiong <oushixiong@kylinos.cn> Cc: Petr Mladek <pmladek@suse.com> Cc: Qianqiang Liu <qianqiang.liu@163.com> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Cc: Sohil Mehta <sohil.mehta@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Thomas Zimemrmann <tzimmermann@suse.de> Cc: Thorsten Blum <thorsten.blum@linux.dev> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Yicong Yang <yangyicong@hisilicon.com> Cc: Yunhui Cui <cuiyunhui@bytedance.com> Cc: Yury Norov (NVIDIA) <yury.norov@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13lib/sys_info: handle sys_info_mask==0 caseFeng Tang
Generalization of panic_print's dump function [1] has been merged, and this patchset addresses some remaining issues, like adding a note about the obsolescence of the 'panic_print' cmdline parameter, refining the kernel documentation for panic_print, and hardening some string management. This patch (of 4): It is a normal case for the bitmask parameter to be 0, so pre-initialize names[] to the null string to cover this case. Also remove the superfluous "+1" in names[sizeof(sys_info_avail) + 1], which is needed for 'strlen()', but not for 'sizeof()'. Link: https://lkml.kernel.org/r/20250825025701.81921-1-feng.tang@linux.alibaba.com Link: https://lkml.kernel.org/r/20250825025701.81921-2-feng.tang@linux.alibaba.com Link: https://lkml.kernel.org/r/20250703021004.42328-1-feng.tang@linux.alibaba.com [1] Signed-off-by: Feng Tang <feng.tang@linux.alibaba.com> Suggested-by: Petr Mladek <pmladek@suse.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Askar Safin <safinaskar@zohomail.com> Cc: John Ogness <john.ogness@linutronix.de> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Lance Yang <lance.yang@linux.dev> Cc: "Paul E . McKenney" <paulmck@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13alloc_tag: use str_on_off() helperKuan-Wei Chiu
Replace the ternary (enable ? "on" : "off") with the str_on_off() helper from string_choices.h. This improves readability by replacing the three-operand ternary with a single function call, ensures consistent string output, and allows potential string deduplication by the linker, resulting in a slightly smaller binary. Link: https://lkml.kernel.org/r/20250814093827.237980-1-visitorckw@gmail.com Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com> Cc: Ching-Chun (Jim) Huang <jserv@ccns.ncku.edu.tw> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
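For illustration, the pattern at a hypothetical call site looks like this (the actual alloc_tag message text may differ):

```c
#include <linux/printk.h>
#include <linux/string_choices.h>

static void report_profiling_state(bool enable)
{
	/* Before: pr_info("... %s\n", enable ? "on" : "off"); */
	pr_info("memory allocation profiling turned %s\n", str_on_off(enable));
}
```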
2025-09-13test_firmware: use str_true_false() helperKuan-Wei Chiu
Replace ternary (condition ? "true" : "false") expressions with the str_true_false() helper from string_choices.h. This improves readability by replacing the three-operand ternary with a single function call, ensures consistent string output, and allows potential string deduplication by the linker, resulting in a slightly smaller binary. Link: https://lkml.kernel.org/r/20250814095033.244034-1-visitorckw@gmail.com Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com> Cc: Kuan-Wei Chiu <visitorckw@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13lib/fault-inject-usercopy.c: use PTR_ERR_OR_ZERO() to simplify codeXichao Zhao
Use the standard error pointer macro to shorten the code and simplify. Link: https://lkml.kernel.org/r/20250812084045.64218-1-zhao.xichao@vivo.com Signed-off-by: Xichao Zhao <zhao.xichao@vivo.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
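The general shape of the simplification, shown with a hypothetical function rather than the fault-inject-usercopy code itself:

```c
#include <linux/err.h>

static int check_result(void *ptr)
{
	/*
	 * Before:
	 *	if (IS_ERR(ptr))
	 *		return PTR_ERR(ptr);
	 *	return 0;
	 */
	return PTR_ERR_OR_ZERO(ptr);
}
```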
2025-09-13lib/digsig: remove unnecessary memsetLiao Yuanhong
Memory allocated with kzalloc() is already zero-initialized, so there is no need to call memset() on it again. Link: https://lkml.kernel.org/r/20250811082739.378284-1-liaoyuanhong@vivo.com Signed-off-by: Liao Yuanhong <liaoyuanhong@vivo.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
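A minimal sketch of the pattern being removed (hypothetical helper, not the digsig code):

```c
#include <linux/slab.h>

static void *alloc_zeroed_buffer(size_t len)
{
	void *buf = kzalloc(len, GFP_KERNEL);

	if (!buf)
		return NULL;
	/* memset(buf, 0, len);  -- redundant: kzalloc() already zeroes */
	return buf;
}
```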
2025-09-13ref_tracker: remove redundant __GFP_NOWARNQianfeng Rong
Commit 16f5dfbc851b ("gfp: include __GFP_NOWARN in GFP_NOWAIT") made GFP_NOWAIT implicitly include __GFP_NOWARN. Therefore, explicit __GFP_NOWARN combined with GFP_NOWAIT (e.g., `GFP_NOWAIT | __GFP_NOWARN`) is now redundant. Let's clean up these redundant flags across subsystems. No functional changes. Link: https://lkml.kernel.org/r/20250805023031.331718-1-rongqianfeng@vivo.com Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13maple_tree: fix MAPLE_PARENT_RANGE32 and parent pointer docsSidhartha Kumar
MAPLE_PARENT_RANGE32 should be 0x02 as a 32 bit node is indicated by the bit pattern 0b010 which is the hex value 0x02. There are no users currently, so there is no associated bug with this wrong value. Fix typo Note -> Node and replace x with b to indicate binary values. Link: https://lkml.kernel.org/r/20250826151344.403286-1-sidhartha.kumar@oracle.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13lib/test_hmm: drop redundant conversion to boolXichao Zhao
The result of integer comparison already evaluates to bool. No need for explicit conversion. No functional impact. Link: https://lkml.kernel.org/r/20250819070457.486348-1-zhao.xichao@vivo.com Signed-off-by: Xichao Zhao <zhao.xichao@vivo.com> Reviewed-by: Alistair Popple <apopple@nvidia.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Leon Romanovsky <leon@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
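An illustrative (hypothetical) instance of the pattern: the comparison already has type bool, so no explicit conversion is needed:

```c
#include <linux/types.h>

static bool addr_in_range(unsigned long addr, unsigned long start,
			  unsigned long end)
{
	/* Before: return (addr >= start && addr < end) ? true : false; */
	return addr >= start && addr < end;
}
```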
2025-09-13lib/test_maple_tree.c: remove redundant semicolonsLiao Yuanhong
Remove unnecessary semicolons. Link: https://lkml.kernel.org/r/20250813094543.555906-1-liaoyuanhong@vivo.com Signed-off-by: Liao Yuanhong <liaoyuanhong@vivo.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13lib/test_kho: fixes for error handlingMike Rapoport (Microsoft)
* Update kho_test_save() so that folios array won't be freed when returning from the function and the fdt will be freed on error
* Reset state->nr_folios to 0 in kho_test_generate_data() on error
* Simplify allocation of folios info in fdt.

Link: https://lkml.kernel.org/r/20250811082510.4154080-3-rppt@kernel.org Fixes: b753522bed0b ("kho: add test for kexec handover") Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reported-by: Pratyush Yadav <pratyush@kernel.org> Closes: https://lore.kernel.org/all/mafs0zfcjcepf.fsf@kernel.org Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Cc: Alexander Graf <graf@amazon.com> Cc: Baoquan He <bhe@redhat.com> Cc: Changyuan Lyu <changyuanl@google.com> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Thomas Weißschuh <linux@weissschuh.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13xarray: remove redundant __GFP_NOWARNQianfeng Rong
Commit 16f5dfbc851b ("gfp: include __GFP_NOWARN in GFP_NOWAIT") made GFP_NOWAIT implicitly include __GFP_NOWARN. Therefore, explicit __GFP_NOWARN combined with GFP_NOWAIT (e.g., `GFP_NOWAIT | __GFP_NOWARN`) is now redundant. Let's clean up these redundant flags across subsystems. No functional changes. Link: https://lkml.kernel.org/r/20250804130018.484321-1-rongqianfeng@vivo.com Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-13mm/slub: allow to set node and align in k[v]reallocVitaly Wool
Reimplement k[v]realloc_node() to be able to set node and alignment should a user need to do so. In order to do that while retaining the maximal backward compatibility, add k[v]realloc_node_align() functions and redefine the rest of API using these new ones. While doing that, we also keep the number of _noprof variants to a minimum, which implies some changes to the existing users of older _noprof functions, that basically being bcachefs. With that change we also provide the ability for the Rust part of the kernel to set node and alignment in its K[v]xxx [re]allocations. Link: https://lkml.kernel.org/r/20250806124147.1724658-1-vitaly.wool@konsulko.se Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Alice Ryhl <aliceryhl@google.com> Cc: Danilo Krummrich <dakr@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jann Horn <jannh@google.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Uladzislau Rezki (Sony) <urezki@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-09-09iov_iter: remove iov_iter_is_alignedKeith Busch
No more callers. Signed-off-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Mike Snitzer <snitzer@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-09-09lib: test_objpool: Avoid direct access to hrtimer clockbaseThomas Weißschuh
The field timer->base->get_time is a private implementation detail and should not be accessed outside of the hrtimer core. Switch to the equivalent helper. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250821-hrtimer-cleanup-get_time-v2-4-3ae822e5bfbd@linutronix.de
2025-09-08KUnit: ffs: Validate all the __attribute_const__ annotationsKees Cook
While tracking down a problem where constant expressions used by BUILD_BUG_ON() suddenly stopped working[1], we found that an added static initializer was convincing the compiler that it couldn't track the state of the prior statically initialized value. Tracing this down found that ffs() was used in the initializer macro, but since it wasn't marked with __attribute_const__, the compiler had to assume the function might change variable states as a side-effect (which is not true for ffs(), which provides deterministic math results). Validate that all the __attribute_const__ annotations were found for all architectures by reproducing the specific problem encountered in the original bug report.

Build and run tested with everything I could reach with KUnit:
$ ./tools/testing/kunit/kunit.py run --arch=x86_64 ffs
$ ./tools/testing/kunit/kunit.py run --arch=i386 ffs
$ ./tools/testing/kunit/kunit.py run --arch=arm64 --make_options "CROSS_COMPILE=aarch64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=arm --make_options "CROSS_COMPILE=arm-linux-gnueabi-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=powerpc ffs
$ ./tools/testing/kunit/kunit.py run --arch=powerpc32 ffs
$ ./tools/testing/kunit/kunit.py run --arch=powerpcle ffs
$ ./tools/testing/kunit/kunit.py run --arch=m68k ffs
$ ./tools/testing/kunit/kunit.py run --arch=loongarch ffs
$ ./tools/testing/kunit/kunit.py run --arch=s390 --make_options "CROSS_COMPILE=s390x-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=riscv --make_options "CROSS_COMPILE=riscv64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=riscv32 --make_options "CROSS_COMPILE=riscv64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=sparc --make_options "CROSS_COMPILE=sparc64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=sparc64 --make_options "CROSS_COMPILE=sparc64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=alpha --make_options "CROSS_COMPILE=alpha-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=sh --make_options "CROSS_COMPILE=sh4-linux-gnu-" ffs

Closes: https://github.com/KSPP/linux/issues/364 Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250804164417.1612371-17-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
2025-09-08bitops: Add __attribute_const__ to generic ffs()-family implementationsKees Cook
While tracking down a problem where constant expressions used by BUILD_BUG_ON() suddenly stopped working[1], we found that an added static initializer was convincing the compiler that it couldn't track the state of the prior statically initialized value. Tracing this down found that ffs() was used in the initializer macro, but since it wasn't marked with __attribute_const__, the compiler had to assume the function might change variable states as a side-effect (which is not true for ffs(), which provides deterministic math results). Add missing __attribute_const__ annotations to generic implementations of ffs(), __ffs(), fls(), and __fls() functions. These are pure mathematical functions that always return the same result for the same input with no side effects, making them eligible for compiler optimization. Build tested with x86_64 defconfig using GCC 14.2.0, which should validate the implementations when used by ARM, ARM64, LoongArch, Microblaze, NIOS2, and SPARC32 architectures. Link: https://github.com/KSPP/linux/issues/364 [1] Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250804164417.1612371-2-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
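A sketch of what the annotation looks like on a generic implementation (paraphrased from the asm-generic ffs() and renamed here to avoid clashing with the real definition; the exact kernel source may differ slightly):

```c
#include <linux/compiler_attributes.h>

/* Pure math, no side effects: marking it const lets the compiler keep
 * folding expressions that use it inside static initializers. */
static inline __attribute_const__ int generic_ffs_example(int x)
{
	int r = 1;

	if (!x)
		return 0;
	if (!(x & 0xffff)) { x >>= 16; r += 16; }
	if (!(x & 0xff))   { x >>= 8;  r += 8;  }
	if (!(x & 0xf))    { x >>= 4;  r += 4;  }
	if (!(x & 3))      { x >>= 2;  r += 2;  }
	if (!(x & 1))      { r += 1; }
	return r;
}
```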
2025-09-08KUnit: Introduce ffs()-family testsKees Cook
Add KUnit tests for ffs()-family bit scanning functions: ffs(), __ffs(), fls(), __fls(), fls64(), __ffs64(), and ffz(). The tests validate mathematical relationships (e.g. ffs(x) == __ffs(x) + 1), and test zero values, power-of-2 patterns, maximum values, and sparse bit patterns.

Build and run tested with everything I could reach with KUnit:
$ ./tools/testing/kunit/kunit.py run --arch=x86_64 ffs
$ ./tools/testing/kunit/kunit.py run --arch=i386 ffs
$ ./tools/testing/kunit/kunit.py run --arch=arm64 --make_options "CROSS_COMPILE=aarch64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=arm --make_options "CROSS_COMPILE=arm-linux-gnueabihf-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=powerpc ffs
$ ./tools/testing/kunit/kunit.py run --arch=powerpc32 ffs
$ ./tools/testing/kunit/kunit.py run --arch=powerpcle ffs
$ ./tools/testing/kunit/kunit.py run --arch=riscv --make_options "CROSS_COMPILE=riscv64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=riscv32 --make_options "CROSS_COMPILE=riscv64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=s390 --make_options "CROSS_COMPILE=s390x-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=m68k ffs
$ ./tools/testing/kunit/kunit.py run --arch=loongarch ffs
$ ./tools/testing/kunit/kunit.py run --arch=mips --make_options "CROSS_COMPILE=mipsel-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=sparc --make_options "CROSS_COMPILE=sparc64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=sparc64 --make_options "CROSS_COMPILE=sparc64-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=alpha --make_options "CROSS_COMPILE=alpha-linux-gnu-" ffs
$ ./tools/testing/kunit/kunit.py run --arch=sh --make_options "CROSS_COMPILE=sh4-linux-gnu-" ffs

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250804164417.1612371-1-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
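An illustrative KUnit case in the spirit of these tests (not the actual lib/tests source), checking the ffs()/__ffs() relationship over a few nonzero patterns:

```c
#include <kunit/test.h>
#include <linux/bitops.h>
#include <linux/kernel.h>

static void ffs_relationship_example(struct kunit *test)
{
	static const unsigned int patterns[] = {
		1, 2, 0x80, 0x00010001, 0x80000000,
	};
	size_t i;

	/* Only ffs() defines a result for a zero input. */
	KUNIT_EXPECT_EQ(test, ffs(0), 0);

	for (i = 0; i < ARRAY_SIZE(patterns); i++) {
		unsigned int x = patterns[i];

		/* ffs() is 1-based, __ffs() is 0-based. */
		KUNIT_EXPECT_EQ(test, ffs(x), (int)__ffs(x) + 1);
	}
}

static struct kunit_case ffs_example_cases[] = {
	KUNIT_CASE(ffs_relationship_example),
	{}
};

static struct kunit_suite ffs_example_suite = {
	.name = "ffs-relationship-example",
	.test_cases = ffs_example_cases,
};
kunit_test_suite(ffs_example_suite);
```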
2025-09-06lib/crypto: tests: Enable Curve25519 test when CRYPTO_SELFTESTSEric Biggers
Now that the Curve25519 library has been disentangled from CRYPTO, adding CRYPTO_SELFTESTS as a default value of CRYPTO_LIB_CURVE25519_KUNIT_TEST no longer causes a recursive kconfig dependency. Do this, which makes this option consistent with the other crypto KUnit test options in the same file. Link: https://lore.kernel.org/r/20250906213523.84915-12-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-06lib/crypto: curve25519: Consolidate into single moduleEric Biggers
Reorganize the Curve25519 library code:

- Build a single libcurve25519 module, instead of up to three modules: libcurve25519, libcurve25519-generic, and an arch-specific module.

- Move the arch-specific Curve25519 code from arch/$(SRCARCH)/crypto/ to lib/crypto/$(SRCARCH)/. Centralize the build rules into lib/crypto/Makefile and lib/crypto/Kconfig.

- Include the arch-specific code directly in lib/crypto/curve25519.c via a header, rather than using a separate .c file.

- Eliminate the entanglement with CRYPTO. CRYPTO_LIB_CURVE25519 no longer selects CRYPTO, and the arch-specific Curve25519 code no longer depends on CRYPTO.

This brings Curve25519 in line with the latest conventions for lib/crypto/, used by other algorithms. The exception is that I kept the generic code in separate translation units for now. (Some of the function names collide between the x86 and generic Curve25519 code. And the Curve25519 functions are very long anyway, so inlining doesn't matter as much for Curve25519 as it does for some other algorithms.)

Link: https://lore.kernel.org/r/20250906213523.84915-11-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-06lib/crypto: curve25519: Move a couple functions out-of-lineEric Biggers
Move curve25519() and curve25519_generate_public() from curve25519.h to curve25519.c. There's no good reason for them to be inline. Link: https://lore.kernel.org/r/20250906213523.84915-10-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-06lib/crypto: tests: Add Curve25519 benchmarkEric Biggers
Add a benchmark to curve25519_kunit. This brings it in line with the other crypto KUnit tests and provides an easy way to measure performance. Link: https://lore.kernel.org/r/20250906213523.84915-9-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-06lib/crypto: tests: Migrate Curve25519 self-test to KUnitEric Biggers
Move the Curve25519 test from an ad-hoc self-test to a KUnit test. Generally keep the same test logic for now, just translated to KUnit. There's one exception, which is that I dropped the incomplete test of curve25519_generic(). The approach I'm taking to cover the different implementations with the KUnit tests is to just rely on booting kernels in QEMU with different '-cpu' options, rather than try to make the tests (incompletely) test multiple implementations on one CPU. This way, both the test and the library API are simpler. This commit makes the file lib/crypto/curve25519.c no longer needed, as its only purpose was to call the self-test. However, keep it for now, since a later commit will add code to it again. Temporarily omit the default value of CRYPTO_SELFTESTS that the other lib/crypto/ KUnit tests have. It would cause a recursive kconfig dependency, since the Curve25519 code is still entangled with CRYPTO. A later commit will fix that. Link: https://lore.kernel.org/r/20250906213523.84915-8-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-09-04vdso: Gate VDSO_GETRANDOM behind HAVE_GENERIC_VDSOThomas Weißschuh
All architectures which want to implement getrandom() in the vDSO need to use the generic vDSO library. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250826-vdso-cleanups-v1-11-d9b65750e49f@linutronix.de
2025-09-04vdso: Drop Kconfig GENERIC_VDSO_TIME_NSThomas Weißschuh
All architectures implementing time-related functionality in the vDSO are using the generic vDSO library which handles time namespaces properly. Remove the now unnecessary Kconfig symbol. Enables the use of time namespaces on architectures, which use the generic vDSO but did not enable GENERIC_VDSO_TIME_NS, namely MIPS and arm. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/all/20250826-vdso-cleanups-v1-10-d9b65750e49f@linutronix.de
2025-09-04vdso: Drop Kconfig GENERIC_VDSO_DATA_STOREThomas Weißschuh
All users of the generic vDSO library also use the generic vDSO datastore. Remove the now unnecessary Kconfig symbol. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/all/20250826-vdso-cleanups-v1-9-d9b65750e49f@linutronix.de
2025-09-04vdso: Drop kconfig GENERIC_COMPAT_VDSOThomas Weißschuh
This configuration is never used. Remove it. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/all/20250826-vdso-cleanups-v1-8-d9b65750e49f@linutronix.de
2025-09-04vdso: Drop kconfig GENERIC_VDSO_32Thomas Weißschuh
This configuration is never used. Remove it. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250826-vdso-cleanups-v1-7-d9b65750e49f@linutronix.de
2025-09-04vdso/gettimeofday: Remove !CONFIG_TIME_NS stubsThomas Weißschuh
All calls of these functions are already gated behind CONFIG_TIME_NS. The compiler will already optimize them away if time namespaces are disabled. Drop the unnecessary stubs. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250826-vdso-cleanups-v1-4-d9b65750e49f@linutronix.de
2025-09-04vdso/datastore: Gate time data behind CONFIG_GENERIC_GETTIMEOFDAYThomas Weißschuh
When the generic vDSO does not provide time functions, as for example on riscv32, then the time data store is not necessary. Avoid allocating these time data pages when not used. Fixes: df7fcbefa710 ("vdso: Add generic time data storage") Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250826-vdso-cleanups-v1-1-d9b65750e49f@linutronix.de
2025-08-31Merge tag 'hardening-v6.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linuxLinus Torvalds
Pull hardening fixes from Kees Cook:

- ARM: stacktrace: include asm/sections.h in asm/stacktrace.h (Arnd Bergmann)
- ubsan: Fix incorrect hand-side used in handle (Junhui Pei)
- hardening: Require clang 20.1.0 for __counted_by (Nathan Chancellor)

* tag 'hardening-v6.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  hardening: Require clang 20.1.0 for __counted_by
  ARM: stacktrace: include asm/sections.h in asm/stacktrace.h
  ubsan: Fix incorrect hand-side used in handle
2025-08-29lib/crypto: tests: Add KUnit tests for BLAKE2sEric Biggers
Add a KUnit test suite for BLAKE2s. Most of the core test logic is in the previously-added hash-test-template.h. This commit just adds the actual KUnit suite, commits the generated test vectors to the tree so that gen-hash-testvecs.py won't have to be run at build time, and adds a few BLAKE2s-specific test cases. This is the replacement for blake2s-selftest, which an earlier commit removed. Improvements over blake2s-selftest include integration with KUnit, more comprehensive test cases, and support for benchmarking. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250827151131.27733-13-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-08-29lib/crypto: blake2s: Consolidate into single C translation unitEric Biggers
As was done with the other algorithms, reorganize the BLAKE2s code so that the generic implementation and the arch-specific "glue" code is consolidated into a single translation unit, so that the compiler will inline the functions and automatically decide whether to include the generic code in the resulting binary or not. Similarly, also consolidate the build rules into lib/crypto/{Makefile,Kconfig}. This removes the last uses of lib/crypto/{arm,x86}/{Makefile,Kconfig}, so remove those too. Don't keep the !KMSAN dependency. It was needed only for other algorithms such as ChaCha that initialize memory from assembly code. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250827151131.27733-12-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>