path: root/arch/powerpc
Age  Commit message  Author
2025-02-26  powerpc/io: Remove PCI_FIX_ADDR  (Michael Ellerman)
Now that PPC_INDIRECT_MMIO is removed, PCI_FIX_ADDR does nothing, so remove it. Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-11-mpe@ellerman.id.au
2025-02-26  powerpc/io: Remove PPC_INDIRECT_MMIO  (Michael Ellerman)
The Cell blade support was the last user of PPC_INDIRECT_MMIO, so it can now be removed. PPC_INDIRECT_PIO is still used by Power8 powernv, so it needs to remain. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-10-mpe@ellerman.id.au
2025-02-26  powerpc/io: Remove PPC_IO_WORKAROUNDS  (Michael Ellerman)
The Cell blade support was the last user of PPC_IO_WORKAROUNDS, so they can now be removed. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-9-mpe@ellerman.id.au
2025-02-26  powerpc: Remove PPC_OF_PLATFORM_PCI  (Michael Ellerman)
The Cell blade support was the last user of PPC_OF_PLATFORM_PCI, so remove it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-8-mpe@ellerman.id.au
2025-02-26  powerpc: Remove DCR_MMIO and the DCR generic layer  (Michael Ellerman)
The Cell blade support was the last user of DCR_MMIO, so it can now be removed. That only leaves DCR_NATIVE, meaning the DCR generic layer which allows using either DCR_NATIVE or DCR_MMIO is also unnecessary, remove it too. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-7-mpe@ellerman.id.au
2025-02-26  powerpc/xmon: Remove SPU debug and disassembly  (Michael Ellerman)
Now that the IBM Cell Blade support is removed, the xmon SPU support is effectively unusable. That is because PS3 doesn't implement udbg_getc which is required to send input to xmon. So remove the xmon SPU support. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-6-mpe@ellerman.id.au
2025-02-26  powerpc/cell: Remove CBE_CPUFREQ_SPU_GOVERNOR  (Michael Ellerman)
Although this driver is still buildable, it can't actually do anything in practice now that the low-level cpufreq driver for Cell has been disabled due to the removal of CBE_RAS. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-5-mpe@ellerman.id.au
2025-02-26  powerpc: Remove IBM_CELL_BLADE & SPIDER_NET references  (Michael Ellerman)
The symbols are no longer selectable so remove references to them. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-4-mpe@ellerman.id.au
2025-02-26  powerpc: Remove PPC_PMI and driver  (Michael Ellerman)
CONFIG_PPC_PMI is no longer selectable now that PPC_IBM_CELL_BLADE has been removed, via the dependency on PPC_IBM_CELL_POWERBUTTON. So remove it and the driver, and the pmi.h header which it was the only user of. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-3-mpe@ellerman.id.au
2025-02-26  powerpc: Remove some Cell leftovers  (Michael Ellerman)
Now that CONFIG_PPC_CELL_NATIVE is removed, iommu_fixed_is_weak will always be false, so remove it entirely. Also remove a hack/quirk in the HTAB code that was only used on Cell. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-2-mpe@ellerman.id.au
2025-02-26  powerpc/cell: Remove support for IBM Cell Blades  (Michael Ellerman)
IBM Cell Blades used the Cell processor and the "blade" server form factor. They were sold as models QS20, QS21 & QS22 from roughly 2006 to 2012 [1]. They were used in a few supercomputers (eg. Roadrunner) that have since been dismantled, and were not that widely used otherwise. Until recently I still had a working QS22, which meant I was able to keep the platform support working, but unfortunately that machine has now died. I'm not aware of any users. If there is a user that wants to keep the upstream support working, we can look at bringing some of the code back as appropriate. See previous discussion at [2]. Remove the top-level config symbol PPC_IBM_CELL_BLADE, and then the dependent symbols PPC_CELL_NATIVE, PPC_CELL_COMMON, CBE_RAS, PPC_IBM_CELL_RESETBUTTON, PPC_IBM_CELL_POWERBUTTON, CBE_THERM, and AXON_MSI. Then remove the associated C files and headers, and trim unused header content (some is shared with PS3). Note that PPC_CELL_COMMON sounds like it would build code shared with PS3, but it does not. It's a relic from when code was shared between the Blade support and QPACE support. Most of the primary authors already have CREDITS entries, with the exception of Christian, so add one for him. [1]: https://www.theregister.com/2011/06/28/ibm_kills_qs22_blade [2]: https://lore.kernel.org/linuxppc-dev/60581044-df82-40ad-b94c-56468007a93e@app.fastmail.com Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Acked-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-1-mpe@ellerman.id.au
2025-02-26  powerpc/static_call: Implement inline static calls  (Christophe Leroy)
Implement inline static calls: - Put a 'bl' to the destination function ('b' if tail call) - Put a 'nop' when the destination function is NULL ('blr' if tail call) - Put a 'li r3,0' when the destination is the RET0 function and not a tail call. If the destination is too far (over the 32MB branch limit), go via the trampoline. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/3dbd0b2ba577c942729235d0211d04a406653d81.1733245362.git.christophe.leroy@csgroup.eu
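A rough C sketch of the patching policy described above (illustrative only: the real arch_static_call_transform() is organised differently, and patch_via_trampoline() is a hypothetical stand-in for the existing out-of-line trampoline path):
    #include <linux/static_call.h>
    #include <linux/errno.h>
    #include <asm/code-patching.h>
    #include <asm/ppc-opcode.h>
    #include <asm/reg.h>
    static int patch_inline_call_site(u32 *site, void *func, bool tail)
    {
            ppc_inst_t insn;
            if (!func) {
                    /* NULL target: fall through ('nop'), or just return ('blr') for a tail call */
                    insn = ppc_inst(tail ? PPC_RAW_BLR() : PPC_RAW_NOP());
            } else if (func == (void *)__static_call_return0 && !tail) {
                    /* RET0 target that is not a tail call: materialise the return value */
                    insn = ppc_inst(PPC_RAW_LI(_R3, 0));
            } else if (is_offset_in_branch_range((long)func - (long)site)) {
                    /* In range: direct 'bl' (or 'b' for a tail call) to the destination */
                    if (create_branch(&insn, site, (unsigned long)func,
                                      tail ? 0 : BRANCH_SET_LINK))
                            return -ERANGE;
            } else {
                    /* Beyond the 32MB branch range: go via the trampoline (hypothetical helper) */
                    return patch_via_trampoline(site, func, tail);
            }
            return patch_instruction(site, insn);
    }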
2025-02-26  powerpc: Prepare arch_static_call_transform() for supporting inline static calls  (Christophe Leroy)
Reorganise arch_static_call_transform() in order to ease the support of inline static calls in the following patch: - remove 'target' to no longer hide whether it is a 'return 0' or not. - Don't bail out if 'tramp' is NULL, just do nothing until the next patch. Note that when 'target' was 'tramp + PPC_SCT_RET0', is_short was necessarily true. So in the 'if (func && !is_short)' leg, target was necessarily equal to 'func'. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/7a8b9245e773307c315c2548a4c6cad570ac2648.1733245362.git.christophe.leroy@csgroup.eu
2025-02-24  powerpc/xmon: simplify xmon_batch_next_cpu()  (Yury Norov)
The function open-codes the for_each_cpu_wrap() macro. As a loop termination condition it uses cpumask_empty(), which is O(N), and that makes the whole algorithm O(N^2). Switching to for_each_cpu_wrap() simplifies the logic and makes the algorithm linear. Signed-off-by: Yury Norov <yury.norov@gmail.com>
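For reference, the shape of the macro-based loop (a simplified, hypothetical stand-in for xmon_batch_next_cpu(), not the actual xmon code): for_each_cpu_wrap() walks the mask exactly once, starting from a given CPU and wrapping around, so the scan stays linear instead of re-testing cpumask_empty() on every pass.
    #include <linux/cpumask.h>
    static unsigned int pick_next_cpu(struct cpumask *mask, unsigned int start)
    {
            unsigned int cpu;
            /* Visit each set bit once, starting at 'start' and wrapping at the end */
            for_each_cpu_wrap(cpu, mask, start) {
                    cpumask_clear_cpu(cpu, mask);   /* this CPU gets the next turn */
                    return cpu;
            }
            return nr_cpu_ids;                      /* mask is empty */
    }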
2025-02-24  arch/powerpc: Remove unused function icp_native_cause_ipi_rm()  (Gautam Menghani)
Remove icp_native_cause_ipi_rm() as it has no callers since commit 53af3ba2e819 ("KVM: PPC: Book3S HV: Allow guest exit path to have MMU on"). Signed-off-by: Gautam Menghani <gautam@linux.ibm.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250101134251.436679-1-gautam@linux.ibm.com
2025-02-24  powerpc/time: Define div128_by_32() static and __init  (Christophe Leroy)
div128_by_32() used to be called from outside time.c in the old days but since v2.6.15 it hasn't been used outside time.c $ git grep div128_by_32 v2.6.14 v2.6.14:arch/ppc64/kernel/iSeries_setup.c: div128_by_32(1024 * 1024, 0, tb_ticks_per_sec, &divres); v2.6.14:arch/ppc64/kernel/pmac_time.c: div128_by_32( 1024*1024, 0, tb_ticks_per_sec, &divres ); v2.6.14:arch/ppc64/kernel/time.c: div128_by_32( XSEC_PER_SEC, 0, tb_ticks_per_sec, &divres ); v2.6.14:arch/ppc64/kernel/time.c: div128_by_32(1024*1024, 0, tb_ticks_per_sec, &divres); v2.6.14:arch/ppc64/kernel/time.c: div128_by_32(1000000000, 0, tb_ticks_per_sec, &res); v2.6.14:arch/ppc64/kernel/time.c: div128_by_32( 1024*1024, 0, new_tb_ticks_per_sec, &divres ); v2.6.14:arch/ppc64/kernel/time.c:void div128_by_32( unsigned long dividend_high, unsigned long dividend_low, v2.6.14:include/asm-ppc64/time.h:void div128_by_32( unsigned long dividend_high, unsigned long dividend_low, $ git grep div128_by_32 v2.6.15 v2.6.15:arch/powerpc/kernel/time.c: div128_by_32( XSEC_PER_SEC, 0, tb_ticks_per_sec, &divres ); v2.6.15:arch/powerpc/kernel/time.c: div128_by_32(1024*1024, 0, tb_ticks_per_sec, &res); v2.6.15:arch/powerpc/kernel/time.c: div128_by_32(1000000000, 0, tb_ticks_per_sec, &res); v2.6.15:arch/powerpc/kernel/time.c: div128_by_32(1024*1024, 0, new_tb_ticks_per_sec, &divres); v2.6.15:arch/powerpc/kernel/time.c:void div128_by_32(u64 dividend_high, u64 dividend_low, v2.6.15:include/asm-powerpc/time.h:extern void div128_by_32(u64 dividend_high, u64 dividend_low, Move it above its only caller which is time_init() and define it static and __init. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/50810349bf1eee378fbeab72a36e4b6553a60c3d.1738749246.git.christophe.leroy@csgroup.eu
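The resulting shape (a sketch based on the prototype shown in the grep output above; the division body itself is unchanged and elided here):
    /* Now file-local to time.c, placed above time_init(), and discarded after boot */
    static void __init div128_by_32(u64 dividend_high, u64 dividend_low,
                                    unsigned int divisor, struct div_result *dr)
    {
            /* 128-bit by 32-bit long division, body unchanged */
    }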
2025-02-24  powerpc/ipic: Stop printing address of registers  (Christophe Leroy)
The following line appears at boot: IPIC (128 IRQ sources) at (ptrval) This is pointless, so remove the printing of the virtual address and replace it with the matching physical address. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/ecffb21d88405f99e7ffc906a733396c57c36d50.1736405302.git.christophe.leroy@csgroup.eu
2025-02-24  powerpc/32: Stop printing Kernel virtual memory layout  (Christophe Leroy)
Printing of the Kernel virtual memory layout was added for debug purposes by commit f637a49e507c ("powerpc: Minor cleanups of kernel virt address space definitions") For security reasons, don't display the kernel's virtual memory layout. Other architectures have removed it through the following commits. Commit 071929dbdd86 ("arm64: Stop printing the virtual memory layout") Commit 1c31d4e96b8c ("ARM: 8820/1: mm: Stop printing the virtual memory layout") Commit 31833332f798 ("m68k/mm: Stop printing the virtual memory layout") Commit fd8d0ca25631 ("parisc: Hide virtual kernel memory layout") Commit 681ff0181bbf ("x86/mm/init/32: Stop printing the virtual memory layout") Commit 681ff0181bbf ("x86/mm/init/32: Stop printing the virtual memory layout") thought x86 was the last one, but in reality powerpc/32 still had it. So remove it now on powerpc/32 as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Kees Cook <kees@kernel.org> [Maddy: Added "Commit" in commit message to avoid checkpatch error] Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/430bc8c1f2ff2eb9224b04450e22db472b0b9fa9.1736361630.git.christophe.leroy@csgroup.eu
2025-02-24  powerpc/vmlinux: Remove etext, edata and end  (Christophe Leroy)
etext is not used anymore since commit 843a1ffaf6f2 ("powerpc/mm: use core_kernel_text() helper") edata and end have not been used since the merge of arch/ppc/ and arch/ppc64/ Remove the three and remove macro PROVIDE32. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/d1686d36cdd6b9d681e7ee4dd677c386d43babb1.1736332415.git.christophe.leroy@csgroup.eu
2025-02-24  powerpc/44x: Declare primary_uic static in uic.c  (Christophe Leroy)
primary_uic is not used outside uic.c, declare it static. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202411101746.lD8YdVzY-lkp@intel.com/ Fixes: e58923ed1437 ("[POWERPC] Add arch/powerpc driver for UIC, PPC4xx interrupt controller") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/0e11233d30333610ab460b3a1fd0f43c3a51e34d.1736331884.git.christophe.leroy@csgroup.eu
2025-02-22  crypto: lib/Kconfig - Fix lib built-in failure when arch is modular  (Herbert Xu)
The HAVE_ARCH Kconfig options in lib/crypto try to solve the modular versus built-in problem, but it still fails when the LIB option (e.g., CRYPTO_LIB_CURVE25519) is selected externally. Fix this by introducing a level of indirection with ARCH_MAY_HAVE Kconfig options; these then go on to select the ARCH_HAVE options if the ARCH Kconfig option matches that of the LIB option. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202501230223.ikroNDr1-lkp@intel.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-21  sysv: Remove the filesystem  (Jan Kara)
Since 2002 (change "Replace BKL for chain locking with sysvfs-private rwlock") the sysv filesystem was doing IO under a rwlock in its get_block() function (yes, a non-sleepable lock held over a function used to read inode metadata for all reads and writes). Nobody noticed until syzbot in 2023 [1]. This shows nobody is using the filesystem. Just drop it. [1] https://lore.kernel.org/all/0000000000000ccf9a05ee84f5b0@google.com/ Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20250220163940.10155-2-jack@suse.cz Reviewed-by: Jeff Layton <jlayton@kernel.org> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-21  make use of anon_inode_getfile_fmode()  (Al Viro)
["fallen through the cracks" misc stuff] A bunch of anon_inode_getfile() callers follow it with adjusting ->f_mode; we have a helper doing that now, so let's make use of it. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Link: https://lore.kernel.org/r/20250118014434.GT1977892@ZenIV Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-21  powerpc/vdso: Switch to generic storage implementation  (Thomas Weißschuh)
The generic storage implementation provides the same features as the custom one. However it can be shared between architectures, making maintenance easier. Co-developed-by: Nam Cao <namcao@linutronix.de> Signed-off-by: Nam Cao <namcao@linutronix.de> Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Link: https://lore.kernel.org/all/20250204-vdso-store-rng-v3-14-13a4669dfc8c@linutronix.de
2025-02-21  vdso: Rename included Makefile  (Thomas Weißschuh)
As the Makefile is included into other Makefiles it can not be used to define objects to be built from the current source directory. However the generic datastore will introduce such a local source file. Rename the included Makefile so it is clear how it is to be used and to make room for a regular Makefile in lib/vdso/. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250204-vdso-store-rng-v3-4-13a4669dfc8c@linutronix.de
2025-02-21  powerpc/perf/hv-24x7: Constify 'struct bin_attribute'  (Thomas Weißschuh)
The sysfs core now allows instances of 'struct bin_attribute' to be moved into read-only memory. Make use of that to protect them against accidental or malicious modifications. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Link: https://lore.kernel.org/r/20241216-sysfs-const-bin_attr-powerpc-v1-5-bbed8906f476@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
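The general shape of these constifications (illustrative only, not the hv-24x7 code; callback wiring and the sysfs registration details are elided):
    #include <linux/sysfs.h>
    /* 'const' lets the compiler place the attribute in read-only memory,
     * so it cannot be modified accidentally or maliciously at runtime. */
    static const struct bin_attribute demo_attr = {
            .attr = { .name = "demo", .mode = 0444 },
            .size = PAGE_SIZE,
    };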
2025-02-21  powerpc/powernv/opal: Constify 'struct bin_attribute'  (Thomas Weißschuh)
The sysfs core now allows instances of 'struct bin_attribute' to be moved into read-only memory. Make use of that to protect them against accidental or malicious modifications. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Link: https://lore.kernel.org/r/20241216-sysfs-const-bin_attr-powerpc-v1-4-bbed8906f476@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-21  powerpc/powernv/ultravisor: Constify 'struct bin_attribute'  (Thomas Weißschuh)
The sysfs core now allows instances of 'struct bin_attribute' to be moved into read-only memory. Make use of that to protect them against accidental or malicious modifications. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Link: https://lore.kernel.org/r/20241216-sysfs-const-bin_attr-powerpc-v1-3-bbed8906f476@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-21  powerpc/secvar: Constify 'struct bin_attribute'  (Thomas Weißschuh)
The sysfs core now allows instances of 'struct bin_attribute' to be moved into read-only memory. Make use of that to protect them against accidental or malicious modifications. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Link: https://lore.kernel.org/r/20241216-sysfs-const-bin_attr-powerpc-v1-2-bbed8906f476@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-21  powerpc/secvar: Mark __init functions as such  (Thomas Weißschuh)
The setup functions are only called during the init phase of the kernel. They can be discarded and their memory reused after that. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Link: https://lore.kernel.org/r/20241216-sysfs-const-bin_attr-powerpc-v1-1-bbed8906f476@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-02-18  powerpc/watchdog: Switch to use hrtimer_setup()  (Nam Cao)
hrtimer_setup() takes the callback function pointer as argument and initializes the timer completely. Replace hrtimer_init() and the open coded initialization of hrtimer::function with the new setup mechanism. Patch was created by using Coccinelle. Signed-off-by: Nam Cao <namcao@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/099ebf6d352094a56e22fdfe76582b50f8fd6029.1738746821.git.namcao@linutronix.de
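The mechanical shape of the conversion (illustrative; the watchdog's actual timer and callback names may differ):
    /* Before: two-step initialisation plus open-coded callback assignment */
    hrtimer_init(&wd_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    wd_timer.function = watchdog_timer_fn;
    /* After: one call sets up clock, mode and callback together */
    hrtimer_setup(&wd_timer, watchdog_timer_fn, CLOCK_MONOTONIC, HRTIMER_MODE_REL);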
2025-02-18  KVM: PPC: Switch to use hrtimer_setup()  (Nam Cao)
hrtimer_setup() takes the callback function pointer as argument and initializes the timer completely. Replace hrtimer_init() and the open coded initialization of hrtimer::function with the new setup mechanism. Patch was created by using Coccinelle. Signed-off-by: Nam Cao <namcao@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/a58c4f16b1cfba13f96201fda355553850a97562.1738746821.git.namcao@linutronix.de
2025-02-12  fs: add open_tree_attr()  (Christian Brauner)
Add open_tree_attr() which allows atomically creating a detached mount tree and setting mount options on it. If OPEN_TREE_CLONE is used, this allows the creation of a detached mount with a new set of mount options without it ever being exposed to userspace without that set of mount options applied. Link: https://lore.kernel.org/r/20250128-work-mnt_idmap-update-v2-v1-3-c25feb0d2eb3@kernel.org Reviewed-by: "Seth Forshee (DigitalOcean)" <sforshee@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-12  powerpc/code-patching: Fix KASAN hit by not flagging text patching area as VM_ALLOC  (Christophe Leroy)
Erhard reported the following KASAN hit while booting his PowerMac G4 with a KASAN-enabled kernel 6.13-rc6: BUG: KASAN: vmalloc-out-of-bounds in copy_to_kernel_nofault+0xd8/0x1c8 Write of size 8 at addr f1000000 by task chronyd/1293 CPU: 0 UID: 123 PID: 1293 Comm: chronyd Tainted: G W 6.13.0-rc6-PMacG4 #2 Tainted: [W]=WARN Hardware name: PowerMac3,6 7455 0x80010303 PowerMac Call Trace: [c2437590] [c1631a84] dump_stack_lvl+0x70/0x8c (unreliable) [c24375b0] [c0504998] print_report+0xdc/0x504 [c2437610] [c050475c] kasan_report+0xf8/0x108 [c2437690] [c0505a3c] kasan_check_range+0x24/0x18c [c24376a0] [c03fb5e4] copy_to_kernel_nofault+0xd8/0x1c8 [c24376c0] [c004c014] patch_instructions+0x15c/0x16c [c2437710] [c00731a8] bpf_arch_text_copy+0x60/0x7c [c2437730] [c0281168] bpf_jit_binary_pack_finalize+0x50/0xac [c2437750] [c0073cf4] bpf_int_jit_compile+0xb30/0xdec [c2437880] [c0280394] bpf_prog_select_runtime+0x15c/0x478 [c24378d0] [c1263428] bpf_prepare_filter+0xbf8/0xc14 [c2437990] [c12677ec] bpf_prog_create_from_user+0x258/0x2b4 [c24379d0] [c027111c] do_seccomp+0x3dc/0x1890 [c2437ac0] [c001d8e0] system_call_exception+0x2dc/0x420 [c2437f30] [c00281ac] ret_from_syscall+0x0/0x2c --- interrupt: c00 at 0x5a1274 NIP: 005a1274 LR: 006a3b3c CTR: 005296c8 REGS: c2437f40 TRAP: 0c00 Tainted: G W (6.13.0-rc6-PMacG4) MSR: 0200f932 <VEC,EE,PR,FP,ME,IR,DR,RI> CR: 24004422 XER: 00000000 GPR00: 00000166 af8f3fa0 a7ee3540 00000001 00000000 013b6500 005a5858 0200f932 GPR08: 00000000 00001fe9 013d5fc8 005296c8 2822244c 00b2fcd8 00000000 af8f4b57 GPR16: 00000000 00000001 00000000 00000000 00000000 00000001 00000000 00000002 GPR24: 00afdbb0 00000000 00000000 00000000 006e0004 013ce060 006e7c1c 00000001 NIP [005a1274] 0x5a1274 LR [006a3b3c] 0x6a3b3c --- interrupt: c00 The buggy address belongs to the virtual mapping at [f1000000, f1002000) created by: text_area_cpu_up+0x20/0x190 The buggy address belongs to the physical page: page: refcount:1 mapcount:0 mapping:00000000 index:0x0 pfn:0x76e30 flags: 0x80000000(zone=2) raw: 80000000 00000000 00000122 00000000 00000000 00000000 ffffffff 00000001 raw: 00000000 page dumped because: kasan: bad access detected Memory state around the buggy address: f0ffff00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f0ffff80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >f1000000: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 ^ f1000080: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f1000100: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 ================================================================== f8 corresponds to KASAN_VMALLOC_INVALID which means the area is not initialised hence not supposed to be used yet. The powerpc text patching infrastructure allocates a virtual memory area using get_vm_area() and flags it as VM_ALLOC. But that flag is meant to be used for vmalloc(), and vmalloc() allocated memory is not supposed to be used before a call to __vmalloc_node_range(), which is never called for that area. That went undetected until commit e4137f08816b ("mm, kasan, kmsan: instrument copy_from/to_kernel_nofault"). The area allocated by text_area_cpu_up() is not vmalloc memory, it is mapped directly on demand when needed by map_kernel_page(). There is no VM flag corresponding to such usage, so just pass no flag. That way the area will be unpoisoned and usable immediately.
Reported-by: Erhard Furtner <erhard_f@mailbox.org> Closes: https://lore.kernel.org/all/20250112135832.57c92322@yea/ Fixes: 37bc3e5fd764 ("powerpc/lib/code-patching: Use alternate map for patch_instruction()") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/06621423da339b374f48c0886e3a5db18e896be8.1739342693.git.christophe.leroy@csgroup.eu
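The essence of the fix as described above, in sketch form (the real change is in text_area_cpu_up(); the area is mapped on demand by map_kernel_page(), so it must not be flagged as vmalloc memory):
    /* Before: KASAN poisons the area as KASAN_VMALLOC_INVALID until a
     * __vmalloc_node_range() call that never happens for this area. */
    area = get_vm_area(PAGE_SIZE, VM_ALLOC);
    /* After: no VM flag, so the area is unpoisoned and usable immediately. */
    area = get_vm_area(PAGE_SIZE, 0);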
2025-02-11  powerpc/pseries/iommu: memory notifier incorrectly adds TCEs for pmemory  (Gaurav Batra)
iommu_mem_notifier() is invoked when RAM is dynamically added/removed. This notifier call is responsible for adding/removing TCEs from the Dynamic DMA Window (DDW) when TCEs are pre-mapped. TCEs are pre-mapped only for RAM and not for persistent memory (pmemory). For DMA buffers in pmemory, TCEs are dynamically mapped when the device driver instructs to do so. The issue is that the 'daxctl' command is capable of adding pmemory as "System RAM" after LPAR boot. The command to do so is: daxctl reconfigure-device --mode=system-ram dax0.0 --force This will dynamically add the pmemory range to LPAR RAM, eventually invoking iommu_mem_notifier(). The address range of pmemory is way beyond the maximum RAM that the LPAR can have, which means this range is beyond the DDW created for the device at device initialization time. As a result, when TCEs are pre-mapped for the pmemory range by iommu_mem_notifier(), the PHYP HCALL returns H_PARAMETER. This caused the daxctl command to fail to add pmemory as RAM. The solution is to not pre-map TCEs for pmemory. Signed-off-by: Gaurav Batra <gbatra@linux.ibm.com> Tested-by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250130183854.92258-1-gbatra@linux.ibm.com
2025-02-11  powerpc/pseries/iommu: create DDW for devices with DMA mask less than 64-bits  (Gaurav Batra)
Starting with PAPR level 2.13, the platform supports placing the PHB in limited address mode. Devices that support DMA masks less than 64 bits but greater than 32 bits are placed in limited address mode. In this mode, the starting DMA address returned by the DDW is 4GB. When the device driver calls dma_supported() with a mask less than 64 bits, the PowerPC IOMMU driver places the PHB in limited address mode before creating the DDW. Signed-off-by: Gaurav Batra <gbatra@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250108164814.73250-1-gbatra@linux.ibm.com
2025-02-11  powerpc/pseries: Export hardware trace macro dump via debugfs  (Abhishek Dubey)
This patch adds a debugfs interface to export Hardware Trace Macro (HTM) function data in an LPAR. A new hypervisor call, "H_HTM", has been defined to set up, configure, control and dump the HTM data. This patch supports only dumping of HTM data in an LPAR. A new debugfs folder called "htmdump" has been added under the /sys/kernel/debug/arch path; it contains the files needed to pass the required parameters to the H_HTM dump function. A new Kconfig option called "CONFIG_HTMDUMP" is added in platform/pseries for this. With this module loaded, the list of files in the debugfs path /sys/kernel/debug/powerpc/htmdump is: coreindexonchip, htmtype, nodalchipindex, nodeindex, trace. Signed-off-by: Abhishek Dubey <adubey@linux.ibm.com> Co-developed-by: Madhavan Srinivasan <maddy@linux.ibm.com> Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250113164039.302017-2-adubey@linux.ibm.com
2025-02-11  powerpc/pseries: Macros and wrapper functions for H_HTM call  (Abhishek Dubey)
Define macros and wrapper functions to handle the H_HTM (Hardware Trace Macro) hypervisor call. H_HTM is a new HCALL added to export data from the Hardware Trace Macro (HTM) function. Signed-off-by: Abhishek Dubey <adubey@linux.ibm.com> Co-developed-by: Madhavan Srinivasan <maddy@linux.ibm.com> Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250113164039.302017-1-adubey@linux.ibm.com
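A hypothetical sketch of such a wrapper (the H_HTM argument layout and the real macro names are not shown in this log, so the parameters below are placeholders; plpar_hcall() is the usual pseries HCALL entry point):
    #include <asm/hvcall.h>
    static long htm_hcall_wrapper(unsigned long flags, unsigned long target,
                                  unsigned long operation, unsigned long param)
    {
            unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
            /* H_HTM: set up / configure / control / dump Hardware Trace Macro data */
            return plpar_hcall(H_HTM, retbuf, flags, target, operation, param);
    }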
2025-02-11  arch/powerpc/perf: Update get_mem_data_src function to use saved values of sier and mmcra regs  (Athira Rajeev)
During performance monitor interrupt handling, the regs are set up using the perf_read_regs() function. Here some of the pt_regs fields are overloaded: the Sampled Instruction Event Register (SIER) is loaded into pt_regs, overloading regs->dar, and regs->dsisr is used to store MMCRA (Monitor Mode Control Register A), so that we only need to read these once on each interrupt. Update the isa207_get_mem_data_src function to use regs->dar instead of reading from SPRN_SIER again. Also use regs->dsisr to read the mmcra value. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250121131621.39054-2-atrajeev@linux.vnet.ibm.com
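The shape of the change described above (illustrative, not the exact diff in isa207-common.c):
    /* Before: re-read the SPRs on every sample */
    sier  = mfspr(SPRN_SIER);
    mmcra = mfspr(SPRN_MMCRA);
    /* After: use the values perf_read_regs() already stashed in pt_regs */
    sier  = regs->dar;      /* SIER was saved into regs->dar */
    mmcra = regs->dsisr;    /* MMCRA was saved into regs->dsisr */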
2025-02-11  arch/powerpc/perf: Check the instruction type before creating sample with perf_mem_data_src  (Athira Rajeev)
perf mem report sometimes aborts as below (in some corner cases) in powerpc: # ./perf mem report 1>out *** stack smashing detected ***: terminated Aborted (core dumped) The backtrace is as below: __pthread_kill_implementation () raise () abort () __libc_message __fortify_fail __stack_chk_fail hist_entry.lvl_snprintf __sort__hpp_entry __hist_entry__snprintf hists.fprintf cmd_report cmd_mem Snippet of code which triggers the issue from tools/perf/util/sort.c static int hist_entry__lvl_snprintf(struct hist_entry *he, char *bf, size_t size, unsigned int width) { char out[64]; perf_mem__lvl_scnprintf(out, sizeof(out), he->mem_info); return repsep_snprintf(bf, size, "%-*s", width, out); } The value of "out" is filled from the perf_mem_data_src value. Debugging this further showed that for some corner cases, the value of "data_src" was pointing to a wrong value. This resulted in a bigger string and caused the stack check to fail. The perf mem data source values are captured in the sample via the isa207_get_mem_data_src function. The initial check is to fetch the type of sampled instruction. If the type of instruction is not valid (not a load/store instruction), the function returns. Since commit e16fd7f2cb1a ("perf: Use sample_flags for data_src"), the data_src field is not initialized by the perf_sample_data_init() function. If the PMU driver doesn't set the data_src value to zero when the type is not valid, this results in an uninitialised value for data_src. The uninitialised value of data_src resulted in the stack check failure followed by the abort of "perf mem report". When requesting data source information in the sample, the instruction type is expected to be a load or store instruction. In ISA v3.0, due to a hardware limitation, there are corner cases where an instruction type other than load or store is observed. In ISA v3.0 and before, values "0" and "7" are considered reserved. In ISA v3.1, value "7" has been used to indicate "larx/stcx". Drop the sample if the instruction type has reserved values for this field, with an ISA version check. Initialize data_src to zero in isa207_get_mem_data_src if the instruction type is not load/store. Reported-by: Disha Goel <disgoel@linux.vnet.ibm.com> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250121131621.39054-1-atrajeev@linux.vnet.ibm.com
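A sketch of the guard described above (illustrative only; type_is_reserved() and extract_type_field() are hypothetical helpers standing in for the ISA-version-aware checks):
    static void get_mem_data_src_sketch(union perf_mem_data_src *dsrc,
                                        struct pt_regs *regs)
    {
            u64 sier = regs->dar;                   /* SIER saved by perf_read_regs() */
            u64 type = extract_type_field(sier);    /* hypothetical field extraction */
            if (type_is_reserved(type)) {
                    dsrc->val = 0;  /* never hand userspace an uninitialised data_src */
                    return;
            }
            /* ... fill in dsrc for genuine load/store samples ... */
    }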
2025-02-11  powerpc: increase MIN RMA size for CAS negotiation  (Avnish Chouhan)
Change the RMA size from 512 MB to 768 MB, which will result in more RMA at boot time for PowerPC. When a PowerPC LPAR uses vTPM, Secure Boot or FADump, the 512 MB RMA memory is not sufficient for booting. With this 512 MB RMA, GRUB2 runs out of memory and is unable to load what it needs. Sometimes even use of a CDROM, which requires more memory for installation, along with the options mentioned above strains the boot memory and results in boot failures. Increasing the RMA size resolves multiple out-of-memory issues observed on PowerPC. Failure details: 1. GRUB2 kern/ieee1275/init.c:550: mm requested region of size 8513000, flags 1 kern/ieee1275/init.c:563: Cannot satisfy allocation and retain minimum runtime space kern/ieee1275/init.c:550: mm requested region of size 8513000, flags 0 kern/ieee1275/init.c:563: Cannot satisfy allocation and retain minimum runtime space kern/file.c:215: Closing `/ppc/ppc64/initrd.img' ... kern/disk.c:297: Closing `ieee1275//vdevice/v-scsi @30000067/disk@8300000000000000'... kern/disk.c:311: Closing `ieee1275//vdevice/v-scsi @30000067/disk@8300000000000000' succeeded. kern/file.c:225: Closing `/ppc/ppc64/initrd.img' failed with 3. kern/file.c:148: Opening `/ppc/ppc64/initrd.img' succeeded. error: ../../grub-core/kern/mm.c:552:out of memory. 2. Kernel [ 0.777633] List of all partitions: [ 0.777639] No filesystem could mount root, tried: [ 0.777640] [ 0.777649] Kernel panic - not syncing: VFS: Unable to mount root fs on "" or unknown-block(0,0) [ 0.777658] CPU: 17 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-0.rc4.20.el10.ppc64le #1 [ 0.777669] Hardware name: IBM,9009-22A POWER9 (architected) 0x4e0202 0xf000005 of:IBM,FW950.B0 (VL950_149) hv:phyp pSeries [ 0.777678] Call Trace: [ 0.777682] [c000000003db7b60] [c000000001119714] dump_stack_lvl+0x88/0xc4 (unreliable) [ 0.777700] [c000000003db7b90] [c00000000016c274] panic+0x174/0x460 [ 0.777711] [c000000003db7c30] [c00000000200631c] mount_root_generic+0x320/0x354 [ 0.777724] [c000000003db7d00] [c0000000020066f8] prepare_namespace+0x27c/0x2f4 [ 0.777735] [c000000003db7d90] [c000000002005824] kernel_init_freeable+0x254/0x294 [ 0.777747] [c000000003db7df0] [c00000000001131c] kernel_init+0x30/0x1c4 [ 0.777757] [c000000003db7e50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c [ 0.777768] --- interrupt: 0 at 0x0 [ 0.784238] pstore: backend (nvram) writing error (-1) [ 0.790447] Rebooting in 10 seconds.. Signed-off-by: Avnish Chouhan <avnish@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250123114254.200527-4-sourabhjain@linux.ibm.com
2025-02-11  powerpc/fadump: fix additional param memory reservation for HASH MMU  (Sourabh Jain)
Commit 683eab94da75bc ("powerpc/fadump: setup additional parameters for dump capture kernel") introduced the additional parameter feature in fadump for HASH MMU with the understanding that GRUB does not use the memory area between 640MB and 768MB for its operation. However, the third patch in this series ("powerpc: increase MIN RMA size for CAS negotiation") changes the MIN RMA size to 768MB, allowing GRUB to use memory up to 768MB. This makes the fadump reservation for the additional parameter feature for HASH MMU unreliable. To address this, adjust the memory range for the additional parameter in fadump for HASH MMU. This will ensure that GRUB does not overwrite the memory reserved for fadump's additional parameter in HASH MMU. The new policy for the memory range for the additional parameter in HASH MMU is that the first memory block must be larger than the MIN_RMA size, as the bootloader can use memory up to the MIN_RMA size. The range should be between MIN_RMA and the RMA size (ppc64_rma_size), and it must not overlap with the fadump reserved area. Reviewed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Reviewed-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250123114254.200527-3-sourabhjain@linux.ibm.com
2025-02-11  powerpc: export MIN RMA size  (Sourabh Jain)
Commit 683eab94da75bc ("powerpc/fadump: setup additional parameters for dump capture kernel") introduced the additional parameter feature in fadump for HASH MMU with the understanding that GRUB does not use the memory area between 640MB and 768MB for its operation. However, the third patch ("powerpc: increase MIN RMA size for CAS negotiation") in this series is changing the MIN RMA size to 768MB, allowing GRUB to use memory up to 768MB. This makes the fadump reservation for the additional parameter feature for HASH MMU unreliable. To address this, export the MIN_RMA so that the next patch ("powerpc/fadump: fix additional param memory reservation for HASH MMU") can identify the correct memory range for the additional parameter feature in fadump for HASH MMU. Reviewed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250123114254.200527-2-sourabhjain@linux.ibm.com
2025-02-10  powerpc/crash: Use note name macros  (Akihiko Odaki)
Use note name macros to match with the userspace's expectation. Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Acked-by: Baoquan He <bhe@redhat.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20250115-elf-v5-3-0f9e55bbb2fc@daynix.com Signed-off-by: Kees Cook <kees@kernel.org>
2025-02-10  seccomp: remove the 'sd' argument from __secure_computing()  (Oleg Nesterov)
After the previous changes 'sd' is always NULL. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Kees Cook <kees@kernel.org> Link: https://lore.kernel.org/r/20250128150313.GA15336@redhat.com Signed-off-by: Kees Cook <kees@kernel.org>
2025-02-10  powerpc/64s: Rewrite __real_pte() and __rpte_to_hidx() as static inline  (Christophe Leroy)
Rewrite __real_pte() and __rpte_to_hidx() as static inline in order to avoid following warnings/errors when building with 4k page size: CC arch/powerpc/mm/book3s64/hash_tlb.o arch/powerpc/mm/book3s64/hash_tlb.c: In function 'hpte_need_flush': arch/powerpc/mm/book3s64/hash_tlb.c:49:16: error: variable 'offset' set but not used [-Werror=unused-but-set-variable] 49 | int i, offset; | ^~~~~~ CC arch/powerpc/mm/book3s64/hash_native.o arch/powerpc/mm/book3s64/hash_native.c: In function 'native_flush_hash_range': arch/powerpc/mm/book3s64/hash_native.c:782:29: error: variable 'index' set but not used [-Werror=unused-but-set-variable] 782 | unsigned long hash, index, hidx, shift, slot; | ^~~~~ Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202501081741.AYFwybsq-lkp@intel.com/ Fixes: ff31e105464d ("powerpc/mm/hash64: Store the slot information at the right offset for hugetlb") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/e0d340a5b7bd478ecbf245d826e6ab2778b74e06.1736706263.git.christophe.leroy@csgroup.eu
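The underlying pattern, shown generically (not the actual powerpc definitions): a macro that drops its arguments leaves callers' variables "set but not used", whereas a static inline consumes them as far as the compiler is concerned, so -Wunused-but-set-variable stays quiet without changing the generated code.
    /* Macro form: 'y' is never referenced, so a caller that only computes
     * 'y' to pass it here gets -Wunused-but-set-variable. */
    #define combine_macro(x, y)     (x)
    /* Inline form: same behaviour, but both parameters count as used. */
    static inline int combine_inline(int x, int y)
    {
            return x;
    }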
2025-02-10  powerpc/code-patching: Disable KASAN report during patching via temporary mm  (Christophe Leroy)
Erhard reports the following KASAN hit on Talos II (power9) with kernel 6.13: [ 12.028126] ================================================================== [ 12.028198] BUG: KASAN: user-memory-access in copy_to_kernel_nofault+0x8c/0x1a0 [ 12.028260] Write of size 8 at addr 0000187e458f2000 by task systemd/1 [ 12.028346] CPU: 87 UID: 0 PID: 1 Comm: systemd Tainted: G T 6.13.0-P9-dirty #3 [ 12.028408] Tainted: [T]=RANDSTRUCT [ 12.028446] Hardware name: T2P9D01 REV 1.01 POWER9 0x4e1202 opal:skiboot-bc106a0 PowerNV [ 12.028500] Call Trace: [ 12.028536] [c000000008dbf3b0] [c000000001656a48] dump_stack_lvl+0xbc/0x110 (unreliable) [ 12.028609] [c000000008dbf3f0] [c0000000006e2fc8] print_report+0x6b0/0x708 [ 12.028666] [c000000008dbf4e0] [c0000000006e2454] kasan_report+0x164/0x300 [ 12.028725] [c000000008dbf600] [c0000000006e54d4] kasan_check_range+0x314/0x370 [ 12.028784] [c000000008dbf640] [c0000000006e6310] __kasan_check_write+0x20/0x40 [ 12.028842] [c000000008dbf660] [c000000000578e8c] copy_to_kernel_nofault+0x8c/0x1a0 [ 12.028902] [c000000008dbf6a0] [c0000000000acfe4] __patch_instructions+0x194/0x210 [ 12.028965] [c000000008dbf6e0] [c0000000000ade80] patch_instructions+0x150/0x590 [ 12.029026] [c000000008dbf7c0] [c0000000001159bc] bpf_arch_text_copy+0x6c/0xe0 [ 12.029085] [c000000008dbf800] [c000000000424250] bpf_jit_binary_pack_finalize+0x40/0xc0 [ 12.029147] [c000000008dbf830] [c000000000115dec] bpf_int_jit_compile+0x3bc/0x930 [ 12.029206] [c000000008dbf990] [c000000000423720] bpf_prog_select_runtime+0x1f0/0x280 [ 12.029266] [c000000008dbfa00] [c000000000434b18] bpf_prog_load+0xbb8/0x1370 [ 12.029324] [c000000008dbfb70] [c000000000436ebc] __sys_bpf+0x5ac/0x2e00 [ 12.029379] [c000000008dbfd00] [c00000000043a228] sys_bpf+0x28/0x40 [ 12.029435] [c000000008dbfd20] [c000000000038eb4] system_call_exception+0x334/0x610 [ 12.029497] [c000000008dbfe50] [c00000000000c270] system_call_vectored_common+0xf0/0x280 [ 12.029561] --- interrupt: 3000 at 0x3fff82f5cfa8 [ 12.029608] NIP: 00003fff82f5cfa8 LR: 00003fff82f5cfa8 CTR: 0000000000000000 [ 12.029660] REGS: c000000008dbfe80 TRAP: 3000 Tainted: G T (6.13.0-P9-dirty) [ 12.029735] MSR: 900000000280f032 <SF,HV,VEC,VSX,EE,PR,FP,ME,IR,DR,RI> CR: 42004848 XER: 00000000 [ 12.029855] IRQMASK: 0 GPR00: 0000000000000169 00003fffdcf789a0 00003fff83067100 0000000000000005 GPR04: 00003fffdcf78a98 0000000000000090 0000000000000000 0000000000000008 GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 GPR12: 0000000000000000 00003fff836ff7e0 c000000000010678 0000000000000000 GPR16: 0000000000000000 0000000000000000 00003fffdcf78f28 00003fffdcf78f90 GPR20: 0000000000000000 0000000000000000 0000000000000000 00003fffdcf78f80 GPR24: 00003fffdcf78f70 00003fffdcf78d10 00003fff835c7239 00003fffdcf78bd8 GPR28: 00003fffdcf78a98 0000000000000000 0000000000000000 000000011f547580 [ 12.030316] NIP [00003fff82f5cfa8] 0x3fff82f5cfa8 [ 12.030361] LR [00003fff82f5cfa8] 0x3fff82f5cfa8 [ 12.030405] --- interrupt: 3000 [ 12.030444] ================================================================== Commit c28c15b6d28a ("powerpc/code-patching: Use temporary mm for Radix MMU") is inspired by x86 but unlike x86 it doesn't disable KASAN reports during patching. This wasn't a problem at the beginning because __patch_mem() is not instrumented. Commit 465cabc97b42 ("powerpc/code-patching: introduce patch_instructions()") uses copy_to_kernel_nofault() to copy several instructions at once.
But when using the temporary mm the destination is not regular kernel memory but a kind of kernel-like memory located in user address space. Because it is not in the kernel address space it is not covered by KASAN shadow memory. Since commit e4137f08816b ("mm, kasan, kmsan: instrument copy_from/to_kernel_nofault") KASAN reports bad accesses from copy_to_kernel_nofault(). Here a bad access to user memory is reported because KASAN detects the lack of shadow memory and the address is below TASK_SIZE. Do like x86 in commit b3fd8e83ada0 ("x86/alternatives: Use temporary mm for text poking") and disable KASAN reports during patching when using the temporary mm. Reported-by: Erhard Furtner <erhard_f@mailbox.org> Closes: https://lore.kernel.org/all/20250201151435.48400261@yea/ Fixes: 465cabc97b42 ("powerpc/code-patching: introduce patch_instructions()") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/1c05b2a1b02ad75b981cfc45927e0b4a90441046.1738577687.git.christophe.leroy@csgroup.eu
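A sketch of the approach described above (the real change sits in the powerpc patching path when the temporary mm is in use; kasan_disable_current()/kasan_enable_current() are the generic helpers for suppressing KASAN reports in the current task):
    #include <linux/kasan.h>
    /* The destination lives in the temporary mm below TASK_SIZE and has no
     * KASAN shadow, so silence reports around the copy. */
    kasan_disable_current();
    err = copy_to_kernel_nofault(patch_addr, code, len);
    kasan_enable_current();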
2025-02-09  lib/crc-t10dif: remove crc_t10dif_is_optimized()  (Eric Biggers)
With the "crct10dif" algorithm having been removed from the crypto API, crc_t10dif_is_optimized() is no longer used. Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250208175647.12333-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-02-08lib/crc32: remove "_le" from crc32c base and arch functionsEric Biggers
Following the standardization on crc32c() as the lib entry point for the Castagnoli CRC32 instead of the previous mix of crc32c(), crc32c_le(), and __crc32c_le(), make the same change to the underlying base and arch functions that implement it. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250208024911.14936-7-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-02-08  lib/crc32: remove obsolete CRC32 options from defconfig files  (Eric Biggers)
Remove all remaining references to CONFIG_CRC32_BIT, CONFIG_CRC32_SARWATE, CONFIG_CRC32_SLICEBY4, and CONFIG_CRC32_SLICEBY8. These options no longer exist, now that we've standardized on a single generic CRC32 implementation. Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250205000424.75149-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>