path: root/arch
Age  Commit message  (Author)
2023-04-20  powerpc/fsl_uli1575: Misc cleanup  (Christophe Leroy)
Use a single line for uli_exclude_device(). Add uli_exclude_device() prototype in ppc-pci.h and guard it. Remove that prototype from mpc85xx_ds.c and mpc86xx_hpcn.c files. Make uli_pirq_to_irq[] static as it is used only in that file. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230409000812.18904-2-pali@kernel.org
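As a rough illustration of the guarded prototype described above, the ppc-pci.h addition might look like the following sketch (the config guard and the fallback stub are assumptions for illustration, not necessarily the exact patch):

  /* asm/ppc-pci.h -- illustrative sketch only */
  #include <linux/pci.h>

  struct pci_controller;

  #ifdef CONFIG_FSL_ULI1575
  int uli_exclude_device(struct pci_controller *hose, u_char bus, u_char devfn);
  #else
  static inline int uli_exclude_device(struct pci_controller *hose,
                                       u_char bus, u_char devfn)
  {
          /* Nothing to exclude when ULI1575 support is not built in. */
          return PCIBIOS_SUCCESSFUL;
  }
  #endif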
2023-04-20  powerpc/boot: Fix boot wrapper code generation with CONFIG_POWER10_CPU  (Nicholas Piggin)
-mcpu=power10 will generate prefixed and pcrel code by default, which we do not support. The general kernel disables these with cflags, but those were missed for the boot wrapper. Fixes: 4b2a9315f20d ("powerpc/64s: POWER10 CPU Kconfig build option") Cc: stable@vger.kernel.org # v6.1+ Reported-by: Danny Tsen <dtsen@linux.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230407040909.230998-1-npiggin@gmail.com
2023-04-19  s390/relocate_kernel: adjust indentation  (Heiko Carstens)
relocate_kernel.S seems to be the only assembler file which doesn't follow the standard way of indentation. Adjust this for the sake of consistency. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/relocate_kernel: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/entry: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/purgatory: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kprobes: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/reipl: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/head64: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/earlypgm: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/mcount: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/crc32le: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/crc32be: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/crypto,chacha: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Acked-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/amode31: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/lib: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kasan: remove override of mem*() functions  (Heiko Carstens)
The kasan mem*() functions are not used anymore since s390 has switched to GENERIC_ENTRY and commit 69d4c0d32186 ("entry, kasan, x86: Disallow overriding mem*() functions"). Therefore remove the now dead code, similar to x86. While at it also use the SYM* macros in mem.S. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kdump: remove nodat stack restriction for calling nodat functions  (Alexander Gordeev)
To allow calling DAT-off code from the kernel, the stack needs to be switched to nodat_stack (or another stack mapped 1:1). Before the call_nodat() macro was introduced, that was necessary to provide the very same memory address for the STNSM and STOSM instructions: if the kernel stayed on a random stack (e.g. a virtually mapped one), the virtual address provided for the STNSM instruction could differ from the physical address needed for the corresponding STOSM instruction. Now that the call_nodat() macro is introduced, the kernel stack no longer needs to be mapped 1:1, since the macro stores the physical memory address of the return PSW in a register before entering DAT-off mode. This way the return LPSWE instruction is able to pick the correct memory location and restore DAT-on mode. That however might fail in case the 16-byte return PSW happened to cross a page boundary: the PSW mask and PSW address could end up in two separate, non-contiguous physical pages. Align the return PSW on a 16-byte boundary so it always fits into a single physical page. As a result, any stack (including a virtually mapped one) can be used for calling DAT-off code, and switching to nodat_stack beforehand becomes unnecessary. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
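For illustration, the alignment requirement can be expressed with a small C sketch (the type and field names below are assumptions, not the actual s390 structures; a 64-bit unsigned long is assumed): a 16-byte return PSW kept in a single 16-byte-aligned object can never have its mask and address halves split across two pages.

  /* Illustrative sketch only, not the real s390 psw_t. */
  #include <assert.h>
  #include <stdalign.h>

  struct ret_psw {
          unsigned long mask;     /* PSW mask, first 8 bytes */
          unsigned long addr;     /* PSW address, second 8 bytes */
  } __attribute__((aligned(16)));

  /* 16 bytes in size and 16-byte aligned: with 4K pages the object
     always lies within a single physical page. */
  static_assert(sizeof(struct ret_psw) == 16, "return PSW must be 16 bytes");
  static_assert(alignof(struct ret_psw) == 16, "return PSW must be 16-byte aligned");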
2023-04-19  s390/kdump: rework invocation of DAT-off code  (Alexander Gordeev)
Calling the kdump kernel is a two-step process that involves invoking the purgatory code: first to verify the new kernel checksum, and a second time to call the new kernel itself. The purgatory code operates on real addresses and does not expect any memory protection. Therefore, before the purgatory code is entered, DAT mode is always turned off. However, it is only restored upon return from the new kernel checksum verification. If the purgatory was called to start the new kernel and failed, control is returned to the old kernel, but DAT mode stays off. A new kernel start failure is unlikely and leads to the disabled wait state anyway. Still, it poses a risk, since the kernel code in general is not DAT-off safe and even calling the disabled_wait() function might crash. Introduce a call_nodat() macro that allows entering DAT-off mode, calling an arbitrary function and restoring DAT mode afterwards. Switch all invocations of DAT-off code to that macro and avoid the scenario described above altogether. The call_nodat() macro is named in lower case after the already existing call_on_stack() and put into the same header file. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> [hca@linux.ibm.com: some small modifications to call_nodat() macro] Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kdump: fix virtual vs physical address confusion  (Alexander Gordeev)
Fix virtual vs physical address confusion (which currently are the same). Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kdump: cleanup do_start_kdump() prototype and usage  (Alexander Gordeev)
Avoid unnecessary run-time and compile-time type conversions of do_start_kdump() function return value and parameter. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kexec: turn DAT mode off immediately before purgatory  (Alexander Gordeev)
The kernel code is not guaranteed DAT-off mode safe. Turn the DAT mode off immediately before entering the purgatory. Further, to avoid subtle side effects reset the system immediately before turning DAT mode off while making all necessary preparations in advance. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  KVM: arm64: Fix buffer overflow in kvm_arm_set_fw_reg()  (Dan Carpenter)
KVM_REG_SIZE() comes from the ioctl and can be any power of two between 0 and 32768, but if it is larger than sizeof(long) the copy will corrupt memory. Fixes: 99adb567632b ("KVM: arm/arm64: Add save/restore support for firmware workaround state") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Steven Price <steven.price@arm.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/4efbab8c-640f-43b2-8ac6-6d68e08280fe@kili.mountain Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
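As a hedged sketch of the kind of check involved (the function and variable names below are illustrative, not necessarily the exact kvm_arm_set_fw_reg() code), the idea is to reject any register access whose declared size does not match the backing variable before copying from userspace:

  /* Illustrative sketch; assumes the usual KVM uapi helpers. */
  static int set_fw_reg_sketch(const struct kvm_one_reg *reg)
  {
          void __user *uaddr = (void __user *)(long)reg->addr;
          u64 val;

          /* Without this check a larger KVM_REG_SIZE() would make
             copy_from_user() overflow 'val'. */
          if (KVM_REG_SIZE(reg->id) != sizeof(val))
                  return -ENOENT;

          if (copy_from_user(&val, uaddr, sizeof(val)))
                  return -EFAULT;

          /* ... act on val ... */
          return 0;
  }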
2023-04-19  s390/cpum_cf: remove function validate_ctr_auth() by inline code  (Thomas Richter)
Remove function validate_ctr_auth() and replace this very small function by its body. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/cpum_cf: provide counter number to validate_ctr_version()  (Thomas Richter)
The first parameter of validate_ctr_version() is a pointer to a large structure, but only the hw_perf_event::config member is used. Supply that structure member's value in the function invocation. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/cpum_cf: introduce static CPU counter facility information  (Thomas Richter)
The CPU measurement facility counter information instruction qctri() retrieves information about the available counter sets. The information varies between machine generations, but is constant when running on a particular machine. For example the CPU measurement facility counter first and second version numbers determine the amount of counters in a counter set. This information never changes. The counter sets are identical for all CPUs in the system. It does not matter which CPU performs the instruction. Authorization control of the CPU Measurement facility can only be changed in the activation profile while the LPAR is not running. Retrieve the CPU measurement counter information at device driver initialization time and use its constant values. Function validate_ctr_version() verifies if a user provided CPU Measurement counter facility counter is valid and defined. It now uses the newly introduced static CPU counter facility information. To avoid repeated recalculation of the counter set sizes (numbers of counters per set), which never changes on a running machine, calculate the counter set size once at device driver initialization and store the result in an array. Functions cpum_cf_make_setsize() and cpum_cf_read_setsize() are introduced. Finally remove cpu_cf_events::info member and use the static CPU counter facility information instead. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
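A hedged C sketch of the caching scheme (the enum and struct names follow those mentioned in the message; the actual per-set size computation is omitted and the shapes shown are assumptions, not the real s390 code):

  /* Illustrative sketch: cache the machine-constant counter facility
     information and the per-set counter counts once at driver init. */
  static struct cpumf_ctr_info cpumf_ctr_info;           /* filled once via qctri() */
  static size_t cpum_cf_setsize_tbl[CPUMF_CTR_SET_MAX];  /* counters per set */

  static void __init cpum_cf_make_setsize(enum cpumf_ctr_set set)
  {
          size_t ctrs = 0;

          /* Derive the number of counters in 'set' from the first and
             second version numbers; details omitted in this sketch. */
          cpum_cf_setsize_tbl[set] = ctrs;
  }

  static size_t cpum_cf_read_setsize(enum cpumf_ctr_set set)
  {
          return cpum_cf_setsize_tbl[set];
  }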
2023-04-19  Merge patch series "Introduce 64b relocatable kernel"  (Palmer Dabbelt)
Alexandre Ghiti <alexghiti@rivosinc.com> says: After multiple attempts, this patchset is now based on the fact that the 64b kernel mapping was moved outside the linear mapping. The first patch allows building relocatable kernels but is not selected by default. That patch is a requirement for KASLR. The second and third patches take advantage of an already existing powerpc script that checks relocations at compile time, and use it for riscv.

* b4-shazam-merge:
  riscv: Use --emit-relocs in order to move .rela.dyn in init
  riscv: Check relocations at compile time
  powerpc: Move script to check relocations at compile time in scripts/
  riscv: Introduce CONFIG_RELOCATABLE
  riscv: Move .rela.dyn outside of init to avoid empty relocations
  riscv: Prepare EFI header for relocatable kernels

Link: https://lore.kernel.org/r/20230329045329.64565-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Use --emit-relocs in order to move .rela.dyn in init  (Alexandre Ghiti)
To circumvent an issue where placing the relocations inside the init sections produces empty relocations, use --emit-relocs. But to avoid carrying those relocations in vmlinux, use an intermediate vmlinux.relocs file which is a copy of vmlinux *before* stripping its relocations. Suggested-by: Björn Töpel <bjorn@kernel.org> Suggested-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230329045329.64565-7-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Check relocations at compile time  (Alexandre Ghiti)
Relocating the kernel at runtime is done very early in the boot process, so it is not convenient to check for relocations there and react in case a relocation was not expected. There is a script in scripts/ that extracts the relocations from vmlinux; it is then used at postlink time to check the relocations. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Reviewed-by: Anup Patel <anup@brainfault.org> Link: https://lore.kernel.org/r/20230329045329.64565-6-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  powerpc: Move script to check relocations at compile time in scripts/  (Alexandre Ghiti)
Relocating the kernel at runtime is done very early in the boot process, so it is not convenient to check for relocations there and react in case a relocation was not expected. The powerpc architecture has a script that allows checking for such unexpected relocations at compile time: extract the common logic into scripts/ so that other architectures can take advantage of it. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Reviewed-by: Anup Patel <anup@brainfault.org> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Link: https://lore.kernel.org/r/20230329045329.64565-5-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Introduce CONFIG_RELOCATABLE  (Alexandre Ghiti)
This config allows compiling the 64b kernel as a PIE and relocating it to any virtual address at runtime: this paves the way to KASLR. Runtime relocation is possible since relocation metadata is embedded into the kernel. Note that relocating at runtime introduces an overhead even if the kernel is loaded at the same address it was linked at, and that the compiler options are those used by arm64, which uses the same RELA relocation format. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230329045329.64565-4-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Move .rela.dyn outside of init to avoid empty relocations  (Alexandre Ghiti)
This is a preparatory patch for relocatable kernels: .rela.dyn should be in .init, but placing it there actually produces empty relocations, so this should be a temporary commit until we find a solution. This issue was reported here [1]. [1] https://lore.kernel.org/all/4a6fc7a3-9697-a49b-0941-97f32194b0d7@ghiti.fr/ Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230329045329.64565-3-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Prepare EFI header for relocatable kernels  (Alexandre Ghiti)
ld does not handle relocations correctly, as explained in [1]; a fix was proposed by Nelson there, but we have to support older toolchains and therefore provide this fix here. Note that llvm does not need this fix and is therefore excluded. [1] https://sourceware.org/pipermail/binutils/2023-March/126690.html Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230329045329.64565-2-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  Merge tag 'loongarch-fixes-6.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson  (Linus Torvalds)
Pull LoongArch fixes from Huacai Chen: "Some bug fixes, some build fixes, a comment fix and a trivial cleanup"

* tag 'loongarch-fixes-6.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  tools/loongarch: Use __SIZEOF_LONG__ to define __BITS_PER_LONG
  LoongArch: Replace hard-coded values in comments with VALEN
  LoongArch: Clean up plat_swiotlb_setup() related code
  LoongArch: Check unwind_error() in arch_stack_walk()
  LoongArch: Adjust user_regset_copyin parameter to the correct offset
  LoongArch: Adjust user_watch_state for explicit alignment
  LoongArch: module: set section addresses to 0x0
  LoongArch: Mark 3 symbol exports as non-GPL
  LoongArch: Enable PG when wakeup from suspend
  LoongArch: Fix _CONST64_(x) as unsigned
  LoongArch: Fix build error if CONFIG_SUSPEND is not set
  LoongArch: Fix probing of the CRC32 feature
  LoongArch: Make WriteCombine configurable for ioremap()
2023-04-19  Merge patch series "RISC-V kasan rework"  (Palmer Dabbelt)
Alexandre Ghiti <alexghiti@rivosinc.com> says: As described in patch 2, our current kasan implementation is intricate, so I tried to simplify the implementation and mimic what arm64/x86 are doing. In addition it fixes the UEFI boot flow with a kasan kernel and kasan inline instrumentation: all kasan configurations were tested on a large Ubuntu kernel with success with KASAN_KUNIT_TEST and KASAN_MODULE_TEST.

inline ubuntu config + uefi: sv39: OK, sv48: OK, sv57: OK
outline ubuntu config + uefi: sv39: OK, sv48: OK, sv57: OK

Actually 1 test always fails with KASAN_KUNIT_TEST that I have to check: KASAN failure expected in "set_bit(nr, addr)", but none occurred. Note that Palmer recently proposed to remove COMMAND_LINE_SIZE from the userspace abi https://lore.kernel.org/lkml/20221211061358.28035-1-palmer@rivosinc.com/T/ so that we can finally increase the command line to fit all kasan kernel parameters. All of this should hopefully fix the syzkaller riscv build that has been failing for a few months now; any test is appreciated and if I can help in any way, please ask.

* b4-shazam-merge:
  riscv: Unconditionnally select KASAN_VMALLOC if KASAN
  riscv: Fix ptdump when KASAN is enabled
  riscv: Fix EFI stub usage of KASAN instrumented strcmp function
  riscv: Move DTB_EARLY_BASE_VA to the kernel address space
  riscv: Rework kasan population functions
  riscv: Split early and final KASAN population functions

Link: https://lore.kernel.org/r/20230203075232.274282-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Unconditionnally select KASAN_VMALLOC if KASAN  (Alexandre Ghiti)
If KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC, so enable KASAN_VMALLOC together with KASAN so that VMAP_STACK can be enabled by default. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230203075232.274282-7-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Fix ptdump when KASAN is enabled  (Alexandre Ghiti)
The KASAN shadow region was moved next to the kernel mapping but the ptdump code was not updated and it appears to break the dump of the kernel page table, so fix this by moving the KASAN shadow region in ptdump. Fixes: f7ae02333d13 ("riscv: Move KASAN mapping next to the kernel mapping") Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230203075232.274282-6-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Fix EFI stub usage of KASAN instrumented strcmp function  (Alexandre Ghiti)
The EFI stub must not use any KASAN instrumented code as the kernel proper did not initialize the thread pointer and the mapping for the KASAN shadow region. Avoid using the generic strcmp function, instead use the one in drivers/firmware/efi/libstub/string.c. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/r/20230203075232.274282-5-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Move DTB_EARLY_BASE_VA to the kernel address space  (Alexandre Ghiti)
The early virtual address should lie in the kernel address space for inline kasan instrumentation to succeed, otherwise kasan tries to dereference an address that does not exist in the address space (since kasan only maps *kernel* address space, not the userspace). Simply use the very first address of the kernel address space for the early fdt mapping. It allowed an Ubuntu kernel to boot successfully with inline instrumentation. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230203075232.274282-4-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Rework kasan population functions  (Alexandre Ghiti)
Our previous kasan population implementation used to have the final kasan shadow region mapped with kasan_early_shadow_page, because we did not clean the early mapping; we then had to populate the kasan region "in place", which made the code cumbersome. So now we clear the early mapping and establish a temporary mapping while we populate the kasan shadow region with just the kernel regions that will be used. This new version uses the "generic" way of walking a page table that may be folded at runtime (avoiding the XXX_next macros). It was tested with outline instrumentation on an Ubuntu kernel configuration successfully. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230203075232.274282-3-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  riscv: Split early and final KASAN population functions  (Alexandre Ghiti)
This is preliminary work that makes the code more understandable. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230203075232.274282-2-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-04-19  sh: sq: Fix incorrect element size for allocating bitmap buffer  (John Paul Adrian Glaubitz)
The Store Queue code allocates a bitmap buffer with a size that is a multiple of sizeof(long) in sq_api_init(). While the buffer size is calculated correctly, the code uses the wrong element size to allocate the buffer, which results in the allocated bitmap buffer being too small. Fix this by allocating the buffer with kcalloc() with element size sizeof(long) instead of kzalloc(), whose element size is effectively sizeof(char). Fixes: d7c30c682a27 ("sh: Store Queue API rework.") Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/20230419114854.528677-1-glaubitz@physik.fu-berlin.de
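A minimal sketch of the allocation pattern being fixed (the helper name and its parameter are illustrative, not the actual sq_api_init() code):

  #include <linux/kernel.h>
  #include <linux/slab.h>

  /* Illustrative helper: allocate a zeroed bitmap of 'nr_entries' bits. */
  static unsigned long *sq_alloc_bitmap(unsigned int nr_entries)
  {
          unsigned int size = DIV_ROUND_UP(nr_entries, BITS_PER_LONG);

          /* kzalloc(size, GFP_KERNEL) would allocate only 'size' bytes,
             but the bitmap needs 'size' longs, hence kcalloc() with an
             explicit element size of sizeof(long). */
          return kcalloc(size, sizeof(long), GFP_KERNEL);
  }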
2023-04-19  Merge tag 'cacheinfo-updates-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux into driver-core-next  (Greg Kroah-Hartman)
Sudeep writes:

cacheinfo and arch_topology updates for v6.4

The cache information can be extracted from either a Device Tree (DT), the PPTT ACPI table, or arch registers (clidr_el1 for arm64). When the DT is used but no cache properties are advertised, the current code doesn't correctly fall back to using the arch information. The changes fix that, and also assume that L1 data/instruction caches are private and L2/higher caches are shared when the cache information is missing in DT/ACPI and is derived from clidr_el1/arch registers.

Currently the cacheinfo is built from the primary CPU prior to secondary CPUs booting, if the DT/ACPI description contains cache information. However, if that is not present, it still reverts to the old behavior, which allocates the cacheinfo memory on each secondary CPU and causes RT kernels to trigger a "BUG: sleeping function called from invalid context". The changes here attempt to enable automatic detection for RT kernels when no DT/ACPI cache information is available, by pre-allocating cacheinfo memory on the primary CPU.

* tag 'cacheinfo-updates-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux:
  cacheinfo: Add use_arch[|_cache]_info field/function
  arch_topology: Remove early cacheinfo error message if -ENOENT
  cacheinfo: Check cache properties are present in DT
  cacheinfo: Check sib_leaf in cache_leaves_are_shared()
  cacheinfo: Allow early level detection when DT/ACPI info is missing/broken
  cacheinfo: Add arm64 early level initializer implementation
  cacheinfo: Add arch specific early level initializer
2023-04-19  arm: mvebu: dt: Add PHY LED support for 370-rd WAN port  (Andrew Lunn)
The WAN port of the 370-RD has a Marvell PHY, with one LED on the front panel. List this LED in the device tree. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  ARM: dts: qcom: ipq8064-rb3011: Add Switch LED for each port  (Christian Marangi)
Add a Switch LED for each port of the MikroTik RB3011UiAS-RM. The MikroTik RB3011UiAS-RM is a 10-port device with 2 qca8337 switch chips connected. It was discovered that in the hardware design all 3 switch LED traces for a given port are connected to the same LED. This was discovered by setting the related LED in the switch regs to 'always on' and noticing that any of the 3 LEDs for the specific port (for example port 1) caused the connected LED for port 1 to turn on. As an extra test we tried enabling 2 different LEDs for the port, with the result that the LED turned off only when every LED in the reg was off. Aside from this funny and strange hardware implementation, the device itself has one green LED for each port, resulting in 10 green LEDs, one for each of the 10 supported ports. Cc: Jonathan McDowell <noodles@earth.li> Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  ARM: dts: qcom: ipq8064-rb3011: Drop unevaluated properties in switch nodes  (Christian Marangi)
The IPQ8064 MikroTik RB3011UiAS-RM DT currently has unevaluated properties in the 2 switch nodes. The #address-cells and #size-cells properties are redundant and cause 'Unevaluated properties are not allowed' warnings. Drop these properties to silence the warnings, as they should not have been there from the start. Cc: Jonathan McDowell <noodles@earth.li> Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Jonathan McDowell <noodles@earth.li> Tested-by: Jonathan McDowell <noodles@earth.li> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  sh: pci: Remove unused variable in SH-7786 PCI Express code  (John Paul Adrian Glaubitz)
Addresses the following warning when building sdk7786_defconfig:

  arch/sh/drivers/pci/pcie-sh7786.c:34:22: warning: 'dma_pfn_offset' defined but not used [-Wunused-variable]
     34 | static unsigned long dma_pfn_offset;
        |                      ^~~~~~~~~~~~~~

Fixes: e0d072782c73 ("dma-mapping: introduce DMA range map, supplanting dma_pfn_offset") Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/20230419070934.422997-1-glaubitz@physik.fu-berlin.de
2023-04-19  LoongArch: Replace hard-coded values in comments with VALEN  (Enze Li)
According to LoongArch documentation [1], CSR.PGDL and CSR.PGDH are concerned with the VA's MSB which is VALEN-1 instead of always being 47. Fix comments to avoid misleading others. [1] https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#page-global-directory-base-address-for-lower-half-address-space Reviewed-by: WANG Xuerui <git@xen0n.name> Signed-off-by: Enze Li <lienze@kylinos.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-04-19  LoongArch: Clean up plat_swiotlb_setup() related code  (Tiezhu Yang)
After commit c78c43fe7d42 ("LoongArch: Use acpi_arch_dma_setup() and remove ARCH_HAS_PHYS_TO_DMA"), plat_swiotlb_setup() has been deleted, so clean up the related code. Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-04-19  LoongArch: Check unwind_error() in arch_stack_walk()  (Tiezhu Yang)
We can see the following messages with CONFIG_PROVE_LOCKING=y on LoongArch:

  BUG: MAX_STACK_TRACE_ENTRIES too low!
  turning off the locking correctness validator.

This is because stack_trace_save() returns a big value after calling arch_stack_walk(); here is the call trace:

  save_trace()
    stack_trace_save()
      arch_stack_walk()
        stack_trace_consume_entry()

arch_stack_walk() should return immediately if unwind_next_frame() failed; there is no need to do the useless loops that keep increasing c->len in stack_trace_consume_entry(). With that we can fix the above problem. Cc: stable@vger.kernel.org Reported-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/all/8a44ad71-68d2-4926-892f-72bfc7a67e2a@roeck-us.net/ Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
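A hedged sketch of the resulting control flow (the wrapper below is illustrative and simplified; the real arch_stack_walk() differs in detail): the walk stops as soon as the unwinder reports an error instead of continuing to feed entries to the consumer.

  /* Illustrative sketch; assumes the common kernel unwind/stacktrace API. */
  static void stack_walk_sketch(stack_trace_consume_fn consume_entry, void *cookie,
                                struct task_struct *task, struct pt_regs *regs)
  {
          struct unwind_state state;
          unsigned long addr;

          for (unwind_start(&state, task, regs);
               !unwind_done(&state) && !unwind_error(&state);
               unwind_next_frame(&state)) {
                  addr = unwind_get_return_address(&state);
                  /* Stop on a bogus address or once the consumer is full. */
                  if (!addr || !consume_entry(cookie, addr))
                          break;
          }
  }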