path: root/arch/arm/kernel
2023-02-09  mm: replace vma->vm_flags direct modifications with modifier calls  (Suren Baghdasaryan)
Replace direct modifications to vma->vm_flags with calls to modifier functions to be able to track flag changes and to keep vma locking correctness. [akpm@linux-foundation.org: fix drivers/misc/open-dice.c, per Hyeonggon Yoo] Link: https://lkml.kernel.org/r/20230126193752.297968-5-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org> Acked-by: Sebastian Reichel <sebastian.reichel@collabora.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arjun Roy <arjunroy@google.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: David Hildenbrand <david@redhat.com> Cc: David Howells <dhowells@redhat.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: David Rientjes <rientjes@google.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Greg Thelen <gthelen@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Joel Fernandes <joelaf@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Laurent Dufour <ldufour@linux.ibm.com> Cc: Lorenzo Stoakes <lstoakes@gmail.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Minchan Kim <minchan@google.com> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Peter Oskolkov <posk@google.com> Cc: Peter Xu <peterx@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Punit Agrawal <punit.agrawal@bytedance.com> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Shakeel Butt <shakeelb@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Cc: Song Liu <songliubraving@fb.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
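As an illustration of the kind of conversion this series performs in mmap handlers, a minimal sketch is shown below. The wrapper names vm_flags_set()/vm_flags_clear() come from the companion patch that introduces the helpers; the function names and flag choices here are made up for the example.

    #include <linux/mm.h>

    /* Before: open-coded read-modify-write of vma->vm_flags. */
    static int example_mmap_old(struct file *file, struct vm_area_struct *vma)
    {
            vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
            vma->vm_flags &= ~VM_MAYWRITE;
            return 0;
    }

    /* After: use the modifier helpers so flag updates can be tracked and
     * checked against the VMA locking rules. */
    static int example_mmap_new(struct file *file, struct vm_area_struct *vma)
    {
            vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
            vm_flags_clear(vma, VM_MAYWRITE);
            return 0;
    }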
2023-02-04  efi: Discover BTI support in runtime services regions  (Ard Biesheuvel)
Add the generic plumbing to detect whether or not the runtime code regions were constructed with BTI/IBT landing pads by the firmware, permitting the OS to enable enforcement when mapping these regions into the OS's address space. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org>
2023-01-13  cpuidle,arch: Mark all regular cpuidle_state::enter methods __cpuidle  (Peter Zijlstra)
For all cpuidle drivers that do not use CPUIDLE_FLAG_RCU_IDLE (iow, the simple ones) make sure all the functions are marked __cpuidle. ( due to lack of noinstr validation on these platforms it is entirely possible this isn't complete ) Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20230112195542.335211484@infradead.org
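For a driver without CPUIDLE_FLAG_RCU_IDLE the change amounts to annotating the ::enter callback; a hypothetical driver would look roughly like this (sketch only: the driver and function names are made up, and the exact header providing __cpuidle is assumed).

    #include <linux/cpuidle.h>
    #include <linux/cpu.h>      /* assumed home of the __cpuidle annotation */

    /* __cpuidle places the function in the cpuidle text section so the
     * generic idle machinery can recognise it (e.g. during unwinding). */
    static __cpuidle int example_enter_idle(struct cpuidle_device *dev,
                                            struct cpuidle_driver *drv,
                                            int index)
    {
            /* ... enter the low-power state ... */
            return index;
    }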
2023-01-13  arm, smp: Remove trace_.*_rcuidle() usage  (Peter Zijlstra)
None of these functions should ever be run with RCU disabled anymore. Specifically, do_handle_IPI() is only called from handle_IPI(), which explicitly does irq_enter()/irq_exit() and thereby ensures RCU is watching. The problem with smp_cross_call() was, per commit description: 7c64cc0531fa ("arm: Use _rcuidle for smp_cross_call() tracepoints") ... that cpuidle_enter_state_coupled() already had RCU disabled, but that's long been fixed by commit: 1098582a0f6c ("sched,idle,rcu: Push rcu_idle deeper into the idle path") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.743432118@infradead.org
2023-01-13  arch/idle: Change arch_cpu_idle() behavior: always exit with IRQs disabled  (Peter Zijlstra)
Current arch_cpu_idle() is called with IRQs disabled, but will return with IRQs enabled. However, the very first thing the generic code does after calling arch_cpu_idle() is raw_local_irq_disable(). This means that architectures that can idle with IRQs disabled end up doing a pointless 'enable-disable' dance. Therefore, push this IRQ disabling into the idle function, meaning that those architectures can avoid the pointless IRQ state flipping. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64] Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Guo Ren <guoren@kernel.org> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.618076436@infradead.org
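The "pointless enable-disable dance" can be pictured as follows; this is an illustrative sketch, not the literal kernel code, and the _old/_new suffixes are made up (cpu_do_idle() stands in for the arch's wait-for-interrupt primitive).

    #include <linux/irqflags.h>

    /* Before: the arch hook re-enabled IRQs on its way out ... */
    void arch_cpu_idle_old(void)
    {
            cpu_do_idle();            /* wait for interrupt */
            raw_local_irq_enable();   /* ... only for the generic idle loop
                                       * to disable them again right away */
    }

    /* After: the contract is "return with IRQs still disabled". */
    void arch_cpu_idle_new(void)
    {
            cpu_do_idle();            /* wake on interrupt, IRQs stay masked */
    }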
2023-01-11  ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled  (Ard Biesheuvel)
In a subsequent patch, we will relax the kernel mode NEON policy, and permit kernel mode NEON to be used not only from task context, as is permitted today, but also from softirq context. Given that softirqs may trigger over the back of any IRQ unless they are explicitly disabled, we need to address the resulting races in the VFP state handling, by disabling softirq processing in two distinct but related cases: - kernel mode NEON will leave the FPU disabled after it completes, so any kernel code sequence that enables the FPU and subsequently accesses its registers needs to disable softirqs until it completes; - kernel_neon_begin() will preserve the userland VFP state in memory, and if it interrupts the ordinary VFP state preserve sequence, the latter will resume execution with the VFP registers corrupted, and happily continue saving them to memory. Given that disabling softirqs also disables preemption, we can replace the existing preempt_disable/enable occurrences in the VFP state handling asm code with new macros that dis/enable softirqs instead. In the VFP state handling C code, add local_bh_disable/enable() calls in those places where the VFP state is preserved. One thing to keep in mind is that, once we allow NEON use in softirq context, the result of any such interruption is that the FPEXC_EN bit in the FPEXC register will be cleared, and vfp_current_hw_state[cpu] will be NULL. This means that any sequence that [conditionally] clears FPEXC_EN and/or sets vfp_current_hw_state[cpu] to NULL does not need to run with softirqs disabled, as the result will be the same. Furthermore, the handling of THREAD_NOTIFY_SWITCH is guaranteed to run with IRQs disabled, and so it does not need protection from softirq interruptions either. Tested-by: Martin Willi <martin@strongswan.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
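In the C side of the VFP code the change boils down to bracketing the state-preserve sequence with local_bh_disable()/local_bh_enable(); a minimal sketch (the function name is illustrative, the real code lives in arch/arm/vfp):

    #include <linux/bottom_half.h>

    static void example_vfp_save_user_state(void)
    {
            /* Kernel-mode NEON may now run from softirq context, so the
             * FPEXC-enable + register-save sequence must not be interrupted
             * by a softirq. local_bh_disable() also disables preemption. */
            local_bh_disable();

            /* ... enable FPEXC_EN and store the VFP registers to memory ... */

            local_bh_enable();
    }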
2023-01-11  filelock: move file locking definitions to separate header file  (Jeff Layton)
The file locking definitions have lived in fs.h since the dawn of time, but they are only used by a small subset of the source files that include it. Move the file locking definitions to a new header file, and add the appropriate #include directives to the source files that need them. By doing this we trim down fs.h a bit and limit the amount of rebuilding that has to be done when we make changes to the file locking APIs. Reviewed-by: Xiubo Li <xiubli@redhat.com> Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Howells <dhowells@redhat.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Acked-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com> Acked-by: Steve French <stfrench@microsoft.com> Acked-by: Al Viro <viro@zeniv.linux.org.uk> Acked-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Jeff Layton <jlayton@kernel.org>
2023-01-10  ARM: footbridge: remove CATS  (Arnd Bergmann)
Nobody seems to have a CATS machine any more, so remove it now, leaving only NetWinder and EBSA285. Cc: Russell King <linux@armlinux.org.uk> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-01-10  ARM: iop32x: remove the platform  (Arnd Bergmann)
This was marked as unused in 5.19 and can now be removed Cc: Lennert Buytenhek <kernel@wantstofly.org> Cc: Martin Michlmayr <tbm@cyrius.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Wolfram Sang <wsa@kernel.org> # for I2C Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-12-13  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds)
Pull ARM updates from Russell King: - update unwinder to cope with module PLTs - enable UBSAN on ARM - improve kernel fault message - update UEFI runtime page tables dump - avoid clang's __aeabi_uldivmod generated in NWFPE code - disable FIQs on CPU shutdown paths - update XOR register usage - a number of build updates (using .arch, thread pointer, removal of lazy evaluation in Makefile) - conversion of stacktrace code to stackwalk - findbit assembly updates - hwcap feature updates for ARMv8 CPUs - instruction dump updates for big-endian platforms - support for function error injection * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (31 commits) ARM: 9279/1: support function error injection ARM: 9277/1: Make the dumped instructions are consistent with the disassembled ones ARM: 9276/1: Refactor dump_instr() ARM: 9275/1: Drop '-mthumb' from AFLAGS_ISA ARM: 9274/1: Add hwcap for Speculative Store Bypassing Safe ARM: 9273/1: Add hwcap for Speculation Barrier(SB) ARM: 9272/1: vfp: Add hwcap for FEAT_AA32I8MM ARM: 9271/1: vfp: Add hwcap for FEAT_AA32BF16 ARM: 9270/1: vfp: Add hwcap for FEAT_FHM ARM: 9269/1: vfp: Add hwcap for FEAT_DotProd ARM: 9268/1: vfp: Add hwcap FPHP and ASIMDHP for FEAT_FP16 ARM: 9267/1: Define Armv8 registers in AArch32 state ARM: findbit: add unwinder information ARM: findbit: operate by words ARM: findbit: convert to macros ARM: findbit: provide more efficient ARMv7 implementation ARM: findbit: document ARMv5 bit offset calculation ARM: 9259/1: stacktrace: Convert stacktrace to generic ARCH_STACKWALK ARM: 9258/1: stacktrace: Make stack walk callback consistent with generic code ARM: 9265/1: pass -march= only to compiler ...
2022-12-13  Merge tag 'efi-next-for-v6.2' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi Pull EFI updates from Ard Biesheuvel: "Another fairly sizable pull request, by EFI subsystem standards. Most of the work was done by me, some of it in collaboration with the distro and bootloader folks (GRUB, systemd-boot), where the main focus has been on removing pointless per-arch differences in the way EFI boots a Linux kernel. - Refactor the zboot code so that it incorporates all the EFI stub logic, rather than calling the decompressed kernel as a EFI app. - Add support for initrd= command line option to x86 mixed mode. - Allow initrd= to be used with arbitrary EFI accessible file systems instead of just the one the kernel itself was loaded from. - Move some x86-only handling and manipulation of the EFI memory map into arch/x86, as it is not used anywhere else. - More flexible handling of any random seeds provided by the boot environment (i.e., systemd-boot) so that it becomes available much earlier during the boot. - Allow improved arch-agnostic EFI support in loaders, by setting a uniform baseline of supported features, and adding a generic magic number to the DOS/PE header. This should allow loaders such as GRUB or systemd-boot to reduce the amount of arch-specific handling substantially. - (arm64) Run EFI runtime services from a dedicated stack, and use it to recover from synchronous exceptions that might occur in the firmware code. - (arm64) Ensure that we don't allocate memory outside of the 48-bit addressable physical range. - Make EFI pstore record size configurable - Add support for decoding CXL specific CPER records" * tag 'efi-next-for-v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi: (43 commits) arm64: efi: Recover from synchronous exceptions occurring in firmware arm64: efi: Execute runtime services from a dedicated stack arm64: efi: Limit allocations to 48-bit addressable physical region efi: Put Linux specific magic number in the DOS header efi: libstub: Always enable initrd command line loader and bump version efi: stub: use random seed from EFI variable efi: vars: prohibit reading random seed variables efi: random: combine bootloader provided RNG seed with RNG protocol output efi/cper, cxl: Decode CXL Error Log efi/cper, cxl: Decode CXL Protocol Error Section efi: libstub: fix efi_load_initrd_dev_path() kernel-doc comment efi: x86: Move EFI runtime map sysfs code to arch/x86 efi: runtime-maps: Clarify purpose and enable by default for kexec efi: pstore: Add module parameter for setting the record size efi: xen: Set EFI_PARAVIRT for Xen dom0 boot on all architectures efi: memmap: Move manipulation routines into x86 arch tree efi: memmap: Move EFI fake memmap support into x86 arch tree efi: libstub: Undeprecate the command line initrd loader efi: libstub: Add mixed mode support to command line initrd loader efi: libstub: Permit mixed mode return types other than efi_status_t ...
2022-12-12  Merge tag 'mm-nonmm-stable-2022-12-12' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm Pull non-MM updates from Andrew Morton: - A ptrace API cleanup series from Sergey Shtylyov - Fixes and cleanups for kexec from ye xingchen - nilfs2 updates from Ryusuke Konishi - squashfs feature work from Xiaoming Ni: permit configuration of the filesystem's compression concurrency from the mount command line - A series from Akinobu Mita which addresses bound checking errors when writing to debugfs files - A series from Yang Yingliang to address rapidio memory leaks - A series from Zheng Yejian to address possible overflow errors in encode_comp_t() - And a whole shower of singleton patches all over the place * tag 'mm-nonmm-stable-2022-12-12' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (79 commits) ipc: fix memory leak in init_mqueue_fs() hfsplus: fix bug causing custom uid and gid being unable to be assigned with mount rapidio: devices: fix missing put_device in mport_cdev_open kcov: fix spelling typos in comments hfs: Fix OOB Write in hfs_asc2mac hfs: fix OOB Read in __hfs_brec_find relay: fix type mismatch when allocating memory in relay_create_buf() ocfs2: always read both high and low parts of dinode link count io-mapping: move some code within the include guarded section kernel: kcsan: kcsan_test: build without structleak plugin mailmap: update email for Iskren Chernev eventfd: change int to __u64 in eventfd_signal() ifndef CONFIG_EVENTFD rapidio: fix possible UAF when kfifo_alloc() fails relay: use strscpy() is more robust and safer cpumask: limit visibility of FORCE_NR_CPUS acct: fix potential integer overflow in encode_comp_t() acct: fix accuracy loss for input value of encode_comp_t() linux/init.h: include <linux/build_bug.h> and <linux/stringify.h> rapidio: rio: fix possible name leak in rio_register_mport() rapidio: fix possible name leaks when rio_add_device() fails ...
2022-11-29  ARM: 9277/1: Make the dumped instructions are consistent with the disassembled ones  (Zhen Lei)
In ARM, the mapping of instruction memory is always little-endian, except on some ARM architectures that support BE-32, such as ARMv7-R, whose instruction endianness may be BE-32; of course, its data endianness will then also be BE-32. Because two negatives make a positive, the instruction stored in the register after reading is in little-endian format. But in the BE-8 case the instruction endianness is LE, so the instruction stored in the register after reading is in big-endian format, which is inconsistent with the disassembled one. For example: The content of disassembly: c0429ee8: e3500000 cmp r0, #0 c0429eec: 159f2044 ldrne r2, [pc, #68] c0429ef0: 108f2002 addne r2, pc, r2 c0429ef4: 1882000a stmne r2, {r1, r3} c0429ef8: e7f000f0 udf #0 The output of undefined instruction exception: Internal error: Oops - undefined instruction: 0 [#1] SMP ARM ... ... Code: 000050e3 44209f15 02208f10 0a008218 (f000f0e7) This makes checking the instructions inconvenient. Worse, somebody who doesn't know about this might think the instructions are all broken. So, when CONFIG_CPU_ENDIAN_BE8=y, let's convert the instructions to little-endian format before they are printed. The conversion result is as follows: Code: e3500000 159f2044 108f2002 1882000a (e7f000f0) Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
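Conceptually the fix is a byte swap of each fetched opcode before printing when the kernel is built BE-8; a minimal sketch of the idea (the helper name and the use of swab32() are illustrative assumptions, not the exact hunks of this patch, and Thumb opcodes would need a 16-bit swap):

    #include <linux/types.h>
    #include <linux/swab.h>

    static u32 dump_fixup_opcode(u32 instr)
    {
            /* On BE-8, data accesses are big-endian but instructions are
             * stored little-endian, so swap before printing so the output
             * matches the disassembly. */
            if (IS_ENABLED(CONFIG_CPU_ENDIAN_BE8))
                    instr = swab32(instr);
            return instr;
    }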
2022-11-28  ARM: 9276/1: Refactor dump_instr()  (Zhen Lei)
1. Rename local variable 'val16' to 'tmp'. So that the processing statements of thumb and arm can be aligned. 2. Fix two sparse check warnings: (add __user for type conversion) warning: incorrect type in initializer (different address spaces) expected unsigned short [noderef] __user *register __p got unsigned short [usertype] * 3. Prepare for the next patch to avoid repeated judgment. Before: if (!user_mode(regs)) { if (thumb) else } else { if (thumb) else } After: if (thumb) { if (user_mode(regs)) else } else { if (user_mode(regs)) else } Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-28  ARM: 9274/1: Add hwcap for Speculative Store Bypassing Safe  (Amit Daniel Kachhap)
Speculative Store Bypassing Safe(FEAT_SSBS) is a feature present in AArch32 state for Armv8 and is represented by ID_PFR2_EL1.SSBS identification register. This feature denotes the presence of PSTATE.ssbs bit and hence adding a hwcap will enable the userspace to check it before trying to set/unset this PSTATE. This commit adds the ID feature bit detection, and uses elf_hwcap2 accordingly. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
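From userspace, such a hwcap bit is consumed via getauxval(); a small sketch follows (the HWCAP2_SSBS name is the bit this patch introduces, and header placement of the AT_HWCAP2/HWCAP2_* constants is assumed to be the usual auxv/uapi headers):

    #include <stdio.h>
    #include <sys/auxv.h>       /* getauxval(), AT_HWCAP2 */
    #include <asm/hwcap.h>      /* HWCAP2_SSBS from the kernel uapi headers */

    int main(void)
    {
            unsigned long hwcap2 = getauxval(AT_HWCAP2);

            if (hwcap2 & HWCAP2_SSBS)
                    printf("PSTATE.SSBS is available\n");
            return 0;
    }

The same pattern applies to the SB, I8MM, BF16, FHM, DotProd and FP16 hwcaps added by the neighbouring commits.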
2022-11-28  ARM: 9273/1: Add hwcap for Speculation Barrier(SB)  (Amit Daniel Kachhap)
Speculation Barrier(FEAT_SB) is a feature present in AArch32 state for Armv8 and is represented by ISAR6.SB identification register. This feature denotes the presence of SB instruction and hence adding a hwcap will enable the userspace to check it before trying to use this instruction. This commit adds the ID feature bit detection, and uses elf_hwcap2 accordingly. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-28  ARM: 9272/1: vfp: Add hwcap for FEAT_AA32I8MM  (Amit Daniel Kachhap)
Int8 matrix multiplication (FEAT_AA32I8MM) is a feature present in AArch32 state for Armv8 and is represented by ISAR6.I8MM identification register. This feature denotes the presence of VSMMLA, VSUDOT, VUMMLA, VUSMMLA and VUSDOT instructions and hence adding a hwcap will enable the userspace to check it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-28  ARM: 9271/1: vfp: Add hwcap for FEAT_AA32BF16  (Amit Daniel Kachhap)
Advanced SIMD BFloat16 (FEAT_AA32BF16) is a feature present in AArch32 state for Armv8 and is represented by ISAR6.BF16 identification register. This feature denotes the presence of VCVT, VCVTB, VCVTT, VDOT, VFMAB, VFMAT and VMMLA instructions and hence adding a hwcap will enable the userspace to check it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-28  ARM: 9270/1: vfp: Add hwcap for FEAT_FHM  (Amit Daniel Kachhap)
Floating-point half-precision multiplication (FHM) is a feature present in AArch32 state for Armv8 and is represented by ISAR6.FHM identification register. This feature denotes the presence of VFMAL and VMFSL instructions and hence adding a hwcap will enable the userspace to check it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-28  ARM: 9269/1: vfp: Add hwcap for FEAT_DotProd  (Amit Daniel Kachhap)
Advanced Dot product is a feature present in AArch32 state for Armv8 and is represented by ISAR6 identification register. This feature denotes the presence of UDOT and SDOT instructions and hence adding a hwcap will enable the userspace to check it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-28  ARM: 9268/1: vfp: Add hwcap FPHP and ASIMDHP for FEAT_FP16  (Amit Daniel Kachhap)
Floating point half-precision (FPHP) and Advanced SIMD half-precision (ASIMDHP) are VFP features (FEAT_FP16) represented by MVFR1 identification register. These capabilities can optionally exist with VFPv3 and mandatory with VFPv4. Both these new features exist for Armv8 architecture in AArch32 state. These hwcaps may be useful for the userspace to add conditional check before trying to use FEAT_FP16 feature specific instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-18  ARM: kexec: make machine_crash_nonpanic_core() static  (Chen Lifu)
This symbol is not used outside of the file, so mark it static. Fixes the following warning: arch/arm/kernel/machine_kexec.c:76:6: warning: symbol 'machine_crash_nonpanic_core' was not declared. Should it be static? Link: https://lkml.kernel.org/r/20220929042936.22012-5-bhe@redhat.com Signed-off-by: Chen Lifu <chenlifu@huawei.com> Signed-off-by: Baoquan He <bhe@redhat.com> Acked-by: Baoquan He <bhe@redhat.com> Cc: "Eric W . Biederman" <ebiederm@xmission.com> Cc: Petr Mladek <pmladek@suse.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Jianglei Nie <niejianglei2021@163.com> Cc: Li Chen <lchen@ambarella.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: ye xingchen <ye.xingchen@zte.com.cn> Cc: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-11-18  treewide: use get_random_u32_below() instead of deprecated function  (Jason A. Donenfeld)
This is a simple mechanical transformation done by: @@ expression E; @@ - prandom_u32_max + get_random_u32_below (E) Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs Reviewed-by: SeongJae Park <sj@kernel.org> # for damon Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> # for arm Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
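In C the transformation is purely mechanical; a small before/after sketch (the wrapper function name is made up for illustration):

    #include <linux/random.h>

    static unsigned int pick_random_index(unsigned int nr_entries)
    {
            /* Was: return prandom_u32_max(nr_entries); */
            return get_random_u32_below(nr_entries);  /* value in [0, nr_entries) */
    }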
2022-11-15  arm: ptrace: user_regset_copyin_ignore() always returns 0  (Sergey Shtylyov)
user_regset_copyin_ignore() always returns 0, so checking its result seems pointless -- don't do this anymore... Link: https://lkml.kernel.org/r/20221014212235.10770-3-s.shtylyov@omp.ru Signed-off-by: Sergey Shtylyov <s.shtylyov@omp.ru> Cc: Brian Cain <bcain@quicinc.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David S. Miller <davem@davemloft.net> Cc: Dinh Nguyen <dinguyen@kernel.org> Cc: Helge Deller <deller@gmx.de> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.osdn.me> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
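Concretely, the cleanup turns call sites of the following shape into a plain call; this is a call-site sketch only (the surrounding variables and the start/end offsets stand in for whatever the real regset handler uses):

    /* Before: checking a return value that is always 0. */
    ret = user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, start, end);
    if (ret)
            return ret;

    /* After: just call it; there is nothing useful to check. */
    user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, start, end);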
2022-11-14  ARM: 9259/1: stacktrace: Convert stacktrace to generic ARCH_STACKWALK  (Li Huafei)
Historically architectures have had duplicated code in their stack trace implementations for filtering what gets traced. In order to avoid this duplication some generic code has been provided using a new interface arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which factors all this out into the generic stack trace code. Convert ARM to use this common infrastructure. When initializing the stack frame of the current task, arm64 uses __builtin_frame_address(1) to initialize the frame pointer, skipping arch_stack_walk(), see the commit c607ab4f916d ("arm64: stacktrace: don't trace arch_stack_walk()"). Since __builtin_frame_address(1) does not work on ARM, unwind_frame() is used to unwind the stack one layer forward before calling walk_stackframe(). Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-09  efi: libstub: Move screen_info handling to common code  (Ard Biesheuvel)
Currently, arm64, RISC-V and LoongArch rely on the fact that struct screen_info can be accessed directly, due to the fact that the EFI stub and the core kernel are part of the same image. This will change after a future patch, so let's ensure that the screen_info handling is able to deal with this, by adopting the arm32 approach of passing it as a configuration table. While at it, switch to ACPI reclaim memory to hold the screen_info data, which is more appropriate for this kind of allocation. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2022-11-08  ARM: 9258/1: stacktrace: Make stack walk callback consistent with generic code  (Li Huafei)
As with the generic arch_stack_walk() code the ARM stack walk code takes a callback that is called per stack frame. Currently the ARM code always passes a struct stackframe to the callback and the generic code just passes the pc, however none of the users ever reference anything in the struct other than the pc value. The ARM code also uses a return type of int while the generic code uses a return type of bool though in both cases the return value is a boolean value and the sense is inverted between the two. In order to reduce code duplication when ARM is converted to use arch_stack_walk() change the signature and return sense of the ARM specific callback to match that of the generic code. Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Linus Waleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
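The signature change described here is, in essence, the following (prototypes only, as a sketch; the generic shape matches the stack_trace_consume_fn callback used by arch_stack_walk()):

    /* Old ARM-specific shape: the callback received the whole frame and
     * returned non-zero to stop the walk. */
    int  old_style_callback(struct stackframe *frame, void *data);

    /* New shape, matching the generic code: only the PC is passed and the
     * callback returns false to stop the walk. */
    bool new_style_callback(void *cookie, unsigned long pc);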
2022-11-08  ARM: 9263/1: use .arch directives instead of assembler command line flags  (Nick Desaulniers)
Similar to commit a6c30873ee4a ("ARM: 8989/1: use .fpu assembler directives instead of assembler arguments"). GCC and GNU binutils support setting the "sub arch" via -march=, -Wa,-march, target function attribute, and .arch assembler directive. Clang was missing support for -Wa,-march=, but this was implemented in clang-13. The behavior of both GCC and Clang is to prefer -Wa,-march= over -march= for assembler and assembler-with-cpp sources, but Clang will warn about the -march= being unused. clang: warning: argument unused during compilation: '-march=armv6k' [-Wunused-command-line-argument] Since most assembler is non-conditionally assembled with one sub arch (modulo arch/arm/delay-loop.S which conditionally is assembled as armv4 based on CONFIG_ARCH_RPC, and arch/arm/mach-at91/pm-suspend.S which is conditionally assembled as armv7-a based on CONFIG_CPU_V7), prefer the .arch assembler directive. Add a few more instances found in compile testing as found by Arnd and Nathan. Link: https://github.com/llvm/llvm-project/commit/1d51c699b9e2ebc5bcfdbe85c74cc871426333d4 Link: https://bugs.llvm.org/show_bug.cgi?id=48894 Link: https://github.com/ClangBuiltLinux/linux/issues/1195 Link: https://github.com/ClangBuiltLinux/linux/issues/1315 Suggested-by: Arnd Bergmann <arnd@arndb.de> Suggested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-11-07  ARM: 9257/1: Disable FIQs (but not IRQs) on CPUs shutdown paths  (Guilherme G. Piccoli)
Currently the regular CPU shutdown path for ARM disables IRQs/FIQs in the secondary CPUs - smp_send_stop() calls ipi_cpu_stop(), which is responsible for that. IRQs are architecturally masked when we take an interrupt, but FIQs are higher priority than IRQs, hence they aren't masked. With that said, it makes sense to disable FIQs here, but there's no need for (re-)disabling IRQs. More than that: there is an alternative path for disabling CPUs, in the form of function crash_smp_send_stop(), which is used for the kexec/panic path. This function relies on a SMP call that also triggers a busy-wait loop [at machine_crash_nonpanic_core()], but without disabling FIQs. This might lead to odd scenarios, like early interrupts in the boot of a kexec'd kernel or even interrupts in secondary "disabled" CPUs while the main one still works in the panic path and assumes all secondary CPUs are (really!) off. So, let's disable FIQs in both paths and *not* disable IRQs a second time, since they are already masked in both paths by the architecture. This way, we keep both CPU quiesce paths consistent and safe. Cc: Marc Zyngier <maz@kernel.org> Cc: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
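On ARM the FIQ mask is controlled separately from the IRQ mask, so the quiesce paths described above end up doing roughly the following (a sketch of the idea, not the literal patch; the function name is made up, and local_fiq_disable()/cpu_relax() come from the ARM arch headers):

    static void example_cpu_stop(void)
    {
            /* IRQs are already architecturally masked on the IPI path;
             * only FIQs still need to be masked before parking the CPU. */
            local_fiq_disable();

            while (1)
                    cpu_relax();
    }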
2022-11-07  ARM: 9252/1: module: Teach unwinder about PLTs  (Alex Sverdlin)
"unwind: Index not found eef26358" warnings keep popping up on CONFIG_ARM_MODULE_PLTS-enabled systems if the PC points to a PLT veneer. Teach the unwinder how to deal with them, taking into account they don't change state of the stack or register file except loading PC. Link: https://lore.kernel.org/linux-arm-kernel/20200402153845.30985-1-kursad.oney@broadcom.com/ Tested-by: Kursad Oney <kursad.oney@broadcom.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-10-16  Merge tag 'random-6.1-rc1-for-linus' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/crng/random Pull more random number generator updates from Jason Donenfeld: "This time with some large scale treewide cleanups. The intent of this pull is to clean up the way callers fetch random integers. The current rules for doing this right are: - If you want a secure or an insecure random u64, use get_random_u64() - If you want a secure or an insecure random u32, use get_random_u32() The old function prandom_u32() has been deprecated for a while now and is just a wrapper around get_random_u32(). Same for get_random_int(). - If you want a secure or an insecure random u16, use get_random_u16() - If you want a secure or an insecure random u8, use get_random_u8() - If you want secure or insecure random bytes, use get_random_bytes(). The old function prandom_bytes() has been deprecated for a while now and has long been a wrapper around get_random_bytes() - If you want a non-uniform random u32, u16, or u8 bounded by a certain open interval maximum, use prandom_u32_max() I say "non-uniform", because it doesn't do any rejection sampling or divisions. Hence, it stays within the prandom_*() namespace, not the get_random_*() namespace. I'm currently investigating a "uniform" function for 6.2. We'll see what comes of that. By applying these rules uniformly, we get several benefits: - By using prandom_u32_max() with an upper-bound that the compiler can prove at compile-time is ≤65536 or ≤256, internally get_random_u16() or get_random_u8() is used, which wastes fewer batched random bytes, and hence has higher throughput. - By using prandom_u32_max() instead of %, when the upper-bound is not a constant, division is still avoided, because prandom_u32_max() uses a faster multiplication-based trick instead. - By using get_random_u16() or get_random_u8() in cases where the return value is intended to indeed be a u16 or a u8, we waste fewer batched random bytes, and hence have higher throughput. This series was originally done by hand while I was on an airplane without Internet. Later, Kees and I worked on retroactively figuring out what could be done with Coccinelle and what had to be done manually, and then we split things up based on that. So while this touches a lot of files, the actual amount of code that's hand fiddled is comfortably small" * tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: prandom: remove unused functions treewide: use get_random_bytes() when possible treewide: use get_random_u32() when possible treewide: use get_random_{u8,u16}() when possible, part 2 treewide: use get_random_{u8,u16}() when possible, part 1 treewide: use prandom_u32_max() when possible, part 2 treewide: use prandom_u32_max() when possible, part 1
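Applied to kernel code, the rules above read roughly like this (illustrative sketch; the variable names and the wrapper function are made up):

    #include <linux/random.h>

    static void example_random_usage(unsigned int nr_slots)
    {
            u64 token  = get_random_u64();           /* full 64-bit value */
            u32 cookie = get_random_u32();           /* full 32-bit value */
            u16 id     = get_random_u16();           /* instead of (u16)get_random_u32() */
            u8  jitter = get_random_u8();            /* instead of masking a u32 */
            u32 slot   = prandom_u32_max(nr_slots);  /* bounded, non-uniform */

            (void)token; (void)cookie; (void)id; (void)jitter; (void)slot;
    }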
2022-10-12  Merge tag 'mm-nonmm-stable-2022-10-11' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm Pull non-MM updates from Andrew Morton: - hfs and hfsplus kmap API modernization (Fabio Francesco) - make crash-kexec work properly when invoked from an NMI-time panic (Valentin Schneider) - ntfs bugfixes (Hawkins Jiawei) - improve IPC msg scalability by replacing atomic_t's with percpu counters (Jiebin Sun) - nilfs2 cleanups (Minghao Chi) - lots of other single patches all over the tree! * tag 'mm-nonmm-stable-2022-10-11' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (71 commits) include/linux/entry-common.h: remove has_signal comment of arch_do_signal_or_restart() prototype proc: test how it holds up with mapping'less process mailmap: update Frank Rowand email address ia64: mca: use strscpy() is more robust and safer init/Kconfig: fix unmet direct dependencies ia64: update config files nilfs2: replace WARN_ONs by nilfs_error for checkpoint acquisition failure fork: remove duplicate included header files init/main.c: remove unnecessary (void*) conversions proc: mark more files as permanent nilfs2: remove the unneeded result variable nilfs2: delete unnecessary checks before brelse() checkpatch: warn for non-standard fixes tag style usr/gen_init_cpio.c: remove unnecessary -1 values from int file ipc/msg: mitigate the lock contention with percpu counter percpu: add percpu_counter_add_local and percpu_counter_sub_local fs/ocfs2: fix repeated words in comments relay: use kvcalloc to alloc page array in relay_alloc_page_array proc: make config PROC_CHILDREN depend on PROC_FS fs: uninline inode_maybe_inc_iversion() ...
2022-10-11  treewide: use get_random_{u8,u16}() when possible, part 1  (Jason A. Donenfeld)
Rather than truncate a 32-bit value to a 16-bit value or an 8-bit value, simply use the get_random_{u8,u16}() functions, which are faster than wasting the additional bytes from a 32-bit value. This was done mechanically with this coccinelle script: @@ expression E; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u16; typedef __be16; typedef __le16; typedef u8; @@ ( - (get_random_u32() & 0xffff) + get_random_u16() | - (get_random_u32() & 0xff) + get_random_u8() | - (get_random_u32() % 65536) + get_random_u16() | - (get_random_u32() % 256) + get_random_u8() | - (get_random_u32() >> 16) + get_random_u16() | - (get_random_u32() >> 24) + get_random_u8() | - (u16)get_random_u32() + get_random_u16() | - (u8)get_random_u32() + get_random_u8() | - (__be16)get_random_u32() + (__be16)get_random_u16() | - (__le16)get_random_u32() + (__le16)get_random_u16() | - prandom_u32_max(65536) + get_random_u16() | - prandom_u32_max(256) + get_random_u8() | - E->inet_id = get_random_u32() + E->inet_id = get_random_u16() ) @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u16; identifier v; @@ - u16 v = get_random_u32(); + u16 v = get_random_u16(); @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u8; identifier v; @@ - u8 v = get_random_u32(); + u8 v = get_random_u8(); @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u16; u16 v; @@ - v = get_random_u32(); + v = get_random_u16(); @@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u8; u8 v; @@ - v = get_random_u32(); + v = get_random_u8(); // Find a potential literal @literal_mask@ expression LITERAL; type T; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; position p; @@ ((T)get_random_u32()@p & (LITERAL)) // Examine limits @script:python add_one@ literal << literal_mask.LITERAL; RESULT; @@ value = None if literal.startswith('0x'): value = int(literal, 16) elif literal[0] in '123456789': value = int(literal, 10) if value is None: print("I don't know how to handle %s" % (literal)) cocci.include_match(False) elif value < 256: coccinelle.RESULT = cocci.make_ident("get_random_u8") elif value < 65536: coccinelle.RESULT = cocci.make_ident("get_random_u16") else: print("Skipping large mask of %s" % (literal)) cocci.include_match(False) // Replace the literal mask with the calculated result. @plus_one@ expression literal_mask.LITERAL; position literal_mask.p; identifier add_one.RESULT; identifier FUNC; @@ - (FUNC()@p & (LITERAL)) + (RESULT() & LITERAL) Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Yury Norov <yury.norov@gmail.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@toke.dk> # for sch_cake Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-10-11  treewide: use prandom_u32_max() when possible, part 1  (Jason A. Donenfeld)
Rather than incurring a division or requesting too many random bytes for the given range, use the prandom_u32_max() function, which only takes the minimum required bytes from the RNG and avoids divisions. This was done mechanically with this coccinelle script: @basic@ expression E; type T; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; typedef u64; @@ ( - ((T)get_random_u32() % (E)) + prandom_u32_max(E) | - ((T)get_random_u32() & ((E) - 1)) + prandom_u32_max(E * XXX_MAKE_SURE_E_IS_POW2) | - ((u64)(E) * get_random_u32() >> 32) + prandom_u32_max(E) | - ((T)get_random_u32() & ~PAGE_MASK) + prandom_u32_max(PAGE_SIZE) ) @multi_line@ identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; identifier RAND; expression E; @@ - RAND = get_random_u32(); ... when != RAND - RAND %= (E); + RAND = prandom_u32_max(E); // Find a potential literal @literal_mask@ expression LITERAL; type T; identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32"; position p; @@ ((T)get_random_u32()@p & (LITERAL)) // Add one to the literal. @script:python add_one@ literal << literal_mask.LITERAL; RESULT; @@ value = None if literal.startswith('0x'): value = int(literal, 16) elif literal[0] in '123456789': value = int(literal, 10) if value is None: print("I don't know how to handle %s" % (literal)) cocci.include_match(False) elif value == 2**32 - 1 or value == 2**31 - 1 or value == 2**24 - 1 or value == 2**16 - 1 or value == 2**8 - 1: print("Skipping 0x%x for cleanup elsewhere" % (value)) cocci.include_match(False) elif value & (value + 1) != 0: print("Skipping 0x%x because it's not a power of two minus one" % (value)) cocci.include_match(False) elif literal.startswith('0x'): coccinelle.RESULT = cocci.make_expr("0x%x" % (value + 1)) else: coccinelle.RESULT = cocci.make_expr("%d" % (value + 1)) // Replace the literal mask with the calculated result. @plus_one@ expression literal_mask.LITERAL; position literal_mask.p; expression add_one.RESULT; identifier FUNC; @@ - (FUNC()@p & (LITERAL)) + prandom_u32_max(RESULT) @collapse_ret@ type T; identifier VAR; expression E; @@ { - T VAR; - VAR = (E); - return VAR; + return E; } @drop_var@ type T; identifier VAR; @@ { - T VAR; ... when != VAR } Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Yury Norov <yury.norov@gmail.com> Reviewed-by: KP Singh <kpsingh@kernel.org> Reviewed-by: Jan Kara <jack@suse.cz> # for ext4 and sbitmap Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> # for drbd Acked-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Heiko Carstens <hca@linux.ibm.com> # for s390 Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-10-10  Merge tag 'kbuild-v6.1' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild Pull Kbuild updates from Masahiro Yamada: - Remove potentially incomplete targets when Kbuild is interrupted by SIGINT etc., in case GNU Make fails to do that when stderr is piped to another program. - Rewrite the single target build so it works more correctly. - Fix rpm-pkg builds with V=1. - List top-level subdirectories in ./Kbuild. - Ignore auto-generated __kstrtab_* and __kstrtabns_* symbols in kallsyms. - Avoid two different modules in lib/zstd/ having shared code, which potentially causes building the common code as built-in and modular back-and-forth. - Unify two modpost invocations to optimize the build process. - Remove head-y syntax in favor of linker scripts for placing particular sections in the head of vmlinux. - Bump the minimal GNU Make version to 3.82. - Clean up misc Makefiles and scripts. * tag 'kbuild-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (41 commits) docs: bump minimal GNU Make version to 3.82 ia64: simplify esi object addition in Makefile Revert "kbuild: Check if linker supports the -X option" kbuild: rebuild .vmlinux.export.o when its prerequisite is updated kbuild: move modules.builtin(.modinfo) rules to Makefile.vmlinux_o zstd: Fixing mixed module-builtin objects kallsyms: ignore __kstrtab_* and __kstrtabns_* symbols kallsyms: take the input file instead of reading stdin kallsyms: drop duplicated ignore patterns from kallsyms.c kbuild: reuse mksysmap output for kallsyms mksysmap: update comment about __crc_* kbuild: remove head-y syntax kbuild: use obj-y instead extra-y for objects placed at the head kbuild: hide error checker logs for V=1 builds kbuild: re-run modpost when it is updated kbuild: unify two modpost invocations kbuild: move vmlinux.o rule to the top Makefile kbuild: move .vmlinux.objs rule to Makefile.modpost kbuild: list sub-directories in ./Kbuild Makefile.compiler: replace cc-ifversion with compiler-specific macros ...
2022-10-09  Merge tag 'efi-next-for-v6.1' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi Pull EFI updates from Ard Biesheuvel: "A bit more going on than usual in the EFI subsystem. The main driver for this has been the introduction of the LoongArch architecture last cycle, which inspired some cleanup and refactoring of the EFI code. Another driver for EFI changes this cycle and in the future is confidential compute. The LoongArch architecture does not use either struct bootparams or DT natively [yet], and so passing information between the EFI stub and the core kernel using either of those is undesirable. And in general, overloading DT has been a source of issues on arm64, so using DT for this on new architectures is something to avoid for the time being (even if we might converge on something DT based for non-x86 architectures in the future). For this reason, in addition to the patch that enables EFI boot for LoongArch, there are a number of refactoring patches applied on top of it that separate the DT bits from the generic EFI stub bits. These changes are on a separate topic branch that has been shared with the LoongArch maintainers, who will include it in their pull request as well. This is not ideal, but it is the best way to manage the conflicts without stalling LoongArch for another cycle. Another development inspired by LoongArch is the newly added support for EFI based decompressors. Instead of adding yet another arch-specific incarnation of this pattern for LoongArch, we are introducing an EFI app based on the existing EFI libstub infrastructure that encapsulates the decompression code we use on other architectures, but in a way that is fully generic. This has been developed and tested in collaboration with distro and systemd folks, who are eager to start using this for systemd-boot and also for arm64 secure boot on Fedora. Note that the EFI zimage files this introduces can also be decompressed by non-EFI bootloaders if needed, as the image header describes the location of the payload inside the image, and the type of compression that was used. (Note that Fedora's arm64 GRUB is buggy [0] so you'll need a recent version or switch to systemd-boot in order to use this.) Finally, we are adding TPM measurement of the kernel command line provided by EFI. There is an oversight in the TCG spec which results in a blind spot for command line arguments passed to loaded images, which means that either the loader or the stub needs to take the measurement. Given the combinatorial explosion I am anticipating when it comes to firmware/bootloader stacks and firmware based attestation protocols (SEV-SNP, TDX, DICE, DRTM), it is good to set a baseline now when it comes to EFI measured boot, which is that the kernel measures the initrd and command line. Intermediate loaders can measure additional assets if needed, but with the baseline in place, we can deploy measured boot in a meaningful way even if you boot into Linux straight from the EFI firmware.
Summary: - implement EFI boot support for LoongArch - implement generic EFI compressed boot support for arm64, RISC-V and LoongArch, none of which implement a decompressor today - measure the kernel command line into the TPM if measured boot is in effect - refactor the EFI stub code in order to isolate DT dependencies for architectures other than x86 - avoid calling SetVirtualAddressMap() on arm64 if the configured size of the VA space guarantees that doing so is unnecessary - move some ARM specific code out of the generic EFI source files - unmap kernel code from the x86 mixed mode 1:1 page tables" * tag 'efi-next-for-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi: (24 commits) efi/arm64: libstub: avoid SetVirtualAddressMap() when possible efi: zboot: create MemoryMapped() device path for the parent if needed efi: libstub: fix up the last remaining open coded boot service call efi/arm: libstub: move ARM specific code out of generic routines efi/libstub: measure EFI LoadOptions efi/libstub: refactor the initrd measuring functions efi/loongarch: libstub: remove dependency on flattened DT efi: libstub: install boot-time memory map as config table efi: libstub: remove DT dependency from generic stub efi: libstub: unify initrd loading between architectures efi: libstub: remove pointless goto kludge efi: libstub: simplify efi_get_memory_map() and struct efi_boot_memmap efi: libstub: avoid efi_get_memory_map() for allocating the virt map efi: libstub: drop pointless get_memory_map() call efi: libstub: fix type confusion for load_options_size arm64: efi: enable generic EFI compressed boot loongarch: efi: enable generic EFI compressed boot riscv: efi: enable generic EFI compressed boot efi/libstub: implement generic EFI zboot efi/libstub: move efi_system_table global var into separate object ...
2022-10-06  Merge tag 'arm-soc-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc  (Linus Torvalds)
Pull ARM SoC updates from Arnd Bergmann: "The main changes this time are for the organization of the Kconfig files, introducing per-vendor top-level options on arm64 to match those on arm32, and making the platform selection on arm32 more uniform, in particular for the remaining StrongARM platforms that still have a couple of special cases compared to the more recent ones. I also did a cleanup of the old Footbridge platform, which was the last holdout for the phys_to_dma()/dma_to_phys() interface that is now completely gone from arm32, completing work started by Christoph Hellwig" * tag 'arm-soc-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (21 commits) ARM: aspeed: Kconfig: Fix indentation ARM: Drop CMDLINE_* dependency on ATAGS ARM: Drop CMDLINE_FORCE dependency on !ARCH_MULTIPLATFORM ARM: s3c: remove orphan declarations from arch/arm/mach-s3c/devs.h pxa: Drop if with an always false condition ARM: orion: fix include path ARM: shmobile: Drop selecting SOC_BUS arm64: renesas: Drop selecting SOC_BUS ARM: disallow PCI with MMU=n again ARM: footbridge: remove custom DMA address handling MAINTAINERS: Add BCM4908 maintainer to BCMBCA entry ARM: footbridge: move isa-dma support into footbridge ARM: footbridge: remove leftover from personal-server ARM: footbridge: remove addin mode arm64: Kconfig.platforms: Group NXP platforms together arm64: Kconfig.platforms: Re-organized Broadcom menu ARM: make ARCH_MULTIPLATFORM user-visible ARM: fix XIP_KERNEL dependencies ARM: Kconfig: clean up platform selection ARM: simplify machdirs/platdirs handling ...
2022-10-06  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds)
Pull ARM updates from Russell King: - Print an un-hashed userspace PC on undefined instruction exception - Disable FDPIC ABI - Remove redundant vfp_flush/release_thread functions - Use raw_cpu_* rather than this_cpu_* in handle_bad_stack() - Avoid needlessly long backtraces when show_regs() is called - Fix an issue with stack traces through call_with_stack() - Avoid stack traces saving a duplicate exception PC value - Pass a void pointer to virt_to_page() in DMA mapping code - Fix kasan maps for modules when CONFIG_KASAN_VMALLOC=n - Show FDT region and page table level names in kernel page tables dump * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: 9246/1: dump: show page table level name ARM: 9245/1: dump: show FDT region ARM: 9242/1: kasan: Only map modules if CONFIG_KASAN_VMALLOC=n ARM: 9240/1: dma-mapping: Pass (void *) to virt_to_page() ARM: 9234/1: stacktrace: Avoid duplicate saving of exception PC value ARM: 9233/1: stacktrace: Skip frame pointer boundary check for call_with_stack() ARM: 9224/1: Dump the stack traces based on the parameter 'regs' of show_regs() ARM: 9232/1: Replace this_cpu_* with raw_cpu_* in handle_bad_stack() ARM: 9228/1: vfp: kill vfp_flush/release_thread() ARM: 9226/1: disable FDPIC ABI ARM: 9221/1: traps: print un-hashed user pc on undefined instruction
2022-10-04  ARM: 9234/1: stacktrace: Avoid duplicate saving of exception PC value  (Li Huafei)
Because an exception stack frame is not created in the exception entry, save_trace() does special handling for the exception PC, but this is only needed when CONFIG_FRAME_POINTER_UNWIND=y. When CONFIG_ARM_UNWIND=y, unwind annotations have been added to the exception entry and save_trace() will repeatedly save the exception PC: [0x7f000090] hrtimer_hander+0x8/0x10 [hrtimer] [0x8019ec50] __hrtimer_run_queues+0x18c/0x394 [0x8019f760] hrtimer_run_queues+0xbc/0xd0 [0x8019def0] update_process_times+0x34/0x80 [0x801ad2a4] tick_periodic+0x48/0xd0 [0x801ad3dc] tick_handle_periodic+0x1c/0x7c [0x8010f2e0] twd_handler+0x30/0x40 [0x80177620] handle_percpu_devid_irq+0xa0/0x23c [0x801718d0] generic_handle_domain_irq+0x24/0x34 [0x80502d28] gic_handle_irq+0x74/0x88 [0x8085817c] generic_handle_arch_irq+0x58/0x78 [0x80100ba8] __irq_svc+0x88/0xc8 [0x80108114] arch_cpu_idle+0x38/0x3c [0x80108114] arch_cpu_idle+0x38/0x3c <==== duplicate saved exception PC [0x80861bf8] default_idle_call+0x38/0x130 [0x8015d5cc] do_idle+0x150/0x214 [0x8015d978] cpu_startup_entry+0x18/0x1c [0x808589c0] rest_init+0xd8/0xdc [0x80c00a44] arch_post_acpi_subsys_init+0x0/0x8 We can move the special handling of the exception PC in save_trace() to the unwind_frame() of the frame pointer unwinder. Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Linus Waleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-10-04  ARM: 9233/1: stacktrace: Skip frame pointer boundary check for call_with_stack()  (Li Huafei)
When using the frame pointer unwinder, it was found that the stack trace output of stack_trace_save() is incomplete if the stack contains call_with_stack(): [0x7f00002c] dump_stack_task+0x2c/0x90 [hrtimer] [0x7f0000a0] hrtimer_hander+0x10/0x18 [hrtimer] [0x801a67f0] __hrtimer_run_queues+0x1b0/0x3b4 [0x801a7350] hrtimer_run_queues+0xc4/0xd8 [0x801a597c] update_process_times+0x3c/0x88 [0x801b5a98] tick_periodic+0x50/0xd8 [0x801b5bf4] tick_handle_periodic+0x24/0x84 [0x8010ffc4] twd_handler+0x38/0x48 [0x8017d220] handle_percpu_devid_irq+0xa8/0x244 [0x80176e9c] generic_handle_domain_irq+0x2c/0x3c [0x8052e3a8] gic_handle_irq+0x7c/0x90 [0x808ab15c] generic_handle_arch_irq+0x60/0x80 [0x8051191c] call_with_stack+0x1c/0x20 For the frame pointer unwinder, unwind_frame() checks stackframe::fp by stackframe::sp. Since call_with_stack() switches the SP from one stack to another, stackframe::fp and stackframe::sp will point to different stacks, so we can no longer check stackframe::fp by stackframe::sp. Skip checking stackframe::fp at this point to avoid this problem. Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-10-02  kbuild: use obj-y instead extra-y for objects placed at the head  (Masahiro Yamada)
The objects placed at the head of vmlinux need special treatments: - arch/$(SRCARCH)/Makefile adds them to head-y in order to place them before other archives in the linker command line. - arch/$(SRCARCH)/kernel/Makefile adds them to extra-y instead of obj-y to avoid them going into built-in.a. This commit gets rid of the latter. Create vmlinux.a to collect all the objects that are unconditionally linked to vmlinux. The objects listed in head-y are moved to the head of vmlinux.a by using 'ar m'. With this, arch/$(SRCARCH)/kernel/Makefile can consistently use obj-y for builtin objects. There is no *.o that is directly linked to vmlinux. Drop unneeded code in scripts/clang-tools/gen_compile_commands.py. $(AR) mPi needs 'T' to workaround the llvm-ar bug. The fix was suggested by Nathan Chancellor [1]. [1]: https://lore.kernel.org/llvm/YyjjT5gQ2hGMH0ni@dev-arch.thelio-3990X/ Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Nicolas Schier <nicolas@fjasle.eu>
2022-09-27  efi/arm: libstub: move ARM specific code out of generic routines  (Ard Biesheuvel)
Move some code that is only reachable when IS_ENABLED(CONFIG_ARM) into the ARM EFI arch code. Cc: Russell King <linux@armlinux.org.uk> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2022-09-22  ARM: 9224/1: Dump the stack traces based on the parameter 'regs' of show_regs()  (Zhen Lei)
Function show_regs() is usually called from an interrupt or exception handler: it prints the registers specified by the parameter 'regs' and then dumps the stack trace. Although not explicitly documented, dumping the stack trace based on 'regs' seems to make the most sense, because 'regs' was saved at the entry of the current interrupt or exception. dump_stack() does eventually print the desired content, but in the following example we can see that: 1) The backtrace of the interrupt or exception handler itself is not expected and causes confusion. 2) Some information is printed twice: the line with the kernel version "CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8", and the registers saved in the "Exception stack", which is what 'regs' actually points to. For example:
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 0-....: (499 ticks this GP) idle=379/1/0x40000002 softirq=91/91 fqs=249 (t=500 jiffies g=-911 q=13 ncpus=4)
CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8
Hardware name: ARM-Versatile Express
PC is at ktime_get+0x4c/0xe8
LR is at ktime_get+0x4c/0xe8
pc : 8019a474  lr : 8019a474  psr: 60000013
sp : cabd1f28  ip : 00000001  fp : 00000005
r10: 527bf1b8  r9 : 431bde82  r8 : d7b634db
r7 : 0000156e  r6 : 61f234f8  r5 : 00000001  r4 : 80ca86c0
r3 : ffffffff  r2 : fe5bce0b  r1 : 00000000  r0 : 01a431f4
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
Control: 10c5387d  Table: 6121406a  DAC: 00000051
CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8      <-----------start----------
Hardware name: ARM-Versatile Express                               |
 unwind_backtrace from show_stack+0x10/0x14                        |
 show_stack from dump_stack_lvl+0x40/0x4c                          |
 dump_stack_lvl from rcu_dump_cpu_stacks+0x10c/0x134               |
 rcu_dump_cpu_stacks from rcu_sched_clock_irq+0x780/0xaf4          |
 rcu_sched_clock_irq from update_process_times+0x54/0x74           |
 update_process_times from tick_periodic+0x3c/0xd4                 |
 tick_periodic from tick_handle_periodic+0x20/0x80             worthless
 tick_handle_periodic from twd_handler+0x30/0x40                   or
 twd_handler from handle_percpu_devid_irq+0x8c/0x1c8           duplicated
 handle_percpu_devid_irq from generic_handle_domain_irq+0x24/0x34  |
 generic_handle_domain_irq from gic_handle_irq+0x74/0x88           |
 gic_handle_irq from generic_handle_arch_irq+0x34/0x44             |
 generic_handle_arch_irq from call_with_stack+0x18/0x20            |
 call_with_stack from __irq_svc+0x98/0xb0                          |
Exception stack(0xcabd1ed8 to 0xcabd1f20)                          |
1ec0:                                     01a431f4 00000000       |
1ee0: fe5bce0b ffffffff 80ca86c0 00000001 61f234f8 0000156e d7b634db 431bde82
1f00: 527bf1b8 00000005 00000001 cabd1f28 8019a474 8019a474 60000013 ffffffff
 __irq_svc from ktime_get+0x4c/0xe8                     <---------end-------------
 ktime_get from test_task+0x44/0x110
 test_task from kthread+0xd8/0xf4
 kthread from ret_from_fork+0x14/0x2c
Exception stack(0xcabd1fb0 to 0xcabd1ff8)
1fa0:                                     00000000 00000000
1fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
1fe0: 00000000 00000000 00000000 00000000 00000013 00000000

After replacing dump_stack() with dump_backtrace():
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 0-....: (500 ticks this GP) idle=8f7/1/0x40000002 softirq=129/129 fqs=241 (t=500 jiffies g=-915 q=13 ncpus=4)
CPU: 0 PID: 69 Comm: test0 Not tainted 5.19.0+ #9
Hardware name: ARM-Versatile Express
PC is at ktime_get+0x4c/0xe8
LR is at ktime_get+0x4c/0xe8
pc : 8019a494  lr : 8019a494  psr: 60000013
sp : cabddf28  ip : 00000001  fp : 00000002
r10: 0779cb48  r9 : 431bde82  r8 : d7b634db
r7 : 00000a66  r6 : e835ab70  r5 : 00000001  r4 : 80ca86c0
r3 : ffffffff  r2 : ff337d39  r1 : 00000000  r0 : 00cc82c6
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
Control: 10c5387d  Table: 611d006a  DAC: 00000051
 ktime_get from test_task+0x44/0x110
 test_task from kthread+0xd8/0xf4
 kthread from ret_from_fork+0x14/0x2c
Exception stack(0xcabddfb0 to 0xcabddff8)
dfa0:                                     00000000 00000000
dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
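In code terms, the change described above amounts to roughly the following; this is a minimal sketch, assuming show_regs() lives in arch/arm/kernel/process.c and that dump_backtrace() has its (regs, tsk, loglvl) form, not the verbatim patch:

    #include <linux/kernel.h>
    #include <linux/sched/debug.h>
    #include <asm/ptrace.h>

    void show_regs(struct pt_regs *regs)
    {
            __show_regs(regs);
            /* dump_stack() unwinds from the current frame, i.e. from inside the
             * interrupt/exception handler that called show_regs(), so it prints
             * the handler's own frames and repeats the saved exception stack.
             * Unwinding from the saved register state avoids both problems.
             */
            dump_backtrace(regs, NULL, KERN_DEFAULT);
    }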
2022-09-15Merge branch 'arm-multiplatform-cleanup' of ↵Arnd Bergmann
https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc into arm/soc Now that everything except StrongARM is unified under CONFIG_ARCH_MULTIPLATFORM, the option is rather meaningless in its current form. Rework the Kconfig logic to make this useful again: similar to the way that RISC-V has CONFIG_NONPORTABLE (with the opposite polarity), it now controls the visibility of options that get in the way of building generic kernels, while still allowing custom kernels. One side-effect is that 'randconfig' builds now rarely hit StrongARM machines, rather than testing them three quarters of the time.
* 'arm-multiplatform-cleanup' of https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
  ARM: make ARCH_MULTIPLATFORM user-visible
  ARM: fix XIP_KERNEL dependencies
  ARM: Kconfig: clean up platform selection
  ARM: simplify machdirs/platdirs handling
  ARM: remove obsolete Makefile.boot infrastructure
2022-09-11kernel: exit: cleanup release_thread()Kefeng Wang
Only x86 has its own release_thread(); introduce a new weak release_thread() function to clean up the empty definitions in the other architectures. Link: https://lkml.kernel.org/r/20220819014406.32266-1-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Guo Ren <guoren@kernel.org> [csky] Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Brian Cain <bcain@quicinc.com> Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc] Acked-by: Stafford Horne <shorne@gmail.com> [openrisc] Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Acked-by: Huacai Chen <chenhuacai@kernel.org> [LoongArch] Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Chris Zankel <chris@zankel.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Dinh Nguyen <dinguyen@kernel.org> Cc: Guo Ren <guoren@kernel.org> [csky] Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Johannes Berg <johannes@sipsolutions.net> Cc: Jonas Bonn <jonas@southpole.se> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Henderson <richard.henderson@linaro.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vineet Gupta <vgupta@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Xuerui Wang <kernel@xen0n.name> Cc: Yoshinori Sato <ysato@users.osdn.me> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
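A weak default of the following shape is what the description above amounts to; a minimal sketch assuming the stub takes the dying task as its argument, as the per-architecture definitions do, not the verbatim patch:

    #include <linux/compiler.h>
    #include <linux/sched.h>
    #include <linux/sched/task.h>

    /* Generic fallback: most architectures have nothing to release here.
     * x86 keeps its own (strong) definition, which overrides this one.
     */
    void __weak release_thread(struct task_struct *dead_task)
    {
    }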
2022-09-09ARM: footbridge: move isa-dma support into footbridgeArnd Bergmann
The dma-isa.c file was shared between footbridge and shark a long time ago, but since shark has been removed, it can be made footbridge-specific again. The fb_dma bits in turn are not used at all and can be removed. All the ISA-related files are now built into the platform regardless of CONFIG_ISA, as they just refer to on-chip devices rather than actual ISA cards. Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-09-05asm-generic: Conditionally enable do_softirq_own_stack() via Kconfig.Sebastian Andrzej Siewior
Remove the CONFIG_PREEMPT_RT symbol from the ifdef around do_softirq_own_stack() and move it to Kconfig instead. Enable softirq stacks based on SOFTIRQ_ON_OWN_STACK, which depends on HAVE_SOFTIRQ_ON_OWN_STACK and defaults to !PREEMPT_RT. This ensures that softirq stacks are not used on PREEMPT_RT and avoids a 'select' statement on an option which has a 'depends' statement. Link: https://lore.kernel.org/YvN5E%2FPrHfUhggr7@linutronix.de Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
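As a rough sketch of the mechanism (the exact header and wording are assumptions here, not quoted from the patch), the guard around do_softirq_own_stack() ends up testing a single Kconfig-computed symbol instead of spelling out both conditions:

    /* Previously the guard spelled out both conditions:
     *   #if defined(CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK) && !defined(CONFIG_PREEMPT_RT)
     * Now Kconfig computes SOFTIRQ_ON_OWN_STACK (depends on
     * HAVE_SOFTIRQ_ON_OWN_STACK, default !PREEMPT_RT), so the header only
     * needs to test that one symbol.
     */
    #ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
    void do_softirq_own_stack(void);
    #else
    static inline void do_softirq_own_stack(void)
    {
            __do_softirq();
    }
    #endif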
2022-08-31ARM: 9232/1: Replace this_cpu_* with raw_cpu_* in handle_bad_stack()Zhen Lei
The hardware automatically disables IRQs before jumping to the interrupt or exception vector, so the preempt_disable() that this_cpu_read() expands to is unnecessary. Worse, this_cpu_read() may even trigger scheduling when it re-enables preemption, see the pseudocode below.

Pseudocode of this_cpu_read(xx):
	preempt_disable_notrace();
	raw_cpu_read(xx);
	if (unlikely(__preempt_count_dec_and_test()))
		__preempt_schedule_notrace();

Therefore, use raw_cpu_* instead of this_cpu_* to eliminate the potential hazard; at the very least it saves a few lines of assembly code. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
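The difference can be illustrated with an ordinary per-CPU variable; example_ptr and probe_in_exception_path() below are hypothetical names used only for illustration, not code from the patch:

    #include <linux/percpu.h>
    #include <linux/preempt.h>

    static DEFINE_PER_CPU(unsigned long, example_ptr);

    static void probe_in_exception_path(void)
    {
            /* this_cpu_read() brackets the access with preempt_disable_notrace()
             * and preempt_enable_notrace(); the enable half can end up in
             * __preempt_schedule_notrace(), as the pseudocode above shows.
             */
            unsigned long a = this_cpu_read(example_ptr);

            /* With IRQs already disabled by the exception entry, the bare
             * per-CPU access is sufficient and cheaper.
             */
            unsigned long b = raw_cpu_read(example_ptr);

            (void)a;
            (void)b;
    }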
2022-08-30ARM: 9221/1: traps: print un-hashed user pc on undefined instructionBaruch Siach
When user undefined instruction debug is enabled, the pc value is hashed like kernel pointers for security reasons. But the security benefit of this hashing is very limited, because the code goes on to call __show_regs(), which prints the plain pointer value. pc is a user pointer anyway, so the kernel does not leak anything. The only result is confusion about the difference between the pc value on the first printed line and the value that __show_regs() prints. Always print the plain value of pc. Signed-off-by: Baruch Siach <baruch@tkos.co.il> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
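For orientation, a hypothetical helper of the following shape shows what "print the plain value" means in printk terms; the %px specifier (which prints an unhashed pointer) is an assumption about the mechanism used, not a quote of the patch:

    #include <linux/kernel.h>
    #include <linux/printk.h>
    #include <linux/sched.h>

    /* report_undefined_instruction() is a hypothetical helper for illustration. */
    static void report_undefined_instruction(unsigned long pc)
    {
            /* "%p" would hash the value; "%px" prints it unmodified.  That is
             * acceptable here: pc is a user-space pointer, and __show_regs()
             * prints the unhashed register values right after this anyway.
             */
            pr_info("%s (%d): undefined instruction: pc=%px\n",
                    current->comm, task_pid_nr(current), (void *)pc);
    }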
2022-08-30ARM: make ARCH_MULTIPLATFORM user-visibleArnd Bergmann
Some options like CONFIG_DEBUG_UNCOMPRESS and CONFIG_CMDLINE_FORCE are fundamentally incompatible with portable kernels but are currently allowed in all configurations. Other options like XIP_KERNEL are essentially useless after the completion of the multiplatform conversion. Repurpose the existing CONFIG_ARCH_MULTIPLATFORM option to decide whether the resulting kernel image is meant to be portable or not, and use it to guard all of the known incompatible options. This is similar to how the RISC-V kernel handles the CONFIG_NONPORTABLE option (with the opposite polarity). A few references to CONFIG_ARCH_MULTIPLATFORM were left behind by earlier cleanups and have to be removed now. Signed-off-by: Arnd Bergmann <arnd@arndb.de>