2020-11-25perf/smmuv3: Support sysfs identifier fileJohn Garry
SMMU_PMCG_IIDR was added in the SMMUv3.3 spec. For the perf tool to know the specific HW implementation, expose the PMCG_IIDR contents only when set. Signed-off-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1602149181-237415-5-git-send-email-john.garry@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
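A minimal sketch of the conditional-exposure pattern this relies on: sysfs attribute groups take an is_visible() callback that can hide a file at registration time. The smmu_pmu struct and to_smmu_pmu() helper names are assumed driver internals here, not quoted from the patch.

    static umode_t smmu_pmu_identifier_attr_visible(struct kobject *kobj,
                                                    struct attribute *attr,
                                                    int n)
    {
            struct device *dev = kobj_to_dev(kobj);
            struct smmu_pmu *smmu_pmu = to_smmu_pmu(dev_get_drvdata(dev)); /* assumed helper */

            /* hide the identifier file when SMMU_PMCG_IIDR reads as zero */
            if (!smmu_pmu->iidr)
                    return 0;
            return attr->mode;
    }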
2020-11-25drivers/perf: hisi: Add identifier sysfs fileJohn Garry
To allow userspace to identify the specific implementation of the device, add an "identifier" sysfs file. Encoding is as follows (same for all uncore drivers): hi1620: 0x0 hi1630: 0x30 Signed-off-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1602149181-237415-2-git-send-email-john.garry@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2020-11-25perf: remove duplicate check on fwnodeWang Qing
fwnode is already checked with IS_ERR_OR_NULL() inside the subsequent is_of_node() and is_acpi_device_node() calls, so remove the duplicate check. Signed-off-by: Wang Qing <wangqing@vivo.com> Link: https://lore.kernel.org/r/1604644902-29655-1-git-send-email-wangqing@vivo.com Signed-off-by: Will Deacon <will@kernel.org>
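A sketch of the before/after shape of such a cleanup; the setup_from_*() callers are hypothetical stand-ins, not the patched function:

    /* before: redundant guard */
    if (IS_ERR_OR_NULL(fwnode))
            return -EINVAL;

    /* after: no explicit guard needed, since both helpers below already
     * reject IS_ERR_OR_NULL() fwnodes internally */
    if (is_of_node(fwnode))
            ret = setup_from_of(fwnode);            /* hypothetical */
    else if (is_acpi_device_node(fwnode))
            ret = setup_from_acpi(fwnode);          /* hypothetical */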
2020-11-25driver/perf: Add PMU driver for the ARM DMC-620 memory controllerTuan Phan
The DMC-620 PMU supports a total of 10 counters, each of which is independently programmable to different events and can be started and stopped individually. Currently, it only supports ACPI. Other platforms are welcome to test and add device tree support. Usage example: #perf stat -e arm_dmc620_10008c000/clk_cycle_count/ -C 0 Gets the perf event for the clk_cycle_count counter. #perf stat -e arm_dmc620_10008c000/clkdiv2_allocate,mask=0x1f,match=0x2f,incr=2,invert=1/ -C 0 The above example shows how to specify the mask, match, incr and invert parameters for the clkdiv2_allocate event. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Tuan Phan <tuanphan@os.amperecomputing.com> Link: https://lore.kernel.org/r/1604518246-6198-1-git-send-email-tuanphan@os.amperecomputing.com Signed-off-by: Will Deacon <will@kernel.org>
2020-11-23arm64: expose FAR_EL1 tag bits in siginfoPeter Collingbourne
The kernel currently clears the tag bits (i.e. bits 56-63) in the fault address exposed via siginfo.si_addr and sigcontext.fault_address. However, the tag bits may be needed by tools in order to accurately diagnose memory errors, such as HWASan [1] or future tools based on the Memory Tagging Extension (MTE). Expose these bits via the arch_untagged_si_addr mechanism, so that they are only exposed to signal handlers with the SA_EXPOSE_TAGBITS flag set. [1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://linux-review.googlesource.com/id/Ia8876bad8c798e0a32df7c2ce1256c4771c81446 Link: https://lore.kernel.org/r/0010296597784267472fa13b39f8238d87a72cf8.1605904350.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
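A hedged userspace sketch of consuming the exposed tag bits; the handler body is illustrative (what you do with the tag is up to the tool), and SA_EXPOSE_TAGBITS is defined locally in case libc headers lag behind the uapi:

    #include <signal.h>
    #include <stdint.h>
    #include <stdlib.h>

    #ifndef SA_EXPOSE_TAGBITS
    #define SA_EXPOSE_TAGBITS 0x00000800   /* uapi value */
    #endif

    static void segv_handler(int sig, siginfo_t *info, void *ucontext)
    {
            uintptr_t addr = (uintptr_t)info->si_addr;
            uintptr_t tag = (addr >> 56) & 0xff;  /* bits 56-63, only exposed
                                                     with SA_EXPOSE_TAGBITS */
            /* diagnose the fault using 'tag', e.g. compare with allocator tags */
            _Exit((int)tag);
    }

    int main(void)
    {
            struct sigaction sa = { 0 };

            sa.sa_sigaction = segv_handler;
            sa.sa_flags = SA_SIGINFO | SA_EXPOSE_TAGBITS;
            sigaction(SIGSEGV, &sa, NULL);
            /* ... dereference a faulting tagged pointer here ... */
            return 0;
    }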
2020-11-23signal: define the SA_EXPOSE_TAGBITS bit in sa_flagsPeter Collingbourne
Architectures that support address tagging, such as arm64, may want to expose fault address tag bits to the signal handler to help diagnose memory errors. However, these bits have not been previously set, and their presence may confuse unaware user applications. Therefore, introduce a SA_EXPOSE_TAGBITS flag bit in sa_flags that a signal handler may use to explicitly request that the bits are set. The generic signal handler APIs expect to receive tagged addresses. Architectures may specify how to untag addresses in the case where SA_EXPOSE_TAGBITS is clear by defining the arch_untagged_si_addr function. Signed-off-by: Peter Collingbourne <pcc@google.com> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Link: https://linux-review.googlesource.com/id/I16dd0ed2081f091fce97be0190cb8caa874c26cb Link: https://lkml.kernel.org/r/13cf24d00ebdd8e1f55caf1821c7c29d54100191.1605904350.git.pcc@google.com Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
2020-11-23signal: define the SA_UNSUPPORTED bit in sa_flagsPeter Collingbourne
Define a sa_flags bit, SA_UNSUPPORTED, which will never be supported in the uapi. The purpose of this flag bit is to allow userspace to distinguish an old kernel that does not clear unknown sa_flags bits from a kernel that supports every flag bit. In other words, if userspace does something like: act.sa_flags |= SA_UNSUPPORTED; sigaction(SIGSEGV, &act, 0); sigaction(SIGSEGV, 0, &oldact); and finds that SA_UNSUPPORTED remains set in oldact.sa_flags, it means that the kernel cannot be trusted to have cleared unknown flag bits from sa_flags, so no assumptions about flag bit support can be made. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://linux-review.googlesource.com/id/Ic2501ad150a3a79c1cf27fb8c99be342e9dffbcb Link: https://lkml.kernel.org/r/bda7ddff8895a9bc4ffc5f3cf3d4d37a32118077.1605582887.git.pcc@google.com Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
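A minimal, self-contained version of that probe; the flag value is taken from the new uapi definition and defined locally in case libc headers do not yet carry it:

    #include <signal.h>
    #include <stdio.h>

    #ifndef SA_UNSUPPORTED
    #define SA_UNSUPPORTED 0x00000400   /* uapi value */
    #endif

    int main(void)
    {
            struct sigaction act = { 0 }, oldact;

            act.sa_handler = SIG_DFL;
            act.sa_flags = SA_UNSUPPORTED;
            sigaction(SIGSEGV, &act, NULL);
            sigaction(SIGSEGV, NULL, &oldact);

            if (oldact.sa_flags & SA_UNSUPPORTED)
                    puts("kernel does not clear unknown sa_flags bits");
            else
                    puts("kernel clears unknown bits; flag probing is reliable");
            return 0;
    }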
2020-11-23arch: provide better documentation for the arch-specific SA_* flagsPeter Collingbourne
Instead of documenting the arch-specific flag values in a comment at the top where they may be easily overlooked, document them in comments inline with the definitions in numerical order so that it is clear why specific values must be chosen for new generic flags and to reduce the likelihood of conflicts between generic and arch-specific flags. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/I40a129cf7c3a71ba1bfd6d936c544072ee3b7ce6 Link: https://lkml.kernel.org/r/198c8b68c76bf3ed73117d817c7cdf9bc0eb174f.1605582887.git.pcc@google.com Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
2020-11-23signal: clear non-uapi flag bits when passing/returning sa_flagsPeter Collingbourne
Previously we were not clearing non-uapi flag bits in sigaction.sa_flags when storing the userspace-provided sa_flags or when returning them via oldact. Start doing so. This allows userspace to detect missing support for flag bits and allows the kernel to use non-uapi bits internally, as we are already doing in arch/x86 for two flag bits. Now that this change is in place, we no longer need the code in arch/x86 that was hiding these bits from userspace, so remove it. This is technically a userspace-visible behavior change for sigaction, as the unknown bits returned via oldact.sa_flags are no longer set. However, we are free to define the behavior for unknown bits exactly because their behavior is currently undefined, so for now we can define the meaning of each of them to be "clear the bit in oldact.sa_flags unless the bit becomes known in the future". Furthermore, this behavior is consistent with OpenBSD [1], illumos [2] and XNU [3] (FreeBSD [4] and NetBSD [5] fail the syscall if unknown bits are set). So there is some precedent for this behavior in other kernels, and in particular in XNU, which is probably the most popular kernel among those that I looked at, which means that this change is less likely to be a compatibility issue. Link: [1] https://github.com/openbsd/src/blob/f634a6a4b5bf832e9c1de77f7894ae2625e74484/sys/kern/kern_sig.c#L278 Link: [2] https://github.com/illumos/illumos-gate/blob/76f19f5fdc974fe5be5c82a556e43a4df93f1de1/usr/src/uts/common/syscall/sigaction.c#L86 Link: [3] https://github.com/apple/darwin-xnu/blob/a449c6a3b8014d9406c2ddbdc81795da24aa7443/bsd/kern/kern_sig.c#L480 Link: [4] https://github.com/freebsd/freebsd/blob/eded70c37057857c6e23fae51f86b8f8f43cd2d0/sys/kern/kern_sig.c#L699 Link: [5] https://github.com/NetBSD/src/blob/3365779becdcedfca206091a645a0e8e22b2946e/sys/kern/sys_sig.c#L473 Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Link: https://linux-review.googlesource.com/id/I35aab6f5be932505d90f3b3450c083b4db1eca86 Link: https://lkml.kernel.org/r/878dbcb5f47bc9b11881c81f745c0bef5c23f97f.1605235762.git.pcc@google.com Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
2020-11-23arch: move SA_* definitions to generic headersPeter Collingbourne
Most architectures with the exception of alpha, mips, parisc and sparc use the same values for these flags. Move their definitions into asm-generic/signal-defs.h and allow the architectures with non-standard values to override them. Also, document the non-standard flag values in order to make it easier to add new generic flags in the future. A consequence of this change is that on powerpc and x86, the constants' values aside from SA_RESETHAND change signedness from unsigned to signed. This is not expected to impact realistic use of these constants. In particular the typical use of the constants where they are or'ed together and assigned to sa_flags (or another int variable) would not be affected. Signed-off-by: Peter Collingbourne <pcc@google.com> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://linux-review.googlesource.com/id/Ia3849f18b8009bf41faca374e701cdca36974528 Link: https://lkml.kernel.org/r/b6d0d1ec34f9ee93e1105f14f288fba5f89d1f24.1605235762.git.pcc@google.com Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
2020-11-23parisc: start using signal-defs.hPeter Collingbourne
We currently include signal-defs.h on all architectures except parisc. Make parisc fall in line. This will make maintenance easier once the flag bits are moved here. Signed-off-by: Peter Collingbourne <pcc@google.com> Acked-by: Helge Deller <deller@gmx.de> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Link: https://linux-review.googlesource.com/id/If03a5135fb514fe96548fb74610e6c3586a04064 Link: https://lkml.kernel.org/r/be8f3680ef2d0a1a120994e3ae0b11d82f373279.1605235762.git.pcc@google.com Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
2020-11-23parisc: Drop parisc special case for __sighandler_tHelge Deller
I believe we can and *should* drop this parisc-specific typedef for __sighandler_t when compiling a 64-bit kernel. The reasons: 1. We don't have a 64-bit userspace yet, so nothing (on the userspace side) can break. 2. Inside the Linux kernel, this is only used in kernel/signal.c, in the function kernel_sigaction() where the signal handler is compared against SIG_IGN. SIG_IGN is defined as (__sighandler_t)1, so only the pointers are compared. 3. Even when a 64-bit userspace gets added at some point, I think __sighandler_t should be defined as what it is: a function pointer struct. I compiled kernel/signal.c with and without the patch, and the produced code is identical in both cases. Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Peter Collingbourne <pcc@google.com> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Reviewed-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/I21c43f21b264f339e3aa395626af838646f62d97 Link: https://lkml.kernel.org/r/a75b8eb7bb9eac1cf73fb119eb53e5892d6e9656.1605235762.git.pcc@google.com Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
2020-11-23arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()Will Deacon
With hardware dirty bit management, calling pte_wrprotect() on a writable, dirty PTE will lose the dirty state and return a read-only, clean entry. Move the logic from ptep_set_wrprotect() into pte_wrprotect() to ensure that the dirty bit is preserved for writable entries, as this is required for soft-dirty bit management if we enable it in the future. Cc: <stable@vger.kernel.org> Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits") Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20201120143557.6715-3-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
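The gist of the fix, sketched; pte_hw_dirty(), clear_pte_bit() and set_pte_bit() are existing arm64 helpers, and the exact body should be read from the patch itself:

    static inline pte_t pte_wrprotect(pte_t pte)
    {
            /*
             * If hardware DBM marked the entry dirty (writable and not
             * read-only), fold that into the software dirty bit before
             * removing write permission, so the state is not lost.
             */
            if (pte_hw_dirty(pte))
                    pte = pte_mkdirty(pte);

            pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
            pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
            return pte;
    }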
2020-11-23arm64: pgtable: Fix pte_accessible()Will Deacon
pte_accessible() is used by ptep_clear_flush() to figure out whether TLB invalidation is necessary when unmapping pages for reclaim. Although our implementation is correct according to the architecture, returning true only for valid, young ptes in the absence of racing page-table modifications, this is in fact flawed due to lazy invalidation of old ptes in ptep_clear_flush_young() where we elide the expensive DSB instruction for completing the TLB invalidation. Rather than penalise the aging path, adjust pte_accessible() to return true for any valid pte, even if the access flag is cleared. Cc: <stable@vger.kernel.org> Fixes: 76c714be0e5e ("arm64: pgtable: implement pte_accessible()") Reported-by: Yu Zhao <yuzhao@google.com> Acked-by: Yu Zhao <yuzhao@google.com> Reviewed-by: Minchan Kim <minchan@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20201120143557.6715-2-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
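The change itself is small; roughly:

    /* before: misses old-but-valid ptes whose TLB entries were kept
     * by the lazy ptep_clear_flush_young() path */
    #define pte_accessible(mm, pte) \
            (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid_young(pte))

    /* after: any valid pte may still be cached in the TLB */
    #define pte_accessible(mm, pte) \
            (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))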
2020-11-23ACPI/IORT: Fix doc warnings in iort.cShiju Jose
Fix following warnings caused by mismatch between function parameters and function comments. drivers/acpi/arm64/iort.c:55: warning: Function parameter or member 'iort_node' not described in 'iort_set_fwnode' drivers/acpi/arm64/iort.c:55: warning: Excess function parameter 'node' description in 'iort_set_fwnode' drivers/acpi/arm64/iort.c:682: warning: Function parameter or member 'id' not described in 'iort_get_device_domain' drivers/acpi/arm64/iort.c:682: warning: Function parameter or member 'bus_token' not described in 'iort_get_device_domain' drivers/acpi/arm64/iort.c:682: warning: Excess function parameter 'req_id' description in 'iort_get_device_domain' drivers/acpi/arm64/iort.c:1142: warning: Function parameter or member 'dma_size' not described in 'iort_dma_setup' drivers/acpi/arm64/iort.c:1142: warning: Excess function parameter 'size' description in 'iort_dma_setup' drivers/acpi/arm64/iort.c:1534: warning: Function parameter or member 'ops' not described in 'iort_add_platform_device' Signed-off-by: Shiju Jose <shiju.jose@huawei.com> Acked-by: Hanjun Guo <guohanjun@huawei.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/20201014093139.1580-1-shiju.jose@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2020-11-23arm64/fpsimd: add <asm/insn.h> to <asm/kprobes.h> to fix fpsimd buildRandy Dunlap
Adding <asm/exception.h> brought in <asm/kprobes.h> which uses <asm/probes.h>, which uses 'pstate_check_t' so the latter needs to #include <asm/insn.h> for this typedef. Fixes this build error: In file included from arch/arm64/include/asm/kprobes.h:24, from arch/arm64/include/asm/exception.h:11, from arch/arm64/kernel/fpsimd.c:35: arch/arm64/include/asm/probes.h:16:2: error: unknown type name 'pstate_check_t' 16 | pstate_check_t *pstate_cc; Fixes: c6b90d5cf637 ("arm64/fpsimd: Fix missing-prototypes in fpsimd.c") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: Tian Tao <tiantao6@hisilicon.com> Link: https://lore.kernel.org/r/20201123044510.9942-1-rdunlap@infradead.org Signed-off-by: Will Deacon <will@kernel.org>
2020-11-20mm: Remove examples from enum zone_type commentNicolas Saenz Julienne
We can't really list every setup in common code. On top of that they are unlikely to stay true for long as things change in the arch trees independently of this comment. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20201119175400.9995-8-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-20arm64: mm: Set ZONE_DMA size based on early IORT scanArd Biesheuvel
We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB). Instructing the DMA layer about these limitations is straightforward, even though we had to fix some issues regarding memory limits set in the IORT for named components, and regarding the handling of ACPI _DMA methods. However, the DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out that having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible. So let's do an early scan of the IORT, and only create the ZONE_DMA if we encounter any devices that need it. This puts the burden on the firmware to describe such limitations in the IORT, which may be redundant (and less precise) if _DMA methods are also being provided. However, it should be noted that this situation is highly unusual for arm64 ACPI machines. Also, the DMA subsystem still gives precedence to the _DMA method if implemented, and so we will not lose the ability to perform streaming DMA outside the ZONE_DMA if the _DMA method permits it. [nsaenz: unified implementation with DT's counterpart] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <guohanjun@huawei.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Cc: Rob Herring <robh+dt@kernel.org> Cc: Christoph Hellwig <hch@lst.de> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Hanjun Guo <guohanjun@huawei.com> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20201119175400.9995-7-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-20arm64: mm: Set ZONE_DMA size based on devicetree's dma-rangesNicolas Saenz Julienne
We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB). The DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out that having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible. So, with the help of of_dma_get_max_cpu_address(), get the topmost physical address accessible to all DMA masters in the system and use that information to fine-tune ZONE_DMA's size. In the absence of addressing-limited masters, ZONE_DMA will span the whole 32-bit address space; otherwise, in the case of the Raspberry Pi 4, it'll only span the 30-bit address space, and have ZONE_DMA32 cover the rest of the 32-bit address space. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Link: https://lore.kernel.org/r/20201119175400.9995-6-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
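Combined with the IORT scan above, zone_sizes_init() ends up looking roughly like this sketch (of_dma_get_max_cpu_address() is introduced by a commit listed below; treat variable names as illustrative):

    #ifdef CONFIG_ZONE_DMA
            /* take the most restrictive limit reported by ACPI (IORT) or DT */
            unsigned int acpi_zone_dma_bits =
                    fls64(acpi_iort_dma_get_max_cpu_address());
            unsigned int dt_zone_dma_bits =
                    fls64(of_dma_get_max_cpu_address(NULL));

            zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
            arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
            max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
    #endif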
2020-11-20of: unittest: Add test for of_dma_get_max_cpu_address()Nicolas Saenz Julienne
Introduce a test for of_dma_get_max_cpu_address(); it uses the same DT data as the rest of the dma-ranges unit tests. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20201119175400.9995-5-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-20of/address: Introduce of_dma_get_max_cpu_address()Nicolas Saenz Julienne
Introduce of_dma_get_max_cpu_address(), which provides the highest CPU physical address addressable by all DMA masters in the system. It's especially useful for setting memory zone sizes at early boot time. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20201119175400.9995-4-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-20arm64: mm: Move zone_dma_bits initialization into zone_sizes_init()Nicolas Saenz Julienne
zone_dma_bits's initialization happens earlier than it's actually needed, in arm64_memblock_init(). So move it into the more suitable zone_sizes_init(). Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Link: https://lore.kernel.org/r/20201119175400.9995-3-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-20arm64: mm: Move reserve_crashkernel() into mem_init()Nicolas Saenz Julienne
crashkernel might reserve memory located in ZONE_DMA. We plan to delay ZONE_DMA's initialization after unflattening the devicetree and ACPI's boot table initialization, so move it later in the boot process. Specifically into bootmem_init() since request_standard_resources() depends on it. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Link: https://lore.kernel.org/r/20201119175400.9995-2-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-20arm64: Force NO_BLOCK_MAPPINGS if crashkernel reservation is requiredCatalin Marinas
mem_init() currently relies on knowing the boundaries of the crashkernel reservation to map such region with page granularity for later unmapping via set_memory_valid(..., 0). If the crashkernel reservation is deferred, such boundaries are not known when the linear mapping is created. Simply parse the command line for "crashkernel" and, if found, create the linear map with NO_BLOCK_MAPPINGS. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Acked-by: James Morse <james.morse@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Link: https://lore.kernel.org/r/20201119175556.18681-1-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
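The mechanism is a simple early_param() hook plus a flag consulted when the linear map is created; a rough sketch:

    static bool crash_mem_map __initdata;

    static int __init enable_crash_mem_map(char *arg)
    {
            /*
             * Proper parameter parsing is done by reserve_crashkernel();
             * here we only need to know the option was on the command line.
             */
            crash_mem_map = true;
            return 0;
    }
    early_param("crashkernel", enable_crash_mem_map);

    /* ... and when creating the linear map: */
    if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
            flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;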
2020-11-19arm64: Ignore any DMA offsets in the max_zone_phys() calculationCatalin Marinas
Currently, the kernel assumes that if RAM starts above 32-bit (or zone_bits), there is still a ZONE_DMA/DMA32 at the bottom of the RAM and such constrained devices have a hardwired DMA offset. In practice, we haven't noticed any such hardware, so let's assume that we can expand ZONE_DMA32 to the available memory if there is no RAM below 4GB. Similarly, ZONE_DMA is expanded to the 4GB limit if there is no RAM addressable by zone_bits. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Cc: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20201118185809.1078362-1-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
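The resulting zone sizing logic, sketched (read the patch for the authoritative version):

    static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
    {
            phys_addr_t zone_mask = (1ULL << zone_bits) - 1;
            phys_addr_t phys_start = memblock_start_of_DRAM();

            if (phys_start > U32_MAX)               /* no RAM below 4GB */
                    zone_mask = PHYS_ADDR_MAX;
            else if (phys_start > zone_mask)        /* none below zone_bits */
                    zone_mask = U32_MAX;

            return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
    }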
2020-11-18arm64: mte: optimize asynchronous tag check fault flag checkPeter Collingbourne
We don't need to check for MTE support before checking the flag because it can only be set if the hardware supports MTE. As a result we can unconditionally check the flag bit which is expected to be in a register and therefore the check can be done in a single instruction instead of first needing to load the hwcaps. On a DragonBoard 845c with a kernel built with CONFIG_ARM64_MTE=y with the powersave governor this reduces the cost of a kernel entry/exit (invalid syscall) from 465.1ns to 463.8ns. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/If4dc3501fd4e4f287322f17805509613cfe47d24 Link: https://lore.kernel.org/r/20201118032051.1405907-1-pcc@google.com [catalin.marinas@arm.com: remove IS_ENABLED(CONFIG_ARM64_MTE)] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-17arm64/mm: add fallback option to allocate virtually contiguous memorySudarshan Rajagopalan
When section mappings are enabled, we allocate vmemmap pages from physically contiguous memory of size PMD_SIZE using vmemmap_alloc_block_buf(). Section mappings are good to reduce TLB pressure. But when the system is highly fragmented and memory blocks are being hot-added at runtime, it's possible that such physically contiguous memory allocations can fail. Rather than failing the memory hot-add procedure, add a fallback option to allocate vmemmap pages from discontiguous pages using vmemmap_populate_basepages(). Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Will Deacon <will@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Logan Gunthorpe <logang@deltatee.com> Cc: David Hildenbrand <david@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/d6c06f2ef39bbe6c715b2f6db76eb16155fdcee6.1602722808.git.sudaraja@codeaurora.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
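The fallback amounts to a small change in the per-PMD loop of vmemmap_populate(); a sketch of the shape:

    void *p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);

    if (p) {
            /* fast path: one section mapping backed by a physically
             * contiguous PMD_SIZE allocation */
            pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
    } else if (vmemmap_populate_basepages(addr, next, node, altmap)) {
            /* populating with discontiguous base pages also failed */
            return -ENOMEM;
    }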
2020-11-17arm64: head: tidy up the Image header definitionArd Biesheuvel
Even though support for EFI boot remains entirely optional for arm64, it is unlikely that we will ever be able to repurpose the image header fields that the EFI loader relies on, i.e., the magic NOP at offset 0x0 and the PE header address at offset 0x3c. So let's factor out the differences into an 'efi_signature_nop' macro and a local symbol representing the PE header address, and move the conditional definitions into efi-header.S, taking into account whether CONFIG_EFI is enabled or not. While at it, switch to a signature NOP that behaves more like a NOP, i.e., one that only clobbers the flags. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201117124729.12642-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-17arm64/head: avoid symbol names pointing into first 64 KB of kernel imageArd Biesheuvel
We no longer map the first 64 KB of the kernel image, as there is nothing there that we ever need to refer back to once the kernel has booted. Even though facilities like kallsyms are very careful to only refer to the region that starts at _stext when mapping virtual addresses to symbol names, let's avoid any confusion by switching to local .L prefixed symbol names for the EFI header, as none of them have any significance to the rest of the kernel. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201117124729.12642-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-17arm64: omit [_text, _stext) from permanent kernel mappingArd Biesheuvel
In a previous patch, we increased the size of the EFI PE/COFF header to 64 KB, which resulted in the _stext symbol to appear at a fixed offset of 64 KB into the image. Since 64 KB is also the largest page size we support, this completely removes the need to map the first 64 KB of the kernel image, given that it only contains the arm64 Image header and the EFI header, neither of which we ever access again after booting the kernel. More importantly, we should avoid an executable mapping of non-executable and not entirely predictable data, to deal with the unlikely event that we inadvertently emitted something that looks like an opcode that could be used as a gadget for speculative execution. So let's limit the kernel mapping of .text to the [_stext, _etext) region, which matches the view of generic code (such as kallsyms) when it reasons about the boundaries of the kernel's .text section. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201117124729.12642-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-13arm64: abort counter_read_on_cpu() when irqs_disabled()Ionela Voinescu
Given that smp_call_function_single() can deadlock when interrupts are disabled, abort the SMP call if irqs_disabled(). This scenario is currently not possible given the function's uses, but safeguard this for potential future uses. Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201113155328.4194-1-ionela.voinescu@arm.com [catalin.marinas@arm.com: modified following Mark's comment] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
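Sketch of the guard, with counters_read_on_cpu() standing in for the wrapper around the per-CPU read:

    static int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
    {
            /*
             * smp_call_function_single() can deadlock when called with
             * interrupts disabled, so refuse the read in that case.
             */
            if (WARN_ON_ONCE(irqs_disabled()))
                    return -EPERM;

            smp_call_function_single(cpu, func, val, 1);
            return 0;
    }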
2020-11-13arm64: implement CPPC FFH support using AMUsIonela Voinescu
If Activity Monitors (AMUs) are present, two of the counters can be used to implement support for CPPC's (Collaborative Processor Performance Control) delivered and reference performance monitoring functionality using FFH (Functional Fixed Hardware). Given that counters for a certain CPU can only be read from that CPU, while FFH operations can be called from any CPU for any of the CPUs, use smp_call_function_single() to provide the requested values. Therefore, depending on the register addresses, the following values are returned: - 0x0 (DeliveredPerformanceCounterRegister): AMU core counter - 0x1 (ReferencePerformanceCounterRegister): AMU constant counter The use of Activity Monitors is hidden behind the generic cpu_read_{corecnt,constcnt}() functions. Read functionality for these two registers represents the only current FFH support for CPPC. Read operations for other register values or write operation for all registers are unsupported. Therefore, keep CPPC's FFH unsupported if no CPUs have valid AMU frequency counters. For this purpose, the get_cpu_with_amu_feat() is introduced. Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201106125334.21570-4-ionela.voinescu@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
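The FFH read dispatch then reduces to a small switch on the register address; a sketch, where store_cpu_corecnt/store_cpu_constcnt are assumed names for the SMP-call helpers that read the two counters:

    int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
    {
            switch ((u64)reg->address) {
            case 0x0:       /* DeliveredPerformanceCounterRegister */
                    return counters_read_on_cpu(cpu, store_cpu_corecnt, val);
            case 0x1:       /* ReferencePerformanceCounterRegister */
                    return counters_read_on_cpu(cpu, store_cpu_constcnt, val);
            default:
                    return -EOPNOTSUPP;     /* only these two reads supported */
            }
    }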
2020-11-13arm64: split counter validation functionIonela Voinescu
In order for the counter validation function to be reused, split validate_cpu_freq_invariance_counters() into: - freq_counters_valid(cpu) - check cpu for valid cycle counters - freq_inv_set_max_ratio(int cpu, u64 max_rate, u64 ref_rate) - generic function that sets the normalization ratio used by topology_scale_freq_tick() Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201106125334.21570-3-ionela.voinescu@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-13arm64: wrap and generalise counter read functionsIonela Voinescu
In preparation for other uses of Activity Monitors (AMU) cycle counters, place counter read functionality in generic functions that can be reused: read_corecnt() and read_constcnt(). As a result, implement update_freq_counters_refs() to replace init_cpu_freq_invariance_counters() and both initialise and update the per-cpu reference variables. Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201106125334.21570-2-ionela.voinescu@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-13arm64: cpu_errata: Apply Erratum 845719 to KRYO2XX SilverKonrad Dybcio
QCOM KRYO2XX Silver cores are Cortex-A53 based and are susceptible to the 845719 erratum. Add them to the lookup list to apply the erratum. Signed-off-by: Konrad Dybcio <konrad.dybcio@somainline.org> Link: https://lore.kernel.org/r/20201104232218.198800-5-konrad.dybcio@somainline.org Signed-off-by: Will Deacon <will@kernel.org>
2020-11-13arm64: proton-pack: Add KRYO2XX silver CPUs to spectre-v2 safe-listKonrad Dybcio
KRYO2XX silver (LITTLE) CPUs are based on Cortex-A53 and they are not affected by spectre-v2. Signed-off-by: Konrad Dybcio <konrad.dybcio@somainline.org> Link: https://lore.kernel.org/r/20201104232218.198800-4-konrad.dybcio@somainline.org Signed-off-by: Will Deacon <will@kernel.org>
2020-11-13arm64: kpti: Add KRYO2XX gold/silver CPU cores to kpti safelistKonrad Dybcio
QCOM KRYO2XX gold (big) and silver (LITTLE) CPU cores are based on Cortex-A73 and Cortex-A53 respectively and are meltdown safe, hence add them to kpti_safe_list[]. Signed-off-by: Konrad Dybcio <konrad.dybcio@somainline.org> Link: https://lore.kernel.org/r/20201104232218.198800-3-konrad.dybcio@somainline.org Signed-off-by: Will Deacon <will@kernel.org>
2020-11-13arm64: Add MIDR value for KRYO2XX gold/silver CPU coresKonrad Dybcio
Add MIDR value for KRYO2XX gold (big) and silver (LITTLE) CPU cores which are used in Qualcomm Technologies, Inc. SoCs. This will be used to identify and apply errata which are applicable for these CPU cores. Signed-off-by: Konrad Dybcio <konrad.dybcio@somainline.org> Link: https://lore.kernel.org/r/20201104232218.198800-2-konrad.dybcio@somainline.org Signed-off-by: Will Deacon <will@kernel.org>
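For reference, the additions follow the existing MIDR pattern in cputype.h; the part numbers shown (0x800 gold, 0x801 silver) are per the patch, but verify against the merged header:

    #define QCOM_CPU_PART_KRYO_2XX_GOLD     0x800
    #define QCOM_CPU_PART_KRYO_2XX_SILVER   0x801

    #define MIDR_QCOM_KRYO_2XX_GOLD \
            MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD)
    #define MIDR_QCOM_KRYO_2XX_SILVER \
            MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER)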
2020-11-13arm64/mm: Validate hotplug range before creating linear mappingAnshuman Khandual
During memory hotplug process, the linear mapping should not be created for a given memory range if that would fall outside the maximum allowed linear range. Else it might cause memory corruption in the kernel virtual space. Maximum linear mapping region is [PAGE_OFFSET..(PAGE_END -1)] accommodating both its ends but excluding PAGE_END. Max physical range that can be mapped inside this linear mapping range, must also be derived from its end points. This ensures that arch_add_memory() validates memory hot add range for its potential linear mapping requirements, before creating it with __create_pgd_mapping(). Fixes: 4ab215061554 ("arm64: Add memory hotplug support") Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Steven Price <steven.price@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: David Hildenbrand <david@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1605252614-761-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
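The validation boils down to translating the linear map's end points into physical addresses; sketched:

    static bool inside_linear_region(u64 start, u64 size)
    {
            /*
             * The hot-added range [start, start + size) must fall, once
             * mapped, inside [PAGE_OFFSET, PAGE_END - 1] inclusive.
             */
            return (start >= __pa(_PAGE_OFFSET(vabits_actual))) &&
                   ((start + size - 1) <= __pa(PAGE_END - 1));
    }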
2020-11-12arm64: mm: don't assume struct page is always 64 bytesArd Biesheuvel
Commit 8c96400d6a39be7 simplified the page-to-virt and virt-to-page conversions, based on the assumption that struct page is always 64 bytes in size, in which case we can use a single signed shift to perform the conversion (provided that the vmemmap array is placed appropriately in the kernel VA space) Unfortunately, this assumption turns out not to hold, and so we need to revert part of this commit, and go back to an affine transformation. Given that all the quantities involved are compile time constants, this should not make any practical difference. Fixes: 8c96400d6a39 ("arm64: mm: make vmemmap region a projection of the linear region") Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20201110180511.29083-1-ardb@kernel.org Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64/mm/hotplug: Ensure early memory sections are all onlineAnshuman Khandual
This adds a validation function that scans the entire boot memory and makes sure that all early memory sections are online. This check is essential for the memory notifier to work properly, as it cannot prevent any boot memory from being offlined if all sections are not online to begin with. Note that the boot section scanning is selectively enabled only with DEBUG_VM. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1604896137-16644-4-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64/mm/hotplug: Enable MEM_OFFLINE event handlingAnshuman Khandual
This enables MEM_OFFLINE memory event handling. It will help intercept any possible error condition, such as boot memory somehow still getting offlined even after an explicit notifier failure, potentially caused by a future change in the generic hotplug framework. This would help detect such scenarios and aid further debugging. While here, also call out the first section that is attempted for offline or actually gets offlined. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1604896137-16644-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64/mm/hotplug: Register boot memory hot remove notifier earlierAnshuman Khandual
This moves memory notifier registration earlier in the boot process, from device_initcall() to early_initcall(), which will help guard against potential early boot memory offline requests. Even though there should not be any actual offlining requests until memory block devices are initialized with memory_dev_init(), the generic init sequence might change in the future, hence an early registration for the memory event notifier would be helpful. While here, just skip the registration if CONFIG_MEMORY_HOTREMOVE is not enabled and also call out when memory notifier registration fails. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1604896137-16644-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
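Sketch of the registration, with prevent_bootmem_remove_nb standing in for the notifier block:

    static int __init prevent_bootmem_remove_init(void)
    {
            int ret = 0;

            if (IS_ENABLED(CONFIG_MEMORY_HOTREMOVE)) {
                    ret = register_memory_notifier(&prevent_bootmem_remove_nb);
                    if (ret)
                            pr_err("%s: Notifier registration failed %d\n",
                                   __func__, ret);
            }
            return ret;
    }
    early_initcall(prevent_bootmem_remove_init);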
2020-11-10arm64: mm: account for hotplug memory when randomizing the linear regionArd Biesheuvel
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed. So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Steven Price <steven.price@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64/smp: Drop the macro S(x,s)Anshuman Khandual
Mapping between IPI type index and its string is direct without requiring an additional offset. Hence the existing macro S(x, s) is now redundant and can just be dropped. This also makes the code clean and simple. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1604921916-23368-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
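Before/after, roughly (entries abridged):

    /* before: a macro that merely spelled out a designated initializer */
    #define S(x, s) [x] = s
    static const char *ipi_types[NR_IPI] __tracepoint_string = {
            S(IPI_RESCHEDULE, "Rescheduling interrupts"),
            /* ... */
    };

    /* after: index directly by IPI type */
    static const char *ipi_types[NR_IPI] __tracepoint_string = {
            [IPI_RESCHEDULE]        = "Rescheduling interrupts",
            /* ... */
    };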
2020-11-10arm64: consistently use reserved_pg_dirMark Rutland
Depending on configuration options and specific code paths, we either use the empty_zero_page or the configuration-dependent reserved_ttbr0 as a reserved value for TTBR{0,1}_EL1. To simplify this code, let's always allocate and use the same reserved_pg_dir, replacing reserved_ttbr0. Note that this is allocated (and hence pre-zeroed), and is also marked as read-only in the kernel Image mapping. Keeping this separate from the empty_zero_page potentially helps with robustness as the empty_zero_page is used in a number of cases where a failure to map it read-only could allow it to become corrupted. The (presently unused) swapper_pg_end symbol is also removed, and comments are added wherever we rely on the offsets between the pre-allocated pg_dirs to keep these cases easily identifiable. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201103102229.8542-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64: kprobes: Remove redundant kprobe_step_ctxMasami Hiramatsu
The kprobe_step_ctx (kcb->ss_ctx) has ss_pending and match_addr, but those are redundant because those can be replaced by KPROBE_HIT_SS and &cur_kprobe->ainsn.api.insn[1] respectively. To simplify the code, remove the kprobe_step_ctx. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201103134900.337243-2-jean-philippe@linaro.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10Documentation/arm64: fix RST layout of memory.rstArd Biesheuvel
Stephen reports that commit f4693c2716b3 ("arm64: mm: extend linear region for 52-bit VA configurations") triggers the following warnings when building the htmldocs make target of today's linux-next: Documentation/arm64/memory.rst:35: WARNING: Literal block ends without a blank line; unexpected unindent. Documentation/arm64/memory.rst:53: WARNING: Literal block ends without a blank line; unexpected unindent. Let's tweak the memory layout table to work around this. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Fixes: f4693c2716b3 ("arm64: mm: extend linear region for 52-bit VA configurations") Link: https://lore.kernel.org/r/20201110130851.15751-1-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10arm64: smp: Tell RCU about CPUs that fail to come onlineWill Deacon
Commit ce3d31ad3cac ("arm64/smp: Move rcu_cpu_starting() earlier") ensured that RCU is informed early about incoming CPUs that might end up calling into printk() before they are online. However, if such a CPU fails the early CPU feature compatibility checks in check_local_cpu_capabilities(), then it will be powered off or parked without informing RCU, leading to an endless stream of stalls: | rcu: INFO: rcu_preempt detected stalls on CPUs/tasks: | rcu: 2-O...: (0 ticks this GP) idle=002/1/0x4000000000000000 softirq=0/0 fqs=2593 | (detected by 0, t=5252 jiffies, g=9317, q=136) | Task dump for CPU 2: | task:swapper/2 state:R running task stack: 0 pid: 0 ppid: 1 flags:0x00000028 | Call trace: | ret_from_fork+0x0/0x30 Ensure that the dying CPU invokes rcu_report_dead() prior to being powered off or parked. Cc: Qian Cai <cai@redhat.com> Cc: "Paul E. McKenney" <paulmck@kernel.org> Reviewed-by: Paul E. McKenney <paulmck@kernel.org> Suggested-by: Qian Cai <cai@redhat.com> Link: https://lore.kernel.org/r/20201105222242.GA8842@willie-the-truck Link: https://lore.kernel.org/r/20201106103602.9849-3-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
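The fix is a call to rcu_report_dead() on the failure path; a sketch with the surrounding teardown elided (placement inside cpu_die_early() inferred from the patch description):

    static void cpu_die_early(void)
    {
            int cpu = smp_processor_id();
            /* ... mark the CPU stuck / update boot status ... */

            /* Tell RCU this CPU is going away: it is about to be powered
             * off or parked without ever coming online. */
            rcu_report_dead(cpu);

            /* ... power off via cpu_ops or park in a low-power loop ... */
    }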
2020-11-10arm64: psci: Avoid printing in cpu_psci_cpu_die()Will Deacon
cpu_psci_cpu_die() is called in the context of the dying CPU, which will no longer be online or tracked by RCU. It is therefore not generally safe to call printk() if the PSCI "cpu off" request fails, so remove the pr_crit() invocation. Cc: Qian Cai <cai@redhat.com> Cc: "Paul E. McKenney" <paulmck@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20201106103602.9849-2-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>