author     Ryan Roberts <ryan.roberts@arm.com>    2025-11-06 16:09:42 +0000
committer  Will Deacon <will@kernel.org>          2025-11-07 14:43:15 +0000
commit     40a292f701474f7c21b27911677485efa233e94e (patch)
tree       c1be7fa4e63f4de64c68c51fab89458e27e71dd6 /net/unix/garbage.c
parent     ce2b3a50ad922abbba36425343a1bcec46903a26 (diff)
arm64: mm: Optimize range_split_to_ptes()
Enter lazy_mmu mode while splitting a range of memory to pte mappings. This causes barriers, which would otherwise be emitted after every pte (and pmd/pud) write, to be deferred until exiting lazy_mmu mode. For large systems, this is expected to significantly speed up fallback to pte-mapping the linear map for the case where the boot CPU has BBML2_NOABORT, but secondary CPUs do not. I haven't directly measured it, but this is equivalent to commit 1fcb7cea8a5f ("arm64: mm: Batch dsb and isb when populating pgtables").

Note that for the path from arch_kfence_init_pool(), we may sleep while allocating memory inside the lazy_mmu mode. Sleeping is not allowed by generic code inside lazy_mmu, but we know that the arm64 implementation is sleep-safe. So this is ok and follows the same pattern already used by split_kernel_leaf_mapping().

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Will Deacon <will@kernel.org>
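For context, a minimal sketch of the pattern the commit describes, using the generic arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode() hooks from <linux/pgtable.h>; the range_split_to_ptes() prototype and the wrapper name below are illustrative assumptions, not the actual arm64 diff:

    /*
     * Illustrative sketch only, not the real patch: bracket the page-table
     * walk that splits block mappings down to ptes with lazy_mmu mode, so
     * the dsb/isb barriers arm64 would otherwise emit after every pte (and
     * pmd/pud) write are deferred and issued once when the mode is left.
     */
    #include <linux/pgtable.h>

    /* Hypothetical prototype; the real range_split_to_ptes() may differ. */
    int range_split_to_ptes(unsigned long start, unsigned long end);

    static int split_range_batched(unsigned long start, unsigned long end)
    {
    	int ret;

    	/* Defer barriers for every pgtable write made during the walk. */
    	arch_enter_lazy_mmu_mode();

    	ret = range_split_to_ptes(start, end);

    	/* Emit the deferred barriers once for the whole batch. */
    	arch_leave_lazy_mmu_mode();

    	return ret;
    }

On arch_enter_lazy_mmu_mode() implementations that are no-ops this bracket costs nothing, which is why batching at the caller is preferred over sprinkling barriers per write.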