| author | Linus Torvalds <torvalds@linux-foundation.org> | 2024-01-17 09:31:36 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2024-01-17 09:31:36 -0800 |
| commit | 7f5e47f785140c2d7948bee6fc387f939f68dbb8 (patch) | |
| tree | 057a9d98ca492c55708baedcc59bf2ea3e2511c7 /mm/kasan/generic.c | |
| parent | 052d534373b7ed33712a63d5e17b2b6cdbce84fd (diff) | |
| parent | 5d4747a6cc8e78ce74742d557fc9b7697fcacc95 (diff) | |
Merge tag 'mm-hotfixes-stable-2024-01-12-16-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc hotfixes from Andrew Morton:
"For once not mostly MM-related.
17 hotfixes. 10 address post-6.7 issues and the other 7 are cc:stable"
* tag 'mm-hotfixes-stable-2024-01-12-16-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
userfaultfd: avoid huge_zero_page in UFFDIO_MOVE
MAINTAINERS: add entry for shrinker
selftests: mm: hugepage-vmemmap fails on 64K page size systems
mm/memory_hotplug: fix memmap_on_memory sysfs value retrieval
mailmap: switch email for Tanzir Hasan
mailmap: add old address mappings for Randy
kernel/crash_core.c: make __crash_hotplug_lock static
efi: disable mirror feature during crashkernel
kexec: do syscore_shutdown() in kernel_kexec
mailmap: update entry for Manivannan Sadhasivam
fs/proc/task_mmu: move mmu notification mechanism inside mm lock
mm: zswap: switch maintainers to recently active developers and reviewers
scripts/decode_stacktrace.sh: optionally use LLVM utilities
kasan: avoid resetting aux_lock
lib/Kconfig.debug: disable CONFIG_DEBUG_INFO_BTF for Hexagon
MAINTAINERS: update LTP maintainers
kdump: defer the insertion of crashkernel resources
Diffstat (limited to 'mm/kasan/generic.c')
-rw-r--r-- | mm/kasan/generic.c | 10 |
1 file changed, 8 insertions, 2 deletions
```diff
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 24c13dfb1e94..df6627f62402 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -487,6 +487,7 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
 		__memset(alloc_meta, 0, sizeof(*alloc_meta));
 
 		/*
+		 * Prepare the lock for saving auxiliary stack traces.
 		 * Temporarily disable KASAN bug reporting to allow instrumented
 		 * raw_spin_lock_init to access aux_lock, which resides inside
 		 * of a redzone.
@@ -510,8 +511,13 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
 	stack_depot_put(meta->aux_stack[0]);
 	stack_depot_put(meta->aux_stack[1]);
 
-	/* Zero out alloc meta to mark it as invalid. */
-	__memset(meta, 0, sizeof(*meta));
+	/*
+	 * Zero out alloc meta to mark it as invalid but keep aux_lock
+	 * initialized to avoid having to reinitialize it when another object
+	 * is allocated in the same slot.
+	 */
+	__memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
+	__memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
 }
 
 static void release_free_meta(const void *object, struct kasan_free_meta *meta)
```
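The idea behind the patch is that invalidating the metadata no longer wipes the whole struct: only the payload fields (the allocation track and the auxiliary stack handles) are zeroed, so the embedded raw spinlock stays initialized and can be reused when another object lands in the same slot. Below is a minimal userspace C sketch of that pattern, not kernel code: the `slot_meta` struct, the `slot_meta_init()`/`slot_meta_release()` helpers, and the use of `pthread_mutex_t` in place of the kernel's `raw_spinlock_t` (and plain `memset` in place of `__memset`) are all illustrative assumptions.

```c
/*
 * Sketch of the "reset payload, keep the lock" pattern from the diff above.
 * All names here are illustrative; compile with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct slot_meta {
	uint64_t alloc_track;     /* stands in for meta->alloc_track */
	uint64_t aux_stack[2];    /* stands in for meta->aux_stack[] */
	pthread_mutex_t aux_lock; /* stands in for the raw spinlock  */
};

/* One-time setup: zero everything, then initialize the lock once. */
static void slot_meta_init(struct slot_meta *meta)
{
	memset(meta, 0, sizeof(*meta));
	pthread_mutex_init(&meta->aux_lock, NULL);
}

/*
 * Release: mark the metadata invalid by zeroing only the payload members.
 * A full memset(meta, 0, sizeof(*meta)) would also wipe aux_lock and force
 * a reinitialization before the slot could be reused.
 */
static void slot_meta_release(struct slot_meta *meta)
{
	memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
	memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
	/* meta->aux_lock is intentionally left initialized. */
}

int main(void)
{
	struct slot_meta meta;

	slot_meta_init(&meta);
	meta.alloc_track = 42;
	meta.aux_stack[0] = 1;

	slot_meta_release(&meta);

	/* The lock is still usable without another init call. */
	pthread_mutex_lock(&meta.aux_lock);
	printf("alloc_track=%llu aux_stack[0]=%llu\n",
	       (unsigned long long)meta.alloc_track,
	       (unsigned long long)meta.aux_stack[0]);
	pthread_mutex_unlock(&meta.aux_lock);

	pthread_mutex_destroy(&meta.aux_lock);
	return 0;
}
```

In the kernel case the selective reset also matters because initializing the lock requires temporarily disabling KASAN reporting (the lock lives inside a redzone, as the first hunk's comment notes), so avoiding repeated reinitialization avoids repeating that dance on every reallocation of the slot.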