author	Jérôme Glisse <>	2019-05-13 17:20:34 -0700
committer	Linus Torvalds <>	2019-05-14 09:47:49 -0700
commit	4a83bfe916f3d2100df5bc8389bd182a537ced3e
parent	391aab11e93f36c421abeab62526954d08ac3eed
mm/mmu_notifier: helper to test if a range invalidation is blockable
Patch series "mmu notifier provide context informations", v6. Here I am not posting users of this, they already have been posted to appropriate mailing list [6] and will be merge through the appropriate tree once this patchset is upstream. Note that this serie does not change any behavior for any existing code. It just pass down more information to mmu notifier listener. The rationale for this patchset: CPU page table update can happens for many reasons, not only as a result of a syscall (munmap(), mprotect(), mremap(), madvise(), ...) but also as a result of kernel activities (memory compression, reclaim, migration, ...). This patchset introduce a set of enums that can be associated with each of the events triggering a mmu notifier: - UNMAP: munmap() or mremap() - CLEAR: page table is cleared (migration, compaction, reclaim, ...) - PROTECTION_VMA: change in access protections for the range - PROTECTION_PAGE: change in access protections for page in the range - SOFT_DIRTY: soft dirtyness tracking Being able to identify munmap() and mremap() from other reasons why the page table is cleared is important to allow user of mmu notifier to update their own internal tracking structure accordingly (on munmap or mremap it is not longer needed to track range of virtual address as it becomes invalid). Without this serie, driver are force to assume that every notification is an munmap which triggers useless trashing within drivers that associate structure with range of virtual address. Each driver is force to free up its tracking structure and then restore it on next device page fault. With this series we can also optimize device page table update. Patches to use this are at Moreover this can also be used to optimize out some page table updates such as for KVM where we can update the secondary MMU directly from the callback instead of clearing it. ACKS AMD/RADEON ACKS RDMA This patch (of 8): Simple helpers to test if range invalidation is blockable. 
Later patches use coccinelle to convert all direct dereferences of range->blockable to use this function instead, so that the blockable field can be converted to an unsigned carrying more flags.

Link:
Signed-off-by: Jérôme Glisse <>
Reviewed-by: Ralph Campbell <>
Reviewed-by: Ira Weiny <>
Cc: Christian König <>
Cc: Joonas Lahtinen <>
Cc: Jani Nikula <>
Cc: Rodrigo Vivi <>
Cc: Jan Kara <>
Cc: Andrea Arcangeli <>
Cc: Peter Xu <>
Cc: Felix Kuehling <>
Cc: Jason Gunthorpe <>
Cc: Ross Zwisler <>
Cc: Dan Williams <>
Cc: Paolo Bonzini <>
Cc: Radim Krcmar <>
Cc: Michal Hocko <>
Cc: Christian Koenig <>
Cc: John Hubbard <>
Cc: Arnd Bergmann <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
1 file changed, 11 insertions(+), 0 deletions(-)
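As a concrete illustration of the helper this patch adds, here is a minimal userspace model; the struct is a simplified stand-in for the kernel's struct mmu_notifier_range, and driver_invalidate_range_start() is a hypothetical driver callback, not code from this patch:

```c
#include <errno.h>
#include <stdbool.h>

/* Simplified stand-in for the kernel's struct mmu_notifier_range
 * (the real definition lives in include/linux/mmu_notifier.h). */
struct mmu_notifier_range {
	unsigned long start;
	unsigned long end;
	bool blockable;	/* the field the series later converts to flags */
};

/* The accessor this patch introduces: callers ask whether the
 * invalidation may block instead of dereferencing ->blockable. */
static inline bool
mmu_notifier_range_blockable(const struct mmu_notifier_range *range)
{
	return range->blockable;
}

/* Hypothetical driver invalidation callback: if the range is not
 * blockable the driver must not sleep, so it bails out early. */
static int driver_invalidate_range_start(const struct mmu_notifier_range *range)
{
	if (!mmu_notifier_range_blockable(range))
		return -EAGAIN;	/* caller retries in a blockable context */
	/* ... take sleeping locks, tear down device mappings ... */
	return 0;
}
```

Because every caller goes through the accessor rather than the raw field, the later conversion of blockable from a bool to an unsigned flags word needs no change in any caller.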
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 4050ec1c3b45..e630def131ce 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -226,6 +226,12 @@ extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r,
 extern void __mmu_notifier_invalidate_range(struct mm_struct *mm,
 				  unsigned long start, unsigned long end);
 
+static inline bool
+mmu_notifier_range_blockable(const struct mmu_notifier_range *range)
+{
+	return range->blockable;
+}
+
 static inline void mmu_notifier_release(struct mm_struct *mm)
 {
 	if (mm_has_notifiers(mm))
@@ -455,6 +461,11 @@ static inline void _mmu_notifier_range_init(struct mmu_notifier_range *range,
 #define mmu_notifier_range_init(range, mm, start, end) \
 	_mmu_notifier_range_init(range, start, end)
 
+static inline bool
+mmu_notifier_range_blockable(const struct mmu_notifier_range *range)
+{
+	return true;
+}
 
 static inline int mm_has_notifiers(struct mm_struct *mm)
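The event types listed in the cover letter can be sketched the same way. The enum below is a simplified userspace model following the series description, and driver_must_drop_tracking() is a hypothetical driver policy, not code from this patch:

```c
#include <stdbool.h>

/* Simplified model of the per-event context the series attaches to
 * each mmu notifier invalidation (names follow the cover letter). */
enum mmu_notifier_event {
	MMU_NOTIFY_UNMAP,		/* munmap() or mremap() */
	MMU_NOTIFY_CLEAR,		/* migration, compaction, reclaim, ... */
	MMU_NOTIFY_PROTECTION_VMA,	/* protection change on the range */
	MMU_NOTIFY_PROTECTION_PAGE,	/* protection change on a page */
	MMU_NOTIFY_SOFT_DIRTY,		/* soft dirtiness tracking */
};

/* Hypothetical driver policy enabled by the extra context: only an
 * UNMAP makes the virtual range permanently invalid, so only then
 * must the driver free its range-tracking structure; every other
 * event lets it keep the structure and avoid the free/rebuild
 * thrashing described in the cover letter. */
static bool driver_must_drop_tracking(enum mmu_notifier_event event)
{
	return event == MMU_NOTIFY_UNMAP;
}
```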