author		Peter Xu <>	2021-05-04 18:33:00 -0700
committer	Linus Torvalds <>	2021-05-05 11:27:20 -0700
commit		aec44e0f0213e36d4f0868a80cdc5097a510f79d (patch)
tree		d9425929c399eb159f05c3e0dfaa937d0d0e8357 /arch/ia64
parent		786b31121a2ce4309a81a7f36d63f02ca588839e (diff)
hugetlb: pass vma into huge_pte_alloc() and huge_pmd_share()
Patch series "hugetlb: Disable huge pmd unshare for uffd-wp", v4.

This series tries to disable huge pmd unshare of hugetlbfs-backed memory for uffd-wp. Although uffd-wp for hugetlbfs is still at the RFC stage, the idea behind this series may be needed by multiple tasks (Axel's uffd minor fault series, and Mike's soft dirty series), so I picked it out from the larger series.

This patch (of 4):

This is preparatory work to allow the per-architecture huge_pte_alloc() to behave differently according to the attributes of the VMA. Pass the vma deeper, into huge_pmd_share(), so that the find_vma() call there can be avoided.

[ build fix]
Link:
Link:
Signed-off-by: Peter Xu <>
Suggested-by: Mike Kravetz <>
Cc: Adam Ruprecht <>
Cc: Alexander Viro <>
Cc: Alexey Dobriyan <>
Cc: Andrea Arcangeli <>
Cc: Anshuman Khandual <>
Cc: Axel Rasmussen <>
Cc: Cannon Matthews <>
Cc: Catalin Marinas <>
Cc: Chinwen Chang <>
Cc: David Rientjes <>
Cc: "Dr. David Alan Gilbert" <>
Cc: Huang Ying <>
Cc: Ingo Molnar <>
Cc: Jann Horn <>
Cc: Jerome Glisse <>
Cc: Kirill A. Shutemov <>
Cc: Lokesh Gidra <>
Cc: "Matthew Wilcox (Oracle)" <>
Cc: Michael Ellerman <>
Cc: "Michal Koutn" <>
Cc: Michel Lespinasse <>
Cc: Mike Rapoport <>
Cc: Mina Almasry <>
Cc: Nicholas Piggin <>
Cc: Oliver Upton <>
Cc: Shaohua Li <>
Cc: Shawn Anastasio <>
Cc: Steven Price <>
Cc: Steven Rostedt <>
Cc: Vlastimil Babka <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
Diffstat (limited to 'arch/ia64')
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index b331f94d20ac..f993cb36c062 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -25,7 +25,8 @@ unsigned int hpage_shift = HPAGE_SHIFT_DEFAULT;
pte_t *
-huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz)
+huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+ unsigned long addr, unsigned long sz)
unsigned long taddr = htlbpage_to_page(addr);
pgd_t *pgd;