path: root/mm/Kconfig.debug
Age  Commit message  Author
2020-12-15  mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO  (Vlastimil Babka)
CONFIG_PAGE_POISONING_ZERO uses the zero pattern instead of 0xAA. It was introduced by commit 1414c7f4f7d7 ("mm/page_poisoning.c: allow for zero poisoning"), noting that using zeroes retains the benefit of sanitizing the content of freed pages, with the added benefit of not having to zero them again on alloc, and the downside of making some forms of corruption (stray writes of NULLs) harder to detect than with the 0xAA pattern. Together with CONFIG_PAGE_POISONING_NO_SANITY it made it possible to sanitize the contents on free without checking them back on alloc. These days we have the init_on_free=1 boot option to achieve sanitization with zeroes and to save clearing on alloc (and without checking on alloc). Arguably, if someone does choose to check the poison for corruption on alloc, the savings of not clearing the page are secondary, and it makes sense to always use the 0xAA poison pattern. Thus, remove the CONFIG_PAGE_POISONING_ZERO option as redundant. Link: Signed-off-by: Vlastimil Babka <> Acked-by: David Hildenbrand <> Cc: Mike Rapoport <> Cc: Rafael J. Wysocki <> Cc: Alexander Potapenko <> Cc: Kees Cook <> Cc: Laura Abbott <> Cc: Mateusz Nosek <> Cc: Michal Hocko <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2020-12-15  mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY  (Vlastimil Babka)
CONFIG_PAGE_POISONING_NO_SANITY skips the check on page alloc of whether the poison pattern was corrupted, suggesting a use-after-free. The motivation to introduce it in commit 8823b1dbc05f ("mm/page_poison.c: enable PAGE_POISONING as a separate option") was to simply sanitize freed pages, optimally together with CONFIG_PAGE_POISONING_ZERO. These days we have an init_on_free=1 boot option, which makes this use case of page poisoning redundant. For sanitizing, writing zeroes is sufficient; there is pretty much no benefit from writing the 0xAA poison pattern to freed pages without checking it back on alloc. Thus, remove this option and suggest init_on_free instead in the main config's help. Link: Signed-off-by: Vlastimil Babka <> Acked-by: David Hildenbrand <> Cc: Mike Rapoport <> Cc: Rafael J. Wysocki <> Cc: Alexander Potapenko <> Cc: Kees Cook <> Cc: Laura Abbott <> Cc: Mateusz Nosek <> Cc: Michal Hocko <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2020-12-15  kernel/power: allow hibernation with page_poison sanity checking  (Vlastimil Babka)
Page poisoning used to be incompatible with hibernation, as the state of poisoned pages was lost after resume, thus enabling CONFIG_HIBERNATION forces CONFIG_PAGE_POISONING_NO_SANITY. For the same reason, the poisoning with zeroes variant CONFIG_PAGE_POISONING_ZERO used to disable hibernation. The latter restriction was removed by commit 1ad1410f632d ("PM / Hibernate: allow hibernation with PAGE_POISONING_ZERO") and similarly for init_on_free by commit 18451f9f9e58 ("PM: hibernate: fix crashes with init_on_free=1") by making sure free pages are cleared after resume. We can use the same mechanism to instead poison free pages with PAGE_POISON after resume. This covers both zero and 0xAA patterns. Thus we can remove the Kconfig restriction that disables page poison sanity checking when hibernation is enabled. Link: Signed-off-by: Vlastimil Babka <> Acked-by: Rafael J. Wysocki <> [hibernation] Reviewed-by: David Hildenbrand <> Cc: Mike Rapoport <> Cc: Alexander Potapenko <> Cc: Kees Cook <> Cc: Laura Abbott <> Cc: Mateusz Nosek <> Cc: Michal Hocko <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2020-06-14  treewide: replace '---help---' in Kconfig files with 'help'  (Masahiro Yamada)
Since commit 84af7a6194e4 ("checkpatch: kconfig: prefer 'help' over '---help---'"), the number of '---help---' has been gradually decreasing, but there are still more than 2400 instances. This commit finishes the conversion. While I touched the lines, I also fixed the indentation. There are a variety of indentation styles found:

a) 4 spaces + '---help---'
b) 7 spaces + '---help---'
c) 8 spaces + '---help---'
d) 1 space + 1 tab + '---help---'
e) 1 tab + '---help---' (correct indentation)
f) 1 tab + 1 space + '---help---'
g) 1 tab + 2 spaces + '---help---'

In order to convert all of them to 1 tab + 'help', I ran the following command:

$ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'

Signed-off-by: Masahiro Yamada <>
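The substitution above can be reproduced on a sample file (assumes GNU sed for in-place `-i`; the file name is illustrative):

```shell
# Create a sample Kconfig fragment using the old marker with odd indentation.
printf 'config FOO\n\tbool "foo"\n    ---help---\n\t  Help text.\n' > Kconfig.sample

# Same substitution as the treewide conversion: any indentation followed by
# '---help---' becomes one tab + 'help'.
sed -i 's/^[[:space:]]*---help---/\thelp/' Kconfig.sample

# The marker is now normalized to a single tab-indented 'help'.
grep -c "$(printf '^\thelp$')" Kconfig.sample
```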
2020-06-03  mm: add DEBUG_WX support  (Zong Li)
Patch series "Extract DEBUG_WX to shared use". Some architectures support the DEBUG_WX function, and the code is duplicated verbatim across them, so extract it to mm/Kconfig.debug for shared use. The PPC and ARM ports don't support the generic page dumper yet, so we only refine the x86 and arm64 ports in this patch series. For the RISC-V port, the DEBUG_WX support depends on other patches which have been merged already: - RISC-V page table dumper - Support strict kernel memory permissions for security This patch (of 4): Some architectures support the DEBUG_WX function, and the code is duplicated verbatim across them. Extract it to mm/Kconfig.debug for shared use. [ reword text, per Will Deacon & Zong Li] Link: [ remove the specific name of arm64] Link: [ add MMU dependency for DEBUG_WX] Link: Suggested-by: Palmer Dabbelt <> Signed-off-by: Zong Li <> Signed-off-by: Andrew Morton <> Cc: Paul Walmsley <> Cc: Thomas Gleixner <> Cc: Ingo Molnar <> Cc: Borislav Petkov <> Cc: "H. Peter Anvin" <> Cc: Catalin Marinas <> Cc: Will Deacon <> Link: Link: Signed-off-by: Linus Torvalds <>
2020-02-04  mm: add generic ptdump  (Steven Price)
Add a generic version of page table dumping that architectures can opt-in to. Link: Signed-off-by: Steven Price <> Cc: Albert Ou <> Cc: Alexandre Ghiti <> Cc: Andy Lutomirski <> Cc: Ard Biesheuvel <> Cc: Arnd Bergmann <> Cc: Benjamin Herrenschmidt <> Cc: Borislav Petkov <> Cc: Catalin Marinas <> Cc: Christian Borntraeger <> Cc: Dave Hansen <> Cc: David S. Miller <> Cc: Heiko Carstens <> Cc: "H. Peter Anvin" <> Cc: Ingo Molnar <> Cc: James Hogan <> Cc: James Morse <> Cc: Jerome Glisse <> Cc: "Liang, Kan" <> Cc: Mark Rutland <> Cc: Michael Ellerman <> Cc: Paul Burton <> Cc: Paul Mackerras <> Cc: Paul Walmsley <> Cc: Peter Zijlstra <> Cc: Ralf Baechle <> Cc: Russell King <> Cc: Thomas Gleixner <> Cc: Vasily Gorbik <> Cc: Vineet Gupta <> Cc: Will Deacon <> Cc: Zong Li <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2019-09-24  mm, page_owner, debug_pagealloc: save and dump freeing stack trace  (Vlastimil Babka)
The debug_pagealloc functionality is useful to catch buggy page allocator users that cause e.g. use after free or double free. When page inconsistency is detected, debugging is often simpler by knowing the call stack of the process that last allocated and freed the page. When page_owner is also enabled, we record the allocation stack trace, but not the freeing one. This patch therefore adds recording of the freeing stack trace to the page owner info, if both page_owner and debug_pagealloc are configured and enabled. With only page_owner enabled, this info is not useful for the memory leak debugging use case. dump_page() is adjusted to print the info. An example result of calling __free_pages() twice may look like this (note the page last free stack trace):

BUG: Bad page state in process bash pfn:13d8f8
page:ffffc31984f63e00 refcount:-1 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0x1affff800000000()
raw: 01affff800000000 dead000000000100 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 ffffffffffffffff 0000000000000000
page dumped because: nonzero _refcount
page_owner tracks the page as freed
page last allocated via order 0, migratetype Unmovable, gfp_mask 0xcc0(GFP_KERNEL)
 prep_new_page+0x143/0x150
 get_page_from_freelist+0x289/0x380
 __alloc_pages_nodemask+0x13c/0x2d0
 khugepaged+0x6e/0xc10
 kthread+0xf9/0x130
 ret_from_fork+0x3a/0x50
page last free stack trace:
 free_pcp_prepare+0x134/0x1e0
 free_unref_page+0x18/0x90
 khugepaged+0x7b/0xc10
 kthread+0xf9/0x130
 ret_from_fork+0x3a/0x50
Modules linked in:
CPU: 3 PID: 271 Comm: bash Not tainted 5.3.0-rc4-2.g07a1a73-default+ #57
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 04/01/2014
Call Trace:
 dump_stack+0x85/0xc0
 bad_page.cold+0xba/0xbf
 rmqueue_pcplist.isra.0+0x6c5/0x6d0
 rmqueue+0x2d/0x810
 get_page_from_freelist+0x191/0x380
 __alloc_pages_nodemask+0x13c/0x2d0
 __get_free_pages+0xd/0x30
 __pud_alloc+0x2c/0x110
 copy_page_range+0x4f9/0x630
 dup_mmap+0x362/0x480
 dup_mm+0x68/0x110
 copy_process+0x19e1/0x1b40
 _do_fork+0x73/0x310
 __x64_sys_clone+0x75/0x80
 do_syscall_64+0x6e/0x1e0
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f10af854a10
...

Link: Signed-off-by: Vlastimil Babka <> Cc: Kirill A. Shutemov <> Cc: Matthew Wilcox <> Cc: Mel Gorman <> Cc: Michal Hocko <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2019-07-12  mm, debug_pagealloc: use a page type instead of page_ext flag  (Vlastimil Babka)
When debug_pagealloc is enabled, we currently allocate the page_ext array to mark guard pages with the PAGE_EXT_DEBUG_GUARD flag. Now that we have the page_type field in struct page, we can use that instead, as guard pages are neither PageSlab nor mapped to userspace. This reduces memory overhead when debug_pagealloc is enabled and there are no other features requiring the page_ext array. Link: Signed-off-by: Vlastimil Babka <> Cc: Joonsoo Kim <> Cc: Matthew Wilcox <> Cc: "Kirill A. Shutemov" <> Cc: Mel Gorman <> Cc: Michal Hocko <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2019-07-12  mm, page_alloc: more extensive free page checking with debug_pagealloc  (Vlastimil Babka)
The page allocator checks struct pages for expected state (mapcount, flags etc) as pages are being allocated (check_new_page()) and freed (free_pages_check()) to provide some defense against errors in page allocator users. Prior to commits 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP") and 4db7548ccbd9 ("mm, page_alloc: defer debugging checks of freed pages until a PCP drain") this happened for order-0 pages as they were allocated from or freed to the per-cpu caches (pcplists). Since those are fast paths, the checks are now performed only when pages are moved between pcplists and global free lists. This however lowers the chances of catching errors soon enough. In order to increase the chances of the checks catching errors, the kernel has to be rebuilt with CONFIG_DEBUG_VM, which also enables multiple other internal debug checks (VM_BUG_ON() etc), which is suboptimal when the goal is to catch errors in mm users, not in mm code itself. To catch some wrong users of the page allocator we have CONFIG_DEBUG_PAGEALLOC, which is designed to have virtually no overhead unless enabled at boot time. Memory corruptions when writing to freed pages often have the same underlying errors (use-after-free, double free) as corrupting the corresponding struct pages, so this existing debugging functionality is a good fit to extend by also performing struct page checks at least as often as if CONFIG_DEBUG_VM were enabled. Specifically, after this patch, when debug_pagealloc is enabled on boot, and CONFIG_DEBUG_VM disabled, pages are checked when allocated from or freed to the pcplists *in addition* to being moved between pcplists and free lists. When both debug_pagealloc and CONFIG_DEBUG_VM are enabled, pages are checked when being moved between pcplists and free lists *in addition* to when allocated from or freed to the pcplists.
When debug_pagealloc is not enabled on boot, the overhead in fast paths should be virtually none thanks to the use of a static key. Link: Signed-off-by: Vlastimil Babka <> Reviewed-by: Andrew Morton <> Cc: Mel Gorman <> Cc: Joonsoo Kim <> Cc: "Kirill A. Shutemov" <> Cc: Matthew Wilcox <> Cc: Michal Hocko <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2019-05-21  treewide: Add SPDX license identifier - Makefile/Kconfig  (Thomas Gleixner)
Add SPDX license identifiers to all Make/Kconfig files which: - Have no license information of any form These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only Signed-off-by: Thomas Gleixner <> Signed-off-by: Greg Kroah-Hartman <>
2019-05-14  mm: remove redundant 'default n' from Kconfig-s  (Bartlomiej Zolnierkiewicz)
'default n' is the default value for any bool or tristate Kconfig setting so there is no need to write it explicitly. Also since commit f467c5640c29 ("kconfig: only write '# CONFIG_FOO is not set' for visible symbols") the Kconfig behavior is the same regardless of 'default n' being present or not: ... One side effect of (and the main motivation for) this change is making the following two definitions behave exactly the same:

config FOO
	bool

config FOO
	bool
	default n

With this change, neither of these will generate a '# CONFIG_FOO is not set' line (assuming FOO isn't selected/implied). That might make it clearer to people that a bare 'default n' is redundant. ... Link: Signed-off-by: Bartlomiej Zolnierkiewicz <> Reviewed-by: Andrew Morton <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2019-03-05  mm/page_owner: move config option to mm/Kconfig.debug  (Changbin Du)
Move the PAGE_OWNER option from submenu "Compile-time checks and compiler options" to dedicated submenu "Memory Debugging". Link: Signed-off-by: Changbin Du <> Acked-by: Vlastimil Babka <> Cc: Masahiro Yamada <> Cc: Ingo Molnar <> Cc: Arnd Bergmann <> Cc: Randy Dunlap <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2018-08-22  mm: clarify CONFIG_PAGE_POISONING and usage  (Kees Cook)
The Kconfig text for CONFIG_PAGE_POISONING doesn't mention that it has to be enabled explicitly. This updates the documentation for that and adds a note about CONFIG_PAGE_POISONING to the "page_poison" command line docs. While here, change description of CONFIG_PAGE_POISONING_ZERO too, as it's not "random" data, but rather the fixed debugging value that would be used when not zeroing. Additionally removes a stray "bool" in the Kconfig. Link: Signed-off-by: Kees Cook <> Reviewed-by: Andrew Morton <> Cc: Jonathan Corbet <> Cc: Laura Abbott <> Cc: Naoya Horiguchi <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2017-11-15  kmemcheck: rip it out  (Levin, Alexander (Sasha Levin))
Fix up makefiles, remove references, and git rm kmemcheck. Link: Signed-off-by: Sasha Levin <> Cc: Steven Rostedt <> Cc: Vegard Nossum <> Cc: Pekka Enberg <> Cc: Michal Hocko <> Cc: Eric W. Biederman <> Cc: Alexander Potapenko <> Cc: Tim Hansen <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2017-05-03  mm: enable page poisoning early at boot  (Vinayak Menon)
On SPARSEMEM systems page poisoning is enabled after buddy is up, because of the dependency on page extension init. This causes the pages released by free_all_bootmem not to be poisoned. This either delays or misses the identification of some issues because the pages have to undergo another cycle of alloc-free-alloc for any corruption to be detected. Enable page poisoning early by getting rid of the PAGE_EXT_DEBUG_POISON flag. Since all the free pages will now be poisoned, the flag need not be verified before checking the poison during an alloc. [ fix Kconfig] Link: Link: Signed-off-by: Vinayak Menon <> Acked-by: Laura Abbott <> Tested-by: Laura Abbott <> Cc: Joonsoo Kim <> Cc: Michal Hocko <> Cc: Akinobu Mita <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2017-02-27  mm: add arch-independent testcases for RODATA  (Jinbum Park)
This patch makes arch-independent testcases for RODATA. Both x86 and x86_64 already have testcases for RODATA, but they are arch-specific because they use inline assembly directly. And cacheflush.h is not a suitable location for rodata-test related things. Since they were in cacheflush.h, if someone changes the state of CONFIG_DEBUG_RODATA_TEST, it causes kernel build overhead. To solve the above issues, write arch-independent testcases and move them to a shared location. [ fix config dependency] Link: Link: Signed-off-by: Jinbum Park <> Acked-by: Kees Cook <> Cc: Ingo Molnar <> Cc: H. Peter Anvin <> Cc: Arjan van de Ven <> Cc: Laura Abbott <> Cc: Russell King <> Cc: Valentin Rothberg <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2016-09-13  PM / Hibernate: allow hibernation with PAGE_POISONING_ZERO  (Anisse Astier)
With PAGE_POISONING_ZERO, zeroing new pages on alloc is disabled; instead, they are poisoned (zeroed) as they become available. In the hibernate use case, free pages will appear in the system without being cleared, left there by the loading kernel. This patch will make sure free pages are cleared on resume when PAGE_POISONING_ZERO is enabled. We free the pages just after resume because we can't do it later: going through any device resume code might allocate some memory and invalidate the free pages bitmap. Thus we don't need to disable hibernation when PAGE_POISONING_ZERO is enabled. Signed-off-by: Anisse Astier <> Reviewed-by: Kees Cook <> Acked-by: Pavel Machek <> Signed-off-by: Rafael J. Wysocki <>
2016-03-17  mm/page_ref: add tracepoint to track down page reference manipulation  (Joonsoo Kim)
CMA allocation should be guaranteed to succeed by definition, but, unfortunately, it sometimes fails. The problem is hard to track down because it is related to page reference manipulation and we don't have any facility to analyze it. This patch adds tracepoints to track down page reference manipulation. With them, we can find the exact reason for a failure and can fix the problem. Following is an example of the tracepoint output. (note: this example is from a stale version that printed flags as a number. Recent versions print them as a human readable string.)

<...>-9018 [004] 92.678375: page_ref_set: pfn=0x17ac9 flags=0x0 count=1 mapcount=0 mapping=(nil) mt=4 val=1
<...>-9018 [004] 92.678378: kernel_stack:
 => get_page_from_freelist (ffffffff81176659)
 => __alloc_pages_nodemask (ffffffff81176d22)
 => alloc_pages_vma (ffffffff811bf675)
 => handle_mm_fault (ffffffff8119e693)
 => __do_page_fault (ffffffff810631ea)
 => trace_do_page_fault (ffffffff81063543)
 => do_async_page_fault (ffffffff8105c40a)
 => async_page_fault (ffffffff817581d8)
[snip]
<...>-9018 [004] 92.678379: page_ref_mod: pfn=0x17ac9 flags=0x40048 count=2 mapcount=1 mapping=0xffff880015a78dc1 mt=4 val=1
[snip]
...
...
<...>-9131 [001] 93.174468: test_pages_isolated: start_pfn=0x17800 end_pfn=0x17c00 fin_pfn=0x17ac9 ret=fail
[snip]
<...>-9018 [004] 93.174843: page_ref_mod_and_test: pfn=0x17ac9 flags=0x40068 count=0 mapcount=0 mapping=0xffff880015a78dc1 mt=4 val=-1 ret=1
 => release_pages (ffffffff8117c9e4)
 => free_pages_and_swap_cache (ffffffff811b0697)
 => tlb_flush_mmu_free (ffffffff81199616)
 => tlb_finish_mmu (ffffffff8119a62c)
 => exit_mmap (ffffffff811a53f7)
 => mmput (ffffffff81073f47)
 => do_exit (ffffffff810794e9)
 => do_group_exit (ffffffff81079def)
 => SyS_exit_group (ffffffff81079e74)
 => entry_SYSCALL_64_fastpath (ffffffff817560b6)

This output shows that the problem comes from the exit path. In the exit path, to improve performance, pages are not freed immediately; they are gathered and processed in a batch. During this process, migration is not possible and the CMA allocation fails. This problem is hard to find without this page reference tracepoint facility. Enabling this feature bloats the kernel text by 30 KB in my configuration.

   text    data     bss     dec     hex filename
12127327 2243616 1507328 15878271 f2487f vmlinux_disabled
12157208 2258880 1507328 15923416 f2f8d8 vmlinux_enabled

Note that, due to a header file dependency problem between mm.h and tracepoint.h, this feature has to open-code the static key functions for tracepoints. Proposed by Steven Rostedt in the following link. [ crypto/async_pq: use __free_page() instead of put_page()] [ fix build failure for xtensa] [ tweak Kconfig text, per Vlastimil] Signed-off-by: Joonsoo Kim <> Acked-by: Michal Nazarewicz <> Acked-by: Vlastimil Babka <> Cc: Minchan Kim <> Cc: Mel Gorman <> Cc: "Kirill A. Shutemov" <> Cc: Sergey Senozhatsky <> Acked-by: Steven Rostedt <> Signed-off-by: Arnd Bergmann <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2016-03-15  mm/page_poisoning.c: allow for zero poisoning  (Laura Abbott)
By default, page poisoning uses a poison value (0xaa) on free. If this is changed to 0, the page is not only sanitized but zeroing on alloc with __GFP_ZERO can be skipped as well. The tradeoff is that corruption from the poisoning is harder to detect. This feature also cannot be used with hibernation since pages are not guaranteed to be zeroed after hibernation. Credit to the Grsecurity/PaX team for inspiring this work. Signed-off-by: Laura Abbott <> Acked-by: Rafael J. Wysocki <> Cc: "Kirill A. Shutemov" <> Cc: Vlastimil Babka <> Cc: Michal Hocko <> Cc: Kees Cook <> Cc: Mathias Krause <> Cc: Dave Hansen <> Cc: Jianyu Zhan <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2016-03-15  mm/page_poison.c: enable PAGE_POISONING as a separate option  (Laura Abbott)
Page poisoning is currently set up as a feature if architectures don't have architecture debug page_alloc to allow unmapping of pages. It has uses apart from that though. Clearing of the pages on free provides an increase in security as it helps to limit the risk of information leaks. Allow page poisoning to be enabled as a separate option independent of kernel_map pages since the two features do separate work. Because of how hibernation is implemented, the checks on alloc cannot occur if hibernation is enabled. The runtime alloc checks can also be enabled with an option when !HIBERNATION. Credit to the Grsecurity/PaX team for inspiring this work. Signed-off-by: Laura Abbott <> Cc: Rafael J. Wysocki <> Cc: "Kirill A. Shutemov" <> Cc: Vlastimil Babka <> Cc: Michal Hocko <> Cc: Kees Cook <> Cc: Mathias Krause <> Cc: Dave Hansen <> Cc: Jianyu Zhan <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2016-03-15  mm/debug_pagealloc: ask users for default setting of debug_pagealloc  (Christian Borntraeger)
Since commit 031bc5743f158 ("mm/debug-pagealloc: make debug-pagealloc boottime configurable") CONFIG_DEBUG_PAGEALLOC is by default not adding any page debugging. This resulted in several unnoticed bugs, e.g.<> or<> as this behaviour change was not even documented in Kconfig. Let's provide a new Kconfig symbol that allows to change the default back to enabled, e.g. for debug kernels. This also makes the change obvious to kernel packagers. Let's also change the Kconfig description for CONFIG_DEBUG_PAGEALLOC, to indicate that there are two stages of overhead. Signed-off-by: Christian Borntraeger <> Cc: Joonsoo Kim <> Cc: Peter Zijlstra <> Cc: Heiko Carstens <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2015-01-08  mm/debug_pagealloc: remove obsolete Kconfig options  (Joonsoo Kim)
These are obsolete since commit e30825f1869a ("mm/debug-pagealloc: prepare boottime configurable") was merged. So remove them. [ find obsolete Kconfig options] Signed-off-by: Joonsoo Kim <> Cc: Paul Bolle <> Cc: Mel Gorman <> Cc: Johannes Weiner <> Cc: Minchan Kim <> Cc: Dave Hansen <> Cc: Michal Nazarewicz <> Cc: Jungsoo Son <> Acked-by: David Rientjes <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2014-12-13  mm/debug-pagealloc: prepare boottime configurable on/off  (Joonsoo Kim)
Until now, debug-pagealloc needs extra flags in struct page, so we need to recompile the whole source code when we decide to use it. This is really painful, because it takes some time to recompile and sometimes rebuild is not possible due to third party modules depending on struct page. So, we can't use this good feature in many cases. Now, we have the page extension feature that allows us to insert extra flags outside of struct page. This gets rid of the third party module issue mentioned above. And, this allows us to determine at boottime whether we need extra memory for this page extension. With these properties, we can avoid using debug-pagealloc at boottime with low computational overhead in a kernel built with CONFIG_DEBUG_PAGEALLOC. This will help our development process greatly. This patch is the preparation step to achieve the above goal. debug-pagealloc originally uses an extra field of struct page, but, after this patch, it will use a field of struct page_ext. Because memory for page_ext is allocated later than the initialization of the page allocator in CONFIG_SPARSEMEM, we should disable the debug-pagealloc feature temporarily until initialization of page_ext. This patch implements this. Signed-off-by: Joonsoo Kim <> Cc: Mel Gorman <> Cc: Johannes Weiner <> Cc: Minchan Kim <> Cc: Dave Hansen <> Cc: Michal Nazarewicz <> Cc: Jungsoo Son <> Cc: Ingo Molnar <> Cc: Joonsoo Kim <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2014-12-13  mm/page_ext: resurrect struct page extending code for debugging  (Joonsoo Kim)
When we debug something, we'd like to insert some information into every page. For this purpose, we sometimes modify struct page itself. But this has drawbacks. First, it requires re-compile. This makes us hesitate to use the powerful debug feature, so the development process is slowed down. Second, sometimes it is impossible to rebuild the kernel due to third party module dependency. Third, system behaviour would be largely different after re-compile, because it changes the size of struct page greatly and this structure is accessed by every part of the kernel. Keeping it as it is makes it easier to reproduce an erroneous situation. This feature is intended to overcome the above mentioned problems. This feature allocates memory for extended data per page in a certain place rather than in struct page itself. This memory can be accessed by the accessor functions provided by this code. During the boot process, it checks whether allocation of a huge chunk of memory is needed or not. If not, it avoids allocating memory at all. With this advantage, we can include this feature in the kernel by default and can avoid rebuild and solve related problems. Until now, memcg used this technique. But now, memcg has decided to embed its variable in struct page itself and its code to extend struct page has been removed. I'd like to use this code to develop a debug feature, so this patch resurrects it. To help these things work well, this patch introduces two callbacks for clients. One is the need callback, which is mandatory if the user wants to avoid useless memory allocation at boot-time. The other, the init callback, is optional and is used to do proper initialization after memory is allocated. A detailed explanation of the purpose of these functions is in the code comment. Please refer to it. Others are completely the same as the previous extension code in memcg.
Signed-off-by: Joonsoo Kim <> Cc: Mel Gorman <> Cc: Johannes Weiner <> Cc: Minchan Kim <> Cc: Dave Hansen <> Cc: Michal Nazarewicz <> Cc: Jungsoo Son <> Cc: Ingo Molnar <> Cc: Joonsoo Kim <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2012-01-10  mm: more intensive memory corruption debugging  (Stanislaw Gruszka)
With CONFIG_DEBUG_PAGEALLOC configured, the CPU will generate an exception on access (read,write) to an unallocated page, which permits us to catch code which corrupts memory. However the kernel is trying to maximise memory usage, hence there are usually few free pages in the system and buggy code usually corrupts some crucial data. This patch changes the buddy allocator to keep more free/protected pages and to interlace free/protected and allocated pages to increase the probability of catching corruption. When the kernel is compiled with CONFIG_DEBUG_PAGEALLOC, debug_guardpage_minorder defines the minimum order used by the page allocator to grant a request. The requested size will be returned with the remaining pages used as guard pages. The default value of debug_guardpage_minorder is zero: no change from current behaviour. [ tweak documentation, s/flg/flag/] Signed-off-by: Stanislaw Gruszka <> Cc: Mel Gorman <> Cc: Andrea Arcangeli <> Cc: "Rafael J. Wysocki" <> Cc: Christoph Lameter <> Cc: Pekka Enberg <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2011-03-22  mm: debug-pagealloc: fix kconfig dependency warning  (Akinobu Mita)
Fix kconfig dependency warning to satisfy dependencies: warning: (PAGE_POISONING) selects DEBUG_PAGEALLOC which has unmet direct dependencies (DEBUG_KERNEL && ARCH_SUPPORTS_DEBUG_PAGEALLOC && (!HIBERNATION || !PPC && !SPARC) && !KMEMCHECK) Signed-off-by: Akinobu Mita <> Cc: Randy Dunlap <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2009-09-21  trivial: improve help text for mm debug config options  (Frans Pop)
Improve the help text for PAGE_POISONING. Also fix some typos and improve consistency within the file. Signed-off-by: Frans Pop <> Signed-off-by: Jiri Kosina <>
2009-06-15  kmemcheck: enable in the x86 Kconfig  (Vegard Nossum)
let it rip! Signed-off-by: Pekka Enberg <> Signed-off-by: Ingo Molnar <> [rebased for mainline inclusion] Signed-off-by: Vegard Nossum <>
2009-04-02  generic debug pagealloc: build fix  (Akinobu Mita)
This fixes a build failure with generic debug pagealloc:

mm/debug-pagealloc.c: In function 'set_page_poison':
mm/debug-pagealloc.c:8: error: 'struct page' has no member named 'debug_flags'
mm/debug-pagealloc.c: In function 'clear_page_poison':
mm/debug-pagealloc.c:13: error: 'struct page' has no member named 'debug_flags'
mm/debug-pagealloc.c: In function 'page_poison':
mm/debug-pagealloc.c:18: error: 'struct page' has no member named 'debug_flags'
mm/debug-pagealloc.c: At top level:
mm/debug-pagealloc.c:120: error: redefinition of 'kernel_map_pages'
include/linux/mm.h:1278: error: previous definition of 'kernel_map_pages' was here
mm/debug-pagealloc.c: In function 'kernel_map_pages':
mm/debug-pagealloc.c:122: error: 'debug_pagealloc_enabled' undeclared (first use in this function)

by fixing
- debug_flags should be in struct page
- define DEBUG_PAGEALLOC config option for all architectures

Signed-off-by: Akinobu Mita <> Reported-by: Alexander Beregalov <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
2009-04-01  generic debug pagealloc  (Akinobu Mita)
CONFIG_DEBUG_PAGEALLOC is now supported by x86, powerpc, sparc64, and s390. This patch implements it for the rest of the architectures by filling the pages with poison byte patterns after free_pages() and verifying the poison patterns before alloc_pages(). This generic one cannot detect invalid page accesses immediately but invalid read access may cause invalid dereference by poisoned memory and invalid write access can be detected after a long delay. Signed-off-by: Akinobu Mita <> Cc: <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>