path: root/arch/x86/boot/compressed
Age    Commit message    Author
2020-07-31  x86/kaslr: Drop test for command-line parameters before parsing  (Arvind Sankar)

This check doesn't save anything. In the case when none of the parameters are present, each strstr will scan args twice (once to find the length and then for searching), six scans in total. Just going ahead and parsing the arguments only requires three scans: strlen, memcpy, and parsing.

This will be the first malloc, so free will actually free up the memory, so the check doesn't save heap space either.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-14-nivedita@alum.mit.edu
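For illustration, a hedged sketch of the kind of pre-check being dropped; the function name is made up and this is not the kernel's exact code:

  #include <string.h>

  /* With the boot code's simple strstr() (strlen + search), three misses cost
   * six passes over the command line, more than parsing it directly would. */
  static int mem_options_present(const char *args)
  {
          return strstr(args, "mem=") || strstr(args, "memmap=") ||
                 strstr(args, "hugepages");
  }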
2020-07-31  x86/kaslr: Simplify process_gb_huge_pages()  (Arvind Sankar)

Replace the loop to determine the number of 1Gb pages with arithmetic.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-13-nivedita@alum.mit.edu
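For illustration, a minimal standalone sketch of the kind of arithmetic this describes; the helper name and macros are assumptions, not the kernel's exact code:

  #define GB_PAGE_SIZE      (1UL << 30)                      /* PUD_SIZE: 1 GiB */
  #define ALIGN_UP(x, a)    (((x) + (a) - 1) & ~((a) - 1))
  #define ALIGN_DOWN(x, a)  ((x) & ~((a) - 1))

  /* Count whole, aligned 1 GiB pages inside [start, start + size) without a loop. */
  static unsigned long nr_gb_pages(unsigned long start, unsigned long size)
  {
          unsigned long first = ALIGN_UP(start, GB_PAGE_SIZE);
          unsigned long last  = ALIGN_DOWN(start + size, GB_PAGE_SIZE);

          return (last > first) ? (last - first) / GB_PAGE_SIZE : 0;
  }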
2020-07-31x86/kaslr: Short-circuit gb_huge_pages on x86-32Arvind Sankar
32-bit does not have GB pages, so don't bother checking for them. Using the IS_ENABLED() macro allows the compiler to completely remove the gb_huge_pages code. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200728225722.67457-12-nivedita@alum.mit.edu
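A hedged sketch of the pattern: IS_ENABLED() and CONFIG_X86_64 are the kernel's existing macros, while the parameter list and body here are placeholders:

  #include <linux/kconfig.h>      /* IS_ENABLED() */

  static void process_gb_huge_pages_sketch(void)
  {
          /* Compile-time constant on 32-bit: everything below becomes dead
           * code and is eliminated entirely by the compiler. */
          if (!IS_ENABLED(CONFIG_X86_64))
                  return;

          /* ... 1 GiB hugepage handling, emitted only for 64-bit builds ... */
  }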
2020-07-31  x86/kaslr: Fix off-by-one error in process_gb_huge_pages()  (Arvind Sankar)

If the remaining size of the region is exactly 1Gb, there is still one hugepage that can be reserved.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-11-nivedita@alum.mit.edu
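In terms of the sketch shown under the "Simplify process_gb_huge_pages()" entry above, the fix amounts to making the boundary check inclusive; a hedged two-line contrast, not the actual diff:

  int old_check = (size >  GB_PAGE_SIZE);   /* misses the case where exactly 1 GiB is left */
  int new_check = (size >= GB_PAGE_SIZE);   /* one hugepage still fits, so reserve it      */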
2020-07-31  x86/kaslr: Drop some redundant checks from __process_mem_region()  (Arvind Sankar)

Clip the start and end of the region to minimum and mem_limit prior to the loop. region.start can only increase during the loop, so raising it to minimum before the loop is enough. A region that becomes empty due to this will get checked in the first iteration of the loop.

Drop the check for overlap extending beyond the end of the region. This will get checked in the next loop iteration anyway.

Rename end to region_end for symmetry with region.start.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-10-nivedita@alum.mit.edu
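A hedged fragment of the clamping this describes, using the identifiers from the commit text (region, minimum, mem_limit, region_end); the entry variable is an assumption:

  /* Clamp once, up front; the loop no longer has to re-check these bounds. */
  if (region.start < minimum)
          region.start = minimum;              /* region.start only grows inside the loop */

  region_end = entry->start + entry->size;
  if (region_end > mem_limit)
          region_end = mem_limit;              /* never consider memory above the limit   */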
2020-07-31  x86/kaslr: Drop redundant variable in __process_mem_region()  (Arvind Sankar)

region.size can be trimmed to store the portion of the region before the overlap, instead of a separate mem_vector variable.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-9-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Eliminate 'start_orig' local variable from __process_mem_region()  (Arvind Sankar)

Set the region.size within the loop, which removes the need for start_orig.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-8-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Drop redundant cur_entry from __process_mem_region()  (Arvind Sankar)

cur_entry is only used as cur_entry.start + cur_entry.size, which is always equal to end.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-7-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Fix off-by-one error in __process_mem_region()  (Arvind Sankar)

In case of an overlap, the beginning of the region should be used even if it is exactly image_size, not just strictly larger.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-6-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Initialize mem_limit to the real maximum address  (Arvind Sankar)

On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level paging or 4PiB with 5-level paging). This is currently not enforced by KASLR, which thus implicitly relies on physical memory being limited to less than 64TiB.

On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by special checks in __process_mem_region().

Initialize mem_limit to the maximum (depending on architecture), instead of ULLONG_MAX, and make sure the command-line arguments can only decrease it. This makes the enforcement explicit on 64-bit, and eliminates the 32-bit specific checks to keep the kernel below 512M.

Check upfront to make sure the minimum address is below the limit before doing any work.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-5-nivedita@alum.mit.edu
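A hedged sketch of the initialization described above; MAXMEM, KERNEL_IMAGE_SIZE and IS_ENABLED() are existing kernel symbols, while the helper shape is illustrative:

  static unsigned long long mem_limit;

  static void init_mem_limit(void)
  {
          /* Architecture-dependent hard ceiling for the decompressed kernel. */
          if (IS_ENABLED(CONFIG_X86_64))
                  mem_limit = MAXMEM;               /* 64 TiB or 4 PiB, depending on paging levels */
          else
                  mem_limit = KERNEL_IMAGE_SIZE;    /* 512 MiB on 32-bit                           */
  }

  /* mem= / memmap= parsing may then only lower mem_limit, never raise it. */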
2020-07-31  x86/kaslr: Fix process_efi_entries comment  (Arvind Sankar)

Since commit:

  0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice")

process_efi_entries() will return true if we have an EFI memmap, not just if it contained EFI_MEMORY_MORE_RELIABLE regions.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-4-nivedita@alum.mit.edu
2020-07-31  x86/kaslr: Remove bogus warning and unnecessary goto  (Arvind Sankar)

Drop the warning on seeing "--" in handle_mem_options(). This will trigger whenever one of the memory options is present in the command line together with "--", but there's no problem if that is the case.

Replace goto with break.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-3-nivedita@alum.mit.edu
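A hedged sketch of the control flow this describes; the tokenizing loop and next_token() helper are placeholders (the kernel's own option walker differs), only the "--" handling is the point:

  while ((token = next_token(&args)) != NULL) {
          if (!strcmp(token, "--"))
                  break;                  /* was: warn + goto out */

          /* ... handle mem=, memmap=, hugepages ... */
  }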
2020-07-31  x86/kaslr: Make command line handling safer  (Arvind Sankar)

Handle the possibility that the command line is NULL.

Replace open-coded strlen with a function call.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-2-nivedita@alum.mit.edu
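A hedged fragment of the safer shape; get_cmd_line_ptr() is assumed from the surrounding boot code, and the rest is illustrative rather than the actual diff:

  char *args = (char *)get_cmd_line_ptr();
  size_t len;

  if (!args)
          return;                 /* no command line at all: nothing to parse */

  len = strlen(args);             /* instead of an open-coded counting loop   */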
2020-07-19  Merge tag 'x86-urgent-2020-07-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master  (Linus Torvalds)

Pull x86 fixes from Thomas Gleixner:
"A pile of fixes for x86:

 - Fix the I/O bitmap invalidation on XEN PV, which was overlooked in the recent ioperm/iopl rework. This caused the TSS and XEN's I/O bitmap to get out of sync.

 - Use the proper vectors for HYPERV.

 - Make disabling of stack protector for the entry code work with GCC builds which enable stack protector by default. Removing the option is not sufficient, it needs an explicit -fno-stack-protector to shut it off.

 - Mark check_user_regs() noinstr as it is called from noinstr code. The missing annotation causes it to be placed in the text section which makes it instrumentable.

 - Add the missing interrupt disable in exc_alignment_check()

 - Fixup a XEN_PV build dependency in the 32bit entry code

 - A few fixes to make the Clang integrated assembler happy

 - Move EFI stub build to the right place for out of tree builds

 - Make prepare_exit_to_usermode() static. It's no longer called from ASM code"

* tag 'x86-urgent-2020-07-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/boot: Don't add the EFI stub to targets
  x86/entry: Actually disable stack protector
  x86/ioperm: Fix io bitmap invalidation on Xen PV
  x86: math-emu: Fix up 'cmp' insn for clang ias
  x86/entry: Fix vectors to IDTENTRY_SYSVEC for CONFIG_HYPERV
  x86/entry: Add compatibility with IAS
  x86/entry/common: Make prepare_exit_to_usermode() static
  x86/entry: Mark check_user_regs() noinstr
  x86/traps: Disable interrupts in exc_aligment_check()
  x86/entry/32: Fix XEN_PV build dependency
2020-07-19  x86/boot: Don't add the EFI stub to targets  (Arvind Sankar)

vmlinux-objs-y is added to targets, which currently means that the EFI stub gets added to the targets as well. It shouldn't be added since it is built elsewhere.

This confuses Makefile.build which interprets the EFI stub as a target

  $(obj)/$(objtree)/drivers/firmware/efi/libstub/lib.a

and will create drivers/firmware/efi/libstub/ underneath arch/x86/boot/compressed, to hold this supposed target, if building out-of-tree. [0]

Fix this by pulling the stub out of vmlinux-objs-y into efi-obj-y.

[0] See scripts/Makefile.build near the end:
    # Create directories for object files if they do not exist

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lkml.kernel.org/r/20200715032631.1562882-1-nivedita@alum.mit.edu
2020-07-07  kbuild: remove cc-option test of -ffreestanding  (Masahiro Yamada)

Some Makefiles already pass -ffreestanding unconditionally. For example, arch/arm64/lib/Makefile, arch/x86/purgatory/Makefile. No problem report so far about hard-coding this option. So, we can assume all supported compilers know -ffreestanding.

I confirmed GCC 4.8 and Clang manuals document this option.

Get rid of cc-option from -ffreestanding.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
2020-07-07  kbuild: remove cc-option test of -fno-stack-protector  (Masahiro Yamada)

Some Makefiles already pass -fno-stack-protector unconditionally. For example, arch/arm64/kernel/vdso/Makefile, arch/x86/xen/Makefile. No problem report so far about hard-coding this option. So, we can assume all supported compilers know -fno-stack-protector.

GCC 4.8 and Clang support this option (https://godbolt.org/z/_HDGzN)

Get rid of cc-option from -fno-stack-protector.

Remove CONFIG_CC_HAS_STACKPROTECTOR_NONE, which is always 'y'.

Note: arch/mips/vdso/Makefile adds -fno-stack-protector twice, first unconditionally, and second conditionally. I removed the second one.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2020-06-17  efi/x86: Setup stack correctly for efi_pe_entry  (Arvind Sankar)

Commit 17054f492dfd ("efi/x86: Implement mixed mode boot without the handover protocol") introduced a new entry point for the EFI stub to be booted in mixed mode on 32-bit firmware.

When entered via efi32_pe_entry, control is first transferred to startup_32 to set up for the switch to long mode, and then the EFI stub proper is entered via efi_pe_entry. efi_pe_entry is an MS ABI function, and the ABI requires 32 bytes of shadow stack space to be allocated by the caller, as well as the stack being aligned to 8 mod 16 on entry.

Allocate 40 bytes on the stack before switching to 64-bit mode when calling efi_pe_entry to account for this.

For robustness, explicitly align boot_stack_end to 16 bytes. It is currently implicitly aligned since .bss is cacheline-size aligned, head_64.o is the first object file with a .bss section, and the heap and boot sizes are aligned.

Fixes: 17054f492dfd ("efi/x86: Implement mixed mode boot without the handover protocol")
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Link: https://lore.kernel.org/r/20200617131957.2507632-1-nivedita@alum.mit.edu
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-06-11  Rebase locking/kcsan to locking/urgent  (Thomas Gleixner)

Merge the state of the locking kcsan branch before the read/write_once() and the atomics modifications got merged.

Squash the fallout of the rebase on top of the read/write once and atomic fallback work into the merge. The history of the original branch is preserved in tag locking-kcsan-2020-06-02.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2020-06-09  mm: reorder includes after introduction of linux/pgtable.h  (Mike Rapoport)

The replacement of <asm/pgtable.h> with <linux/pgtable.h> made the include of the latter in the middle of asm includes. Fix this up with the aid of the below script and manual adjustments here and there.

  import sys
  import re

  if len(sys.argv) is not 3:
      print "USAGE: %s <file> <header>" % (sys.argv[0])
      sys.exit(1)

  hdr_to_move="#include <linux/%s>" % sys.argv[2]
  moved = False
  in_hdrs = False

  with open(sys.argv[1], "r") as f:
      lines = f.readlines()
      for _line in lines:
          line = _line.rstrip('\n')
          if line == hdr_to_move:
              continue
          if line.startswith("#include <linux/"):
              in_hdrs = True
          elif not moved and in_hdrs:
              moved = True
              print hdr_to_move
          print line

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-09  mm: introduce include/linux/pgtable.h  (Mike Rapoport)

The include/linux/pgtable.h is going to be the home of generic page table manipulation functions.

Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-01  Merge tag 'x86-build-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull x86 build updates from Ingo Molnar:
"Misc dependency fixes, plus a documentation update about memory protection keys support"

* tag 'x86-build-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/Kconfig: Update config and kernel doc for MPK feature on AMD
  x86/boot: Discard .discard.unreachable for arch/x86/boot/compressed/vmlinux
  x86/boot/build: Add phony targets in arch/x86/boot/Makefile to PHONY
  x86/boot/build: Make 'make bzlilo' not depend on vmlinux or $(obj)/bzImage
  x86/boot/build: Add cpustr.h to targets and remove clean-files
2020-06-01  Merge tag 'x86-boot-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull x86 boot updates from Ingo Molnar:
"Misc updates:

 - Add the initrdmem= boot option to specify an initrd embedded in RAM (flash most likely)

 - Sanitize the CS value earlier during boot, which also fixes SEV-ES

 - Various fixes and smaller cleanups"

* tag 'x86-boot-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/boot: Correct relocation destination on old linkers
  x86/boot/compressed/64: Switch to __KERNEL_CS after GDT is loaded
  x86/boot: Fix -Wint-to-pointer-cast build warning
  x86/boot: Add kstrtoul() from lib/
  x86/tboot: Mark tboot static
  x86/setup: Add an initrdmem= option to specify initrd physical address
2020-05-24  efi/x86: Drop the special GDT for the EFI thunk  (Arvind Sankar)

Instead of using efi_gdt64 to switch back to 64-bit mode and then switching to the real boot-time GDT, just switch to the boot-time GDT directly. The two GDTs are identical other than efi_gdt64 not including the 32-bit code segment.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Link: https://lore.kernel.org/r/20200523221513.1642948-1-nivedita@alum.mit.edu
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-22  x86/boot: Discard .discard.unreachable for arch/x86/boot/compressed/vmlinux  (Fangrui Song)

With commit ce5e3f909fc0 ("efi/printf: Add 64-bit and 8-bit integer support") arch/x86/boot/compressed/vmlinux may have an undesired .discard.unreachable section coming from drivers/firmware/efi/libstub/vsprintf.stub.o. That section gets generated from unreachable() annotations when CONFIG_STACK_VALIDATION is enabled.

.discard.unreachable contains an R_X86_64_PC32 relocation which will be warned about by LLD: a non-SHF_ALLOC section (.discard.unreachable) is not part of the memory image, thus conceptually the distance between a non-SHF_ALLOC and a SHF_ALLOC is not a constant which can be resolved at link time:

  % ld.lld -m elf_x86_64 -T arch/x86/boot/compressed/vmlinux.lds ... -o arch/x86/boot/compressed/vmlinux
  ld.lld: warning: vsprintf.c:(.discard.unreachable+0x0): has non-ABS relocation R_X86_64_PC32 against symbol ''

Reuse the DISCARDS macro which includes .discard.* to drop .discard.unreachable.

[ bp: Massage and complete the commit message. ]

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Fangrui Song <maskray@google.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Arvind Sankar <nivedita@alum.mit.edu>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Link: https://lkml.kernel.org/r/20200520182010.242489-1-maskray@google.com
2020-05-19  x86/boot: Correct relocation destination on old linkers  (Arvind Sankar)

For the 32-bit kernel, as described in 6d92bc9d483a ("x86/build: Build compressed x86 kernels as PIE"), pre-2.26 binutils generates R_386_32 relocations in PIE mode. Since the startup code does not perform relocation, any reloc entry with R_386_32 will remain as 0 in the executing code.

Commit 974f221c84b0 ("x86/boot: Move compressed kernel to the end of the decompression buffer") added a new symbol _end but did not mark it hidden, which doesn't give the correct offset on older linkers. This causes the compressed kernel to be copied beyond the end of the decompression buffer, rather than flush against it. This region of memory may be reserved or already allocated for other purposes by the bootloader.

Mark _end as hidden to fix. This changes the relocation from R_386_32 to R_386_RELATIVE even on the pre-2.26 binutils.

For 64-bit, this is not strictly necessary, as the 64-bit kernel is only built as PIE if the linker supports -z noreloc-overflow, which implies binutils-2.27+, but for consistency, mark _end as hidden here too.

The below illustrates the before/after impact of the patch using binutils-2.25 and gcc-4.6.4 (locally compiled from source) and QEMU.

Disassembly before patch:

  48: 8b 86 60 02 00 00   mov 0x260(%esi),%eax
  4e: 2d 00 00 00 00      sub $0x0,%eax
      4f: R_386_32 _end

Disassembly after patch:

  48: 8b 86 60 02 00 00   mov 0x260(%esi),%eax
  4e: 2d 00 f0 76 00      sub $0x76f000,%eax
      4f: R_386_RELATIVE *ABS*

Dump from extract_kernel before patch:

  early console in extract_kernel
  input_data: 0x0207c098 <--- this is at output + init_size
  input_len: 0x0074fef1
  output: 0x01000000
  output_len: 0x00fa63d0
  kernel_total_size: 0x0107c000
  needed_size: 0x0107c000

Dump from extract_kernel after patch:

  early console in extract_kernel
  input_data: 0x0190d098 <--- this is at output + init_size - _end
  input_len: 0x0074fef1
  output: 0x01000000
  output_len: 0x00fa63d0
  kernel_total_size: 0x0107c000
  needed_size: 0x0107c000

Fixes: 974f221c84b0 ("x86/boot: Move compressed kernel to the end of the decompression buffer")
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200207214926.3564079-1-nivedita@alum.mit.edu
2020-05-04  x86/boot/compressed/64: Switch to __KERNEL_CS after GDT is loaded  (Joerg Roedel)

When the pre-decompression code loads its first GDT in startup_64(), it is still running on the CS value of the previous GDT. In the case of SEV-ES, this is the EFI GDT but it can be anything depending on what has loaded the kernel (boot loader, container runtime, etc.)

To make exception handling work (especially IRET) the CPU needs to switch to a CS value in the current GDT, so jump to __KERNEL_CS after the first GDT is loaded. This is also prudent as a general sanitization of CS to a known good value.

[ bp: Massage commit message. ]

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200428151725.31091-13-joro@8bytes.org
2020-05-04  x86/boot: Fix -Wint-to-pointer-cast build warning  (Vamshi K Sthambamkadi)

Fix this warning when building 32-bit with CONFIG_RANDOMIZE_BASE=y and CONFIG_MEMORY_HOTREMOVE=y:

  arch/x86/boot/compressed/acpi.c:316:9: warning: \
    cast to pointer from integer of different size [-Wint-to-pointer-cast]

Have get_cmdline_acpi_rsdp() return unsigned long which is the proper type to convert to a pointer of the respective width.

[ bp: Rewrite commit message, touch ups. ]

Signed-off-by: Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/1587645588-7130-3-git-send-email-vamshi.k.sthambamkadi@gmail.com
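A hedged, standalone illustration of the warning class and the shape of the fix (not the kernel's acpi.c code itself):

  #include <stdint.h>

  /* On a 32-bit target, casting a 64-bit integer to a pointer truncates and
   * triggers -Wint-to-pointer-cast; unsigned long always matches pointer width. */
  void *from_u64(uint64_t addr)        { return (void *)addr; }  /* warns when pointers are 32-bit */
  void *from_ulong(unsigned long addr) { return (void *)addr; }  /* clean on both widths           */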
2020-04-24  efi/x86: Remove __efistub_global and add relocation check  (Arvind Sankar)

Instead of using __efistub_global to force variables into the .data section, leave them in the .bss but pull the EFI stub's .bss section into .data in the linker script for the compressed kernel.

Add relocation checking for x86 as well to catch non-PC-relative relocations that require runtime processing, since the EFI stub does not do any runtime relocation processing. This will catch, for example, data relocations created by static initializers of pointers.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Link: https://lore.kernel.org/r/20200416151227.3360778-3-nivedita@alum.mit.edu
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-04-13  Merge tag 'v5.7-rc1' into locking/kcsan, to resolve conflicts and refresh  (Ingo Molnar)

Resolve these conflicts:

  arch/x86/Kconfig
  arch/x86/kernel/Makefile

Do a minor "evil merge" to move the KCSAN entry up a bit by a few lines in the Kconfig to reduce the probability of future conflicts.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-04-03  Merge tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx  (Linus Torvalds)

Pull SPDX updates from Greg KH:
"Here are three SPDX patches for 5.7-rc1.

 One fixes up the SPDX tag for a single driver, while the other two go through the tree and add SPDX tags for all of the .gitignore files as needed.

 Nothing too complex, but you will get a merge conflict with your current tree, that should be trivial to handle (one file modified by two things, one file deleted.)

 All three of these have been in linux-next for a while, with no reported issues other than the merge conflict"

* tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx:
  ASoC: MT6660: make spdxcheck.py happy
  .gitignore: add SPDX License Identifier
  .gitignore: remove too obvious comments
2020-03-31  Merge branch 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull x86 boot updates from Ingo Molnar:
"Misc cleanups and small enhancements all around the map"

* 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/boot/compressed: Fix debug_puthex() parameter type
  x86/setup: Fix static memory detection
  x86/vmlinux: Drop unneeded linker script discard of .eh_frame
  x86/*/Makefile: Use -fno-asynchronous-unwind-tables to suppress .eh_frame sections
  x86/boot/compressed: Remove .eh_frame section from bzImage
  x86/boot/compressed/64: Remove .bss/.pgtable from bzImage
  x86/boot/compressed/64: Use 32-bit (zero-extended) MOV for z_output_len
  x86/boot/compressed/64: Use LEA to initialize boot stack pointer
2020-03-28  x86/boot/compressed: Fix debug_puthex() parameter type  (Joerg Roedel)

In the CONFIG_X86_VERBOSE_BOOTUP=Y case, the debug_puthex() macro just turns into __puthex(), which takes 'unsigned long' as parameter.

But in the CONFIG_X86_VERBOSE_BOOTUP=N case, it is a function which takes 'unsigned char *', causing compile warnings when the function is used.

Fix the parameter type to get rid of the warnings.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200319091407.1481-11-joro@8bytes.org
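A hedged sketch of the two configurations as the text describes them (mirroring the description, not quoting the header itself):

  #ifdef CONFIG_X86_VERBOSE_BOOTUP
  # define debug_puthex(v)  __puthex(v)                 /* __puthex() takes unsigned long  */
  #else
  static inline void debug_puthex(unsigned long value)  /* was: unsigned char *, mismatched */
  { }
  #endif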
2020-03-25  .gitignore: add SPDX License Identifier  (Masahiro Yamada)

Add SPDX License Identifier to all .gitignore files.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-21  Merge branch 'x86/kdump' into locking/kcsan, to resolve conflicts  (Ingo Molnar)

Conflicts:
  arch/x86/purgatory/Makefile

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-08  efi/x86: Decompress at start of PE image load address  (Arvind Sankar)

When booted via PE loader, define image_offset to hold the offset of startup_32() from the start of the PE image, and use it as the start of the decompression buffer.

[ mingo: Fixed the grammar in the comments. ]

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200303221205.4048668-3-nivedita@alum.mit.edu
Link: https://lore.kernel.org/r/20200308080859.21568-17-ardb@kernel.org
2020-03-08  x86/boot/compressed/32: Save the output address instead of recalculating it  (Arvind Sankar)

In preparation for being able to decompress into a buffer starting at a different address than startup_32, save the calculated output address instead of recalculating it later.

We now keep track of three addresses:

  %edx: startup_32 as we were loaded by bootloader
  %ebx: new location of compressed kernel
  %ebp: start of decompression buffer

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200303221205.4048668-2-nivedita@alum.mit.edu
Link: https://lore.kernel.org/r/20200308080859.21568-16-ardb@kernel.org
2020-03-08  x86/boot: Use unsigned comparison for addresses  (Arvind Sankar)

The load address is compared with LOAD_PHYSICAL_ADDR using a signed comparison currently (using the jge instruction).

When loading a 64-bit kernel using the new efi32_pe_entry() entry point added by 97aa276579b2 ("efi/x86: Add true mixed mode entry point into .compat section") using QEMU with -m 3072, the firmware actually loads us above 2Gb, resulting in a very early crash.

Use the JAE instruction to perform an unsigned comparison instead, as physical addresses should be considered unsigned.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200301230436.2246909-6-nivedita@alum.mit.edu
Link: https://lore.kernel.org/r/20200308080859.21568-14-ardb@kernel.org
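A hedged, standalone C illustration of why the comparison must be unsigned; the addresses below are made-up examples, not values from the report:

  #include <stdio.h>

  int main(void)
  {
          unsigned int load_addr = 0xC0000000u;   /* e.g. image placed at 3 GiB by firmware */
          unsigned int min_addr  = 0x01000000u;   /* a LOAD_PHYSICAL_ADDR-style minimum     */

          /* An address above 2 GiB looks negative when treated as signed 32-bit
           * (jge), so the check wrongly fails; the unsigned form (jae) is right. */
          printf("signed:   %d\n", (int)load_addr >= (int)min_addr);  /* 0 - wrong   */
          printf("unsigned: %d\n", load_addr >= min_addr);            /* 1 - correct */
          return 0;
  }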
2020-03-08  efi/x86: Avoid using code32_start  (Arvind Sankar)

code32_start is meant for 16-bit real-mode bootloaders to inform the kernel where the 32-bit protected mode code starts. Nothing in the protected mode kernel except the EFI stub uses it.

efi_main() currently returns boot_params, with code32_start set inside it to tell efi_stub_entry() where startup_32 is located. Since it was invoked by efi_stub_entry() in the first place, boot_params is already known. Return the address of startup_32 instead.

This will allow a 64-bit kernel to live above 4Gb, for example, and it's cleaner as well.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200301230436.2246909-5-nivedita@alum.mit.edu
Link: https://lore.kernel.org/r/20200308080859.21568-13-ardb@kernel.org
2020-03-08  efi/x86: Make efi32_pe_entry() more readable  (Arvind Sankar)

Set up a proper frame pointer in efi32_pe_entry() so that it's easier to calculate offsets for arguments.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200301230436.2246909-4-nivedita@alum.mit.edu
Link: https://lore.kernel.org/r/20200308080859.21568-12-ardb@kernel.org
2020-03-08  efi/x86: Respect 32-bit ABI in efi32_pe_entry()  (Arvind Sankar)

verify_cpu() clobbers BX and DI. In case we have to return an error, we need to preserve them to respect the 32-bit calling convention.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200301230436.2246909-3-nivedita@alum.mit.edu
Link: https://lore.kernel.org/r/20200308080859.21568-11-ardb@kernel.org
2020-03-08  efi/x86: Annotate the LOADED_IMAGE_PROTOCOL_GUID with SYM_DATA  (Arvind Sankar)

Use SYM_DATA*() macros to annotate this constant, and explicitly align it to a 4-byte boundary. Use lower-case for hexadecimal data.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200301230436.2246909-2-nivedita@alum.mit.edu
Link: https://lore.kernel.org/r/20200308080859.21568-10-ardb@kernel.org
2020-02-29  x86/boot/compressed: Fix reloading of GDTR post-relocation  (Arvind Sankar)

The following commit:

  ef5a7b5eb13e ("efi/x86: Remove GDT setup from efi_main")

introduced GDT setup into the 32-bit kernel's startup_32, and reloads the GDTR after relocating the kernel for paranoia's sake.

A followup commit:

  32d009137a56 ("x86/boot: Reload GDTR after copying to the end of the buffer")

introduced a similar GDTR reload in the 64-bit kernel as well.

The GDTR is adjusted by (init_size - _end); however, this may not be the correct offset to apply if the kernel was loaded at a misaligned address or below LOAD_PHYSICAL_ADDR, as in that case the decompression buffer has an additional offset from the original load address.

This should never happen for a conformant bootloader, but we're being paranoid anyway, so just store the new GDT address in there instead of adding any offsets, which is simpler as well.

Fixes: ef5a7b5eb13e ("efi/x86: Remove GDT setup from efi_main")
Fixes: 32d009137a56 ("x86/boot: Reload GDTR after copying to the end of the buffer")
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: linux-efi@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: https://lore.kernel.org/r/20200226230031.3011645-2-nivedita@alum.mit.edu
2020-02-26  Merge tag 'efi-next' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi into efi/core  (Ingo Molnar)

Pull EFI updates for v5.7 from Ard Biesheuvel:

This time, the set of changes for the EFI subsystem is much larger than usual. The main reasons are:

 - Get things cleaned up before EFI support for RISC-V arrives, which will increase the size of the validation matrix, and therefore the threshold to making drastic changes,

 - After years of defunct maintainership, the GRUB project has finally started to consider changes from the distros regarding UEFI boot, some of which are highly specific to the way x86 does UEFI secure boot and measured boot, based on knowledge of both shim internals and the layout of bootparams and the x86 setup header. Having this maintenance burden on other architectures (which don't need shim in the first place) is hard to justify, so instead, we are introducing a generic Linux/UEFI boot protocol.

Summary of changes:

 - Boot time GDT handling changes (Arvind)

 - Simplify handling of EFI properties table on arm64

 - Generic EFI stub cleanups, to improve command line handling, file I/O, memory allocation, etc.

 - Introduce a generic initrd loading method based on calling back into the firmware, instead of relying on the x86 EFI handover protocol or device tree.

 - Introduce a mixed mode boot method that does not rely on the x86 EFI handover protocol either, and could potentially be adopted by other architectures (if another one ever surfaces where one execution mode is a superset of another)

 - Clean up the contents of struct efi, and move out everything that doesn't need to be stored there.

 - Incorporate support for UEFI spec v2.8A changes that permit firmware implementations to return EFI_UNSUPPORTED from UEFI runtime services at OS runtime, and expose a mask of which ones are supported or unsupported via a configuration table.

 - Various documentation updates and minor code cleanups (Heinrich)

 - Partial fix for the lack of by-VA cache maintenance in the decompressor on 32-bit ARM. Note that these patches were deliberately put at the beginning so they can be used as a stable branch that will be shared with a PR containing the complete fix, which I will send to the ARM tree.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-02-25  x86/vmlinux: Drop unneeded linker script discard of .eh_frame  (Arvind Sankar)

Now that .eh_frame sections for the files in setup.elf and realmode.elf are not generated anymore, the linker scripts don't need the special output section name /DISCARD/ any more.

Remove the one in the main kernel linker script as well, since there are no .eh_frame sections already, and fix up a comment referencing .eh_frame.

Update the comment in asm/dwarf2.h referring to .eh_frame so it continues to make sense, as well as being more specific.

[ bp: Touch up commit message. ]

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://lkml.kernel.org/r/20200224232129.597160-3-nivedita@alum.mit.edu
2020-02-25  x86/*/Makefile: Use -fno-asynchronous-unwind-tables to suppress .eh_frame sections  (Arvind Sankar)

While discussing a patch to discard .eh_frame from the compressed vmlinux using the linker script, Fangrui Song pointed out [1] that these sections shouldn't exist in the first place because arch/x86/Makefile uses -fno-asynchronous-unwind-tables.

It turns out this is because the Makefiles used to build the compressed kernel redefine KBUILD_CFLAGS, dropping this flag.

Add the flag to the Makefile for the compressed kernel, as well as the EFI stub Makefile to fix this.

Also add the flag to boot/Makefile and realmode/rm/Makefile so that the kernel's boot code (boot/setup.elf) and realmode trampoline (realmode/rm/realmode.elf) won't be compiled with .eh_frame sections, since their linker scripts also just discard them.

[1] https://lore.kernel.org/lkml/20200222185806.ywnqhfqmy67akfsa@google.com/

Suggested-by: Fangrui Song <maskray@google.com>
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://lkml.kernel.org/r/20200224232129.597160-2-nivedita@alum.mit.edu
2020-02-24  x86/boot/compressed: Remove .eh_frame section from bzImage  (Arvind Sankar)

Discarding unnecessary sections with "*(*)" (see thread at Link: below) works fine with the bfd linker but fails with lld:

  $ make -j$(nproc) -s CC=clang LD=ld.lld O=out.x86_64 distclean defconfig bzImage
  ld.lld: error: discarding .shstrtab section is not allowed

lld tries to also discard essential sections like .shstrtab, .symtab and .strtab, which results in the link failing since .shstrtab is required by the ELF specification: the e_shstrndx field in the ELF header is the index of .shstrtab, and each section in the section table is required to have an sh_name that points into the .shstrtab.

.symtab and .strtab are also necessary to generate the zoffset.h file for the bzImage header.

Since the only sizeable section that can be discarded is .eh_frame, restrict the discard to only .eh_frame to be safe.

[ bp: Flesh out commit message and replace offending commit with this one. ]

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://lkml.kernel.org/r/20200109150218.16544-2-nivedita@alum.mit.edu
2020-02-23  efi/x86: Implement mixed mode boot without the handover protocol  (Ard Biesheuvel)

Add support for booting 64-bit x86 kernels from 32-bit firmware running on 64-bit capable CPUs without requiring the bootloader to implement the EFI handover protocol, allocate the setup block, etc., all of which can be done by the stub itself, using code that already exists.

Instead, create an ordinary EFI application entrypoint but implemented in 32-bit code [so that it can be invoked by 32-bit firmware], and stash the address of this 32-bit entrypoint in the .compat section where the bootloader can find it.

Note that we use the setup block embedded in the binary to go through startup_32(), but it gets reallocated and copied in efi_pe_entry(), using the same code that runs when the x86 kernel is booted in EFI mode from native firmware.

This requires the loaded image protocol to be installed on the kernel image's EFI handle, and point to the kernel image itself and not to its loader. This, in turn, requires the bootloader to use the LoadImage() boot service to load the 64-bit image from 32-bit firmware, which is in fact supported by firmware based on EDK2. (Only StartImage() will fail, and instead, the newly added entrypoint needs to be invoked.)

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-02-23  efi/libstub/x86: Incorporate eboot.c into libstub  (Ard Biesheuvel)

Most of the EFI stub source files of all architectures reside under drivers/firmware/efi/libstub, where they share a Makefile with special CFLAGS and an include file with declarations that are only relevant for stub code.

Currently, we carry a lot of stub specific stuff in linux/efi.h only because eboot.c in arch/x86 needs them as well. So let's move eboot.c into libstub/, and move the contents of eboot.h that we still care about into efistub.h.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-02-22  efi/libstub/x86: Avoid overflowing code32_start on PE entry  (Ard Biesheuvel)

When using the native PE entry point (as opposed to the EFI handover protocol entry point that is used more widely), we set code32_start, which is a 32-bit wide field, to the effective symbol address of startup_32, which could overflow given that the EFI loader may have located the running image anywhere in memory, and we haven't reached the point yet where we relocate ourselves.

Since we relocate ourselves if code32_start != pref_address, this isn't likely to lead to problems in practice, given how unlikely it is that the truncated effective address of startup_32 happens to equal pref_address. But it is better to defer the assignment of code32_start to after the relocation, when it is guaranteed to fit.

While at it, move the call to efi_relocate_kernel() to an earlier stage so it is more likely that our preferred offset in memory has not been occupied by other memory allocations done in the meantime.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>