2015-10-21  arm64/capabilities: Make use of system wide safe value  (Suzuki K. Poulose)
Now that we can reliably read the system wide safe value for a feature register, use that to compute the system capability. This patch also replaces the 'feature-register-specific' methods with a generic routine to check the capability. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
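A rough illustration of such a generic routine (the struct and helper names below are made up for illustration, not the kernel's exact API): extract one signed ID register field from the system-wide safe value and compare it against a required minimum.

    #include <stdbool.h>
    #include <stdint.h>

    /* Sign-extend a 4-bit ID register field that starts at bit 'shift'. */
    static int64_t feature_field(uint64_t reg, unsigned int shift)
    {
        return (int64_t)(reg << (64 - 4 - shift)) >> (64 - 4);
    }

    /* Hypothetical capability descriptor. */
    struct capability {
        unsigned int field_shift;
        int64_t min_field_value;
    };

    /* The capability is present when the field in the system-wide safe
     * register value is at least the required minimum. */
    static bool has_capability(const struct capability *cap, uint64_t safe_reg)
    {
        return feature_field(safe_reg, cap->field_shift) >= cap->min_field_value;
    }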
2015-10-21  arm64: Delay cpu feature capability checks  (Suzuki K. Poulose)
At the moment we run through the arm64_features capability list for each CPU and set a capability if any one of the CPUs supports it. This could be problematic in a heterogeneous system with differing capabilities. Delay the CPU feature checks until all the enabled CPUs are up (i.e., smp_cpus_done()), so that we can make better decisions based on the overall system capability. Once we decide and advertise the capabilities, the alternatives can be applied. From this state, we cannot roll back a feature to disabled based on the values from a newly hotplugged CPU, due to the runtime patching and other reasons. So, for all new CPUs, we need to make sure that they have the established system capabilities; failing that, we bring the CPU down, preventing it from coming online. Once the capabilities are decided, any new CPU booting up goes through verification to ensure that it has all the enabled capabilities, and also invokes the respective enable() method on the CPU.
The CPU errata checks are not delayed and are still executed per CPU to detect the respective capabilities. If we ever come across a non-errata capability that needs to be checked on each CPU, we could introduce it via a new capability table (or introduce a flag), which can be processed per CPU. The next patch will make the feature checks use the system wide safe value of a feature register.
NOTE: The enable() method associated with a capability is scheduled on all the CPUs (which is the only use case at the moment). If we need a different type of enable() which only needs to be run once on any CPU, we should be able to handle that when needed.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: static variable and coding style fixes]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
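A rough sketch of the late-CPU verification flow described above (all names here are illustrative, not the exact kernel symbols):

    /* Called on each CPU that comes up after the system capabilities
     * have been finalised. */
    static void verify_local_cpu_capabilities(void)
    {
        const struct capability *cap;

        for (cap = arm64_features; cap->matches; cap++) {
            if (!system_has_cap(cap))
                continue;               /* not advertised system-wide */
            if (!cap->matches(cap))
                cpu_park(cap);          /* this CPU lacks a required feature */
            else if (cap->enable)
                cap->enable(NULL);      /* apply the per-CPU setup */
        }
    }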
2015-10-21  arm64: Refactor check_cpu_capabilities  (Suzuki K. Poulose)
check_cpu_capabilities runs through a given list of caps, checks if the system has each cap, updates the system capability bitmap and also runs any enable() methods associated with them. All of this is not quite obvious from the name 'check'. This patch splits check_cpu_capabilities into two parts:
1) update_cpu_capabilities => runs through the given list and updates the system wide capability map.
2) enable_cpu_capabilities => runs through the given list and invokes enable() (if any) for the caps enabled on the system.
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Cleanup mixed endian support detection  (Suzuki K. Poulose)
Make use of the system wide safe register to decide the support for mixed endian. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Read system wide CPUID value  (Suzuki K. Poulose)
Add an API for reading the safe CPUID value across the system from the new infrastructure. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Consolidate CPU Sanity check to CPU Feature infrastructure  (Suzuki K. Poulose)
This patch consolidates the CPU sanity check into the new infrastructure. Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Keep track of CPU feature registers  (Suzuki K. Poulose)
This patch adds an infrastructure to keep track of the CPU feature registers on the system. For each register, the infrastructure keeps track of the system wide safe value of the feature bits. It also tracks which fields of a register should be matched strictly across all the CPUs on the system for the SANITY check infrastructure.
The feature bits are classified into the following three types, depending on the implication of the possible values. This information is used to decide the safe value for a feature.
LOWER_SAFE  - The smaller value is safer
HIGHER_SAFE - The bigger value is safer
EXACT       - We can't decide between the two, so a predefined safe_value is used.
This infrastructure will later be used to make better decisions for:
- Kernel features (e.g. KVM, Debug)
- SANITY Check
- CPU capability
- ELF HWCAP
- Exposing CPU Feature register to userspace.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: whitespace fix]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
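As an illustration of how the per-field classification drives the safe value (sketch only; the enum and helper names are invented here):

    #include <stdint.h>

    enum ftr_type { FTR_LOWER_SAFE, FTR_HIGHER_SAFE, FTR_EXACT };

    /* Combine the current system-wide safe value of one field with the
     * value reported by a newly seen CPU. */
    static int64_t update_safe_field(enum ftr_type type, int64_t cur_safe,
                                     int64_t new_val, int64_t predefined_safe)
    {
        switch (type) {
        case FTR_LOWER_SAFE:    /* the smaller value is safer */
            return new_val < cur_safe ? new_val : cur_safe;
        case FTR_HIGHER_SAFE:   /* the bigger value is safer */
            return new_val > cur_safe ? new_val : cur_safe;
        case FTR_EXACT:         /* values must match exactly */
        default:
            return new_val == cur_safe ? cur_safe : predefined_safe;
        }
    }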
2015-10-21  arm64: Handle width of a cpuid feature  (Suzuki K. Poulose)
Introduce a helper to extract a cpuid feature field of any given width. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
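A minimal, self-contained sketch of such a width-parameterised extractor (the in-kernel helper may differ in name and signature):

    #include <stdint.h>

    /* Extract the 'width'-bit field starting at bit 'shift' from an ID
     * register value, sign-extending it (ID register fields are signed). */
    static int64_t cpuid_feature_extract(uint64_t reg, unsigned int shift,
                                         unsigned int width)
    {
        return (int64_t)(reg << (64 - width - shift)) >> (64 - width);
    }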
2015-10-21  arm64: Move /proc/cpuinfo handling code  (Suzuki K. Poulose)
This patch moves the /proc/cpuinfo handling code: arch/arm64/kernel/{setup.c to cpuinfo.c} No functional changes Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Move mixed endian support detection  (Suzuki K. Poulose)
Move the mixed endian support detection code to cpufeature.c from cpuinfo.c. This also moves the update_cpu_features() used by mixed endian detection code, which will get more functionality. Also moves the ID register field shifts to asm/sysreg.h, where all the useful definitions will end up in later patches. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Move cpu feature detection code  (Suzuki K. Poulose)
This patch moves the CPU feature detection code from arch/arm64/kernel/{setup.c to cpufeature.c} The plan is to consolidate all the CPU feature handling in cpufeature.c. Apart from changing pr_fmt from "alternatives" to "cpu features", there are no functional changes. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Delay cpuinfo_store_boot_cpu  (Suzuki K. Poulose)
At the moment the boot CPU stores the cpuinfo long before the PERCPU areas are initialised by the kernel. This could be problematic as the non-boot CPU data structures might get copied with the data from the boot CPU, giving us no chance to detect if a particular CPU updated its cpuinfo. This patch delays the boot cpu store to smp_prepare_boot_cpu(). Also kills the setup_processor() which no longer does meaningful work. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Delay ELF HWCAP initialisation until all CPUs are up  (Suzuki K. Poulose)
Delay the ELF HWCAP initialisation until all the (enabled) CPUs are up, i.e., smp_cpus_done(). This is in preparation for detecting the common features across the CPUs and creating a consistent ELF HWCAP for the system. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21  arm64: Make the CPU information more clear  (Suzuki K. Poulose)
At early boot, we print the CPU version/revision. On a heterogeneous system, we could have different types of CPUs, so print the CPU info for all active CPUs. Secondary CPUs print the message only when they come online. Also, remove the redundant 'revision' information, which doesn't make any sense without the 'variant' field. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-20  arm64: Make 36-bit VA depend on EXPERT  (Catalin Marinas)
Commit 215399392fe4 (arm64: 36 bit VA) introduced 36-bit VA support for the arm64 kernel when the 16KB page configuration is enabled. While this is a valid hardware configuration, it's not something we want to encourage since it reduces the memory (and I/O) range that the kernel can access. Make this depend on EXPERT to avoid complaints of Linux not mapping the whole RAM, especially on platforms following the ARM recommended memory map. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Synchronise dump_backtrace() with perf callchain  (Jungseok Lee)
Unlike the perf callchain, which relies on walk_stackframe(), dump_backtrace() has its own backtrace logic. A major difference between them is the moment a symbol is recorded. Perf writes down a symbol *before* calling unwind_frame(), but dump_backtrace() prints it out *after* unwind_frame(). As a result, the last valid symbol cannot be hooked in the dump_backtrace() case. This patch addresses the issue by synchronising dump_backtrace() with the perf callchain. A simple test and its results are as follows:
- crash trigger
$ sudo echo c > /proc/sysrq-trigger
- current status
Call trace:
[<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
[<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
[<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
[<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
[<fffffe00001f2638>] __vfs_write+0x44/0x104
[<fffffe00001f2e60>] vfs_write+0x98/0x1a8
[<fffffe00001f3730>] SyS_write+0x50/0xb0
- with this change
Call trace:
[<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
[<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
[<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
[<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
[<fffffe00001f2638>] __vfs_write+0x44/0x104
[<fffffe00001f2e60>] vfs_write+0x98/0x1a8
[<fffffe00001f3730>] SyS_write+0x50/0xb0
[<fffffe00000939ec>] el0_svc_naked+0x20/0x28
Note that this patch does not cover the case where the MMU is disabled. The last stack frame of swapper, for example, has a PC in the form of a physical address. Unfortunately, a simple conversion using phys_to_virt() cannot cover all scenarios, since the PC is retrieved from LR - 4, not LR. It is a big tradeoff to change both head.S and unwind_frame() for only a few symbols in *.S, so this hunk does not take care of that case.
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: add cpu_idle tracepoints to arch_cpu_idle  (Jisheng Zhang)
Currently, if cpuidle is disabled or not supported, powertop reports zero wakeups and zero events. This is because the cpu_idle tracepoints are missing. This patch makes the cpu_idle tracepoints available even if cpuidle is disabled or not supported. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
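A hedged sketch of what the resulting idle routine looks like (the exact body in the patch may differ slightly):

    void arch_cpu_idle(void)
    {
        /* Emit the same tracepoints whether or not cpuidle is in use. */
        trace_cpu_idle_rcuidle(1, smp_processor_id());
        cpu_do_idle();
        local_irq_enable();
        trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
    }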
2015-10-19  arm64: 36 bit VA  (Suzuki K. Poulose)
36bit VA lets us use 2 level page tables while limiting the available address space to 64GB. Cc: Will Deacon <will.deacon@arm.com> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Add 16K page size support  (Suzuki K. Poulose)
This patch turns on 16K page support in the kernel. We support 48bit VA (4 levels of page tables) and 47bit VA (3 levels of page tables). With 16K pages we can map 128 entries using the contiguous bit hint at level 3, covering 2M with a single TLB entry. TODO: 16K supports 32 contiguous entries at level 2 to get us 1G (which is not yet supported by the infrastructure); that should be a separate patch altogether. Cc: Will Deacon <will.deacon@arm.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
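A quick sanity check of the 16K geometry (illustrative arithmetic, not part of the patch): each 16K table holds 16K / 8 = 2048 entries, i.e. 11 bits per level, on top of a 14-bit page offset.

    _Static_assert(14 + 3 * 11 == 47, "47-bit VA fits in 3 levels");
    _Static_assert(14 + 3 * 11 + 1 == 48, "48-bit VA needs a 4th (2-entry) level");
    _Static_assert(128 * 16 * 1024 == 2 * 1024 * 1024,
                   "128 contiguous level-3 entries cover 2M");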
2015-10-19  arm64: Add page size to the kernel image header  (Ard Biesheuvel)
This patch adds the page size to the arm64 kernel image header so that one can infer the page size used by the kernel. This will be helpful for diagnosing failures to boot the kernel with a page size not supported by the CPU. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Check for selected granule support  (Suzuki K. Poulose)
Ensure that the selected page size is supported by the CPU(s); if it is not, park the CPU. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
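A C-flavoured sketch of the check (the real test runs in assembly in head.S before the MMU is enabled; the ID_AA64MMFR0_EL1 TGran field offsets shown here are for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    static bool granule_supported(uint64_t mmfr0)
    {
    #if defined(CONFIG_ARM64_16K_PAGES)
        return ((mmfr0 >> 20) & 0xf) == 0x1;  /* TGran16: 1 = supported */
    #elif defined(CONFIG_ARM64_64K_PAGES)
        return ((mmfr0 >> 24) & 0xf) != 0xf;  /* TGran64: 0xf = not supported */
    #else
        return ((mmfr0 >> 28) & 0xf) != 0xf;  /* TGran4: 0xf = not supported */
    #endif
    }
    /* If this fails, the CPU parks itself instead of continuing the boot. */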
2015-10-19  arm64: Kconfig: Fix help text about AArch32 support with 64K pages  (Suzuki K. Poulose)
Update the help text for ARM64_64K_PAGES to reflect the reality about AArch32 support. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Simplify NR_FIX_BTMAPS calculation  (Mark Rutland)
We choose NR_FIX_BTMAPS such that each slot (NR_FIX_BTMAPS * PAGE_SIZE) can address 256K. Use division to derive NR_FIX_BTMAPS rather than defining it for each page size. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
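The gist of the simplification, as a sketch (the exact spelling of the size constant upstream may differ):

    /* One fixmap slot always covers 256K, whatever the page size. */
    #define NR_FIX_BTMAPS    (SZ_256K / PAGE_SIZE)   /* 64 for 4K, 16 for 16K, 4 for 64K */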
2015-10-19  arm64: Clean config usages for page size  (Suzuki K. Poulose)
We use !CONFIG_ARM64_64K_PAGES for CONFIG_ARM64_4K_PAGES (and vice versa) in code. That has worked well so far, since we only had two options. Now, with the introduction of 16K, these cases will break. This patch cleans up the code to use the required CONFIG symbol expression, without the assumption that !64K => 4K (and vice versa). Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Handle 4 level page table for swapper  (Suzuki K. Poulose)
At the moment, we only support a maximum of 3 page table levels for swapper. With 48bit VA, 64K needs only 3 levels and 4K uses section mapping. Add support for a 4-level page table for swapper, needed by 16K pages. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Calculate size for idmap_pg_dir at compile time  (Suzuki K. Poulose)
Now that we can calculate the number of levels required for mapping a VA width, reserve the exact number of pages required to cover the idmap. The idmap should be able to handle the maximum physical address size supported. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Introduce helpers for page table levels  (Suzuki K. Poulose)
Introduce helpers for finding the number of page table levels required for a given VA width, and the shift for a particular page table level. Convert the existing users to the new helpers; more users to follow. Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
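An illustrative form of the levels helper (the in-tree macro name may differ): each table level resolves PAGE_SHIFT - 3 bits and the page offset takes PAGE_SHIFT bits, so the level count is ceil((va_bits - PAGE_SHIFT) / (PAGE_SHIFT - 3)), which folds to:

    #define ARM64_HW_PGTABLE_LEVELS(va_bits)  (((va_bits) - 4) / (PAGE_SHIFT - 3))

For example, this yields 3 levels for a 39-bit VA with 4K pages (PAGE_SHIFT = 12) and 4 levels for a 48-bit VA with 16K pages (PAGE_SHIFT = 14).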
2015-10-19  arm64: Handle section maps for swapper/idmap  (Suzuki K. Poulose)
We use section maps with 4K page size to create the swapper/idmaps. So far we have used !64K or 4K checks to handle the case where we use the section maps. This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to handle cases where we use section maps, instead of using the page size symbols. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19  arm64: Move swapper pagetable definitions  (Suzuki K. Poulose)
Move the kernel pagetable (both swapper and idmap) definitions from the generic asm/page.h to a new file, asm/kernel-pgtable.h. This is mostly a cosmetic change, to clean up asm/page.h and get rid of the arch specific details which are not needed by the generic code. Also renames the symbols to prevent conflicts, e.g. BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-16  arm64: debug: Fix typo in debug-monitors.c  (Yang Shi)
Fix handers to handlers. Signed-off-by: Yang Shi <yang.shi@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-16  arm64: AArch32 user space PC alignment exception  (Mark Salyzyn)
ARMv7 does not have a PC alignment exception. ARMv8 AArch32 user space, however, can produce a PC alignment exception. Add a handler so that we do not dump an unexpected stack trace in the logs. Signed-off-by: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-16  arm64: Minor coding style fixes for kc_offset_to_vaddr and kc_vaddr_to_offset  (Catalin Marinas)
These were introduced by commit 03875ad52fdd (arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13  Revert "arm64: ioremap: add ioremap_cache macro"  (Catalin Marinas)
This reverts commit 1b6d7f8742d5d46c478f10c9e57da18d049b116d. This patch would conflict with Dan Williams' "tree-wide convert to memremap()" series (ioremap_cache replaced by arch_memremap) Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13  arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro  (yalin wang)
This patch adds kc_offset_to_vaddr() and kc_vaddr_to_offset(). The default versions don't work on arm64 because some kernel addresses are below PAGE_OFFSET: module addresses and vmemmap addresses, for example, are all below PAGE_OFFSET. Signed-off-by: yalin wang <yalin.wang2010@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
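For illustration, such macros can be as simple as masking against the start of the kernel VA range (a sketch; the actual definitions added by the patch may differ):

    #define kc_vaddr_to_offset(v)    ((v) & ~VA_START)
    #define kc_offset_to_vaddr(o)    ((o) | VA_START)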
2015-10-13  arm64: ioremap: add ioremap_cache macro  (yalin wang)
Add an ioremap_cache macro, because some code tests whether this macro is defined and generates a generic version if it is not; memremap.c, for example, does this. Signed-off-by: yalin wang <yalin.wang2010@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13  arm64: kasan: fix issues reported by sparse  (Will Deacon)
Sparse reports some new issues introduced by the kasan patches:
arch/arm64/mm/kasan_init.c:91:13: warning: no previous prototype for 'kasan_early_init' [-Wmissing-prototypes]
void __init kasan_early_init(void)
^
arch/arm64/mm/kasan_init.c:91:13: warning: symbol 'kasan_early_init' was not declared. Should it be static? [sparse]
This patch resolves the problem by adding a prototype for kasan_early_init and marking the function as asmlinkage, since it's only called from head.S.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
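The gist of the fix, as a sketch: give the entry point a prototype visible to C and mark it asmlinkage, since it is only invoked from assembly.

    /* in a header such as asm/kasan.h (sketch) */
    asmlinkage void kasan_early_init(void);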
2015-10-12  Documentation/features/KASAN: arm64 supports KASAN now  (Andrey Ryabinin)
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  ARM64: kasan: print memory assignment  (Linus Walleij)
This prints out the virtual memory assigned to KASan in the boot crawl along with other memory assignments, if and only if KASan is activated. Example dmesg from the Juno Development board:
Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata, 2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
Virtual kernel memory layout:
    kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
    vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
    vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
              0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
    fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
    PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
    modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
    memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
      .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
      .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
      .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: add KASAN support  (Andrey Ryabinin)
This patch adds arch specific code for the kernel address sanitizer (see Documentation/kasan.txt).
1/8 of kernel addresses is reserved for shadow memory. There was no big enough hole for this, so virtual addresses for the shadow were stolen from the vmalloc area. At the early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a readonly zero shadow for some memory that KASan currently doesn't track (vmalloc). After mapping the physical memory, pages for shadow memory are allocated and mapped.
Functions like memset/memmove/memcpy do a lot of memory accesses. If a bad pointer is passed to one of these functions, it is important to catch this. The compiler's instrumentation cannot do this since these functions are written in assembly. KASan replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them. The original functions have aliases with a '__' prefix in the name, so we can call the non-instrumented variant if needed.
Some files are built without kasan instrumentation (e.g. mm/slub.c). In those files the original mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks.
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
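A hedged sketch of the shadow address calculation this relies on (the offset name below is illustrative): every 8 bytes of address space map to 1 shadow byte, hence the 1/8 reservation.

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
        /* 8 bytes of memory per shadow byte, rebased into the shadow region. */
        return (void *)(((unsigned long)addr >> 3) + KASAN_SHADOW_OFFSET);
    }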
2015-10-12  arm64: move PGD_SIZE definition to pgalloc.h  (Andrey Ryabinin)
This will be used by KASAN later. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: atomics: implement native {relaxed, acquire, release} atomics  (Will Deacon)
Commit 654672d4ba1a ("locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations") introduced a relaxed atomic API to Linux that maps nicely onto the arm64 memory model, including the new ARMv8.1 atomic instructions. This patch hooks the API up to our relaxed atomic instructions, rather than have them all expand to the full-barrier variants as they do currently. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
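A usage sketch (illustrative, not from the patch): a caller that only needs atomicity, not ordering, can pick a relaxed variant and let arm64 emit the unordered sequence instead of a full barrier.

    static atomic_t nr_events = ATOMIC_INIT(0);

    static int count_event(void)
    {
        /* No ordering required against surrounding accesses. */
        return atomic_add_return_relaxed(1, &nr_events);
    }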
2015-10-12  arm64/efi: isolate EFI stub from the kernel proper  (Ard Biesheuvel)
Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but actually, since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position dependent executable linked at a high virtual offset. So instead, separate the contents of libstub and its dependencies, by putting them into their own namespace by prefixing all of its symbols with __efistub. This way, we have tight control over what parts of the kernel proper are referenced by the stub. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: use ENDPIPROC() to annotate position independent assembler routines  (Ard Biesheuvel)
For more control over which functions are called with the MMU off or with the UEFI 1:1 mapping active, annotate some assembler routines as position independent. This is done by introducing ENDPIPROC(), which replaces the ENDPROC() declaration of those routines. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64/efi: remove /chosen/linux,uefi-stub-kern-ver DT property  (Ard Biesheuvel)
With the stub to kernel interface being promoted to a proper interface so that other agents than the stub can boot the kernel proper in EFI mode, we can remove the linux,uefi-stub-kern-ver field, considering that its original purpose was to prevent this from happening in the first place. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12  arm64: Fix missing #include in hw_breakpoint.c  (Catalin Marinas)
A prior commit, which detects the hw breakpoint ABI behaviour based on the target state, missed the asm/compat.h include, and the build fails with !CONFIG_COMPAT. Fixes: 8f48c0629049 ("arm64: hw_breakpoint: use target state to determine ABI behaviour") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-09  arm64: fix a migrating irq bug when hotplug cpu  (Yang Yingliang)
When a CPU is disabled, all IRQs will be migrated to another CPU. In some cases the new affinity differs and the old affinity needs to be updated; however, if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix it by using irq_do_set_affinity. Migrating interrupts is also a core code matter, so use the generic function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c to migrate interrupts. Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King - ARM Linux <linux@arm.linux.org.uk> Cc: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-09  Merge branch 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Catalin Marinas)
* 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Introduce generic irq migration for cpu hotunplug
2015-10-08  arm64: Mark kernel page ranges contiguous  (Jeremy Linton)
With 64k pages, the next larger segment size is 512M. The Linux kernel also uses different protection flags to cover its code and data. Because of this requirement, the vast majority of the kernel code and data structures end up being mapped with 64k pages instead of the larger pages common with a 4k page kernel. Recent ARM processors support a contiguous bit in the page tables which allows a single TLB entry to cover a range larger than a single PTE if that range is mapped into physically contiguous RAM. So, for the kernel it's a good idea to set this flag. Some basic micro benchmarks show it can significantly reduce the number of L1 dTLB refills. Add a boot option to enable/disable CONT marking, as well as fix a bug found by Steve Capper. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> [catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
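A rough sketch of the idea (not the patch's exact code): when a naturally aligned, physically contiguous run of kernel PTEs is created, every entry in the run gets the CONT bit so the TLB may cache the whole range as one entry; the run length is 16 entries with 4K pages (32 with 64K, 128 with 16K).

    static void mark_cont_range(pte_t *ptep, unsigned long pfn, pgprot_t prot)
    {
        int i;

        /* Every one of the CONT_PTES entries in the run must carry PTE_CONT. */
        for (i = 0; i < CONT_PTES; i++, ptep++, pfn++)
            set_pte(ptep, pfn_pte(pfn, __pgprot(pgprot_val(prot) | PTE_CONT)));
    }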
2015-10-08  arm64: Make the kernel page dump utility aware of the CONT bit  (Jeremy Linton)
The kernel page dump utility needs to be aware of the CONT bit before it will break up page ranges for display. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08  arm64: Default kernel pages should be contiguous  (Jeremy Linton)
The default page attributes for a PMD being broken should have the CONT bit set. Create a new definition for an early boot range of PTEs that are contiguous. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>