path: root/arch/arm/mm/proc-v7.S
2014-12-05  Merge branches 'fixes', 'misc', 'pm' and 'sa1100' into for-next  (Russell King)
2014-11-27  ARM: 8222/1: mvebu: enable strex backoff delay  (Thomas Petazzoni)
Under extremely rare conditions, in an MPCore node consisting of at least 3 CPUs, two CPUs trying to perform a STREX to data on the same shared cache line can enter a livelock situation. This patch enables the hardware mechanism that overcomes the bug. It fixes the incorrect setup of the STREX backoff delay bit, which was caused by a wrong description in the specification: enabling the STREX backoff delay mechanism is done by leaving the bit *cleared*, whereas the proc-v7.S code was setting it. [Thomas: adapt to latest mainline, slightly reword the commit log, add stable markers.] Fixes: de4901933f6d ("arm: mm: Add support for PJ4B cpu and init routines") Cc: <stable@vger.kernel.org> # v3.8+ Signed-off-by: Nadav Haklai <nadavh@marvell.com> Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-11-21  ARM: 8196/1: vfp: Workaround bad MVFR1 register on some Kraits  (Stephen Boyd)
Certain versions of the Krait processor don't report that they support the fused multiply accumulate instruction via the MVFR1 register despite the fact that they actually do. Unfortunately we use this register to identify support for VFPv4. Override the hwcap on all Krait processors to indicate support for VFPv4 to work around this. Tested-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-09-12  ARM: 8138/1: drop ISAR0 workaround for B15  (Brian Norris)
The Brahma-B15's ISAR0 correctly advertises UDIV/SDIV support in both ARM and Thumb2 modes (CPUID_EXT_ISAR0=02101110), so we don't need to manually apply this hwcap. The code in question actually predates the following commit, which made our hwcaps unnecessary: commit 8164f7af88d9ad3a757bd14f634b23997ee77f6b Author: Stephen Boyd <sboyd@codeaurora.org> Date: Mon Mar 18 19:44:15 2013 +0100 ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register Signed-off-by: Brian Norris <computersforpeace@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-24  ARM: 8110/1: do CPU-specific init for Broadcom Brahma15 cores  (Marc Carino)
Perform any CPU-specific initialization required on the Broadcom Brahma-15 core. Signed-off-by: Marc Carino <marc.ceeeee@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8103/1: save/restore Cortex-A9 CP15 registers on suspend/resume  (Shawn Guo)
The CP15 diagnostic register holds ARM errata bits on Cortex-A9, so it needs to be saved/restored on suspend/resume. Otherwise, the effectiveness of the errata workarounds is lost together with the diagnostic register bits across a suspend/resume cycle. The CP15 power control register of Cortex-A9 shares the same problem. The patch adds a couple of Cortex-A9 specific suspend/resume functions to save/restore these two CP15 registers across the suspend/resume cycle. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
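As a rough illustration only (register encodings as documented for the Cortex-A9; a sketch, not the literal hunk from the patch), the suspend side of such a hook looks roughly like:

    ENTRY(cpu_ca9mp_do_suspend)
            stmfd   sp!, {r4 - r5}
            mrc     p15, 0, r4, c15, c0, 1          @ read the A9 diagnostic register
            mrc     p15, 0, r5, c15, c0, 0          @ read the A9 power control register
            stmia   r0!, {r4 - r5}                  @ stash both in the suspend save area
            ldmfd   sp!, {r4 - r5}
            b       cpu_v7_do_suspend               @ then do the generic v7 save
    ENDPROC(cpu_ca9mp_do_suspend)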
2014-07-18  ARM: 8089/1: cpu_pj4b_suspend_size should base on cpu_v7_suspend_size  (Shawn Guo)
Since pj4b suspend/resume routines are implemented based on generic ARMv7 ones, instead of hard-coding cpu_pj4b_suspend_size, we should have it be cpu_v7_suspend_size plus pj4b specific bytes. Otherwise, if cpu_v7_suspend_size gets updated alone, the pj4b suspend/resume will likely be broken. While at it, fix the comments in cpu_pj4b_do_resume, as we're restoring CP15 registers rather than saving in there. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+  (Russell King)
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended to be used by the ARM architecture manual (section A.4.1.1). We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction. Rather than doing this piecemeal, and miss some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection. Reported-by: Will Deacon <will.deacon@arm.com> Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1 Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood Tested-by: Shawn Guo <shawn.guo@freescale.com> Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385 Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
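A minimal sketch of what such a "ret" macro can look like (this mirrors the usual assembler.h approach; exact details may differ):

            .irp    c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
            .macro  ret\c, reg
    #if __LINUX_ARM_ARCH__ < 6
            mov\c   pc, \reg                        @ pre-v6 cores have no bx
    #else
            .ifeqs  "\reg", "lr"
            bx\c    \reg                            @ preferred return on ARMv6+
            .else
            mov\c   pc, \reg                        @ keep mov pc for other registers
            .endif
    #endif
            .endm
            .endr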
2014-05-25  ARM: 8046/1: proc: add support for the Cortex-A17 processor  (Will Deacon)
Cortex-A17 has identical initialisation requirements to Cortex-A12, so hook it up in proc-v7.S in the same way. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-23  ARM: 8013/1: PJ4B: Add cpu_suspend/cpu_resume hooks for PJ4B  (Gregory CLEMENT)
PJ4B needs extra instructions for suspend and resume, so instead of using the armv7 version, this commit introduces specific versions for PJ4B. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04  Merge branches 'amba', 'fixes', 'misc', 'mmci', 'unstable/omap-dma' and 'unstable/sa11x0' into for-next  (Russell King)
2014-02-10  ARM: 7940/1: add support for the Cortex-A12 processor  (Jonathan Austin)
The A12 behaves as the A7/A15 does with respect to setting the SMP bit, and doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does. Note that as the ACTLR cannot (usually) be written from non-secure, it is the responsibility of the bootloader/firmware to set this bit per core - it is done here in Linux only as a last resort in case of bad firmware. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
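A hedged sketch of the last-resort ACTLR handling this hooks into (modelled on the existing shared A7/A15 path; the labels and register use here are illustrative, the CP15 encoding is the architectural ACTLR):

    __v7_ca7mp_setup:
    __v7_ca12mp_setup:
    __v7_ca15mp_setup:
            mov     r10, #0                         @ no extra CPU-specific SMP bits needed
    1:
    #ifdef CONFIG_SMP
            ALT_SMP(mrc     p15, 0, r0, c1, c0, 1)  @ read ACTLR
            ALT_UP(mov      r0, #(1 << 6))          @ pretend the bit is set on UP builds
            tst     r0, #(1 << 6)                   @ SMP/nAMP mode already enabled by firmware?
            orreq   r0, r0, #(1 << 6)               @ if not, try to set it...
            orreq   r0, r0, r10                     @ ...plus any CPU-specific SMP bits
            mcreq   p15, 0, r0, c1, c0, 1           @ ...as a last resort (may be ignored in NS)
    #endif
            b       __v7_setup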
2014-02-10  ARM: 7953/1: mm: ensure TLB invalidation is complete before enabling MMU  (Will Deacon)
During __v{6,7}_setup, we invalidate the TLBs since we are about to enable the MMU on return to head.S. Unfortunately, without a subsequent dsb instruction, the invalidation is not guaranteed to have completed by the time we write to the sctlr, potentially exposing us to junk/stale translations cached in the TLB. This patch reworks the init functions so that the dsb used to ensure completion of cache/predictor maintenance is also used to ensure completion of the TLB invalidation. Cc: <stable@vger.kernel.org> Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
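Schematically, the reworked end of the setup path looks something like this (a simplified sketch, not the exact diff):

            mov     r10, #0
            mcr     p15, 0, r10, c7, c5, 0          @ ICIALLU: invalidate I-cache
            mcr     p15, 0, r10, c7, c5, 6          @ BPIALL: invalidate branch predictor
            mcr     p15, 0, r10, c8, c7, 0          @ TLBIALL: invalidate I + D TLBs
            dsb                                     @ one barrier now also completes the TLB invalidation
            isb                                     @ ...before head.S writes the SCTLR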
2013-11-14  ARM: 7885/1: Save/Restore 64-bit TTBR registers on LPAE suspend/resume  (Mahesh Sivasubramanian)
LPAE enabled kernels use the 64-bit version of TTBR0 and TTBR1 registers. If we're running an LPAE kernel, fill the upper half of TTBR0 with 0 because we're setting it to the idmap here (the idmap is guaranteed to be < 4Gb) and fully restore TTBR1 instead of just restoring the lower 32 bits. Failure to do so can cause failures on resume from suspend when these registers are only half restored. Signed-off-by: Mahesh Sivasubramanian <msivasub@codeaurora.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
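A sketch of the LPAE-aware restore (64-bit MCRR writes; the register allocation here is illustrative, not the patch's):

    #ifdef CONFIG_ARM_LPAE
            mov     r2, #0                          @ idmap pgd sits below 4GB, so TTBR0 high word = 0
            mcrr    p15, 0, r1, r2, c2              @ 64-bit write of TTBR0 (r1 = idmap pgd)
            mcrr    p15, 1, r4, r5, c2              @ restore both halves of TTBR1 from saved state
    #else
            mcr     p15, 0, r1, c2, c0, 0           @ classic MMU: 32-bit TTBR0
            mcr     p15, 0, r4, c2, c0, 1           @ classic MMU: 32-bit TTBR1
    #endif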
2013-10-19  ARM: asm: Add ARM_BE8() assembly helper  (Ben Dooks)
Add the ARM_BE8() helper to wrap any code that should only be compiled when CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing places that do this to use it. Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
2013-09-05  Merge branches 'debug-choice', 'devel-stable' and 'misc' into for-linus  (Russell King)
2013-09-02  ARM: 7823/1: errata: workaround Cortex-A15 erratum 773022  (Will Deacon)
On Cortex-A15 CPUs up to and including r0p4, in certain rare sequences of code, the loop buffer may deliver incorrect instructions. This workaround disables the loop buffer to avoid the erratum. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
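The workaround boils down to setting the loop-buffer-disable bit in the auxiliary control register on affected revisions, roughly (revision check and bit position per the erratum description; a sketch rather than the exact hunk):

    #ifdef CONFIG_ARM_ERRATA_773022
            cmp     r6, #0x4                        @ only affected up to and including r0p4
            mrcle   p15, 0, r10, c1, c0, 1          @ read ACTLR
            orrle   r10, r10, #1 << 1               @ set the loop buffer disable bit
            mcrle   p15, 0, r10, c1, c0, 1          @ write ACTLR (secure-only on most parts)
    #endif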
2013-08-12  ARM: mm: use inner-shareable barriers for TLB and user cache operations  (Will Deacon)
System-wide barriers aren't required for situations where we only need to make visibility and ordering guarantees in the inner-shareable domain (i.e. we are not dealing with devices or potentially incoherent CPUs). This patch changes the v7 TLB operations, coherent_user_range and dcache_clean_area functions to use inner-shareable barriers. For cache maintenance, only the store access type is required to ensure completion. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
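For instance, the v7 dcache_clean_area loop only needs its stores ordered within the inner-shareable domain; a sketch of the resulting tail (the SMP early-return is elided here):

    ENTRY(cpu_v7_dcache_clean_area)
            dcache_line_size r2, r3
    1:      mcr     p15, 0, r0, c7, c10, 1          @ clean D entry by MVA to PoC
            add     r0, r0, r2
            subs    r1, r1, r2
            bhi     1b
            dsb     ishst                           @ inner-shareable, store-only (was a full dsb)
            mov     pc, lr
    ENDPROC(cpu_v7_dcache_clean_area)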
2013-07-22  ARM: 7784/1: mm: ensure SMP alternates assemble to exactly 4 bytes with Thumb-2  (Will Deacon)
Commit ae8a8b9553bd ("ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead") added early function returns for page table cache flushing operations on ARMv7 SMP CPUs. Unfortunately, when targeting Thumb-2, these `mov pc, lr' sequences assemble to 2 bytes which can lead to corruption of the instruction stream after code patching. This patch fixes the alternates to use wide (32-bit) instructions for Thumb-2, therefore ensuring that the patching code works correctly. Cc: <stable@vger.kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
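The shape of the fix, sketched for the dcache_clean_area case (assuming the W() wide-encoding and ALT_UP_B helpers; both alternates now occupy 4 bytes under Thumb-2):

    ENTRY(cpu_v7_dcache_clean_area)
            ALT_SMP(W(nop))                         @ MP extensions imply L1 PTW: nothing to clean
            ALT_UP_B(1f)                            @ UP: branch to the real clean loop
            mov     pc, lr                          @ SMP: early return, reached via the wide nop
    1:      dcache_line_size r2, r3                 @ ...clean loop continues as before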
2013-07-14  arm: delete __cpuinit/__CPUINIT usage from all ARM users  (Paul Gortmaker)
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. It also had two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text") that also get removed here. [1] https://lkml.org/lkml/2013/5/20/589 Cc: Russell King <linux@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-29  Merge branch 'devel-stable' into for-next  (Russell King)
Conflicts:
	arch/arm/Makefile
	arch/arm/include/asm/glue-proc.h
2013-06-24  ARM: 7773/1: PJ4B: Add support for errata 4742  (Gregory CLEMENT)
This commit fixes the regression on Armada 370 (the kernel hangs during boot) introduced by commit "ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead". When coming out of either a Wait for Interrupt (WFI) or a Wait for Event (WFE) IDLE state, a specific timing sensitivity exists between the retiring WFI/WFE instructions and the newly issued subsequent instructions. This sensitivity can result in a CPU hang scenario. The workaround is to insert either a Data Synchronization Barrier (DSB) or Data Memory Barrier (DMB) command immediately after the WFI/WFE instruction. This commit was based on the work of Lior Amsalem, but heavily modified to apply the errata fix dynamically according to the processor type, thanks to the suggestions of Russell King and Nicolas Pitre. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Willy Tarreau <w@1wt.eu> Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-17  ARM: 7754/1: Fix the CPU ID and the mask associated to the PJ4B  (Gregory CLEMENT)
This commit fixes the ID and mask for the PJ4B which was too restrictive and didn't match the CPU of the Armada 370 SoC. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-07  ARM: add Cortex-R7 Processor Info  (Jonathan Austin)
This patch adds processor info for ARM Ltd. Cortex-R7. The R7 has many similarities to the A9 and, though the ACTLR layout is not identical, the bits associated with cache-operations broadcasting and SMP modes are the same for the A9, A5 and R7 (though in the A-class processors the same bits toggle TLB-ops broadcasting as well as cache-ops). Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> CC: Catalin Marinas <catalin.marinas@arm.com> CC: Stephen Boyd <sboyd@codeaurora.org>
2013-06-07  ARM: suspend: fix CPU suspend code for !CONFIG_MMU configurations  (Will Deacon)
The ARM CPU suspend code can be selected even for a !CONFIG_MMU configuration. The resulting kernel will not compile and, even if it did, would access undefined co-processor registers when executing. This patch fixes the v6 and v7 CPU suspend code for the nommu case. Signed-off-by: Will Deacon <will.deacon@arm.com> Tested-by: Jonathan Austin <jonathan.austin@arm.com> CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> (commit_signer:1/3=33%) CC: Santosh Shilimkar <santosh.shilimkar@ti.com> (commit_signer:1/3=33%) CC: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
2013-05-02  Merge branches 'devel-stable', 'entry', 'fixes', 'mach-types', 'misc' and 'smp-hotplug' into for-linus  (Russell King)
2013-04-17  ARM: 7695/1: mvebu: Enable pj4b on LPAE compilations  (Gregory CLEMENT)
PJ4B CPUs are LPAE capable, so enable them on LPAE compilations. Signed-off-by: Lior Amsalem <alior@marvell.com> Tested-by: Franklin <flin@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-04-03  ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead  (Will Deacon)
Many ARMv7 cores have hardware page table walkers that can read the L1 cache. This is discoverable from the ID_MMFR3 register, although this can be expensive to access from the low-level set_pte functions and is a pain to cache, particularly with multi-cluster systems. A useful observation is that the multi-processing extensions for ARMv7 require coherent table walks, meaning that we can make use of ALT_SMP patching in proc-v7-* to patch away the cache flush safely for these cores. Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-22  ARM: 7678/1: Work around faulty ISAR0 register in some Krait CPUs  (Stepan Moskovchenko)
Some early versions of the Krait CPU design incorrectly indicate that they only support the UDIV and SDIV instructions in Thumb mode when they actually support them in ARM and Thumb mode. It seems that these CPUs follow the DDI0406B ARM ARM which has two possible values for the divide instructions field, instead of the DDI0406C document which has three possible values. Work around this problem by checking the MIDR against Krait CPUs with this faulty ISAR0 register and force the hwcaps to indicate support in both modes. [sboyd: Rewrote commit text to reflect real reasoning now that we autodetect udiv/sdiv] Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-22  ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register  (Stephen Boyd)
The ISAR0 register indicates support for the SDIV and UDIV instructions in both the Thumb and ARM instruction set. Read the register to detect the supported instructions and update the elf_hwcap mask as appropriate. This is better than adding more and more cpuid checks in proc-v7.S for each new cpu variant that supports these instructions. Acked-by: Will Deacon <will.deacon@arm.com> Cc: Stepan Moskovchenko <stepanm@codeaurora.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
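The field itself is trivial to read; a sketch of the detection (here r9 stands in for wherever the hwcap word is accumulated, and HWCAP_IDIVA/HWCAP_IDIVT are the existing hwcap constants):

            mrc     p15, 0, r0, c0, c2, 0           @ read ID_ISAR0
            ubfx    r0, r0, #24, #4                 @ extract the Divide_instrs field, bits [27:24]
            cmp     r0, #1
            orrhs   r9, r9, #HWCAP_IDIVT            @ >= 1: SDIV/UDIV available in Thumb
            cmp     r0, #2
            orrhs   r9, r9, #HWCAP_IDIVA            @ >= 2: SDIV/UDIV also available in ARM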
2013-01-06  ARM: 7614/1: mm: fix wrong branch from Cortex-A9 to PJ4b  (Haojian Zhuang)
If CONFIG_ARCH_MULTIPLATFORM & CONFIG_ARCH_MVEBU are both enabled, __v7_pj4b_setup is added between __v7_ca9mp_setup and __v7_setup, but no jump instruction is added. If the chip is a Cortex-A5/A9, it therefore also falls through __v7_pj4b_setup, which results in a system hang. Signed-off-by: Haojian Zhuang <haojian.zhuang@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-01-02  ARM: 7609/1: disable errata work-arounds which access secure registers  (Rob Herring)
In order to support secure and non-secure platforms in multi-platform kernels, errata work-arounds that access secure only registers need to be disabled. Make all the errata options that fit in this category depend on !CONFIG_ARCH_MULTIPLATFORM. This will effectively remove the errata options as platforms are converted over to multi-platform. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-12-14  Merge tag 'mvebu' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc  (Linus Torvalds)
Pull ARM SoC updates for Marvell mvebu/kirkwood from Olof Johansson: "This is a branch with updates for Marvell's mvebu/kirkwood platforms. They came in late-ish, and were heavily interdependent such that it didn't make sense to split them up across the cross-platform topic branches. So here they are (for the second release in a row) in a branch on their own."
* tag 'mvebu' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (88 commits)
  arm: l2x0: add aurora related properties to OF binding
  arm: mvebu: add Aurora L2 Cache Controller to the DT
  arm: mvebu: add L2 cache support
  dma: mv_xor: fix error handling path
  dma: mv_xor: fix error checking of irq_of_parse_and_map()
  dma: mv_xor: use request_irq() instead of devm_request_irq()
  dma: mv_xor: clear the window override control registers
  arm: mvebu: fix address decoding armada_cfg_base() function
  ARM: mvebu: update defconfig with I2C and RTC support
  ARM: mvebu: Add SATA support for OpenBlocks AX3-4
  ARM: mvebu: Add support for the RTC in OpenBlocks AX3-4
  ARM: mvebu: Add support for I2C on OpenBlocks AX3-4
  ARM: mvebu: Add support for I2C controllers in Armada 370/XP
  arm: mvebu: Add hardware I/O Coherency support
  arm: plat-orion: Add coherency attribute when setup mbus target
  arm: dma mapping: Export a dma ops function arm_dma_set_mask
  arm: mvebu: Add SMP support for Armada XP
  arm: mm: Add support for PJ4B cpu and init routines
  arm: mvebu: Add IPI support via doorbells
  arm: mvebu: Add initial support for power managmement service unit
  ...
2012-11-21  arm: mm: Add support for PJ4B cpu and init routines  (Gregory CLEMENT)
PJ4B is an implementation of the ARMv7 architecture (comparable to the Cortex-A9, for example) released by Marvell. This CPU is currently found in the Armada 370 and Armada XP SoCs. This patch provides support for the specific initialization of this CPU. Signed-off-by: Yehuda Yitschak <yehuday@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2012-10-18  ARM: 7553/1: proc-v7: Ensure correct instruction set after cpu_reset  (Dave Martin)
Because mov pc,<Rn> never switches instruction set when executed in Thumb code, Thumb-2 kernels will silently execute the target code after cpu_reset as Thumb code, even if the passed code pointer denotes ARM (bit 0 clear). This patch uses bx instead, ensuring the correct instruction set for the target code. Thumb code in the kernel is not supported prior to ARMv7, so other CPUs are not affected. Signed-off-by: Dave Martin <dave.martin@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
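For reference, a sketch of what the v7 reset path looks like with the interworking branch (bx preserves bit 0 of the target address, so the callee's instruction set is honoured):

    ENTRY(cpu_v7_reset)
            mrc     p15, 0, r1, c1, c0, 0           @ read SCTLR
            bic     r1, r1, #0x1                    @ clear the M bit
            mcr     p15, 0, r1, c1, c0, 0           @ MMU off
            isb
            bx      r0                              @ interworking branch to the passed code pointer
    ENDPROC(cpu_v7_reset)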
2012-09-25  ARM: mm: update __v7_setup() to the new LoUIS cache maintenance API  (Santosh Shilimkar)
The ARMv7 processor setup function __v7_setup() cleans and invalidates the CPU cache before enabling the MMU, to start the CPU with a clean local cache. But on ARMv7 architectures like Cortex-[A15/A8], this code will end up flushing the L2 caches (up to the Level of Coherency), which is undesirable and expensive. The setup functions are used in the CPU hotplug scenario too, and hence flushing all cache levels should be avoided. This patch replaces the cache flushing call with the newly introduced v7 dcache LoUIS API, so that only cache levels up to LoUIS are cleaned and invalidated when a processor executes __v7_setup, which is the expected behavior. For processors like A9 and A5, where the L2 cache is an outer one, the behavior should be unchanged. Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Shawn Guo <shawn.guo@linaro.org>
2012-04-15  ARM: 7384/1: ThumbEE: Disable userspace TEEHBR access for !CONFIG_ARM_THUMBEE  (Jonathan Austin)
Currently when ThumbEE is not enabled (!CONFIG_ARM_THUMBEE) the ThumbEE register states are not saved/restored at context switch. The default state of the ThumbEE Ctrl register (TEECR) allows userspace accesses to the ThumbEE Base Handler register (TEEHBR). This can cause unexpected behaviour when people use ThumbEE on !CONFIG_ARM_THUMBEE kernels, as well as allowing covert communication - eg between userspace tasks running inside chroot jails. This patch sets up TEECR in order to prevent user-space access to TEEHBR when !CONFIG_ARM_THUMBEE. In this case, tasks are sent SIGILL if they try to access TEEHBR. Cc: stable@vger.kernel.org Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
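A hedged sketch of the setup-time change (ThumbEE registers live in CP14; presence is probed via ID_PFR0 before they are touched; details may differ slightly from the actual hunk):

    #ifndef CONFIG_ARM_THUMBEE
            mrc     p15, 0, r0, c0, c1, 0           @ read ID_PFR0 for ThumbEE
            and     r0, r0, #(0xf << 12)            @ isolate the ThumbEE field
            teq     r0, #(1 << 12)                  @ is ThumbEE implemented?
            bne     1f
            mov     r5, #0
            mcr     p14, 6, r5, c1, c0, 0           @ initialise TEEHBR to 0
            mrc     p14, 6, r0, c0, c0, 0           @ read TEECR
            orr     r0, r0, #1                      @ set XED: userspace TEEHBR access now traps
            mcr     p14, 6, r0, c0, c0, 0           @ write TEECR
    1:
    #endif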
2012-02-27  ARM: 7345/1: errata: update workaround for A9 erratum #743622  (Will Deacon)
Erratum #743622 affects all r2 variants of the Cortex-A9 processor, so ensure that the workaround is applied regardless of the revision. Cc: <stable@vger.kernel.org> Reported-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-23  ARM: 7296/1: proc-v7.S: remove HARVARD_CACHE preprocessor guards  (Will Deacon)
On v7, we use the same cache maintenance instructions for data lines as for unified lines. This was not the case for v6, where HARVARD_CACHE was defined to indicate the L1 cache topology. This patch removes the erroneous compile-time check for HARVARD_CACHE in proc-v7.S, ensuring that we perform I-side invalidation at boot. Reported-and-Acked-by: Shawn Guo <shawn.guo@linaro.org> Cc: stable <stable@vger.kernel.org> Acked-by: Catalin Marinas <Catalin.Marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-23  ARM: 7295/1: cortex-a7: move proc_info out of !CONFIG_ARM_LPAE block  (Will Deacon)
The merging of commits 1b6ba46b ("ARM: LPAE: MMU setup for the 3-level page table format") and b4244738 ("ARM: 7202/1: Add Cortex-A7 proc info") during the merge window ended up putting the Cortex-A7 proc_info into a code block guarded by !CONFIG_ARM_LPAE. This makes Cortex-A7 platforms unbootable when LPAE is enabled. This patch moves the proc_info structure for Cortex-A7 outside of the guarded block. Cc: Pawel Moll <pawel.moll@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-05  Merge branch 'devel-stable' into for-linus  (Russell King)
Conflicts:
	arch/arm/kernel/setup.c
	arch/arm/mach-shmobile/board-kota2.c
2012-01-05  Merge branches 'fixes' and 'misc' into for-linus  (Russell King)
2011-12-23  ARM: 7197/1: errata: Remove SMP dependency for erratum 751472  (Dave Martin)
Activation conditions for a workaround should not be encoded in the workaround's direct dependencies if this makes otherwise reasonable configuration choices impossible. This patch uses the SMP/UP patching facilities instead to compile out the workaround if the configuration means that it is definitely not needed. This means that configs for buggy silicon can simply select ARM_ERRATA_751472, without preventing a UP kernel from being built or duplicating knowledge about when to activate the workaround. This seems the correct way to do things, because the erratum is a property of the silicon, irrespective of what the kernel config happens to be. Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-12-11  ARM: 7202/1: Add Cortex-A7 proc info  (Pawel Moll)
This patch adds processor info for ARM Ltd. Cortex-A7. A7 is architecturally identical to A15 so it shares the same SMP initialization code and hwcaps. Tested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Pawel Moll <pawel.moll@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-12-08  ARM: LPAE: MMU setup for the 3-level page table format  (Catalin Marinas)
This patch adds the MMU initialisation for the LPAE page table format. The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new proc-v7-3level.S file contains the TTB initialisation, context switch and PTE setting code with the LPAE. The TTBRx split is based on the PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit mappings (supersections) and a few other memory types in mmu.c are conditionally compiled. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2011-12-08  ARM: LPAE: Factor out classic-MMU specific code into proc-v7-2level.S  (Catalin Marinas)
This patch modifies the proc-v7.S file so that it only contains code shared between classic MMU and LPAE. The non-common code is factored out into a separate file. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2011-12-06  ARM: proc-*.S: place cpu_reset functions into .idmap.text section  (Will Deacon)
The CPU reset functions disable the MMU and therefore must be executed with an identity mapping in place. This patch places the CPU reset functions into the .idmap.text section, causing the idmap code to include them as part of the identity mapping. Acked-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2011-10-28  Merge branch 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm  (Linus Torvalds)
* 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (178 commits)
  ARM: 7139/1: fix compilation with CONFIG_ARM_ATAG_DTB_COMPAT and large TEXT_OFFSET
  ARM: gic, local timers: use the request_percpu_irq() interface
  ARM: gic: consolidate PPI handling
  ARM: switch from NO_MACH_MEMORY_H to NEED_MACH_MEMORY_H
  ARM: mach-s5p64x0: remove mach/memory.h
  ARM: mach-s3c64xx: remove mach/memory.h
  ARM: plat-mxc: remove mach/memory.h
  ARM: mach-prima2: remove mach/memory.h
  ARM: mach-zynq: remove mach/memory.h
  ARM: mach-bcmring: remove mach/memory.h
  ARM: mach-davinci: remove mach/memory.h
  ARM: mach-pxa: remove mach/memory.h
  ARM: mach-ixp4xx: remove mach/memory.h
  ARM: mach-h720x: remove mach/memory.h
  ARM: mach-vt8500: remove mach/memory.h
  ARM: mach-s5pc100: remove mach/memory.h
  ARM: mach-tegra: remove mach/memory.h
  ARM: plat-tcc: remove mach/memory.h
  ARM: mach-mmp: remove mach/memory.h
  ARM: mach-cns3xxx: remove mach/memory.h
  ...
Fix up mostly pretty trivial conflicts in:
  - arch/arm/Kconfig
  - arch/arm/include/asm/localtimer.h
  - arch/arm/kernel/Makefile
  - arch/arm/mach-shmobile/board-ap4evb.c
  - arch/arm/mach-u300/core.c
  - arch/arm/mm/dma-mapping.c
  - arch/arm/mm/proc-v7.S
  - arch/arm/plat-omap/Kconfig
largely due to some CONFIG option renaming (ie CONFIG_PM_SLEEP -> CONFIG_ARM_CPU_SUSPEND for the arm-specific suspend code etc) and addition of NEED_MACH_MEMORY_H next to HAVE_IDE.
2011-10-25  Merge branches 'arnd-randcfg-fixes', 'debug', 'io' (early part), 'l2x0', 'p2v', 'pgt' (early part) and 'smp' into for-linus  (Russell King)
2011-10-01  ARM: pm: let platforms select cpu_suspend support  (Arnd Bergmann)
Support for the cpu_suspend functions is only built-in when CONFIG_PM_SLEEP is enabled, but omap3/4, exynos4 and pxa always call cpu_suspend when CONFIG_PM is enabled. Signed-off-by: Arnd Bergmann <arnd@arndb.de>