path: root/arch
Age    Commit message    Author
2017-11-21  x86/MCE/AMD: Always give panic severity for UC errors in kernel context  (Yazen Ghannam)
commit d65dfc81bb3894fdb68cbc74bbf5fb48d2354071 upstream. The AMD severity grading function was introduced in kernel 4.1. The current logic can possibly give MCE_AR_SEVERITY for uncorrectable errors in kernel context. The system may then get stuck in a loop as memory_failure() will try to handle the bad kernel memory and find it busy. Return MCE_PANIC_SEVERITY for all UC errors IN_KERNEL context on AMD systems. After: b2f9d678e28c ("x86/mce: Check for faults tagged in EXTABLE_CLASS_FAULT exception table entries") was accepted in v4.6, this issue was masked because of the tail-end attempt at kernel mode recovery in the #MC handler. However, uncorrectable errors IN_KERNEL context should always be considered unrecoverable and cause a panic. Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-edac <linux-edac@vger.kernel.org> Fixes: bf80bbd7dcf5 (x86/mce: Add an AMD severities-grading function) Link: http://lkml.kernel.org/r/20171106174633.13576-1-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
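For reference, a minimal sketch of the resulting check in the AMD severity grading path, condensed from the description above (the surrounding function and helpers exist in arch/x86 MCE code, but this is an illustration, not the literal upstream diff):

    /* sketch: inside mce_severity_amd(), in the uncorrected-error branch */
    if (m->status & MCI_STATUS_UC) {
            /* UC errors hitting kernel context are treated as unrecoverable */
            if (error_context(m) == IN_KERNEL)
                    return MCE_PANIC_SEVERITY;      /* never MCE_AR_SEVERITY here */
    }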
2017-11-21  MIPS: Netlogic: Exclude netlogic,xlp-pic code from XLR builds  (Paul Burton)
[ Upstream commit 9799270affc53414da96e77e454a5616b39cdab0 ]

Code in arch/mips/netlogic/common/irq.c which handles the XLP PIC fails to build in XLR configurations due to cpu_is_xlp9xx not being defined, leading to the following build failure:

    arch/mips/netlogic/common/irq.c: In function ‘xlp_of_pic_init’:
    arch/mips/netlogic/common/irq.c:298:2: error: implicit declaration of function ‘cpu_is_xlp9xx’ [-Werror=implicit-function-declaration]
      if (cpu_is_xlp9xx()) {
      ^

Although the code was conditional upon CONFIG_OF which is indirectly selected by CONFIG_NLM_XLP_BOARD but not CONFIG_NLM_XLR_BOARD, the failing XLR with CONFIG_OF configuration can be configured manually or by randconfig.

Fix the build failure by making the affected XLP PIC code conditional upon CONFIG_CPU_XLP which is used to guard the inclusion of asm/netlogic/xlp-hal/xlp.h that provides the required cpu_is_xlp9xx function.

[ralf@linux-mips.org: Fixed up as per Jayachandran's suggestion.]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Jayachandran C <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14524/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
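A hedged sketch of the kind of guard described above (the exact placement inside irq.c and the function body are assumptions):

    #ifdef CONFIG_CPU_XLP          /* was: #ifdef CONFIG_OF */
    static int __init xlp_of_pic_init(struct device_node *node,
                                      struct device_node *parent)
    {
            if (cpu_is_xlp9xx()) {
                    /* XLP9xx-specific PIC setup, safe now that xlp.h is included */
            }
            return 0;
    }
    #endif /* CONFIG_CPU_XLP */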
2017-11-21  MIPS: traps: Ensure L1 & L2 ECC checking match for CM3 systems  (Paul Burton)
[ Upstream commit 35e6de38858f59b6b65dcfeaf700b5d06fc2b93d ] On systems with CM3, we must ensure that the L1 & L2 ECC enables are set to the same value. This is presumed by the hardware, and cache corruption can occur when it is not the case. Support enabling & disabling the L2 ECC checking on CM3 systems where this is controlled via a GCR, and ensure that it matches the state of L1 ECC checking. Remove I6400 from the switch statement it will no longer hit, which was in any case incorrect since the L2 ECC enable bit isn't in the CP0 ErrCtl register. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14413/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  MIPS: init: Ensure reserved memory regions are not added to bootmem  (Marcin Nowakowski)
[ Upstream commit e89ef66d7682f031f026eee6bba03c8c2248d2a9 ] Memories managed through boot_mem_map are generally expected to define non-crossing areas. However, if part of a larger memory block is marked as reserved, it would still be added to the bootmem allocator as an available block and could end up being overwritten by the allocator. Prevent this by explicitly marking the memory as reserved if it exists in the range used by the bootmem allocator. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14608/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  MIPS: init: Ensure bootmem does not corrupt reserved memory  (Marcin Nowakowski)
[ Upstream commit d9b5b658210f28ed9f70c757d553e679d76e2986 ] The current init code initialises the bootmem allocator with all of the low memory that it assumes is available, but does not check for reserved memory blocks, which can lead to corruption of data that may be stored there. Move bootmem's allocation map to a location that does not cross any reserved regions. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14609/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  MIPS: End asm function prologue macros with .insn  (Paul Burton)
[ Upstream commit 08889582b8aa0bbc01a1e5a0033b9f98d2e11caa ]

When building a kernel targeting a microMIPS ISA, recent GNU linkers will fail the link if they cannot determine that the target of a branch or jump is microMIPS code, with errors such as the following:

    mips-img-linux-gnu-ld: arch/mips/built-in.o: .text+0x542c: Unsupported jump between ISA modes; consider recompiling with interlinking enabled.
    mips-img-linux-gnu-ld: final link failed: Bad value

or:

    ./arch/mips/include/asm/uaccess.h:1017: warning: JALX to a non-word-aligned address

Placing anything other than an instruction at the start of a function written in assembly appears to trigger such errors. In order to prepare for allowing us to follow function prologue macros with an EXPORT_SYMBOL invocation, end the prologue macros (LEAF, NESTED & FEXPORT) with a .insn directive. This ensures that the start of the function is marked as code, which always makes sense for functions & safely prevents us from hitting the link errors described above.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14508/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
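As an illustration, the shape of one such prologue macro with the added directive, loosely following arch/mips/include/asm/asm.h (details of the real macro differ; treat this as a sketch):

    #define LEAF(symbol)                            \
                    .globl  symbol;                 \
                    .align  2;                      \
                    .type   symbol, @function;      \
                    .ent    symbol, 0;              \
    symbol:         .frame  sp, 0, ra;              \
                    .insn   /* mark the start of the function as code */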
2017-11-21  arm64: dts: NS2: reserve memory for Nitro firmware  (Jon Mason)
[ Upstream commit 0cc878d678444392ca2a31350f89f489593ef5bb ] Nitro firmware is loaded into memory by the bootloader at a specific location. Set this memory range aside to prevent the kernel from using it. Signed-off-by: Jon Mason <jon.mason@broadcom.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  x86/irq, trace: Add __irq_entry annotation to x86's platform IRQ handlers  (Daniel Bristot de Oliveira)
[ Upstream commit c4158ff536439619fa342810cc575ae2c809f03f ]

This patch adds the __irq_entry annotation to the default x86 platform IRQ handlers. ftrace's function_graph tracer uses the __irq_entry annotation to notify the entry and return of IRQ handlers.

For example, before the patch:

    354549.667252 | 3) d..1 | default_idle_call() {
    354549.667252 | 3) d..1 | arch_cpu_idle() {
    354549.667253 | 3) d..1 | default_idle() {
    354549.696886 | 3) d..1 | smp_trace_reschedule_interrupt() {
    354549.696886 | 3) d..1 | irq_enter() {
    354549.696886 | 3) d..1 | rcu_irq_enter() {

After the patch:

    366416.254476 | 3) d..1 | arch_cpu_idle() {
    366416.254476 | 3) d..1 | default_idle() {
    366416.261566 | 3) d..1 | ==========> |
    366416.261566 | 3) d..1 | smp_trace_reschedule_interrupt() {
    366416.261566 | 3) d..1 | irq_enter() {
    366416.261566 | 3) d..1 | rcu_irq_enter() {

KASAN also uses this annotation. The smp_apic_timer_interrupt() was already annotated.

Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Claudio Fontana <claudio.fontana@huawei.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nicolai Stange <nicstange@gmail.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: linux-edac@vger.kernel.org
Link: http://lkml.kernel.org/r/059fdf437c2f0c09b13c18c8fe4e69999d3ffe69.1483528431.git.bristot@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
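The annotation itself is simply added to the handler definitions, roughly like this (the handler name is taken from the trace above; the exact signature is an assumption):

    /* sketch: tag a platform IRQ handler so function_graph/KASAN see the IRQ entry */
    __visible void __irq_entry smp_trace_reschedule_interrupt(struct pt_regs *regs)
    {
            /* existing handler body unchanged; only the __irq_entry tag is new */
    }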
2017-11-21  ARM: dts: omap5-uevm: Allow bootloader to configure USB Ethernet MAC  (Tony Lindgren)
[ Upstream commit e13a22a406f20322651b8c0847f4210bdef246d1 ]

Note that with 9730 the wiring is different compared to 9514 found on beagleboard xm for example.

On beagleboard xm we have:

    /sys/bus/usb/devices/1-2      hub
    /sys/bus/usb/devices/1-2.1    9514

While on omap5-uevm we have:

    /sys/bus/usb/devices/1-2      hub
    /sys/bus/usb/devices/1-3      9730

Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  ARM: OMAP2+: Fix init for multiple quirks for the same SoC  (Tony Lindgren)
[ Upstream commit 6e613ebf4405fc09e2a8c16ed193b47f80a3cbed ] It's possible that there are multiple quirks that need to be initialized for the same SoC. Fix the issue by not returning on the first match. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  ARM: dts: Fix am335x and dm814x scm syscon to probe children  (Tony Lindgren)
[ Upstream commit 1aa09df0854efe16b7a80358a18f0a0bebafd246 ] Without these changes children of the scm syscon won't probe. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  ARM: dts: Fix compatible for ti81xx uarts for 8250  (Tony Lindgren)
[ Upstream commit f62280efe8934a1275fd148ef302d1afec8cd3df ] When using 8250_omap driver, we need to specify the right compatible value for the UART to work on dm814x and dm816x. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-21  arm: crypto: reduce priority of bit-sliced AES cipher  (Eric Biggers)
[ Not upstream because this is a minimal fix for a bug where arm32 kernels can use a much slower implementation of AES than is actually available, potentially forcing vendors to disable encryption on their devices.] All the aes-bs (bit-sliced) and aes-ce (cryptographic extensions) algorithms had a priority of 300. This is undesirable because it means an aes-bs algorithm may be used when an aes-ce algorithm is available. The aes-ce algorithms have much better performance (up to 10x faster). Fix it by decreasing the priority of the aes-bs algorithms to 250. This was fixed upstream by commit cc477bf64573 ("crypto: arm/aes - replace bit-sliced OpenSSL NEON code"), but it was just a small part of a complete rewrite. This patch just fixes the priority bug for older kernels. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
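Concretely, the change amounts to lowering the priority field in the affected algorithm definitions, along these lines (a sketch: the algorithm and driver names are illustrative rather than copied from the ARM bit-sliced AES glue):

    static struct crypto_alg aesbs_algs[] = { {
            .cra_name        = "cbc(aes)",
            .cra_driver_name = "cbc-aes-neonbs",
            .cra_priority    = 250,   /* was 300; keep below the aes-ce algorithms */
            /* remaining fields unchanged */
    } };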
2017-11-18  security/keys: add CONFIG_KEYS_COMPAT to Kconfig  (Bilal Amarni)
commit 47b2c3fff4932e6fc17ce13d51a43c6969714e20 upstream. CONFIG_KEYS_COMPAT is defined in arch-specific Kconfigs and is missing for several 64-bit architectures : mips, parisc, tile. At the moment and for those architectures, calling in 32-bit userspace the keyctl syscall would return an ENOSYS error. This patch moves the CONFIG_KEYS_COMPAT option to security/keys/Kconfig, to make sure the compatibility wrapper is registered by default for any 64-bit architecture as long as it is configured with CONFIG_COMPAT. [DH: Modified to remove arm64 compat enablement also as requested by Eric Biggers] Signed-off-by: Bilal Amarni <bilal.amarni@gmail.com> Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> cc: Eric Biggers <ebiggers3@gmail.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Cc: James Cowgill <james.cowgill@mips.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-18  Revert "ARM: dts: imx53-qsb-common: fix FEC pinmux config"  (Greg Kroah-Hartman)
This reverts commit 62b9fa2c436ffd9b87e6ed81df7f86c29fee092b which is commit 8b649e426336d7d4800ff9c82858328f4215ba01 upstream. Turns out not to be a good idea in the stable kernels for now as Patrick writes: As discussed for 4.4 stable queue this patch might break existing machines, if they use a different pinmux configuration with their own bootloader. Reported-by: Patrick Brünn <P.Bruenn@beckhoff.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  x86/oprofile/ppro: Do not use __this_cpu*() in preemptible context  (Borislav Petkov)
commit a743bbeef27b9176987ec0cb7f906ab0ab52d1da upstream.

The warning below says it all:

    BUG: using __this_cpu_read() in preemptible [00000000] code: swapper/0/1
    caller is __this_cpu_preempt_check
    CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc8 #4
    Call Trace:
     dump_stack
     check_preemption_disabled
     ? do_early_param
     __this_cpu_preempt_check
     arch_perfmon_init
     op_nmi_init
     ? alloc_pci_root_info
     oprofile_arch_init
     oprofile_init
     do_one_initcall
     ...

These accessors should not have been used in the first place: it is PPro so no mixed silicon revisions and thus it can simply use boot_cpu_data.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengguang Wu <fengguang.wu@intel.com>
Fix-creation-mandated-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Robert Richter <rric@kernel.org>
Cc: x86@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
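The fix boils down to replacing the per-CPU accessors with boot_cpu_data, since a PPro-class system has a single silicon revision anyway. A hedged before/after sketch (the model numbers checked by the real code are not shown here):

    /* before: warns when run in preemptible context */
    if (__this_cpu_read(cpu_info.x86) == 6)
            /* apply workaround */ ;

    /* after: no per-CPU access needed, boot_cpu_data is sufficient */
    if (boot_cpu_data.x86 == 6)
            /* apply workaround */ ;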
2017-11-15  x86/smpboot: Make optimization of delay calibration work correctly  (Pavel Tatashin)
commit 76ce7cfe35ef58f34e6ba85327afb5fbf6c3ff9b upstream.

If the TSC has constant frequency then the delay calibration can be skipped when it has been calibrated for a package already. This is checked in calibrate_delay_is_known(), but that function is buggy in two aspects:

It returns 'false' if

    (!tsc_disabled && !cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC))

which is obviously the reverse of the intended check, and the check for the sibling mask cannot work either because the topology links have not been set up yet.

Correct the condition and move the call to set_cpu_sibling_map() before invoking calibrate_delay() so the sibling check works correctly.

[ tglx: Rewrote changelog ]

Fixes: c25323c07345 ("x86/tsc: Use topology functions")
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: peterz@infradead.org
Cc: bob.picco@oracle.com
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Link: https://lkml.kernel.org/r/20171028001100.26603-1-pasha.tatashin@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
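A simplified sketch of the corrected helper, using the names from the description above (the real function differs in detail, this only illustrates the corrected sense of the check and the sibling lookup):

    unsigned long calibrate_delay_is_known(void)
    {
            int sibling, cpu = smp_processor_id();
            const struct cpumask *mask = topology_core_cpumask(cpu);

            /* corrected sense: bail out when there is no constant TSC */
            if (tsc_disabled || !cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC))
                    return 0;

            if (!mask)
                    return 0;

            /* works only because set_cpu_sibling_map() now runs earlier */
            sibling = cpumask_any_but(mask, cpu);
            if (sibling < nr_cpu_ids)
                    return cpu_data(sibling).loops_per_jiffy;

            return 0;
    }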
2017-11-15  MIPS: AR7: Ensure that serial ports are properly set up  (Oswald Buddenhagen)
commit b084116f8587b222a2c5ef6dcd846f40f24b9420 upstream. Without UPF_FIXED_TYPE, the data from the PORT_AR7 uart_config entry is never copied, resulting in a dead port. Fixes: 154615d55459 ("MIPS: AR7: Use correct UART port type") Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@gmx.de> [jonas.gorski: add Fixes tag] Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com> Cc: Nicolas Schichan <nschichan@freebox.fr> Cc: Oswald Buddenhagen <oswald.buddenhagen@gmx.de> Cc: linux-mips@linux-mips.org Cc: linux-serial@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/17543/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
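The fix is essentially one flag in the platform's early serial setup; a sketch of the shape of the change (field values other than the flag are illustrative, not copied from arch/mips/ar7/):

    struct uart_port uart_port = {
            .type     = PORT_AR7,
            .flags    = UPF_BOOT_AUTOCONF | UPF_FIXED_TYPE, /* UPF_FIXED_TYPE is the fix:
                                                                keep the PORT_AR7 config */
            .iotype   = UPIO_MEM32,
            .regshift = 2,
    };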
2017-11-15  MIPS: AR7: Defer registration of GPIO  (Jonas Gorski)
commit e6b03ab63b4d270e0249f96536fde632409dc1dc upstream. When called from prom init code, ar7_gpio_init() will fail as it will call gpiochip_add() which relies on a working kmalloc() to alloc the gpio_desc array and kmalloc is not useable yet at prom init time. Move ar7_gpio_init() to ar7_register_devices() (a device_initcall) where kmalloc works. Fixes: 14e85c0e69d5 ("gpio: remove gpio_descs global array") Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com> Cc: Nicolas Schichan <nschichan@freebox.fr> Cc: linux-mips@linux-mips.org Cc: linux-serial@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/17542/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  MIPS: BMIPS: Fix missing cbr address  (Jaedon Shin)
commit ea4b3afe1eac8f88bb453798a084fba47a1f155a upstream. Fix NULL pointer access in BMIPS3300 RAC flush. Fixes: 738a3f79027b ("MIPS: BMIPS: Add early CPU initialization code") Signed-off-by: Jaedon Shin <jaedon.shin@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16423/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  MIPS: SMP: Fix deadlock & online race  (Matt Redfearn)
commit 9e8c399a88f0b87e41a894911475ed2a8f8dff9e upstream. Commit 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") effectively reverted commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online") and thus has reinstated the possibility of deadlock. The commit was based on testing of kernel v4.4, where the CPU hotplug core code issued a BUG() if the starting CPU is not marked online when the boot CPU returns from __cpu_up. The commit fixes this race (in v4.4), but re-introduces the deadlock situation. As noted in the commit message, upstream differs in this area. Commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up") adds a completion event in the CPU hotplug core code, making this race impossible. However, people were unhappy with relying on the core code to do the right thing. To address the issues both commits were trying to fix, add a second completion event in the MIPS smp hotplug path. It removes the possibility of a race, since the MIPS smp hotplug code now synchronises both the boot and secondary CPUs before they return to the hotplug core code. It also addresses the deadlock by ensuring that the secondary CPU is not marked online before its counters are synchronised. This fix should also be backported to fix the race condition introduced by the backport of commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online"), though really that race only existed before commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up"). Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Fixes: 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") CC: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com> Patchwork: https://patchwork.linux-mips.org/patch/17376/ Signed-off-by: James Hogan <jhogan@kernel.org> [jhogan@kernel.org: Backported 4.1..4.9] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  MIPS: Fix race on setting and getting cpu_online_mask  (Matija Glavinic Pecotic)
commit 6f542ebeaee0ee552a902ce3892220fc22c7ec8e upstream.

While testing cpu hotplug (cpu down and up in loops) on kernel 4.4, it was observed that occasionally the check for cpu online will fail in kernel/cpu.c, _cpu_up: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/kernel/cpu.c?h=v4.4.79#n485

    518         /* Arch-specific enabling code. */
    519         ret = __cpu_up(cpu, idle);
    520
    521         if (ret != 0)
    522                 goto out_notify;
    523         BUG_ON(!cpu_online(cpu));

The reason is a race between start_secondary and _cpu_up. cpu_callin_map is set before cpu_online_mask. In __cpu_up, cpu_callin_map is waited for, but the cpu online mask is not, resulting in a race in which the secondary processor has started and set cpu_callin_map, but has not yet set the online mask, resulting in the above BUG being hit.

Upstream differs in the area. The cpu_online check is in bringup_wait_for_ap, which is after the cpu reached AP_ONLINE_IDLE, where the secondary passed its start function. Nonetheless, the fix makes start_secondary safe and not depending on other locks throughout the code. It protects as well against cpu_online checks put in between sometime in the future.

Fix this by moving the completion after all flags are set.

Signed-off-by: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16925/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  MIPS: SMP: Use a completion event to signal CPU up  (Matt Redfearn)
commit a00eeede507c975087b7b8df8cf2c9f88ba285de upstream. If a secondary CPU failed to start, for any reason, the CPU requesting the secondary to start would get stuck in the loop waiting for the secondary to be present in the cpu_callin_map. Rather than that, use a completion event to signal that the secondary CPU has started and is waiting to synchronise counters. Since the CPU presence will no longer be marked in cpu_callin_map, remove the redundant test from arch_cpu_idle_dead(). Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Maciej W. Rozycki <macro@imgtec.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Qais Yousef <qsyousef@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14502/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
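The mechanism shared by these three SMP changes is a standard completion handshake between the boot CPU and the starting CPU; a condensed, illustrative sketch (names and exact ordering follow the descriptions above only loosely, and the ordering of the online/counter-sync steps is precisely what the commits adjust):

    static DECLARE_COMPLETION(cpu_running);

    /* Boot CPU, in __cpu_up(): wait for the secondary instead of
     * spinning on cpu_callin_map. */
    if (!wait_for_completion_timeout(&cpu_running, msecs_to_jiffies(1000))) {
            pr_crit("CPU%u: failed to start\n", cpu);
            return -EIO;
    }
    synchronise_count_master(cpu);

    /* Secondary CPU, near the end of start_secondary(): signal the boot
     * CPU only once its own state is fully set up. */
    complete(&cpu_running);
    synchronise_count_slave(cpu);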
2017-11-15  MIPS: Fix CM region target definitions  (Paul Burton)
commit 6a6cba1d945a7511cdfaf338526871195e420762 upstream. The default CM target field in the GCR_BASE register is encoded with 0 meaning memory & 1 being reserved. However the definitions we use for those bits effectively get these two values backwards - likely because they were copied from the definitions for the CM regions where the target is encoded differently. This results in us setting up GCR_BASE with the reserved target value by default, rather than targeting memory as intended. Although we currently seem to get away with this it's not a great idea to rely upon. Fix this by changing our macros to match the documented target values. The incorrect encoding became used as of commit 9f98f3dd0c51 ("MIPS: Add generic CM probe & access code") in the Linux v3.15 cycle, and was likely carried forwards from older but unused code introduced by commit 39b8d5254246 ("[MIPS] Add support for MIPS CMP platform.") in the v2.6.26 cycle. Fixes: 9f98f3dd0c51 ("MIPS: Add generic CM probe & access code") Signed-off-by: Paul Burton <paul.burton@mips.com> Reported-by: Matt Redfearn <matt.redfearn@mips.com> Reviewed-by: James Hogan <jhogan@kernel.org> Cc: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # v3.15+ Patchwork: https://patchwork.linux-mips.org/patch/17562/ Signed-off-by: James Hogan <jhogan@kernel.org> [jhogan@kernel.org: Backported 3.15..4.13] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  MIPS: microMIPS: Fix incorrect mask in insn_table_MM  (Gustavo A. R. Silva)
commit 77238e76b9156d28d86c1e31c00ed2960df0e4de upstream. It seems that this is a typo error and the proper bit masking is "RT | RS" instead of "RS | RS". This issue was detected with the help of Coccinelle. Fixes: d6b3314b49e1 ("MIPS: uasm: Add lh uam instruction") Reported-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com> Reviewed-by: James Hogan <jhogan@kernel.org> Patchwork: https://patchwork.linux-mips.org/patch/17551/ Signed-off-by: James Hogan <jhogan@kernel.org> [jhogan@kernel.org: Backported 3.16..4.12] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  ARM: 8720/1: ensure dump_instr() checks addr_limit  (Mark Rutland)
commit b9dd05c7002ee0ca8b676428b2268c26399b5e31 upstream. When CONFIG_DEBUG_USER is enabled, it's possible for a user to deliberately trigger dump_instr() with a chosen kernel address. Let's avoid problems resulting from this by using get_user() rather than __get_user(), ensuring that we don't erroneously access kernel memory. So that we can use the same code to dump user instructions and kernel instructions, the common dumping code is factored out to __dump_instr(), with the fs manipulated appropriately in dump_instr() around calls to this. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
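The resulting structure looks roughly like this (condensed; the instruction-printing loop is elided into a comment, and this is a sketch of the pattern rather than the literal diff):

    static void __dump_instr(const char *lvl, struct pt_regs *regs)
    {
            unsigned long addr = instruction_pointer(regs);
            u32 instr;

            /* get_user() respects the current addr_limit, unlike __get_user() */
            if (get_user(instr, (u32 __user *)addr))
                    return;
            /* decode and print the instruction window around addr */
    }

    static void dump_instr(const char *lvl, struct pt_regs *regs)
    {
            if (!user_mode(regs)) {
                    mm_segment_t fs = get_fs();
                    set_fs(KERNEL_DS);
                    __dump_instr(lvl, regs);
                    set_fs(fs);
            } else {
                    __dump_instr(lvl, regs);
            }
    }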
2017-11-15  crypto: x86/sha256-mb - fix panic due to unaligned access  (Andrey Ryabinin)
commit 5dfeaac15f2b1abb5a53c9146041c7235eb9aa04 upstream. struct sha256_ctx_mgr is allocated in sha256_mb_mod_init() via kzalloc() and later passed to the sha256_mb_flusher_mgr_flush_avx2() function, where vmovdqa instructions are used to access the struct. vmovdqa requires a 16-byte aligned argument, but nothing guarantees that struct sha256_ctx_mgr will have that alignment. An unaligned vmovdqa will generate a GP fault. Fix this by replacing vmovdqa with vmovdqu, which doesn't have alignment requirements. Fixes: a377c6b1876e ("crypto: sha256-mb - submit/flush routines for AVX2") Reported-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by: Tim Chen Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  crypto: x86/sha1-mb - fix panic due to unaligned access  (Andrey Ryabinin)
commit d041b557792c85677f17e08eee535eafbd6b9aa2 upstream. struct sha1_ctx_mgr is allocated in sha1_mb_mod_init() via kzalloc() and later passed to the sha1_mb_flusher_mgr_flush_avx2() function, where vmovdqa instructions are used to access the struct. vmovdqa requires a 16-byte aligned argument, but nothing guarantees that struct sha1_ctx_mgr will have that alignment. An unaligned vmovdqa will generate a GP fault. Fix this by replacing vmovdqa with vmovdqu, which doesn't have alignment requirements. Fixes: 2249cbb53ead ("crypto: sha-mb - SHA1 multibuffer submit and flush routines for AVX2") Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  x86/uaccess, sched/preempt: Verify access_ok() context  (Peter Zijlstra)
commit 7c4788950ba5922fde976d80b72baf46f14dee8d upstream. I recently encountered wreckage because access_ok() was used where it should not be, add an explicit WARN when access_ok() is used wrongly. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
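The added check is a warning in the x86 access_ok() definition, roughly as sketched below (abbreviated; in the real tree the warning macro is additionally guarded by a debug config option):

    /*
     * Warn if access_ok() is called from non-task context, where "the
     * current task's" address limit is not a meaningful notion.
     */
    #define WARN_ON_IN_IRQ()  WARN_ON_ONCE(!in_task())

    #define access_ok(type, addr, size)                              \
    ({                                                               \
            WARN_ON_IN_IRQ();                                        \
            likely(!__range_not_ok(addr, size, user_addr_max()));    \
    })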
2017-11-15  ARM: dts: STiH410-family: fix wrong parent clock frequency  (Patrice Chotard)
[ Upstream commit b9ec866d223f38eb0bf2a7c836e10031ee17f7af ] The parent clock frequency was lower than the child clock frequency, which is not correct. In some use cases, it leads to a division by zero. Signed-off-by: Gabriel Fernandez <gabriel.fernandez@st.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  sched/cputime, powerpc32: Fix stale scaled stime on context switch  (Frederic Weisbecker)
[ Upstream commit 90d08ba2b9b4be4aeca6a5b5a4b09fbcde30194d ] On context switch with powerpc32, the cputime is accumulated in the thread_info struct. So the switching-in task must move forward its start time snapshot to the current time in order to later compute the delta spent in system mode. This is what we do for the normal cputime by initializing the starttime field to the value of the previous task's starttime which got freshly updated. But we are missing the update of the scaled cputime start time. As a result we may be accounting too much scaled cputime later. Fix this by initializing the scaled cputime the same way we do for normal cputime. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Stanislaw Gruszka <sgruszka@redhat.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Wanpeng Li <wanpeng.li@hotmail.com> Link: http://lkml.kernel.org/r/1483636310-6557-2-git-send-email-fweisbec@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  s390/topology: make "topology=off" parameter work  (Heiko Carstens)
[ Upstream commit 68cc795d1933285705ced6d841ef66c00ce98cbe ] The "topology=off" kernel parameter is supposed to prevent the kernel from using hardware topology information to generate scheduling domains etc. For an unknown reason I implemented this in a very odd way back then: instead of simply clearing the MACHINE_HAS_TOPOLOGY flag within the lowcore I added a second variable which indicated that topology information should not be used. This is more than suboptimal since it partially doesn't work. For the fake NUMA case topology information is still considered and scheduling domains will be created based on this. To fix this and to simplify the code get rid of the extra variable and implement the "topology=off" case like it is done for other features. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
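A sketch of the simplified approach, clearing the machine flag directly from the early parameter handler (flag and helper names as used on s390, but treat the details as illustrative):

    static int __init topology_setup(char *str)
    {
            bool enabled;
            int rc;

            rc = kstrtobool(str, &enabled);
            if (rc)
                    return rc;
            if (!enabled)
                    /* clear the flag instead of keeping a shadow variable */
                    S390_lowcore.machine_flags &= ~MACHINE_FLAG_TOPOLOGY;
            return 0;
    }
    early_param("topology", topology_setup);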
2017-11-15  serial: sh-sci: Fix register offsets for the IRDA serial port  (Laurent Pinchart)
[ Upstream commit a752ba18af8285e3eeda572f40dddaebff0c3621 ] Even though most of its registers are 8-bit wide, the IRDA has two 16-bit registers that make it a 16-bit peripheral and not an 8-bit peripheral with addresses shifted by one. Fix the register offsets in the driver and the platform data regshift value. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  arm64: dma-mapping: Only swizzle DMA ops for IOMMU_DOMAIN_DMA  (Will Deacon)
[ Upstream commit 4a8d8a14c0d08c2437cb80c05e88f6cc1ca3fb2c ] The arm64 DMA-mapping implementation sets the DMA ops to the IOMMU DMA ops if we detect that an IOMMU is present for the master and the DMA ranges are valid. In the case when the IOMMU domain for the device is not of type IOMMU_DOMAIN_DMA, then we have no business swizzling the ops, since we're not in control of the underlying address space. This patch leaves the DMA ops alone for masters attached to non-DMA IOMMU domains. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  ARM: omap2plus_defconfig: Fix probe errors on UARTs 5 and 6  (Tony Lindgren)
[ Upstream commit 4cd6a59f5c1a9b0cca0da09fbba42b9450ffc899 ] We have more than four uarts on some SoCs and that can cause noise with errors while booting. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  powerpc/corenet: explicitly disable the SDHC controller on kmcoge4  (Valentin Longchamp)
[ Upstream commit a674c7d470bb47e82f4eb1fa944eadeac2f6bbaf ] It is not implemented on the kmcoge4 hardware and if not disabled it leads to error messages with the corenet32_smp_defconfig. Signed-off-by: Valentin Longchamp <valentin.longchamp@keymile.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  KVM: PPC: Book 3S: XICS: correct the real mode ICP rejecting counter  (Li Zhong)
[ Upstream commit 37451bc95dee0e666927d6ffdda302dbbaaae6fa ] Some counters were added in commit 6e0365b78273 ("KVM: PPC: Book3S HV: Add ICP real mode counters"), to provide some performance statistics to determine whether further optimization is needed for the real mode functions. The n_reject counter counts how many times the ICP rejects an irq because of priority in real mode. The redelivery of an lsi that is still asserted after eoi doesn't fall into this category, so the increment there is removed. Also, it needs to be incremented in icp_rm_deliver_irq() if it rejects another one. Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15  ARM: dts: imx53-qsb-common: fix FEC pinmux config  (Patrick Bruenn)
[ Upstream commit 8b649e426336d7d4800ff9c82858328f4215ba01 ] The pinmux configuration in device tree was different from manual muxing in <u-boot>/board/freescale/mx53loco/mx53loco.c All pins were configured as NO_PAD_CTL(1 << 31), which was fine as the bootloader already did the correct pinmuxing for us. But recently u-boot is migrating to reuse device tree files from the kernel tree, so it seems to be better to have the correct pinmuxing in our files, too. Signed-off-by: Patrick Bruenn <p.bruenn@beckhoff.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  ARM: dts: mvebu: pl310-cache disable double-linefill  (Yan Markman)
commit cda80a82ac3e89309706c027ada6ab232be1d640 upstream. Under heavy system stress, mvebu SoCs using the Cortex A9 sporadically encountered instability issues. The "double linefill" feature of the L2 cache was identified as causing a dependency between read and write which led to the deadlock. In particular, it was the cause of a deadlock seen under heavy PCIe traffic, as this dependency violates the PCIe overtaking rule. Fixes: c8f5a878e554 ("ARM: mvebu: use DT properties to fine-tune the L2 configuration") Signed-off-by: Yan Markman <ymarkman@marvell.com> Signed-off-by: Igal Liberman <igall@marvell.com> Signed-off-by: Nadav Haklai <nadavh@marvell.com> [gregory.clement@free-electrons.com: reformulate commit log, add Armada 375 and add Fixes tag] Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  powerpc/64: Don't try to use radix MMU under a hypervisor  (Paul Mackerras)
[ Upstream commit 18569c1f134e1c5c88228f043c09678ae6052b7c ] Currently, if the kernel is running on a POWER9 processor under a hypervisor, it will try to use the radix MMU even though it doesn't have the necessary code to use radix under a hypervisor (it doesn't negotiate use of radix, and it doesn't do the H_REGISTER_PROC_TBL hcall). The result is that the guest kernel will crash when it tries to turn on the MMU. This fixes it by looking for the /chosen/ibm,architecture-vec-5 property, and if it exists, clears the radix MMU feature bit, before we decide whether to initialize for radix or HPT. This property is created by the hypervisor as a result of the guest calling the ibm,client-architecture-support method to indicate its capabilities, so it will indicate whether the hypervisor agreed to us using radix. Systems without a hypervisor may have this property also (for example, skiboot creates it), so we check the HV bit in the MSR to see whether we are running as a guest or not. If we are in hypervisor mode, then we can do whatever we like including using the radix MMU. The reason for using this property is that in future, when we have support for using radix under a hypervisor, we will need to check this property to see whether the hypervisor agreed to us using radix. Fixes: 2bfd65e45e87 ("powerpc/mm/radix: Add radix callbacks for early init routines") Cc: stable@vger.kernel.org # v4.7+ Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  s390/crypto: Extend key length check for AES-XTS in fips mode.  (Harald Freudenberger)
[ Upstream commit a4f2779ecf2f42b0997fedef6fd20a931c40a3e3 ] In fips mode only xts keys with 128 bit or 256 bit are allowed. This fix extends the xts_aes_set_key function to check for these valid key lengths in fips mode. Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  s390/prng: Adjust generation of entropy to produce real 256 bits.  (Harald Freudenberger)
[ Upstream commit d34b1acb78af41b8b8d5c60972b6555ea19f7564 ] The generate_entropy function used a sha256 for compacting together 256 bits of entropy into a 32 byte hash. However, it is questionable if a sha256 can really be used here, as potential collisions may reduce the max entropy fitting into a 32 byte hash value. So this patch introduces the use of sha512 instead and the required buffer adjustments for the calling functions. Furthermore, the working buffer for the generate_entropy function has been widened from one page to two pages. So now 1024 stckf invocations are used to gather 256 bits of entropy. This has been done to be on the safe side in case the jitter of the stckf values isn't as good as supposed. Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  ARM: 8715/1: add a private asm/unaligned.h  (Arnd Bergmann)
commit 1cce91dfc8f7990ca3aea896bfb148f240b12860 upstream. The asm-generic/unaligned.h header provides two different implementations for accessing unaligned variables: the access_ok.h version used when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set pretends that all pointers are in fact aligned, while the le_struct.h version convinces gcc that the alignment of a pointer is '1', to make it issue the correct load/store instructions depending on the architecture flags. On ARMv5 and older, we always use the second version, to let the compiler use byte accesses. On ARMv6 and newer, we currently use the access_ok.h version, so the compiler can use any instruction including stm/ldm and ldrd/strd that will cause an alignment trap. This trap can significantly impact performance when we have to do a lot of fixups and, worse, has led to crashes in the LZ4 decompressor code that does not have a trap handler. This adds an ARM specific version of asm/unaligned.h that uses the le_struct.h/be_struct.h implementation unconditionally. This should lead to essentially the same code on ARMv6+ as before, with the exception of using regular load/store instructions instead of the trapping instructions multi-register variants. The crash in the LZ4 decompressor code was probably introduced by the patch replacing the LZ4 implementation, commit 4e1a33b105dd ("lib: update LZ4 compressor module"), so linux-4.11 and higher would be affected most. However, we probably want to have this backported to all older stable kernels as well, to help with the performance issues. There are two follow-ups that I think we should also work on, but not backport to stable kernels, first to change the asm-generic version of the header to remove the ARM special case, and second to review all other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they might be affected by the same problem on ARM. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
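The new header essentially selects the struct-based accessors unconditionally; a condensed little-endian sketch of the idea (the real header also covers the big-endian case with the corresponding be_struct/le_byteshift headers):

    #ifndef __ASM_ARM_UNALIGNED_H
    #define __ASM_ARM_UNALIGNED_H

    #include <asm/byteorder.h>

    /* Always use the packed-struct helpers so gcc emits byte/halfword
     * accesses instead of ldm/ldrd/strd variants that can trap on
     * unaligned addresses. */
    #include <linux/unaligned/le_struct.h>
    #include <linux/unaligned/be_byteshift.h>
    #include <linux/unaligned/generic.h>

    #define get_unaligned  __get_unaligned_le
    #define put_unaligned  __put_unaligned_le

    #endif /* __ASM_ARM_UNALIGNED_H */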
2017-11-08  arm/arm64: kvm: Disable branch profiling in HYP code  (Julien Thierry)
commit f9b269f3098121b5d54aaf822e0898c8ed1d3fec upstream. When HYP code runs into branch profiling code, it attempts to jump to unmapped memory, causing a HYP Panic. Disable the branch profiling for code designed to run at HYP mode. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King <linux@armlinux.org.uk> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  arm/arm64: KVM: set right LR register value for 32 bit guest when inject abort  (Dongjiu Geng)
commit fd6c8c206fc5d0717b0433b191de0715122f33bb upstream.

When an exception is trapped to EL2, hardware uses ELR_ELx to hold the current fault instruction address. If KVM wants to inject an abort into a 32 bit guest, it needs to set the LR register for the guest to emulate this abort happening in the guest. Because the ARM32 architecture has pipelined execution, the LR value has an offset to the fault instruction address. The offsets applied to the Link value for exceptions are shown below, and should be added to the ARM32 link register (LR). Table taken from ARMv8 ARM DDI0487B-B, table G1-10:

    Exception                  Offset, for PE state of:
                               A32     T32
    Undefined Instruction      +4      +2
    Prefetch Abort             +4      +4
    Data Abort                 +8      +8
    IRQ or FIQ                 +4      +4

[ Removed unused variables in inject_abt to avoid compile warnings. -- Christoffer ]

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Tested-by: Haibin Zhang <zhanghaibin7@huawei.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08  arm64: ensure __dump_instr() checks addr_limit  (Mark Rutland)
commit 7a7003b1da010d2b0d1dc8bf21c10f5c73b389f1 upstream. It's possible for a user to deliberately trigger __dump_instr with a chosen kernel address. Let's avoid problems resulting from this by using get_user() rather than __get_user(), ensuring that we don't erroneously access kernel memory. Where we use __dump_instr() on kernel text, we already switch to KERNEL_DS, so this shouldn't adversely affect those cases. Fixes: 60ffc30d5652810d ("arm64: Exception handling") Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02  KVM: PPC: Fix oops when checking KVM_CAP_PPC_HTM  (Greg Kurz)
commit ac64115a66c18c01745bbd3c47a36b124e5fd8c0 upstream.

The following program causes a kernel oops:

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    main()
    {
        int fd = open("/dev/kvm", O_RDWR);
        ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_HTM);
    }

This happens because when using the global KVM fd with KVM_CHECK_EXTENSION, kvm_vm_ioctl_check_extension() gets called with a NULL kvm argument, which gets dereferenced in is_kvmppc_hv_enabled(). Spotted while reading the code. Let's use the hv_enabled fallback variable, like everywhere else in this function.

Fixes: 23528bb21ee2 ("KVM: PPC: Introduce KVM_CAP_PPC_HTM")
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  x86/microcode/intel: Disable late loading on model 79  (Borislav Petkov)
commit 723f2828a98c8ca19842042f418fb30dd8cfc0f7 upstream. Blacklist Broadwell X model 79 for late loading due to an erratum. Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Tony Luck <tony.luck@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20171018111225.25635-1-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
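The late-loading blacklist amounts to a family/model check of roughly this shape, consulted before a late microcode load is attempted (a sketch based on the description above, not the literal upstream diff):

    static bool is_blacklisted(unsigned int cpu)
    {
            struct cpuinfo_x86 *c = &cpu_data(cpu);

            /* Broadwell X (family 6, model 79) has an erratum affecting late loading */
            if (c->x86 == 6 && c->x86_model == INTEL_FAM6_BROADWELL_X) {
                    pr_err_once("late loading on model 79 is disabled.\n");
                    return true;
            }

            return false;
    }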
2017-10-27  parisc: Fix double-word compare and exchange in LWS code on 32-bit kernels  (John David Anglin)
commit 374b3bf8e8b519f61eb9775888074c6e46b3bf0c upstream. As discussed on the debian-hppa list, double-word compare and exchange operations fail on 32-bit kernels. Looking at the code, I realized that the ",ma" completer does the wrong thing in the "ldw,ma 4(%r26), %r29" instruction. This increments %r26 and causes the following store to write to the wrong location. Note by Helge Deller: The patch applies cleanly to stable kernel series if this upstream commit is merged in advance: f4125cfdb300 ("parisc: Avoid trashing sr2 and sr3 in LWS code"). Signed-off-by: John David Anglin <dave.anglin@bell.net> Tested-by: Christoph Biedl <debian.axhn@manchmal.in-ulm.de> Fixes: 89206491201c ("parisc: Implement new LWS CAS supporting 64 bit operations.") Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-21  powerpc/perf: Add restrictions to PMC5 in power9 DD1  (Madhavan Srinivasan)
[ Upstream commit 8d911904f3ce412b20874a9c95f82009dcbb007c ] PMC5 on POWER9 DD1 may not provide right counts in all sampling scenarios, hence use PM_INST_DISP event instead in PMC2 or PMC3 in preference. Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>