142 files changed, 1044 insertions(+), 12715 deletions(-)
@@ -2187,6 +2187,10 @@ D: Various ACPI fixes, keeping correct battery state through suspend D: various lockdep annotations, autofs and other random bugfixes S: Prague, Czech Republic +N: Ishizaki Kou +E: kou.ishizaki@toshiba.co.jp +D: Spidernet driver for PowerPC Cell platforms + N: Gene Kozin E: 74604.152@compuserve.com W: https://www.sangoma.com @@ -2197,6 +2201,9 @@ S: Markham, Ontario S: L3R 8B2 S: Canada +N: Christian Krafft +D: PowerPC Cell support + N: Maxim Krasnyansky E: maxk@qualcomm.com W: http://vtun.sf.net @@ -2389,6 +2396,10 @@ S: ICP vortex GmbH S: Neckarsulm S: Germany +N: Geoff Levand +E: geoff@infradead.org +D: Spidernet driver for PowerPC Cell platforms + N: Phil Lewis E: beans@bucket.ualr.edu D: Promised to send money if I would put his name in the source tree. diff --git a/Documentation/ABI/testing/sysfs-kernel-fadump b/Documentation/ABI/testing/sysfs-kernel-fadump index 2f9daa7ca55b..b64b7622e6fc 100644 --- a/Documentation/ABI/testing/sysfs-kernel-fadump +++ b/Documentation/ABI/testing/sysfs-kernel-fadump @@ -55,4 +55,5 @@ Date: May 2024 Contact: linuxppc-dev@lists.ozlabs.org Description: read/write This is a special sysfs file available to setup additional - parameters to be passed to capture kernel. + parameters to be passed to capture kernel. For HASH MMU it + is exported only if RMA size higher than 768MB. diff --git a/Documentation/admin-guide/kernel-per-CPU-kthreads.rst b/Documentation/admin-guide/kernel-per-CPU-kthreads.rst index ea7fa2a8bbf0..ee9a6c94f383 100644 --- a/Documentation/admin-guide/kernel-per-CPU-kthreads.rst +++ b/Documentation/admin-guide/kernel-per-CPU-kthreads.rst @@ -278,12 +278,7 @@ To reduce its OS jitter, do any of the following: due to the rtas_event_scan() function. WARNING: Please check your CPU specifications to make sure that this is safe on your particular system. - e. If running on Cell Processor, build your kernel with - CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from - spu_gov_work(). - WARNING: Please check your CPU specifications to - make sure that this is safe on your particular system. - f. If running on PowerMAC, build your kernel with + e. If running on PowerMAC, build your kernel with CONFIG_PMAC_RACKMETER=n to disable the CPU-meter, avoiding OS jitter from rackmeter_do_timer(). diff --git a/Documentation/arch/powerpc/firmware-assisted-dump.rst b/Documentation/arch/powerpc/firmware-assisted-dump.rst index 7e37aadd1f77..7e266e749cd5 100644 --- a/Documentation/arch/powerpc/firmware-assisted-dump.rst +++ b/Documentation/arch/powerpc/firmware-assisted-dump.rst @@ -120,6 +120,28 @@ to ensure that crash data is preserved to process later. e.g. # echo 1 > /sys/firmware/opal/mpipl/release_core +-- Support for Additional Kernel Arguments in Fadump + Fadump has a feature that allows passing additional kernel arguments + to the fadump kernel. This feature was primarily designed to disable + kernel functionalities that are not required for the fadump kernel + and to reduce its memory footprint while collecting the dump. + + Command to Add Additional Kernel Parameters to Fadump: + e.g. + # echo "nr_cpus=16" > /sys/kernel/fadump/bootargs_append + + The above command is sufficient to add additional arguments to fadump. + An explicit service restart is not required. + + Command to Retrieve the Additional Fadump Arguments: + e.g. + # cat /sys/kernel/fadump/bootargs_append + +Note: Additional kernel arguments for fadump with HASH MMU is only + supported if the RMA size is greater than 768 MB. 
If the RMA + size is less than 768 MB, the kernel does not export the + /sys/kernel/fadump/bootargs_append sysfs node. + Implementation details: ----------------------- diff --git a/Documentation/arch/powerpc/papr_hcalls.rst b/Documentation/arch/powerpc/papr_hcalls.rst index 80d2c0aadab5..805e1cb9bab9 100644 --- a/Documentation/arch/powerpc/papr_hcalls.rst +++ b/Documentation/arch/powerpc/papr_hcalls.rst @@ -289,6 +289,17 @@ to be issued multiple times in order to be completely serviced. The subsequent hcalls to the hypervisor until the hcall is completely serviced at which point H_SUCCESS or other error is returned by the hypervisor. +**H_HTM** + +| Input: flags, target, operation (op), op-param1, op-param2, op-param3 +| Out: *dumphtmbufferdata* +| Return Value: *H_Success,H_Busy,H_LongBusyOrder,H_Partial,H_Parameter, + H_P2,H_P3,H_P4,H_P5,H_P6,H_State,H_Not_Available,H_Authority* + +H_HTM supports setup, configuration, control and dumping of Hardware Trace +Macro (HTM) function and its data. HTM buffer stores tracing data for functions +like core instruction, core LLAT and nest. + References ========== .. [1] "Power Architecture Platform Reference" diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst index 6fc1961492b7..05d822b904b4 100644 --- a/Documentation/networking/device_drivers/ethernet/index.rst +++ b/Documentation/networking/device_drivers/ethernet/index.rst @@ -55,7 +55,6 @@ Contents: ti/cpsw_switchdev ti/am65_nuss_cpsw_switchdev ti/tlan - toshiba/spider_net wangxun/txgbe wangxun/ngbe diff --git a/Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst b/Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst deleted file mode 100644 index fe5b32be15cd..000000000000 --- a/Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst +++ /dev/null @@ -1,202 +0,0 @@ -.. SPDX-License-Identifier: GPL-2.0 - -=========================== -The Spidernet Device Driver -=========================== - -Written by Linas Vepstas <linas@austin.ibm.com> - -Version of 7 June 2007 - -Abstract -======== -This document sketches the structure of portions of the spidernet -device driver in the Linux kernel tree. The spidernet is a gigabit -ethernet device built into the Toshiba southbridge commonly used -in the SONY Playstation 3 and the IBM QS20 Cell blade. - -The Structure of the RX Ring. -============================= -The receive (RX) ring is a circular linked list of RX descriptors, -together with three pointers into the ring that are used to manage its -contents. - -The elements of the ring are called "descriptors" or "descrs"; they -describe the received data. This includes a pointer to a buffer -containing the received data, the buffer size, and various status bits. - -There are three primary states that a descriptor can be in: "empty", -"full" and "not-in-use". An "empty" or "ready" descriptor is ready -to receive data from the hardware. A "full" descriptor has data in it, -and is waiting to be emptied and processed by the OS. A "not-in-use" -descriptor is neither empty or full; it is simply not ready. It may -not even have a data buffer in it, or is otherwise unusable. - -During normal operation, on device startup, the OS (specifically, the -spidernet device driver) allocates a set of RX descriptors and RX -buffers. These are all marked "empty", ready to receive data. 
This -ring is handed off to the hardware, which sequentially fills in the -buffers, and marks them "full". The OS follows up, taking the full -buffers, processing them, and re-marking them empty. - -This filling and emptying is managed by three pointers, the "head" -and "tail" pointers, managed by the OS, and a hardware current -descriptor pointer (GDACTDPA). The GDACTDPA points at the descr -currently being filled. When this descr is filled, the hardware -marks it full, and advances the GDACTDPA by one. Thus, when there is -flowing RX traffic, every descr behind it should be marked "full", -and everything in front of it should be "empty". If the hardware -discovers that the current descr is not empty, it will signal an -interrupt, and halt processing. - -The tail pointer tails or trails the hardware pointer. When the -hardware is ahead, the tail pointer will be pointing at a "full" -descr. The OS will process this descr, and then mark it "not-in-use", -and advance the tail pointer. Thus, when there is flowing RX traffic, -all of the descrs in front of the tail pointer should be "full", and -all of those behind it should be "not-in-use". When RX traffic is not -flowing, then the tail pointer can catch up to the hardware pointer. -The OS will then note that the current tail is "empty", and halt -processing. - -The head pointer (somewhat mis-named) follows after the tail pointer. -When traffic is flowing, then the head pointer will be pointing at -a "not-in-use" descr. The OS will perform various housekeeping duties -on this descr. This includes allocating a new data buffer and -dma-mapping it so as to make it visible to the hardware. The OS will -then mark the descr as "empty", ready to receive data. Thus, when there -is flowing RX traffic, everything in front of the head pointer should -be "not-in-use", and everything behind it should be "empty". If no -RX traffic is flowing, then the head pointer can catch up to the tail -pointer, at which point the OS will notice that the head descr is -"empty", and it will halt processing. - -Thus, in an idle system, the GDACTDPA, tail and head pointers will -all be pointing at the same descr, which should be "empty". All of the -other descrs in the ring should be "empty" as well. - -The show_rx_chain() routine will print out the locations of the -GDACTDPA, tail and head pointers. It will also summarize the contents -of the ring, starting at the tail pointer, and listing the status -of the descrs that follow. - -A typical example of the output, for a nearly idle system, might be:: - - net eth1: Total number of descrs=256 - net eth1: Chain tail located at descr=20 - net eth1: Chain head is at 20 - net eth1: HW curr desc (GDACTDPA) is at 21 - net eth1: Have 1 descrs with stat=x40800101 - net eth1: HW next desc (GDACNEXTDA) is at 22 - net eth1: Last 255 descrs with stat=xa0800000 - -In the above, the hardware has filled in one descr, number 20. Both -head and tail are pointing at 20, because it has not yet been emptied. -Meanwhile, hw is pointing at 21, which is free. - -The "Have nnn decrs" refers to the descr starting at the tail: in this -case, nnn=1 descr, starting at descr 20. The "Last nnn descrs" refers -to all of the rest of the descrs, from the last status change. The "nnn" -is a count of how many descrs have exactly the same status. - -The status x4... corresponds to "full" and status xa... corresponds -to "empty". The actual value printed is RXCOMST_A. 
- -In the device driver source code, a different set of names are -used for these same concepts, so that:: - - "empty" == SPIDER_NET_DESCR_CARDOWNED == 0xa - "full" == SPIDER_NET_DESCR_FRAME_END == 0x4 - "not in use" == SPIDER_NET_DESCR_NOT_IN_USE == 0xf - - -The RX RAM full bug/feature -=========================== - -As long as the OS can empty out the RX buffers at a rate faster than -the hardware can fill them, there is no problem. If, for some reason, -the OS fails to empty the RX ring fast enough, the hardware GDACTDPA -pointer will catch up to the head, notice the not-empty condition, -ad stop. However, RX packets may still continue arriving on the wire. -The spidernet chip can save some limited number of these in local RAM. -When this local ram fills up, the spider chip will issue an interrupt -indicating this (GHIINT0STS will show ERRINT, and the GRMFLLINT bit -will be set in GHIINT1STS). When the RX ram full condition occurs, -a certain bug/feature is triggered that has to be specially handled. -This section describes the special handling for this condition. - -When the OS finally has a chance to run, it will empty out the RX ring. -In particular, it will clear the descriptor on which the hardware had -stopped. However, once the hardware has decided that a certain -descriptor is invalid, it will not restart at that descriptor; instead -it will restart at the next descr. This potentially will lead to a -deadlock condition, as the tail pointer will be pointing at this descr, -which, from the OS point of view, is empty; the OS will be waiting for -this descr to be filled. However, the hardware has skipped this descr, -and is filling the next descrs. Since the OS doesn't see this, there -is a potential deadlock, with the OS waiting for one descr to fill, -while the hardware is waiting for a different set of descrs to become -empty. - -A call to show_rx_chain() at this point indicates the nature of the -problem. A typical print when the network is hung shows the following:: - - net eth1: Spider RX RAM full, incoming packets might be discarded! - net eth1: Total number of descrs=256 - net eth1: Chain tail located at descr=255 - net eth1: Chain head is at 255 - net eth1: HW curr desc (GDACTDPA) is at 0 - net eth1: Have 1 descrs with stat=xa0800000 - net eth1: HW next desc (GDACNEXTDA) is at 1 - net eth1: Have 127 descrs with stat=x40800101 - net eth1: Have 1 descrs with stat=x40800001 - net eth1: Have 126 descrs with stat=x40800101 - net eth1: Last 1 descrs with stat=xa0800000 - -Both the tail and head pointers are pointing at descr 255, which is -marked xa... which is "empty". Thus, from the OS point of view, there -is nothing to be done. In particular, there is the implicit assumption -that everything in front of the "empty" descr must surely also be empty, -as explained in the last section. The OS is waiting for descr 255 to -become non-empty, which, in this case, will never happen. - -The HW pointer is at descr 0. This descr is marked 0x4.. or "full". -Since its already full, the hardware can do nothing more, and thus has -halted processing. Notice that descrs 0 through 254 are all marked -"full", while descr 254 and 255 are empty. (The "Last 1 descrs" is -descr 254, since tail was at 255.) Thus, the system is deadlocked, -and there can be no forward progress; the OS thinks there's nothing -to do, and the hardware has nowhere to put incoming data. - -This bug/feature is worked around with the spider_net_resync_head_ptr() -routine. 
When the driver receives RX interrupts, but an examination -of the RX chain seems to show it is empty, then it is probable that -the hardware has skipped a descr or two (sometimes dozens under heavy -network conditions). The spider_net_resync_head_ptr() subroutine will -search the ring for the next full descr, and the driver will resume -operations there. Since this will leave "holes" in the ring, there -is also a spider_net_resync_tail_ptr() that will skip over such holes. - -As of this writing, the spider_net_resync() strategy seems to work very -well, even under heavy network loads. - - -The TX ring -=========== -The TX ring uses a low-watermark interrupt scheme to make sure that -the TX queue is appropriately serviced for large packet sizes. - -For packet sizes greater than about 1KBytes, the kernel can fill -the TX ring quicker than the device can drain it. Once the ring -is full, the netdev is stopped. When there is room in the ring, -the netdev needs to be reawakened, so that more TX packets are placed -in the ring. The hardware can empty the ring about four times per jiffy, -so its not appropriate to wait for the poll routine to refill, since -the poll routine runs only once per jiffy. The low-watermark mechanism -marks a descr about 1/4th of the way from the bottom of the queue, so -that an interrupt is generated when the descr is processed. This -interrupt wakes up the netdev, which can then refill the queue. -For large packets, this mechanism generates a relatively small number -of interrupts, about 1K/sec. For smaller packets, this will drop to zero -interrupts, as the hardware can empty the queue faster than the kernel -can fill it. diff --git a/MAINTAINERS b/MAINTAINERS index 6b9d13f4f890..94019bbdfc09 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -22526,15 +22526,6 @@ F: include/linux/spi/ F: include/uapi/linux/spi/ F: tools/spi/ -SPIDERNET NETWORK DRIVER for CELL -M: Ishizaki Kou <kou.ishizaki@toshiba.co.jp> -M: Geoff Levand <geoff@infradead.org> -L: netdev@vger.kernel.org -L: linuxppc-dev@lists.ozlabs.org -S: Maintained -F: Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst -F: drivers/net/ethernet/toshiba/spider_net* - SPMI SUBSYSTEM M: Stephen Boyd <sboyd@kernel.org> L: linux-kernel@vger.kernel.org diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 8a2a6e403eb1..b5630f8ad436 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -288,6 +288,7 @@ config PPC select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0) select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0) select HAVE_STATIC_CALL if PPC32 + select HAVE_STATIC_CALL_INLINE if PPC32 select HAVE_SYSCALL_TRACEPOINTS select HAVE_VIRT_CPU_ACCOUNTING select HAVE_VIRT_CPU_ACCOUNTING_GEN @@ -413,12 +414,9 @@ config ARCH_HAS_ADD_PAGES config PPC_DCR_NATIVE bool -config PPC_DCR_MMIO - bool - config PPC_DCR bool - depends on PPC_DCR_NATIVE || PPC_DCR_MMIO + depends on PPC_DCR_NATIVE default y config PPC_PCI_OF_BUS_MAP @@ -444,11 +442,6 @@ config PPC_PCI_BUS_NUM_DOMAIN_DEPENDENT PCI domain dependent and each PCI controller on own domain can have 256 PCI buses, like it is on other Linux architectures. 
-config PPC_OF_PLATFORM_PCI - bool - depends on PCI - depends on PPC64 # not supported on 32 bits yet - config ARCH_SUPPORTS_UPROBES def_bool y diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug index 20d05605fa83..f15e5920080b 100644 --- a/arch/powerpc/Kconfig.debug +++ b/arch/powerpc/Kconfig.debug @@ -216,13 +216,6 @@ config PPC_EARLY_DEBUG_RTAS_PANEL help Select this to enable early debugging via the RTAS panel. -config PPC_EARLY_DEBUG_RTAS_CONSOLE - bool "RTAS Console" - depends on PPC_RTAS - select UDBG_RTAS_CONSOLE - help - Select this to enable early debugging via the RTAS console. - config PPC_EARLY_DEBUG_PAS_REALMODE bool "PA Semi real mode" depends on PPC_PASEMI diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile index 1ff6ad4f6cd2..184d0680e661 100644 --- a/arch/powerpc/boot/Makefile +++ b/arch/powerpc/boot/Makefile @@ -173,7 +173,6 @@ src-plat-$(CONFIG_PPC_PS3) += ps3-head.S ps3-hvcall.S ps3.c src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c epapr-wrapper.c src-plat-$(CONFIG_PPC_PSERIES) += pseries-head.S src-plat-$(CONFIG_PPC_POWERNV) += pseries-head.S -src-plat-$(CONFIG_PPC_IBM_CELL_BLADE) += pseries-head.S src-plat-$(CONFIG_MVME7100) += motload-head.S mvme7100.c src-plat-$(CONFIG_PPC_MICROWATT) += fixed-head.S microwatt.c @@ -276,7 +275,6 @@ quiet_cmd_wrap = WRAP $@ image-$(CONFIG_PPC_PSERIES) += zImage.pseries image-$(CONFIG_PPC_POWERNV) += zImage.pseries -image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries image-$(CONFIG_PPC_PS3) += dtbImage.ps3 image-$(CONFIG_PPC_CHRP) += zImage.chrp image-$(CONFIG_PPC_EFIKA) += zImage.chrp diff --git a/arch/powerpc/boot/dts/microwatt.dts b/arch/powerpc/boot/dts/microwatt.dts index 269e930b3b0b..c4e4d2a9b460 100644 --- a/arch/powerpc/boot/dts/microwatt.dts +++ b/arch/powerpc/boot/dts/microwatt.dts @@ -1,4 +1,5 @@ /dts-v1/; +#include <dt-bindings/gpio/gpio.h> / { #size-cells = <0x02>; @@ -8,6 +9,7 @@ aliases { serial0 = &UART0; + ethernet = &enet0; }; reserved-memory { @@ -35,40 +37,79 @@ ibm,powerpc-cpu-features { display-name = "Microwatt"; - isa = <3000>; + isa = <3010>; device_type = "cpu-features"; compatible = "ibm,powerpc-cpu-features"; mmu-radix { isa = <3000>; - usable-privilege = <2>; + usable-privilege = <6>; + os-support = <0>; }; little-endian { - isa = <2050>; - usable-privilege = <3>; + isa = <0>; + usable-privilege = <7>; + os-support = <0>; hwcap-bit-nr = <1>; }; cache-inhibited-large-page { - isa = <2040>; - usable-privilege = <2>; + isa = <0>; + usable-privilege = <6>; + os-support = <0>; }; fixed-point-v3 { isa = <3000>; - usable-privilege = <3>; + usable-privilege = <7>; }; no-execute { - isa = <2010>; + isa = <0x00>; usable-privilege = <2>; + os-support = <0>; }; floating-point { + hfscr-bit-nr = <0>; hwcap-bit-nr = <27>; isa = <0>; - usable-privilege = <3>; + usable-privilege = <7>; + hv-support = <1>; + os-support = <0>; + }; + + prefixed-instructions { + hfscr-bit-nr = <13>; + fscr-bit-nr = <13>; + isa = <3010>; + usable-privilege = <7>; + os-support = <1>; + hv-support = <1>; + }; + + tar { + hfscr-bit-nr = <8>; + fscr-bit-nr = <8>; + isa = <2070>; + usable-privilege = <7>; + os-support = <1>; + hv-support = <1>; + hwcap-bit-nr = <58>; + }; + + control-register { + isa = <0>; + usable-privilege = <7>; + }; + + system-call-vectored { + isa = <3000>; + usable-privilege = <7>; + os-support = <1>; + fscr-bit-nr = <12>; + hwcap-bit-nr = <52>; }; }; @@ -101,6 +142,36 @@ ibm,mmu-lpid-bits = <12>; ibm,mmu-pid-bits = <20>; }; + + PowerPC,Microwatt@1 { + i-cache-sets = <2>; + ibm,dec-bits = <64>; 
+ reservation-granule-size = <64>; + clock-frequency = <100000000>; + timebase-frequency = <100000000>; + i-tlb-sets = <1>; + ibm,ppc-interrupt-server#s = <1>; + i-cache-block-size = <64>; + d-cache-block-size = <64>; + d-cache-sets = <2>; + i-tlb-size = <64>; + cpu-version = <0x990000>; + status = "okay"; + i-cache-size = <0x1000>; + ibm,processor-radix-AP-encodings = <0x0c 0xa0000010 0x20000015 0x4000001e>; + tlb-size = <0>; + tlb-sets = <0>; + device_type = "cpu"; + d-tlb-size = <128>; + d-tlb-sets = <2>; + reg = <1>; + general-purpose; + 64-bit; + d-cache-size = <0x1000>; + ibm,chip-id = <0>; + ibm,mmu-lpid-bits = <12>; + ibm,mmu-pid-bits = <20>; + }; }; soc@c0000000 { @@ -113,8 +184,8 @@ interrupt-controller@4000 { compatible = "openpower,xics-presentation", "ibm,ppc-xicp"; - ibm,interrupt-server-ranges = <0x0 0x1>; - reg = <0x4000 0x100>; + ibm,interrupt-server-ranges = <0x0 0x2>; + reg = <0x4000 0x10 0x4010 0x10>; }; ICS: interrupt-controller@5000 { @@ -138,7 +209,18 @@ interrupts = <0x10 0x1>; }; - ethernet@8020000 { + gpio: gpio@7000 { + device_type = "gpio"; + compatible = "faraday,ftgpio010"; + gpio-controller; + #gpio-cells = <2>; + reg = <0x7000 0x80>; + interrupts = <0x14 1>; + interrupt-controller; + #interrupt-cells = <2>; + }; + + enet0: ethernet@8020000 { compatible = "litex,liteeth"; reg = <0x8021000 0x100 0x8020800 0x100 @@ -160,7 +242,6 @@ reg-names = "phy", "core", "reader", "writer", "irq"; bus-width = <4>; interrupts = <0x13 1>; - cap-sd-highspeed; clocks = <&sys_clk>; }; }; diff --git a/arch/powerpc/configs/cell_defconfig b/arch/powerpc/configs/cell_defconfig index b33f0034990c..3347192b77b8 100644 --- a/arch/powerpc/configs/cell_defconfig +++ b/arch/powerpc/configs/cell_defconfig @@ -25,7 +25,6 @@ CONFIG_PS3_DISK=y CONFIG_PS3_ROM=m CONFIG_PS3_FLASH=m CONFIG_PS3_LPM=m -CONFIG_PPC_IBM_CELL_BLADE=y CONFIG_RTAS_FLASH=y CONFIG_CPU_FREQ=y CONFIG_CPU_FREQ_GOV_POWERSAVE=y @@ -133,7 +132,6 @@ CONFIG_SKGE=m CONFIG_SKY2=m CONFIG_GELIC_NET=m CONFIG_GELIC_WIRELESS=y -CONFIG_SPIDER_NET=y # CONFIG_INPUT_KEYBOARD is not set # CONFIG_INPUT_MOUSE is not set # CONFIG_SERIO_I8042 is not set diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig index 465eb96c755e..5fa154185efa 100644 --- a/arch/powerpc/configs/ppc64_defconfig +++ b/arch/powerpc/configs/ppc64_defconfig @@ -51,7 +51,6 @@ CONFIG_PS3_DISK=m CONFIG_PS3_ROM=m CONFIG_PS3_FLASH=m CONFIG_PS3_LPM=m -CONFIG_PPC_IBM_CELL_BLADE=y CONFIG_RTAS_FLASH=m CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y CONFIG_CPU_FREQ_GOV_POWERSAVE=y @@ -228,7 +227,6 @@ CONFIG_NETXEN_NIC=m CONFIG_SUNGEM=y CONFIG_GELIC_NET=m CONFIG_GELIC_WIRELESS=y -CONFIG_SPIDER_NET=m CONFIG_BROADCOM_PHY=m CONFIG_MARVELL_PHY=y CONFIG_PPP=m diff --git a/arch/powerpc/crypto/Makefile b/arch/powerpc/crypto/Makefile index 9b38f4a7bc15..2f00b22b0823 100644 --- a/arch/powerpc/crypto/Makefile +++ b/arch/powerpc/crypto/Makefile @@ -51,3 +51,4 @@ $(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y +OBJECT_FILES_NON_STANDARD_ghashp8-ppc.o := y diff --git a/arch/powerpc/include/asm/cell-pmu.h b/arch/powerpc/include/asm/cell-pmu.h index 6a79b5d1c44f..7fbefd64b4fb 100644 --- a/arch/powerpc/include/asm/cell-pmu.h +++ b/arch/powerpc/include/asm/cell-pmu.h @@ -20,36 +20,9 @@ /* Macros for the pm_control register. 
*/ #define CBE_PM_16BIT_CTR(ctr) (1 << (24 - ((ctr) & (NR_PHYS_CTRS - 1)))) -#define CBE_PM_ENABLE_PERF_MON 0x80000000 -#define CBE_PM_STOP_AT_MAX 0x40000000 -#define CBE_PM_TRACE_MODE_GET(pm_control) (((pm_control) >> 28) & 0x3) -#define CBE_PM_TRACE_MODE_SET(mode) (((mode) & 0x3) << 28) -#define CBE_PM_TRACE_BUF_OVFLW(bit) (((bit) & 0x1) << 17) -#define CBE_PM_COUNT_MODE_SET(count) (((count) & 0x3) << 18) -#define CBE_PM_FREEZE_ALL_CTRS 0x00100000 -#define CBE_PM_ENABLE_EXT_TRACE 0x00008000 -#define CBE_PM_SPU_ADDR_TRACE_SET(msk) (((msk) & 0x3) << 9) /* Macros for the trace_address register. */ -#define CBE_PM_TRACE_BUF_FULL 0x00000800 #define CBE_PM_TRACE_BUF_EMPTY 0x00000400 -#define CBE_PM_TRACE_BUF_DATA_COUNT(ta) ((ta) & 0x3ff) -#define CBE_PM_TRACE_BUF_MAX_COUNT 0x400 - -/* Macros for the pm07_control registers. */ -#define CBE_PM_CTR_INPUT_MUX(pm07_control) (((pm07_control) >> 26) & 0x3f) -#define CBE_PM_CTR_INPUT_CONTROL 0x02000000 -#define CBE_PM_CTR_POLARITY 0x01000000 -#define CBE_PM_CTR_COUNT_CYCLES 0x00800000 -#define CBE_PM_CTR_ENABLE 0x00400000 -#define PM07_CTR_INPUT_MUX(x) (((x) & 0x3F) << 26) -#define PM07_CTR_INPUT_CONTROL(x) (((x) & 1) << 25) -#define PM07_CTR_POLARITY(x) (((x) & 1) << 24) -#define PM07_CTR_COUNT_CYCLES(x) (((x) & 1) << 23) -#define PM07_CTR_ENABLE(x) (((x) & 1) << 22) - -/* Macros for the pm_status register. */ -#define CBE_PM_CTR_OVERFLOW_INTR(ctr) (1 << (31 - ((ctr) & 7))) enum pm_reg_name { group_control, @@ -62,33 +35,4 @@ enum pm_reg_name { pm_start_stop, }; -/* Routines for reading/writing the PMU registers. */ -extern u32 cbe_read_phys_ctr(u32 cpu, u32 phys_ctr); -extern void cbe_write_phys_ctr(u32 cpu, u32 phys_ctr, u32 val); -extern u32 cbe_read_ctr(u32 cpu, u32 ctr); -extern void cbe_write_ctr(u32 cpu, u32 ctr, u32 val); - -extern u32 cbe_read_pm07_control(u32 cpu, u32 ctr); -extern void cbe_write_pm07_control(u32 cpu, u32 ctr, u32 val); -extern u32 cbe_read_pm(u32 cpu, enum pm_reg_name reg); -extern void cbe_write_pm(u32 cpu, enum pm_reg_name reg, u32 val); - -extern u32 cbe_get_ctr_size(u32 cpu, u32 phys_ctr); -extern void cbe_set_ctr_size(u32 cpu, u32 phys_ctr, u32 ctr_size); - -extern void cbe_enable_pm(u32 cpu); -extern void cbe_disable_pm(u32 cpu); - -extern void cbe_read_trace_buffer(u32 cpu, u64 *buf); - -extern void cbe_enable_pm_interrupts(u32 cpu, u32 thread, u32 mask); -extern void cbe_disable_pm_interrupts(u32 cpu); -extern u32 cbe_get_and_clear_pm_interrupts(u32 cpu); -extern void cbe_sync_irq(int node); - -#define CBE_COUNT_SUPERVISOR_MODE 0 -#define CBE_COUNT_HYPERVISOR_MODE 1 -#define CBE_COUNT_PROBLEM_MODE 2 -#define CBE_COUNT_ALL_MODES 3 - #endif /* __ASM_CELL_PMU_H__ */ diff --git a/arch/powerpc/include/asm/cell-regs.h b/arch/powerpc/include/asm/cell-regs.h index e1c431ef30e0..20f7339a3d4a 100644 --- a/arch/powerpc/include/asm/cell-regs.h +++ b/arch/powerpc/include/asm/cell-regs.h @@ -18,293 +18,6 @@ #include <asm/cell-pmu.h> -/* - * - * Some HID register definitions - * - */ - -/* CBE specific HID0 bits */ -#define HID0_CBE_THERM_WAKEUP 0x0000020000000000ul -#define HID0_CBE_SYSERR_WAKEUP 0x0000008000000000ul -#define HID0_CBE_THERM_INT_EN 0x0000000400000000ul -#define HID0_CBE_SYSERR_INT_EN 0x0000000200000000ul - -#define MAX_CBE 2 - -/* - * - * Pervasive unit register definitions - * - */ - -union spe_reg { - u64 val; - u8 spe[8]; -}; - -union ppe_spe_reg { - u64 val; - struct { - u32 ppe; - u32 spe; - }; -}; - - -struct cbe_pmd_regs { - /* Debug Bus Control */ - u64 pad_0x0000; /* 0x0000 */ - - u64 group_control; /* 
0x0008 */ - - u8 pad_0x0010_0x00a8 [0x00a8 - 0x0010]; /* 0x0010 */ - - u64 debug_bus_control; /* 0x00a8 */ - - u8 pad_0x00b0_0x0100 [0x0100 - 0x00b0]; /* 0x00b0 */ - - u64 trace_aux_data; /* 0x0100 */ - u64 trace_buffer_0_63; /* 0x0108 */ - u64 trace_buffer_64_127; /* 0x0110 */ - u64 trace_address; /* 0x0118 */ - u64 ext_tr_timer; /* 0x0120 */ - - u8 pad_0x0128_0x0400 [0x0400 - 0x0128]; /* 0x0128 */ - - /* Performance Monitor */ - u64 pm_status; /* 0x0400 */ - u64 pm_control; /* 0x0408 */ - u64 pm_interval; /* 0x0410 */ - u64 pm_ctr[4]; /* 0x0418 */ - u64 pm_start_stop; /* 0x0438 */ - u64 pm07_control[8]; /* 0x0440 */ - - u8 pad_0x0480_0x0800 [0x0800 - 0x0480]; /* 0x0480 */ - - /* Thermal Sensor Registers */ - union spe_reg ts_ctsr1; /* 0x0800 */ - u64 ts_ctsr2; /* 0x0808 */ - union spe_reg ts_mtsr1; /* 0x0810 */ - u64 ts_mtsr2; /* 0x0818 */ - union spe_reg ts_itr1; /* 0x0820 */ - u64 ts_itr2; /* 0x0828 */ - u64 ts_gitr; /* 0x0830 */ - u64 ts_isr; /* 0x0838 */ - u64 ts_imr; /* 0x0840 */ - union spe_reg tm_cr1; /* 0x0848 */ - u64 tm_cr2; /* 0x0850 */ - u64 tm_simr; /* 0x0858 */ - union ppe_spe_reg tm_tpr; /* 0x0860 */ - union spe_reg tm_str1; /* 0x0868 */ - u64 tm_str2; /* 0x0870 */ - union ppe_spe_reg tm_tsr; /* 0x0878 */ - - /* Power Management */ - u64 pmcr; /* 0x0880 */ -#define CBE_PMD_PAUSE_ZERO_CONTROL 0x10000 - u64 pmsr; /* 0x0888 */ - - /* Time Base Register */ - u64 tbr; /* 0x0890 */ - - u8 pad_0x0898_0x0c00 [0x0c00 - 0x0898]; /* 0x0898 */ - - /* Fault Isolation Registers */ - u64 checkstop_fir; /* 0x0c00 */ - u64 recoverable_fir; /* 0x0c08 */ - u64 spec_att_mchk_fir; /* 0x0c10 */ - u32 fir_mode_reg; /* 0x0c18 */ - u8 pad_0x0c1c_0x0c20 [4]; /* 0x0c1c */ -#define CBE_PMD_FIR_MODE_M8 0x00800 - u64 fir_enable_mask; /* 0x0c20 */ - - u8 pad_0x0c28_0x0ca8 [0x0ca8 - 0x0c28]; /* 0x0c28 */ - u64 ras_esc_0; /* 0x0ca8 */ - u8 pad_0x0cb0_0x1000 [0x1000 - 0x0cb0]; /* 0x0cb0 */ -}; - -extern struct cbe_pmd_regs __iomem *cbe_get_pmd_regs(struct device_node *np); -extern struct cbe_pmd_regs __iomem *cbe_get_cpu_pmd_regs(int cpu); - -/* - * PMU shadow registers - * - * Many of the registers in the performance monitoring unit are write-only, - * so we need to save a copy of what we write to those registers. - * - * The actual data counters are read/write. However, writing to the counters - * only takes effect if the PMU is enabled. Otherwise the value is stored in - * a hardware latch until the next time the PMU is enabled. So we save a copy - * of the counter values if we need to read them back while the PMU is - * disabled. The counter_value_in_latch field is a bitmap indicating which - * counters currently have a value waiting to be written. 
- */ - -struct cbe_pmd_shadow_regs { - u32 group_control; - u32 debug_bus_control; - u32 trace_address; - u32 ext_tr_timer; - u32 pm_status; - u32 pm_control; - u32 pm_interval; - u32 pm_start_stop; - u32 pm07_control[NR_CTRS]; - - u32 pm_ctr[NR_PHYS_CTRS]; - u32 counter_value_in_latch; -}; - -extern struct cbe_pmd_shadow_regs *cbe_get_pmd_shadow_regs(struct device_node *np); -extern struct cbe_pmd_shadow_regs *cbe_get_cpu_pmd_shadow_regs(int cpu); - -/* - * - * IIC unit register definitions - * - */ - -struct cbe_iic_pending_bits { - u32 data; - u8 flags; - u8 class; - u8 source; - u8 prio; -}; - -#define CBE_IIC_IRQ_VALID 0x80 -#define CBE_IIC_IRQ_IPI 0x40 - -struct cbe_iic_thread_regs { - struct cbe_iic_pending_bits pending; - struct cbe_iic_pending_bits pending_destr; - u64 generate; - u64 prio; -}; - -struct cbe_iic_regs { - u8 pad_0x0000_0x0400[0x0400 - 0x0000]; /* 0x0000 */ - - /* IIC interrupt registers */ - struct cbe_iic_thread_regs thread[2]; /* 0x0400 */ - - u64 iic_ir; /* 0x0440 */ -#define CBE_IIC_IR_PRIO(x) (((x) & 0xf) << 12) -#define CBE_IIC_IR_DEST_NODE(x) (((x) & 0xf) << 4) -#define CBE_IIC_IR_DEST_UNIT(x) ((x) & 0xf) -#define CBE_IIC_IR_IOC_0 0x0 -#define CBE_IIC_IR_IOC_1S 0xb -#define CBE_IIC_IR_PT_0 0xe -#define CBE_IIC_IR_PT_1 0xf - - u64 iic_is; /* 0x0448 */ -#define CBE_IIC_IS_PMI 0x2 - - u8 pad_0x0450_0x0500[0x0500 - 0x0450]; /* 0x0450 */ - - /* IOC FIR */ - u64 ioc_fir_reset; /* 0x0500 */ - u64 ioc_fir_set; /* 0x0508 */ - u64 ioc_checkstop_enable; /* 0x0510 */ - u64 ioc_fir_error_mask; /* 0x0518 */ - u64 ioc_syserr_enable; /* 0x0520 */ - u64 ioc_fir; /* 0x0528 */ - - u8 pad_0x0530_0x1000[0x1000 - 0x0530]; /* 0x0530 */ -}; - -extern struct cbe_iic_regs __iomem *cbe_get_iic_regs(struct device_node *np); -extern struct cbe_iic_regs __iomem *cbe_get_cpu_iic_regs(int cpu); - - -struct cbe_mic_tm_regs { - u8 pad_0x0000_0x0040[0x0040 - 0x0000]; /* 0x0000 */ - - u64 mic_ctl_cnfg2; /* 0x0040 */ -#define CBE_MIC_ENABLE_AUX_TRC 0x8000000000000000LL -#define CBE_MIC_DISABLE_PWR_SAV_2 0x0200000000000000LL -#define CBE_MIC_DISABLE_AUX_TRC_WRAP 0x0100000000000000LL -#define CBE_MIC_ENABLE_AUX_TRC_INT 0x0080000000000000LL - - u64 pad_0x0048; /* 0x0048 */ - - u64 mic_aux_trc_base; /* 0x0050 */ - u64 mic_aux_trc_max_addr; /* 0x0058 */ - u64 mic_aux_trc_cur_addr; /* 0x0060 */ - u64 mic_aux_trc_grf_addr; /* 0x0068 */ - u64 mic_aux_trc_grf_data; /* 0x0070 */ - - u64 pad_0x0078; /* 0x0078 */ - - u64 mic_ctl_cnfg_0; /* 0x0080 */ -#define CBE_MIC_DISABLE_PWR_SAV_0 0x8000000000000000LL - - u64 pad_0x0088; /* 0x0088 */ - - u64 slow_fast_timer_0; /* 0x0090 */ - u64 slow_next_timer_0; /* 0x0098 */ - - u8 pad_0x00a0_0x00f8[0x00f8 - 0x00a0]; /* 0x00a0 */ - u64 mic_df_ecc_address_0; /* 0x00f8 */ - - u8 pad_0x0100_0x01b8[0x01b8 - 0x0100]; /* 0x0100 */ - u64 mic_df_ecc_address_1; /* 0x01b8 */ - - u64 mic_ctl_cnfg_1; /* 0x01c0 */ -#define CBE_MIC_DISABLE_PWR_SAV_1 0x8000000000000000LL - - u64 pad_0x01c8; /* 0x01c8 */ - - u64 slow_fast_timer_1; /* 0x01d0 */ - u64 slow_next_timer_1; /* 0x01d8 */ - - u8 pad_0x01e0_0x0208[0x0208 - 0x01e0]; /* 0x01e0 */ - u64 mic_exc; /* 0x0208 */ -#define CBE_MIC_EXC_BLOCK_SCRUB 0x0800000000000000ULL -#define CBE_MIC_EXC_FAST_SCRUB 0x0100000000000000ULL - - u64 mic_mnt_cfg; /* 0x0210 */ -#define CBE_MIC_MNT_CFG_CHAN_0_POP 0x0002000000000000ULL -#define CBE_MIC_MNT_CFG_CHAN_1_POP 0x0004000000000000ULL - - u64 mic_df_config; /* 0x0218 */ -#define CBE_MIC_ECC_DISABLE_0 0x4000000000000000ULL -#define CBE_MIC_ECC_REP_SINGLE_0 0x2000000000000000ULL -#define 
CBE_MIC_ECC_DISABLE_1 0x0080000000000000ULL -#define CBE_MIC_ECC_REP_SINGLE_1 0x0040000000000000ULL - - u8 pad_0x0220_0x0230[0x0230 - 0x0220]; /* 0x0220 */ - u64 mic_fir; /* 0x0230 */ -#define CBE_MIC_FIR_ECC_SINGLE_0_ERR 0x0200000000000000ULL -#define CBE_MIC_FIR_ECC_MULTI_0_ERR 0x0100000000000000ULL -#define CBE_MIC_FIR_ECC_SINGLE_1_ERR 0x0080000000000000ULL -#define CBE_MIC_FIR_ECC_MULTI_1_ERR 0x0040000000000000ULL -#define CBE_MIC_FIR_ECC_ERR_MASK 0xffff000000000000ULL -#define CBE_MIC_FIR_ECC_SINGLE_0_CTE 0x0000020000000000ULL -#define CBE_MIC_FIR_ECC_MULTI_0_CTE 0x0000010000000000ULL -#define CBE_MIC_FIR_ECC_SINGLE_1_CTE 0x0000008000000000ULL -#define CBE_MIC_FIR_ECC_MULTI_1_CTE 0x0000004000000000ULL -#define CBE_MIC_FIR_ECC_CTE_MASK 0x0000ffff00000000ULL -#define CBE_MIC_FIR_ECC_SINGLE_0_RESET 0x0000000002000000ULL -#define CBE_MIC_FIR_ECC_MULTI_0_RESET 0x0000000001000000ULL -#define CBE_MIC_FIR_ECC_SINGLE_1_RESET 0x0000000000800000ULL -#define CBE_MIC_FIR_ECC_MULTI_1_RESET 0x0000000000400000ULL -#define CBE_MIC_FIR_ECC_RESET_MASK 0x00000000ffff0000ULL -#define CBE_MIC_FIR_ECC_SINGLE_0_SET 0x0000000000000200ULL -#define CBE_MIC_FIR_ECC_MULTI_0_SET 0x0000000000000100ULL -#define CBE_MIC_FIR_ECC_SINGLE_1_SET 0x0000000000000080ULL -#define CBE_MIC_FIR_ECC_MULTI_1_SET 0x0000000000000040ULL -#define CBE_MIC_FIR_ECC_SET_MASK 0x000000000000ffffULL - u64 mic_fir_debug; /* 0x0238 */ - - u8 pad_0x0240_0x1000[0x1000 - 0x0240]; /* 0x0240 */ -}; - -extern struct cbe_mic_tm_regs __iomem *cbe_get_mic_tm_regs(struct device_node *np); -extern struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu); - - /* Cell page table entries */ #define CBE_IOPTE_PP_W 0x8000000000000000ul /* protection: write */ #define CBE_IOPTE_PP_R 0x4000000000000000ul /* protection: read */ @@ -315,13 +28,4 @@ extern struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu); #define CBE_IOPTE_H 0x0000000000000800ul /* cache hint */ #define CBE_IOPTE_IOID_Mask 0x00000000000007fful /* ioid */ -/* some utility functions to deal with SMT */ -extern u32 cbe_get_hw_thread_id(int cpu); -extern u32 cbe_cpu_to_node(int cpu); -extern u32 cbe_node_to_cpu(int node); - -/* Init this module early */ -extern void cbe_regs_init(void); - - #endif /* CBE_REGS_H */ diff --git a/arch/powerpc/include/asm/dcr-generic.h b/arch/powerpc/include/asm/dcr-generic.h deleted file mode 100644 index 099c28dd40b9..000000000000 --- a/arch/powerpc/include/asm/dcr-generic.h +++ /dev/null @@ -1,36 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * (c) Copyright 2006 Benjamin Herrenschmidt, IBM Corp. 
- * <benh@kernel.crashing.org> - */ - -#ifndef _ASM_POWERPC_DCR_GENERIC_H -#define _ASM_POWERPC_DCR_GENERIC_H -#ifdef __KERNEL__ -#ifndef __ASSEMBLY__ - -enum host_type_t {DCR_HOST_MMIO, DCR_HOST_NATIVE, DCR_HOST_INVALID}; - -typedef struct { - enum host_type_t type; - union { - dcr_host_mmio_t mmio; - dcr_host_native_t native; - } host; -} dcr_host_t; - -extern bool dcr_map_ok_generic(dcr_host_t host); - -extern dcr_host_t dcr_map_generic(struct device_node *dev, unsigned int dcr_n, - unsigned int dcr_c); -extern void dcr_unmap_generic(dcr_host_t host, unsigned int dcr_c); - -extern u32 dcr_read_generic(dcr_host_t host, unsigned int dcr_n); - -extern void dcr_write_generic(dcr_host_t host, unsigned int dcr_n, u32 value); - -#endif /* __ASSEMBLY__ */ -#endif /* __KERNEL__ */ -#endif /* _ASM_POWERPC_DCR_GENERIC_H */ - - diff --git a/arch/powerpc/include/asm/dcr-mmio.h b/arch/powerpc/include/asm/dcr-mmio.h deleted file mode 100644 index fc6d93ef4a13..000000000000 --- a/arch/powerpc/include/asm/dcr-mmio.h +++ /dev/null @@ -1,44 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * (c) Copyright 2006 Benjamin Herrenschmidt, IBM Corp. - * <benh@kernel.crashing.org> - */ - -#ifndef _ASM_POWERPC_DCR_MMIO_H -#define _ASM_POWERPC_DCR_MMIO_H -#ifdef __KERNEL__ - -#include <asm/io.h> - -typedef struct { - void __iomem *token; - unsigned int stride; - unsigned int base; -} dcr_host_mmio_t; - -static inline bool dcr_map_ok_mmio(dcr_host_mmio_t host) -{ - return host.token != NULL; -} - -extern dcr_host_mmio_t dcr_map_mmio(struct device_node *dev, - unsigned int dcr_n, - unsigned int dcr_c); -extern void dcr_unmap_mmio(dcr_host_mmio_t host, unsigned int dcr_c); - -static inline u32 dcr_read_mmio(dcr_host_mmio_t host, unsigned int dcr_n) -{ - return in_be32(host.token + ((host.base + dcr_n) * host.stride)); -} - -static inline void dcr_write_mmio(dcr_host_mmio_t host, - unsigned int dcr_n, - u32 value) -{ - out_be32(host.token + ((host.base + dcr_n) * host.stride), value); -} - -#endif /* __KERNEL__ */ -#endif /* _ASM_POWERPC_DCR_MMIO_H */ - - diff --git a/arch/powerpc/include/asm/dcr.h b/arch/powerpc/include/asm/dcr.h index 64030e3a1f30..180021cd0b30 100644 --- a/arch/powerpc/include/asm/dcr.h +++ b/arch/powerpc/include/asm/dcr.h @@ -10,46 +10,14 @@ #ifndef __ASSEMBLY__ #ifdef CONFIG_PPC_DCR -#ifdef CONFIG_PPC_DCR_NATIVE #include <asm/dcr-native.h> -#endif -#ifdef CONFIG_PPC_DCR_MMIO -#include <asm/dcr-mmio.h> -#endif - - -/* Indirection layer for providing both NATIVE and MMIO support. 
*/ - -#if defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) - -#include <asm/dcr-generic.h> - -#define DCR_MAP_OK(host) dcr_map_ok_generic(host) -#define dcr_map(dev, dcr_n, dcr_c) dcr_map_generic(dev, dcr_n, dcr_c) -#define dcr_unmap(host, dcr_c) dcr_unmap_generic(host, dcr_c) -#define dcr_read(host, dcr_n) dcr_read_generic(host, dcr_n) -#define dcr_write(host, dcr_n, value) dcr_write_generic(host, dcr_n, value) - -#else - -#ifdef CONFIG_PPC_DCR_NATIVE typedef dcr_host_native_t dcr_host_t; #define DCR_MAP_OK(host) dcr_map_ok_native(host) #define dcr_map(dev, dcr_n, dcr_c) dcr_map_native(dev, dcr_n, dcr_c) #define dcr_unmap(host, dcr_c) dcr_unmap_native(host, dcr_c) #define dcr_read(host, dcr_n) dcr_read_native(host, dcr_n) #define dcr_write(host, dcr_n, value) dcr_write_native(host, dcr_n, value) -#else -typedef dcr_host_mmio_t dcr_host_t; -#define DCR_MAP_OK(host) dcr_map_ok_mmio(host) -#define dcr_map(dev, dcr_n, dcr_c) dcr_map_mmio(dev, dcr_n, dcr_c) -#define dcr_unmap(host, dcr_c) dcr_unmap_mmio(host, dcr_c) -#define dcr_read(host, dcr_n) dcr_read_mmio(host, dcr_n) -#define dcr_write(host, dcr_n, value) dcr_write_mmio(host, dcr_n, value) -#endif - -#endif /* defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) */ /* * additional helpers to read the DCR * base from the device-tree diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h index 65d1f291393d..eeef13db2770 100644 --- a/arch/powerpc/include/asm/hvcall.h +++ b/arch/powerpc/include/asm/hvcall.h @@ -348,6 +348,7 @@ #define H_SCM_FLUSH 0x44C #define H_GET_ENERGY_SCALE_INFO 0x450 #define H_PKS_SIGNED_UPDATE 0x454 +#define H_HTM 0x458 #define H_WATCHDOG 0x45C #define H_GUEST_GET_CAPABILITIES 0x460 #define H_GUEST_SET_CAPABILITIES 0x464 @@ -498,6 +499,39 @@ #define H_GUEST_CAP_POWER11 (1UL<<(63-3)) #define H_GUEST_CAP_BITMAP2 (1UL<<(63-63)) +/* + * Defines for H_HTM - Macros for hardware trace macro (HTM) function. 
+ */ +#define H_HTM_FLAGS_HARDWARE_TARGET (1ul << 63) +#define H_HTM_FLAGS_LOGICAL_TARGET (1ul << 62) +#define H_HTM_FLAGS_PROCID_TARGET (1ul << 61) +#define H_HTM_FLAGS_NOWRAP (1ul << 60) + +#define H_HTM_OP_SHIFT (63-15) +#define H_HTM_OP(x) ((unsigned long)(x)<<H_HTM_OP_SHIFT) +#define H_HTM_OP_CAPABILITIES 0x01 +#define H_HTM_OP_STATUS 0x02 +#define H_HTM_OP_SETUP 0x03 +#define H_HTM_OP_CONFIGURE 0x04 +#define H_HTM_OP_START 0x05 +#define H_HTM_OP_STOP 0x06 +#define H_HTM_OP_DECONFIGURE 0x07 +#define H_HTM_OP_DUMP_DETAILS 0x08 +#define H_HTM_OP_DUMP_DATA 0x09 +#define H_HTM_OP_DUMP_SYSMEM_CONF 0x0a +#define H_HTM_OP_DUMP_SYSPROC_CONF 0x0b + +#define H_HTM_TYPE_SHIFT (63-31) +#define H_HTM_TYPE(x) ((unsigned long)(x)<<H_HTM_TYPE_SHIFT) +#define H_HTM_TYPE_NEST 0x01 +#define H_HTM_TYPE_CORE 0x02 +#define H_HTM_TYPE_LLAT 0x03 +#define H_HTM_TYPE_GLOBAL 0xff + +#define H_HTM_TARGET_NODE_INDEX(x) ((unsigned long)(x)<<(63-15)) +#define H_HTM_TARGET_NODAL_CHIP_INDEX(x) ((unsigned long)(x)<<(63-31)) +#define H_HTM_TARGET_CORE_INDEX_ON_CHIP(x) ((unsigned long)(x)<<(63-47)) + #ifndef __ASSEMBLY__ #include <linux/types.h> diff --git a/arch/powerpc/include/asm/io-defs.h b/arch/powerpc/include/asm/io-defs.h index faf8617cc574..5c2be9b54a9d 100644 --- a/arch/powerpc/include/asm/io-defs.h +++ b/arch/powerpc/include/asm/io-defs.h @@ -1,61 +1,15 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* This file is meant to be include multiple times by other headers */ -/* last 2 argments are used by platforms/cell/io-workarounds.[ch] */ -DEF_PCI_AC_RET(readb, u8, (const PCI_IO_ADDR addr), (addr), mem, addr) -DEF_PCI_AC_RET(readw, u16, (const PCI_IO_ADDR addr), (addr), mem, addr) -DEF_PCI_AC_RET(readl, u32, (const PCI_IO_ADDR addr), (addr), mem, addr) -DEF_PCI_AC_RET(readw_be, u16, (const PCI_IO_ADDR addr), (addr), mem, addr) -DEF_PCI_AC_RET(readl_be, u32, (const PCI_IO_ADDR addr), (addr), mem, addr) -DEF_PCI_AC_NORET(writeb, (u8 val, PCI_IO_ADDR addr), (val, addr), mem, addr) -DEF_PCI_AC_NORET(writew, (u16 val, PCI_IO_ADDR addr), (val, addr), mem, addr) -DEF_PCI_AC_NORET(writel, (u32 val, PCI_IO_ADDR addr), (val, addr), mem, addr) -DEF_PCI_AC_NORET(writew_be, (u16 val, PCI_IO_ADDR addr), (val, addr), mem, addr) -DEF_PCI_AC_NORET(writel_be, (u32 val, PCI_IO_ADDR addr), (val, addr), mem, addr) - -#ifdef __powerpc64__ -DEF_PCI_AC_RET(readq, u64, (const PCI_IO_ADDR addr), (addr), mem, addr) -DEF_PCI_AC_RET(readq_be, u64, (const PCI_IO_ADDR addr), (addr), mem, addr) -DEF_PCI_AC_NORET(writeq, (u64 val, PCI_IO_ADDR addr), (val, addr), mem, addr) -DEF_PCI_AC_NORET(writeq_be, (u64 val, PCI_IO_ADDR addr), (val, addr), mem, addr) -#endif /* __powerpc64__ */ - -DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port), pio, port) -DEF_PCI_AC_RET(inw, u16, (unsigned long port), (port), pio, port) -DEF_PCI_AC_RET(inl, u32, (unsigned long port), (port), pio, port) -DEF_PCI_AC_NORET(outb, (u8 val, unsigned long port), (val, port), pio, port) -DEF_PCI_AC_NORET(outw, (u16 val, unsigned long port), (val, port), pio, port) -DEF_PCI_AC_NORET(outl, (u32 val, unsigned long port), (val, port), pio, port) - -DEF_PCI_AC_NORET(readsb, (const PCI_IO_ADDR a, void *b, unsigned long c), - (a, b, c), mem, a) -DEF_PCI_AC_NORET(readsw, (const PCI_IO_ADDR a, void *b, unsigned long c), - (a, b, c), mem, a) -DEF_PCI_AC_NORET(readsl, (const PCI_IO_ADDR a, void *b, unsigned long c), - (a, b, c), mem, a) -DEF_PCI_AC_NORET(writesb, (PCI_IO_ADDR a, const void *b, unsigned long c), - (a, b, c), mem, a) -DEF_PCI_AC_NORET(writesw, (PCI_IO_ADDR a, const void *b, 
unsigned long c), - (a, b, c), mem, a) -DEF_PCI_AC_NORET(writesl, (PCI_IO_ADDR a, const void *b, unsigned long c), - (a, b, c), mem, a) - -DEF_PCI_AC_NORET(insb, (unsigned long p, void *b, unsigned long c), - (p, b, c), pio, p) -DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c), - (p, b, c), pio, p) -DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c), - (p, b, c), pio, p) -DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c), - (p, b, c), pio, p) -DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c), - (p, b, c), pio, p) -DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c), - (p, b, c), pio, p) - -DEF_PCI_AC_NORET(memset_io, (PCI_IO_ADDR a, int c, unsigned long n), - (a, c, n), mem, a) -DEF_PCI_AC_NORET(memcpy_fromio, (void *d, const PCI_IO_ADDR s, unsigned long n), - (d, s, n), mem, s) -DEF_PCI_AC_NORET(memcpy_toio, (PCI_IO_ADDR d, const void *s, unsigned long n), - (d, s, n), mem, d) +DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port)) +DEF_PCI_AC_RET(inw, u16, (unsigned long port), (port)) +DEF_PCI_AC_RET(inl, u32, (unsigned long port), (port)) +DEF_PCI_AC_NORET(outb, (u8 val, unsigned long port), (val, port)) +DEF_PCI_AC_NORET(outw, (u16 val, unsigned long port), (val, port)) +DEF_PCI_AC_NORET(outl, (u32 val, unsigned long port), (val, port)) +DEF_PCI_AC_NORET(insb, (unsigned long p, void *b, unsigned long c), (p, b, c)) +DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c), (p, b, c)) +DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c), (p, b, c)) +DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c), (p, b, c)) +DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c), (p, b, c)) +DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c), (p, b, c)) diff --git a/arch/powerpc/include/asm/io-workarounds.h b/arch/powerpc/include/asm/io-workarounds.h deleted file mode 100644 index 3cce499fbe27..000000000000 --- a/arch/powerpc/include/asm/io-workarounds.h +++ /dev/null @@ -1,55 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * Support PCI IO workaround - * - * (C) Copyright 2007-2008 TOSHIBA CORPORATION - */ - -#ifndef _IO_WORKAROUNDS_H -#define _IO_WORKAROUNDS_H - -#ifdef CONFIG_PPC_IO_WORKAROUNDS -#include <linux/io.h> -#include <asm/pci-bridge.h> - -/* Bus info */ -struct iowa_bus { - struct pci_controller *phb; - struct ppc_pci_io *ops; - void *private; -}; - -void iowa_register_bus(struct pci_controller *, struct ppc_pci_io *, - int (*)(struct iowa_bus *, void *), void *); -struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR); -struct iowa_bus *iowa_pio_find_bus(unsigned long); - -extern struct ppc_pci_io spiderpci_ops; -extern int spiderpci_iowa_init(struct iowa_bus *, void *); - -#define SPIDER_PCI_REG_BASE 0xd000 -#define SPIDER_PCI_REG_SIZE 0x1000 -#define SPIDER_PCI_VCI_CNTL_STAT 0x0110 -#define SPIDER_PCI_DUMMY_READ 0x0810 -#define SPIDER_PCI_DUMMY_READ_BASE 0x0814 - -#endif - -#if defined(CONFIG_PPC_IO_WORKAROUNDS) && defined(CONFIG_PPC_INDIRECT_MMIO) -extern bool io_workaround_inited; - -static inline bool iowa_is_active(void) -{ - return unlikely(io_workaround_inited); -} -#else -static inline bool iowa_is_active(void) -{ - return false; -} -#endif - -void __iomem *iowa_ioremap(phys_addr_t addr, unsigned long size, - pgprot_t prot, void *caller); - -#endif /* _IO_WORKAROUNDS_H */ diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h index 
d36c4ccaca08..492e8855e00f 100644 --- a/arch/powerpc/include/asm/io.h +++ b/arch/powerpc/include/asm/io.h @@ -65,8 +65,8 @@ extern resource_size_t isa_mem_base; extern bool isa_io_special; #ifdef CONFIG_PPC32 -#if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO) -#error CONFIG_PPC_INDIRECT_{PIO,MMIO} are not yet supported on 32 bits +#ifdef CONFIG_PPC_INDIRECT_PIO +#error CONFIG_PPC_INDIRECT_PIO is not yet supported on 32 bits #endif #endif @@ -80,16 +80,12 @@ extern bool isa_io_special; * * in_8, in_le16, in_be16, in_le32, in_be32, in_le64, in_be64 * out_8, out_le16, out_be16, out_le32, out_be32, out_le64, out_be64 - * _insb, _insw_ns, _insl_ns, _outsb, _outsw_ns, _outsl_ns + * _insb, _insw, _insl, _outsb, _outsw, _outsl * * Those operate directly on a kernel virtual address. Note that the prototype * for the out_* accessors has the arguments in opposite order from the usual * linux PCI accessors. Unlike those, they take the address first and the value * next. - * - * Note: I might drop the _ns suffix on the stream operations soon as it is - * simply normal for stream operations to not swap in the first place. - * */ /* -mprefixed can generate offsets beyond range, fall back hack */ @@ -228,19 +224,10 @@ static inline void out_be64(volatile u64 __iomem *addr, u64 val) */ extern void _insb(const volatile u8 __iomem *addr, void *buf, long count); extern void _outsb(volatile u8 __iomem *addr,const void *buf,long count); -extern void _insw_ns(const volatile u16 __iomem *addr, void *buf, long count); -extern void _outsw_ns(volatile u16 __iomem *addr, const void *buf, long count); -extern void _insl_ns(const volatile u32 __iomem *addr, void *buf, long count); -extern void _outsl_ns(volatile u32 __iomem *addr, const void *buf, long count); - -/* The _ns naming is historical and will be removed. For now, just #define - * the non _ns equivalent names - */ -#define _insw _insw_ns -#define _insl _insl_ns -#define _outsw _outsw_ns -#define _outsl _outsl_ns - +extern void _insw(const volatile u16 __iomem *addr, void *buf, long count); +extern void _outsw(volatile u16 __iomem *addr, const void *buf, long count); +extern void _insl(const volatile u32 __iomem *addr, void *buf, long count); +extern void _outsl(volatile u32 __iomem *addr, const void *buf, long count); /* * memset_io, memcpy_toio, memcpy_fromio base implementations are out of line @@ -261,9 +248,9 @@ extern void _memcpy_toio(volatile void __iomem *dest, const void *src, * for PowerPC is as close as possible to the x86 version of these, and thus * provides fairly heavy weight barriers for the non-raw versions * - * In addition, they support a hook mechanism when CONFIG_PPC_INDIRECT_MMIO - * or CONFIG_PPC_INDIRECT_PIO are set allowing the platform to provide its - * own implementation of some or all of the accessors. + * In addition, they support a hook mechanism when CONFIG_PPC_INDIRECT_PIO + * is set allowing the platform to provide its own implementation of some + * of the accessors. */ /* @@ -274,116 +261,11 @@ extern void _memcpy_toio(volatile void __iomem *dest, const void *src, #include <asm/eeh.h> #endif -/* Shortcut to the MMIO argument pointer */ -#define PCI_IO_ADDR volatile void __iomem * - -/* Indirect IO address tokens: - * - * When CONFIG_PPC_INDIRECT_MMIO is set, the platform can provide hooks - * on all MMIOs. 
(Note that this is all 64 bits only for now) - * - * To help platforms who may need to differentiate MMIO addresses in - * their hooks, a bitfield is reserved for use by the platform near the - * top of MMIO addresses (not PIO, those have to cope the hard way). - * - * The highest address in the kernel virtual space are: - * - * d0003fffffffffff # with Hash MMU - * c00fffffffffffff # with Radix MMU - * - * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits - * that can be used for the field. - * - * The direct IO mapping operations will then mask off those bits - * before doing the actual access, though that only happen when - * CONFIG_PPC_INDIRECT_MMIO is set, thus be careful when you use that - * mechanism - * - * For PIO, there is a separate CONFIG_PPC_INDIRECT_PIO which makes - * all PIO functions call through a hook. - */ - -#ifdef CONFIG_PPC_INDIRECT_MMIO -#define PCI_IO_IND_TOKEN_SHIFT 52 -#define PCI_IO_IND_TOKEN_MASK (0xfful << PCI_IO_IND_TOKEN_SHIFT) -#define PCI_FIX_ADDR(addr) \ - ((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK)) -#define PCI_GET_ADDR_TOKEN(addr) \ - (((unsigned long)(addr) & PCI_IO_IND_TOKEN_MASK) >> \ - PCI_IO_IND_TOKEN_SHIFT) -#define PCI_SET_ADDR_TOKEN(addr, token) \ -do { \ - unsigned long __a = (unsigned long)(addr); \ - __a &= ~PCI_IO_IND_TOKEN_MASK; \ - __a |= ((unsigned long)(token)) << PCI_IO_IND_TOKEN_SHIFT; \ - (addr) = (void __iomem *)__a; \ -} while(0) -#else -#define PCI_FIX_ADDR(addr) (addr) -#endif - - -/* - * Non ordered and non-swapping "raw" accessors - */ - -static inline unsigned char __raw_readb(const volatile void __iomem *addr) -{ - return *(volatile unsigned char __force *)PCI_FIX_ADDR(addr); -} -#define __raw_readb __raw_readb - -static inline unsigned short __raw_readw(const volatile void __iomem *addr) -{ - return *(volatile unsigned short __force *)PCI_FIX_ADDR(addr); -} -#define __raw_readw __raw_readw - -static inline unsigned int __raw_readl(const volatile void __iomem *addr) -{ - return *(volatile unsigned int __force *)PCI_FIX_ADDR(addr); -} -#define __raw_readl __raw_readl - -static inline void __raw_writeb(unsigned char v, volatile void __iomem *addr) -{ - *(volatile unsigned char __force *)PCI_FIX_ADDR(addr) = v; -} -#define __raw_writeb __raw_writeb - -static inline void __raw_writew(unsigned short v, volatile void __iomem *addr) -{ - *(volatile unsigned short __force *)PCI_FIX_ADDR(addr) = v; -} -#define __raw_writew __raw_writew - -static inline void __raw_writel(unsigned int v, volatile void __iomem *addr) -{ - *(volatile unsigned int __force *)PCI_FIX_ADDR(addr) = v; -} -#define __raw_writel __raw_writel +#define _IO_PORT(port) ((volatile void __iomem *)(_IO_BASE + (port))) #ifdef __powerpc64__ -static inline unsigned long __raw_readq(const volatile void __iomem *addr) -{ - return *(volatile unsigned long __force *)PCI_FIX_ADDR(addr); -} -#define __raw_readq __raw_readq - -static inline void __raw_writeq(unsigned long v, volatile void __iomem *addr) -{ - *(volatile unsigned long __force *)PCI_FIX_ADDR(addr) = v; -} -#define __raw_writeq __raw_writeq - -static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr) -{ - __raw_writeq((__force unsigned long)cpu_to_be64(v), addr); -} -#define __raw_writeq_be __raw_writeq_be - /* - * Real mode versions of the above. Those instructions are only supposed + * Real mode versions of raw accessors. Those instructions are only supposed * to be used in hypervisor real mode as per the architecture spec. 
*/ static inline void __raw_rm_writeb(u8 val, volatile void __iomem *paddr) @@ -551,30 +433,23 @@ __do_out_asm(_rec_outl, "stwbrx") * possible to hook directly at the toplevel PIO operation if they have to * be handled differently */ -#define __do_writeb(val, addr) out_8(PCI_FIX_ADDR(addr), val) -#define __do_writew(val, addr) out_le16(PCI_FIX_ADDR(addr), val) -#define __do_writel(val, addr) out_le32(PCI_FIX_ADDR(addr), val) -#define __do_writeq(val, addr) out_le64(PCI_FIX_ADDR(addr), val) -#define __do_writew_be(val, addr) out_be16(PCI_FIX_ADDR(addr), val) -#define __do_writel_be(val, addr) out_be32(PCI_FIX_ADDR(addr), val) -#define __do_writeq_be(val, addr) out_be64(PCI_FIX_ADDR(addr), val) #ifdef CONFIG_EEH -#define __do_readb(addr) eeh_readb(PCI_FIX_ADDR(addr)) -#define __do_readw(addr) eeh_readw(PCI_FIX_ADDR(addr)) -#define __do_readl(addr) eeh_readl(PCI_FIX_ADDR(addr)) -#define __do_readq(addr) eeh_readq(PCI_FIX_ADDR(addr)) -#define __do_readw_be(addr) eeh_readw_be(PCI_FIX_ADDR(addr)) -#define __do_readl_be(addr) eeh_readl_be(PCI_FIX_ADDR(addr)) -#define __do_readq_be(addr) eeh_readq_be(PCI_FIX_ADDR(addr)) +#define __do_readb(addr) eeh_readb(addr) +#define __do_readw(addr) eeh_readw(addr) +#define __do_readl(addr) eeh_readl(addr) +#define __do_readq(addr) eeh_readq(addr) +#define __do_readw_be(addr) eeh_readw_be(addr) +#define __do_readl_be(addr) eeh_readl_be(addr) +#define __do_readq_be(addr) eeh_readq_be(addr) #else /* CONFIG_EEH */ -#define __do_readb(addr) in_8(PCI_FIX_ADDR(addr)) -#define __do_readw(addr) in_le16(PCI_FIX_ADDR(addr)) -#define __do_readl(addr) in_le32(PCI_FIX_ADDR(addr)) -#define __do_readq(addr) in_le64(PCI_FIX_ADDR(addr)) -#define __do_readw_be(addr) in_be16(PCI_FIX_ADDR(addr)) -#define __do_readl_be(addr) in_be32(PCI_FIX_ADDR(addr)) -#define __do_readq_be(addr) in_be64(PCI_FIX_ADDR(addr)) +#define __do_readb(addr) in_8(addr) +#define __do_readw(addr) in_le16(addr) +#define __do_readl(addr) in_le32(addr) +#define __do_readq(addr) in_le64(addr) +#define __do_readw_be(addr) in_be16(addr) +#define __do_readl_be(addr) in_be32(addr) +#define __do_readq_be(addr) in_be64(addr) #endif /* !defined(CONFIG_EEH) */ #ifdef CONFIG_PPC32 @@ -585,64 +460,185 @@ __do_out_asm(_rec_outl, "stwbrx") #define __do_inw(port) _rec_inw(port) #define __do_inl(port) _rec_inl(port) #else /* CONFIG_PPC32 */ -#define __do_outb(val, port) writeb(val,(PCI_IO_ADDR)(_IO_BASE+port)); -#define __do_outw(val, port) writew(val,(PCI_IO_ADDR)(_IO_BASE+port)); -#define __do_outl(val, port) writel(val,(PCI_IO_ADDR)(_IO_BASE+port)); -#define __do_inb(port) readb((PCI_IO_ADDR)(_IO_BASE + port)); -#define __do_inw(port) readw((PCI_IO_ADDR)(_IO_BASE + port)); -#define __do_inl(port) readl((PCI_IO_ADDR)(_IO_BASE + port)); +#define __do_outb(val, port) writeb(val,_IO_PORT(port)); +#define __do_outw(val, port) writew(val,_IO_PORT(port)); +#define __do_outl(val, port) writel(val,_IO_PORT(port)); +#define __do_inb(port) readb(_IO_PORT(port)); +#define __do_inw(port) readw(_IO_PORT(port)); +#define __do_inl(port) readl(_IO_PORT(port)); #endif /* !CONFIG_PPC32 */ #ifdef CONFIG_EEH -#define __do_readsb(a, b, n) eeh_readsb(PCI_FIX_ADDR(a), (b), (n)) -#define __do_readsw(a, b, n) eeh_readsw(PCI_FIX_ADDR(a), (b), (n)) -#define __do_readsl(a, b, n) eeh_readsl(PCI_FIX_ADDR(a), (b), (n)) +#define __do_readsb(a, b, n) eeh_readsb(a, (b), (n)) +#define __do_readsw(a, b, n) eeh_readsw(a, (b), (n)) +#define __do_readsl(a, b, n) eeh_readsl(a, (b), (n)) #else /* CONFIG_EEH */ -#define __do_readsb(a, b, n) _insb(PCI_FIX_ADDR(a), 
(b), (n)) -#define __do_readsw(a, b, n) _insw(PCI_FIX_ADDR(a), (b), (n)) -#define __do_readsl(a, b, n) _insl(PCI_FIX_ADDR(a), (b), (n)) +#define __do_readsb(a, b, n) _insb(a, (b), (n)) +#define __do_readsw(a, b, n) _insw(a, (b), (n)) +#define __do_readsl(a, b, n) _insl(a, (b), (n)) #endif /* !CONFIG_EEH */ -#define __do_writesb(a, b, n) _outsb(PCI_FIX_ADDR(a),(b),(n)) -#define __do_writesw(a, b, n) _outsw(PCI_FIX_ADDR(a),(b),(n)) -#define __do_writesl(a, b, n) _outsl(PCI_FIX_ADDR(a),(b),(n)) - -#define __do_insb(p, b, n) readsb((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n)) -#define __do_insw(p, b, n) readsw((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n)) -#define __do_insl(p, b, n) readsl((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n)) -#define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n)) -#define __do_outsw(p, b, n) writesw((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n)) -#define __do_outsl(p, b, n) writesl((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n)) - -#define __do_memset_io(addr, c, n) \ - _memset_io(PCI_FIX_ADDR(addr), c, n) -#define __do_memcpy_toio(dst, src, n) \ - _memcpy_toio(PCI_FIX_ADDR(dst), src, n) +#define __do_writesb(a, b, n) _outsb(a, (b), (n)) +#define __do_writesw(a, b, n) _outsw(a, (b), (n)) +#define __do_writesl(a, b, n) _outsl(a, (b), (n)) + +#define __do_insb(p, b, n) readsb(_IO_PORT(p), (b), (n)) +#define __do_insw(p, b, n) readsw(_IO_PORT(p), (b), (n)) +#define __do_insl(p, b, n) readsl(_IO_PORT(p), (b), (n)) +#define __do_outsb(p, b, n) writesb(_IO_PORT(p),(b),(n)) +#define __do_outsw(p, b, n) writesw(_IO_PORT(p),(b),(n)) +#define __do_outsl(p, b, n) writesl(_IO_PORT(p),(b),(n)) #ifdef CONFIG_EEH #define __do_memcpy_fromio(dst, src, n) \ - eeh_memcpy_fromio(dst, PCI_FIX_ADDR(src), n) + eeh_memcpy_fromio(dst, src, n) #else /* CONFIG_EEH */ #define __do_memcpy_fromio(dst, src, n) \ - _memcpy_fromio(dst,PCI_FIX_ADDR(src),n) + _memcpy_fromio(dst, src, n) #endif /* !CONFIG_EEH */ -#ifdef CONFIG_PPC_INDIRECT_PIO -#define DEF_PCI_HOOK_pio(x) x -#else -#define DEF_PCI_HOOK_pio(x) NULL -#endif +static inline u8 readb(const volatile void __iomem *addr) +{ + return __do_readb(addr); +} +#define readb readb + +static inline u16 readw(const volatile void __iomem *addr) +{ + return __do_readw(addr); +} +#define readw readw -#ifdef CONFIG_PPC_INDIRECT_MMIO -#define DEF_PCI_HOOK_mem(x) x +static inline u32 readl(const volatile void __iomem *addr) +{ + return __do_readl(addr); +} +#define readl readl + +static inline u16 readw_be(const volatile void __iomem *addr) +{ + return __do_readw_be(addr); +} + +static inline u32 readl_be(const volatile void __iomem *addr) +{ + return __do_readl_be(addr); +} + +static inline void writeb(u8 val, volatile void __iomem *addr) +{ + out_8(addr, val); +} +#define writeb writeb + +static inline void writew(u16 val, volatile void __iomem *addr) +{ + out_le16(addr, val); +} +#define writew writew + +static inline void writel(u32 val, volatile void __iomem *addr) +{ + out_le32(addr, val); +} +#define writel writel + +static inline void writew_be(u16 val, volatile void __iomem *addr) +{ + out_be16(addr, val); +} + +static inline void writel_be(u32 val, volatile void __iomem *addr) +{ + out_be32(addr, val); +} + +static inline void readsb(const volatile void __iomem *a, void *b, unsigned long c) +{ + __do_readsb(a, b, c); +} +#define readsb readsb + +static inline void readsw(const volatile void __iomem *a, void *b, unsigned long c) +{ + __do_readsw(a, b, c); +} +#define readsw readsw + +static inline void readsl(const volatile void __iomem *a, void *b, unsigned long c) +{ + 
__do_readsl(a, b, c); +} +#define readsl readsl + +static inline void writesb(volatile void __iomem *a, const void *b, unsigned long c) +{ + __do_writesb(a, b, c); +} +#define writesb writesb + +static inline void writesw(volatile void __iomem *a, const void *b, unsigned long c) +{ + __do_writesw(a, b, c); +} +#define writesw writesw + +static inline void writesl(volatile void __iomem *a, const void *b, unsigned long c) +{ + __do_writesl(a, b, c); +} +#define writesl writesl + +static inline void memset_io(volatile void __iomem *a, int c, unsigned long n) +{ + _memset_io(a, c, n); +} +#define memset_io memset_io + +static inline void memcpy_fromio(void *d, const volatile void __iomem *s, unsigned long n) +{ + __do_memcpy_fromio(d, s, n); +} +#define memcpy_fromio memcpy_fromio + +static inline void memcpy_toio(volatile void __iomem *d, const void *s, unsigned long n) +{ + _memcpy_toio(d, s, n); +} +#define memcpy_toio memcpy_toio + +#ifdef __powerpc64__ +static inline u64 readq(const volatile void __iomem *addr) +{ + return __do_readq(addr); +} + +static inline u64 readq_be(const volatile void __iomem *addr) +{ + return __do_readq_be(addr); +} + +static inline void writeq(u64 val, volatile void __iomem *addr) +{ + out_le64(addr, val); +} + +static inline void writeq_be(u64 val, volatile void __iomem *addr) +{ + out_be64(addr, val); +} +#endif /* __powerpc64__ */ + +#ifdef CONFIG_PPC_INDIRECT_PIO +#define DEF_PCI_HOOK(x) x #else -#define DEF_PCI_HOOK_mem(x) NULL +#define DEF_PCI_HOOK(x) NULL #endif /* Structure containing all the hooks */ extern struct ppc_pci_io { -#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) ret (*name) at; -#define DEF_PCI_AC_NORET(name, at, al, space, aa) void (*name) at; +#define DEF_PCI_AC_RET(name, ret, at, al) ret (*name) at; +#define DEF_PCI_AC_NORET(name, at, al) void (*name) at; #include <asm/io-defs.h> @@ -652,18 +648,18 @@ extern struct ppc_pci_io { } ppc_pci_io; /* The inline wrappers */ -#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) \ +#define DEF_PCI_AC_RET(name, ret, at, al) \ static inline ret name at \ { \ - if (DEF_PCI_HOOK_##space(ppc_pci_io.name) != NULL) \ + if (DEF_PCI_HOOK(ppc_pci_io.name) != NULL) \ return ppc_pci_io.name al; \ return __do_##name al; \ } -#define DEF_PCI_AC_NORET(name, at, al, space, aa) \ +#define DEF_PCI_AC_NORET(name, at, al) \ static inline void name at \ { \ - if (DEF_PCI_HOOK_##space(ppc_pci_io.name) != NULL) \ + if (DEF_PCI_HOOK(ppc_pci_io.name) != NULL) \ ppc_pci_io.name al; \ else \ __do_##name al; \ @@ -674,21 +670,7 @@ static inline void name at \ #undef DEF_PCI_AC_RET #undef DEF_PCI_AC_NORET -/* Some drivers check for the presence of readq & writeq with - * a #ifdef, so we make them happy here. - */ -#define readb readb -#define readw readw -#define readl readl -#define writeb writeb -#define writew writew -#define writel writel -#define readsb readsb -#define readsw readsw -#define readsl readsl -#define writesb writesb -#define writesw writesw -#define writesl writesl +// Signal to asm-generic/io.h that we have implemented these. 
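The wrappers generated from io-defs.h above all follow one pattern: each accessor first consults the runtime hook table (ppc_pci_io) and only falls back to the direct __do_* implementation when no hook is registered, and DEF_PCI_HOOK() compiles the check away entirely when CONFIG_PPC_INDIRECT_PIO is off. A rough, self-contained userspace sketch of that dispatch pattern follows, placed here before the inb/outb define block below; every name in it (io_hooks, my_readb, traced_readb, fake_reg) is invented for illustration and is not kernel API.

#include <stdint.h>
#include <stdio.h>

/* Hook table in the spirit of struct ppc_pci_io: a NULL entry means
 * "no indirection registered, use the direct implementation". */
struct io_hooks {
	uint8_t (*readb)(const volatile void *addr);
};

static struct io_hooks io_hooks;	/* zero-initialised: no hooks */

static uint8_t do_readb_direct(const volatile void *addr)
{
	return *(const volatile uint8_t *)addr;	/* the "__do_readb" path */
}

/* What a DEF_PCI_AC_RET-style wrapper expands to, conceptually. */
static uint8_t my_readb(const volatile void *addr)
{
	if (io_hooks.readb)
		return io_hooks.readb(addr);	/* indirect PIO path */
	return do_readb_direct(addr);		/* common, direct path */
}

/* A platform-specific hook, e.g. one that traces every access. */
static uint8_t traced_readb(const volatile void *addr)
{
	uint8_t val = do_readb_direct(addr);

	printf("hooked readb -> 0x%02x\n", val);
	return val;
}

int main(void)
{
	uint8_t fake_reg = 0x5a;

	printf("direct readb -> 0x%02x\n", my_readb(&fake_reg));
	io_hooks.readb = traced_readb;	/* a platform "registers" its hook */
	my_readb(&fake_reg);		/* now routed through the hook */
	return 0;
}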
#define inb inb #define inw inw #define inl inl @@ -705,9 +687,6 @@ static inline void name at \ #define readq readq #define writeq writeq #endif -#define memset_io memset_io -#define memcpy_fromio memcpy_fromio -#define memcpy_toio memcpy_toio /* * We don't do relaxed operations yet, at least not with this semantic @@ -982,6 +961,14 @@ static inline void * bus_to_virt(unsigned long address) #include <asm-generic/io.h> +#ifdef __powerpc64__ +static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr) +{ + __raw_writeq((__force unsigned long)cpu_to_be64(v), addr); +} +#define __raw_writeq_be __raw_writeq_be +#endif // __powerpc64__ + #endif /* __KERNEL__ */ #endif /* _ASM_POWERPC_IO_H */ diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h index 04072b5f8962..b410021ad4c6 100644 --- a/arch/powerpc/include/asm/iommu.h +++ b/arch/powerpc/include/asm/iommu.h @@ -317,12 +317,6 @@ extern void iommu_flush_tce(struct iommu_table *tbl); extern enum dma_data_direction iommu_tce_direction(unsigned long tce); extern unsigned long iommu_direction_to_tce_perm(enum dma_data_direction dir); -#ifdef CONFIG_PPC_CELL_NATIVE -extern bool iommu_fixed_is_weak; -#else -#define iommu_fixed_is_weak false -#endif - extern const struct dma_map_ops dma_iommu_ops; #endif /* __KERNEL__ */ diff --git a/arch/powerpc/include/asm/mmzone.h b/arch/powerpc/include/asm/mmzone.h index d99863cd6cde..049152f8d597 100644 --- a/arch/powerpc/include/asm/mmzone.h +++ b/arch/powerpc/include/asm/mmzone.h @@ -29,6 +29,7 @@ extern cpumask_var_t node_to_cpumask_map[]; #ifdef CONFIG_MEMORY_HOTPLUG extern unsigned long max_pfn; u64 memory_hotplug_max(void); +u64 hot_add_drconf_memory_max(void); #else #define memory_hotplug_max() memblock_end_of_DRAM() #endif diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h index 71648c126970..91be7b885944 100644 --- a/arch/powerpc/include/asm/plpar_wrappers.h +++ b/arch/powerpc/include/asm/plpar_wrappers.h @@ -65,6 +65,27 @@ static inline long register_dtl(unsigned long cpu, unsigned long vpa) return vpa_call(H_VPA_REG_DTL, cpu, vpa); } +static inline long htm_call(unsigned long flags, unsigned long target, + unsigned long operation, unsigned long param1, + unsigned long param2, unsigned long param3) +{ + return plpar_hcall_norets(H_HTM, flags, target, operation, + param1, param2, param3); +} + +static inline long htm_get_dump_hardware(unsigned long nodeindex, + unsigned long nodalchipindex, unsigned long coreindexonchip, + unsigned long type, unsigned long addr, unsigned long size, + unsigned long offset) +{ + return htm_call(H_HTM_FLAGS_HARDWARE_TARGET, + H_HTM_TARGET_NODE_INDEX(nodeindex) | + H_HTM_TARGET_NODAL_CHIP_INDEX(nodalchipindex) | + H_HTM_TARGET_CORE_INDEX_ON_CHIP(coreindexonchip), + H_HTM_OP(H_HTM_OP_DUMP_DATA) | H_HTM_TYPE(type), + addr, size, offset); +} + extern void vpa_init(int cpu); static inline long plpar_pte_enter(unsigned long flags, diff --git a/arch/powerpc/include/asm/pmi.h b/arch/powerpc/include/asm/pmi.h deleted file mode 100644 index 478f0a2fe7f4..000000000000 --- a/arch/powerpc/include/asm/pmi.h +++ /dev/null @@ -1,53 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -#ifndef _POWERPC_PMI_H -#define _POWERPC_PMI_H - -/* - * Definitions for talking with PMI device on PowerPC - * - * PMI (Platform Management Interrupt) is a way to communicate - * with the BMC (Baseboard Management Controller) via interrupts. - * Unlike IPMI it is bidirectional and has a low latency. 
- * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#ifdef __KERNEL__ - -#define PMI_TYPE_FREQ_CHANGE 0x01 -#define PMI_TYPE_POWER_BUTTON 0x02 -#define PMI_READ_TYPE 0 -#define PMI_READ_DATA0 1 -#define PMI_READ_DATA1 2 -#define PMI_READ_DATA2 3 -#define PMI_WRITE_TYPE 4 -#define PMI_WRITE_DATA0 5 -#define PMI_WRITE_DATA1 6 -#define PMI_WRITE_DATA2 7 - -#define PMI_ACK 0x80 - -#define PMI_TIMEOUT 100 - -typedef struct { - u8 type; - u8 data0; - u8 data1; - u8 data2; -} pmi_message_t; - -struct pmi_handler { - struct list_head node; - u8 type; - void (*handle_pmi_message) (pmi_message_t); -}; - -int pmi_register_handler(struct pmi_handler *); -void pmi_unregister_handler(struct pmi_handler *); - -int pmi_send_message(pmi_message_t); - -#endif /* __KERNEL__ */ -#endif /* _POWERPC_PMI_H */ diff --git a/arch/powerpc/include/asm/prom.h b/arch/powerpc/include/asm/prom.h index c0107d8ddd8c..f679a11a7e7f 100644 --- a/arch/powerpc/include/asm/prom.h +++ b/arch/powerpc/include/asm/prom.h @@ -17,6 +17,8 @@ struct device_node; struct property; +#define MIN_RMA 768 /* Minimum RMA (in MB) for CAS negotiation */ + #define OF_DT_BEGIN_NODE 0x1 /* Start of node, full name */ #define OF_DT_END_NODE 0x2 /* End node */ #define OF_DT_PROP 0x3 /* Property: name off, size, diff --git a/arch/powerpc/include/asm/spu_priv1.h b/arch/powerpc/include/asm/spu_priv1.h index 6fee411d973d..66b111fa1cd1 100644 --- a/arch/powerpc/include/asm/spu_priv1.h +++ b/arch/powerpc/include/asm/spu_priv1.h @@ -215,8 +215,6 @@ spu_disable_spu (struct spu_context *ctx) * and only intended to be used by the platform setup code. */ -extern const struct spu_priv1_ops spu_priv1_mmio_ops; - extern const struct spu_management_ops spu_management_of_ops; #endif /* __KERNEL__ */ diff --git a/arch/powerpc/include/asm/static_call.h b/arch/powerpc/include/asm/static_call.h index de1018cc522b..e3d5d3823dac 100644 --- a/arch/powerpc/include/asm/static_call.h +++ b/arch/powerpc/include/asm/static_call.h @@ -26,4 +26,6 @@ #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) __PPC_SCT(name, "blr") #define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) __PPC_SCT(name, "b .+20") +#define CALL_INSN_SIZE 4 + #endif /* _ASM_POWERPC_STATIC_CALL_H */ diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 9bdd8080299b..f8885586efaf 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -89,9 +89,6 @@ static inline unsigned long tb_ticks_since(unsigned long tstamp) #define mulhdu(x, y) mul_u64_u64_shr(x, y, 64) #endif -extern void div128_by_32(u64 dividend_high, u64 dividend_low, - unsigned divisor, struct div_result *dr); - extern void secondary_cpu_time_init(void); extern void __init time_init(void); diff --git a/arch/powerpc/include/asm/xics.h b/arch/powerpc/include/asm/xics.h index 89090485bec1..60ef312dab05 100644 --- a/arch/powerpc/include/asm/xics.h +++ b/arch/powerpc/include/asm/xics.h @@ -31,7 +31,6 @@ #ifdef CONFIG_PPC_ICP_NATIVE extern int icp_native_init(void); extern void icp_native_flush_interrupt(void); -extern void icp_native_cause_ipi_rm(int cpu); #else static inline int icp_native_init(void) { return -ENODEV; } #endif diff --git a/arch/powerpc/include/asm/xmon.h b/arch/powerpc/include/asm/xmon.h index f2d44b44f46c..535cdb1e411a 100644 --- a/arch/powerpc/include/asm/xmon.h +++ b/arch/powerpc/include/asm/xmon.h @@ -12,13 +12,11 @@ #ifdef CONFIG_XMON extern void xmon_setup(void); -void __init xmon_register_spus(struct list_head 
*list); struct pt_regs; extern int xmon(struct pt_regs *excp); extern irqreturn_t xmon_irq(int, void *); #else static inline void xmon_setup(void) { } -static inline void xmon_register_spus(struct list_head *list) { } #endif #if defined(CONFIG_XMON) && defined(CONFIG_SMP) diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile index f43c1198768c..6ac621155ec3 100644 --- a/arch/powerpc/kernel/Makefile +++ b/arch/powerpc/kernel/Makefile @@ -70,7 +70,7 @@ obj-y := cputable.o syscalls.o switch.o \ signal.o sysfs.o cacheinfo.o time.o \ prom.o traps.o setup-common.o \ udbg.o misc.o io.o misc_$(BITS).o \ - of_platform.o prom_parse.o firmware.o \ + prom_parse.o firmware.o \ hw_breakpoint_constraints.o interrupt.o \ kdebugfs.o stacktrace.o syscall.o obj-y += ptrace/ @@ -152,8 +152,6 @@ obj-$(CONFIG_PCI_MSI) += msi.o obj-$(CONFIG_AUDIT) += audit.o obj64-$(CONFIG_AUDIT) += compat_audit.o -obj-$(CONFIG_PPC_IO_WORKAROUNDS) += io-workarounds.o - obj-y += trace/ ifneq ($(CONFIG_PPC_INDIRECT_PIO),y) diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c index f0ae39e77e37..4d64a5db50f3 100644 --- a/arch/powerpc/kernel/dma-iommu.c +++ b/arch/powerpc/kernel/dma-iommu.c @@ -136,7 +136,7 @@ static bool dma_iommu_bypass_supported(struct device *dev, u64 mask) struct pci_dev *pdev = to_pci_dev(dev); struct pci_controller *phb = pci_bus_to_host(pdev->bus); - if (iommu_fixed_is_weak || !phb->controller_ops.iommu_bypass_supported) + if (!phb->controller_ops.iommu_bypass_supported) return false; return phb->controller_ops.iommu_bypass_supported(pdev, mask); } diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 195b075d116c..b7229430ca94 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -2537,27 +2537,8 @@ EXC_REAL_NONE(0x1000, 0x100) EXC_VIRT_NONE(0x5000, 0x100) EXC_REAL_NONE(0x1100, 0x100) EXC_VIRT_NONE(0x5100, 0x100) - -#ifdef CONFIG_CBE_RAS -INT_DEFINE_BEGIN(cbe_system_error) - IVEC=0x1200 - IHSRR=1 -INT_DEFINE_END(cbe_system_error) - -EXC_REAL_BEGIN(cbe_system_error, 0x1200, 0x100) - GEN_INT_ENTRY cbe_system_error, virt=0 -EXC_REAL_END(cbe_system_error, 0x1200, 0x100) -EXC_VIRT_NONE(0x5200, 0x100) -EXC_COMMON_BEGIN(cbe_system_error_common) - GEN_COMMON cbe_system_error - addi r3,r1,STACK_INT_FRAME_REGS - bl CFUNC(cbe_system_error_exception) - b interrupt_return_hsrr - -#else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) -#endif /** * Interrupt 0x1300 - Instruction Address Breakpoint Interrupt. 
@@ -2708,26 +2689,8 @@ EXC_COMMON_BEGIN(denorm_exception_common) b interrupt_return_hsrr -#ifdef CONFIG_CBE_RAS -INT_DEFINE_BEGIN(cbe_maintenance) - IVEC=0x1600 - IHSRR=1 -INT_DEFINE_END(cbe_maintenance) - -EXC_REAL_BEGIN(cbe_maintenance, 0x1600, 0x100) - GEN_INT_ENTRY cbe_maintenance, virt=0 -EXC_REAL_END(cbe_maintenance, 0x1600, 0x100) -EXC_VIRT_NONE(0x5600, 0x100) -EXC_COMMON_BEGIN(cbe_maintenance_common) - GEN_COMMON cbe_maintenance - addi r3,r1,STACK_INT_FRAME_REGS - bl CFUNC(cbe_maintenance_exception) - b interrupt_return_hsrr - -#else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) -#endif INT_DEFINE_BEGIN(altivec_assist) @@ -2755,26 +2718,8 @@ EXC_COMMON_BEGIN(altivec_assist_common) b interrupt_return_srr -#ifdef CONFIG_CBE_RAS -INT_DEFINE_BEGIN(cbe_thermal) - IVEC=0x1800 - IHSRR=1 -INT_DEFINE_END(cbe_thermal) - -EXC_REAL_BEGIN(cbe_thermal, 0x1800, 0x100) - GEN_INT_ENTRY cbe_thermal, virt=0 -EXC_REAL_END(cbe_thermal, 0x1800, 0x100) -EXC_VIRT_NONE(0x5800, 0x100) -EXC_COMMON_BEGIN(cbe_thermal_common) - GEN_COMMON cbe_thermal - addi r3,r1,STACK_INT_FRAME_REGS - bl CFUNC(cbe_thermal_exception) - b interrupt_return_hsrr - -#else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) -#endif #ifdef CONFIG_PPC_WATCHDOG diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c index d44349fe8e2b..df16c7f547ab 100644 --- a/arch/powerpc/kernel/fadump.c +++ b/arch/powerpc/kernel/fadump.c @@ -33,6 +33,7 @@ #include <asm/fadump-internal.h> #include <asm/setup.h> #include <asm/interrupt.h> +#include <asm/prom.h> /* * The CPU who acquired the lock to trigger the fadump crash should @@ -1764,19 +1765,19 @@ void __init fadump_setup_param_area(void) range_end = memblock_end_of_DRAM(); } else { /* - * Passing additional parameters is supported for hash MMU only - * if the first memory block size is 768MB or higher. + * Memory range for passing additional parameters for HASH MMU + * must meet the following conditions: + * 1. The first memory block size must be higher than the + * minimum RMA (MIN_RMA) size. Bootloader can use memory + * upto RMA size. So it should be avoided. + * 2. The range should be between MIN_RMA and RMA size (ppc64_rma_size) + * 3. It must not overlap with the fadump reserved area. */ - if (ppc64_rma_size < 0x30000000) + if (ppc64_rma_size < MIN_RMA*1024*1024) return; - /* - * 640 MB to 768 MB is not used by PFW/bootloader. So, try reserving - * memory for passing additional parameters in this range to avoid - * being stomped on by PFW/bootloader. - */ - range_start = 0x2A000000; - range_end = range_start + 0x4000000; + range_start = MIN_RMA * 1024 * 1024; + range_end = min(ppc64_rma_size, fw_dump.boot_mem_top); } fw_dump.param_area = memblock_phys_alloc_range(COMMAND_LINE_SIZE, diff --git a/arch/powerpc/kernel/io-workarounds.c b/arch/powerpc/kernel/io-workarounds.c deleted file mode 100644 index c877f074d174..000000000000 --- a/arch/powerpc/kernel/io-workarounds.c +++ /dev/null @@ -1,197 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * Support PCI IO workaround - * - * Copyright (C) 2006 Benjamin Herrenschmidt <benh@kernel.crashing.org> - * IBM, Corp. 
- * (C) Copyright 2007-2008 TOSHIBA CORPORATION - */ -#undef DEBUG - -#include <linux/kernel.h> -#include <linux/sched/mm.h> /* for init_mm */ -#include <linux/pgtable.h> - -#include <asm/io.h> -#include <asm/machdep.h> -#include <asm/ppc-pci.h> -#include <asm/io-workarounds.h> -#include <asm/pte-walk.h> - - -#define IOWA_MAX_BUS 8 - -static struct iowa_bus iowa_busses[IOWA_MAX_BUS]; -static unsigned int iowa_bus_count; - -static struct iowa_bus *iowa_pci_find(unsigned long vaddr, unsigned long paddr) -{ - int i, j; - struct resource *res; - unsigned long vstart, vend; - - for (i = 0; i < iowa_bus_count; i++) { - struct iowa_bus *bus = &iowa_busses[i]; - struct pci_controller *phb = bus->phb; - - if (vaddr) { - vstart = (unsigned long)phb->io_base_virt; - vend = vstart + phb->pci_io_size - 1; - if ((vaddr >= vstart) && (vaddr <= vend)) - return bus; - } - - if (paddr) - for (j = 0; j < 3; j++) { - res = &phb->mem_resources[j]; - if (paddr >= res->start && paddr <= res->end) - return bus; - } - } - - return NULL; -} - -#ifdef CONFIG_PPC_INDIRECT_MMIO -struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr) -{ - struct iowa_bus *bus; - int token; - - token = PCI_GET_ADDR_TOKEN(addr); - - if (token && token <= iowa_bus_count) - bus = &iowa_busses[token - 1]; - else { - unsigned long vaddr, paddr; - - vaddr = (unsigned long)PCI_FIX_ADDR(addr); - if (vaddr < PHB_IO_BASE || vaddr >= PHB_IO_END) - return NULL; - - paddr = ppc_find_vmap_phys(vaddr); - - bus = iowa_pci_find(vaddr, paddr); - - if (bus == NULL) - return NULL; - } - - return bus; -} -#else /* CONFIG_PPC_INDIRECT_MMIO */ -struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr) -{ - return NULL; -} -#endif /* !CONFIG_PPC_INDIRECT_MMIO */ - -#ifdef CONFIG_PPC_INDIRECT_PIO -struct iowa_bus *iowa_pio_find_bus(unsigned long port) -{ - unsigned long vaddr = (unsigned long)pci_io_base + port; - return iowa_pci_find(vaddr, 0); -} -#else -struct iowa_bus *iowa_pio_find_bus(unsigned long port) -{ - return NULL; -} -#endif - -#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) \ -static ret iowa_##name at \ -{ \ - struct iowa_bus *bus; \ - bus = iowa_##space##_find_bus(aa); \ - if (bus && bus->ops && bus->ops->name) \ - return bus->ops->name al; \ - return __do_##name al; \ -} - -#define DEF_PCI_AC_NORET(name, at, al, space, aa) \ -static void iowa_##name at \ -{ \ - struct iowa_bus *bus; \ - bus = iowa_##space##_find_bus(aa); \ - if (bus && bus->ops && bus->ops->name) { \ - bus->ops->name al; \ - return; \ - } \ - __do_##name al; \ -} - -#include <asm/io-defs.h> - -#undef DEF_PCI_AC_RET -#undef DEF_PCI_AC_NORET - -static const struct ppc_pci_io iowa_pci_io = { - -#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) .name = iowa_##name, -#define DEF_PCI_AC_NORET(name, at, al, space, aa) .name = iowa_##name, - -#include <asm/io-defs.h> - -#undef DEF_PCI_AC_RET -#undef DEF_PCI_AC_NORET - -}; - -#ifdef CONFIG_PPC_INDIRECT_MMIO -void __iomem *iowa_ioremap(phys_addr_t addr, unsigned long size, - pgprot_t prot, void *caller) -{ - struct iowa_bus *bus; - void __iomem *res = __ioremap_caller(addr, size, prot, caller); - int busno; - - bus = iowa_pci_find(0, (unsigned long)addr); - if (bus != NULL) { - busno = bus - iowa_busses; - PCI_SET_ADDR_TOKEN(res, busno + 1); - } - return res; -} -#endif /* !CONFIG_PPC_INDIRECT_MMIO */ - -bool io_workaround_inited; - -/* Enable IO workaround */ -static void io_workaround_init(void) -{ - if (io_workaround_inited) - return; - ppc_pci_io = iowa_pci_io; - io_workaround_inited = true; -} - -/* Register new bus to 
support workaround */ -void iowa_register_bus(struct pci_controller *phb, struct ppc_pci_io *ops, - int (*initfunc)(struct iowa_bus *, void *), void *data) -{ - struct iowa_bus *bus; - struct device_node *np = phb->dn; - - io_workaround_init(); - - if (iowa_bus_count >= IOWA_MAX_BUS) { - pr_err("IOWA:Too many pci bridges, " - "workarounds disabled for %pOF\n", np); - return; - } - - bus = &iowa_busses[iowa_bus_count]; - bus->phb = phb; - bus->ops = ops; - bus->private = data; - - if (initfunc) - if ((*initfunc)(bus, data)) - return; - - iowa_bus_count++; - - pr_debug("IOWA:[%d]Add bus, %pOF.\n", iowa_bus_count-1, np); -} - diff --git a/arch/powerpc/kernel/io.c b/arch/powerpc/kernel/io.c index 6af535905984..bcc201c01514 100644 --- a/arch/powerpc/kernel/io.c +++ b/arch/powerpc/kernel/io.c @@ -31,13 +31,14 @@ void _insb(const volatile u8 __iomem *port, void *buf, long count) if (unlikely(count <= 0)) return; - asm volatile("sync"); + + mb(); do { tmp = *(const volatile u8 __force *)port; eieio(); *tbuf++ = tmp; } while (--count != 0); - asm volatile("twi 0,%0,0; isync" : : "r" (tmp)); + data_barrier(tmp); } EXPORT_SYMBOL(_insb); @@ -47,75 +48,80 @@ void _outsb(volatile u8 __iomem *port, const void *buf, long count) if (unlikely(count <= 0)) return; - asm volatile("sync"); + + mb(); do { *(volatile u8 __force *)port = *tbuf++; } while (--count != 0); - asm volatile("sync"); + mb(); } EXPORT_SYMBOL(_outsb); -void _insw_ns(const volatile u16 __iomem *port, void *buf, long count) +void _insw(const volatile u16 __iomem *port, void *buf, long count) { u16 *tbuf = buf; u16 tmp; if (unlikely(count <= 0)) return; - asm volatile("sync"); + + mb(); do { tmp = *(const volatile u16 __force *)port; eieio(); *tbuf++ = tmp; } while (--count != 0); - asm volatile("twi 0,%0,0; isync" : : "r" (tmp)); + data_barrier(tmp); } -EXPORT_SYMBOL(_insw_ns); +EXPORT_SYMBOL(_insw); -void _outsw_ns(volatile u16 __iomem *port, const void *buf, long count) +void _outsw(volatile u16 __iomem *port, const void *buf, long count) { const u16 *tbuf = buf; if (unlikely(count <= 0)) return; - asm volatile("sync"); + + mb(); do { *(volatile u16 __force *)port = *tbuf++; } while (--count != 0); - asm volatile("sync"); + mb(); } -EXPORT_SYMBOL(_outsw_ns); +EXPORT_SYMBOL(_outsw); -void _insl_ns(const volatile u32 __iomem *port, void *buf, long count) +void _insl(const volatile u32 __iomem *port, void *buf, long count) { u32 *tbuf = buf; u32 tmp; if (unlikely(count <= 0)) return; - asm volatile("sync"); + + mb(); do { tmp = *(const volatile u32 __force *)port; eieio(); *tbuf++ = tmp; } while (--count != 0); - asm volatile("twi 0,%0,0; isync" : : "r" (tmp)); + data_barrier(tmp); } -EXPORT_SYMBOL(_insl_ns); +EXPORT_SYMBOL(_insl); -void _outsl_ns(volatile u32 __iomem *port, const void *buf, long count) +void _outsl(volatile u32 __iomem *port, const void *buf, long count) { const u32 *tbuf = buf; if (unlikely(count <= 0)) return; - asm volatile("sync"); + + mb(); do { *(volatile u32 __force *)port = *tbuf++; } while (--count != 0); - asm volatile("sync"); + mb(); } -EXPORT_SYMBOL(_outsl_ns); +EXPORT_SYMBOL(_outsl); #define IO_CHECK_ALIGN(v,a) ((((unsigned long)(v)) & ((a) - 1)) == 0) @@ -127,7 +133,7 @@ _memset_io(volatile void __iomem *addr, int c, unsigned long n) lc |= lc << 8; lc |= lc << 16; - __asm__ __volatile__ ("sync" : : : "memory"); + mb(); while(n && !IO_CHECK_ALIGN(p, 4)) { *((volatile u8 *)p) = c; p++; @@ -143,7 +149,7 @@ _memset_io(volatile void __iomem *addr, int c, unsigned long n) p++; n--; } - __asm__ __volatile__ ("sync" 
: : : "memory"); + mb(); } EXPORT_SYMBOL(_memset_io); @@ -152,7 +158,7 @@ void _memcpy_fromio(void *dest, const volatile void __iomem *src, { void *vsrc = (void __force *) src; - __asm__ __volatile__ ("sync" : : : "memory"); + mb(); while(n && (!IO_CHECK_ALIGN(vsrc, 4) || !IO_CHECK_ALIGN(dest, 4))) { *((u8 *)dest) = *((volatile u8 *)vsrc); eieio(); @@ -174,7 +180,7 @@ void _memcpy_fromio(void *dest, const volatile void __iomem *src, dest++; n--; } - __asm__ __volatile__ ("sync" : : : "memory"); + mb(); } EXPORT_SYMBOL(_memcpy_fromio); @@ -182,7 +188,7 @@ void _memcpy_toio(volatile void __iomem *dest, const void *src, unsigned long n) { void *vdest = (void __force *) dest; - __asm__ __volatile__ ("sync" : : : "memory"); + mb(); while(n && (!IO_CHECK_ALIGN(vdest, 4) || !IO_CHECK_ALIGN(src, 4))) { *((volatile u8 *)vdest) = *((u8 *)src); src++; @@ -201,6 +207,6 @@ void _memcpy_toio(volatile void __iomem *dest, const void *src, unsigned long n) vdest++; n--; } - __asm__ __volatile__ ("sync" : : : "memory"); + mb(); } EXPORT_SYMBOL(_memcpy_toio); diff --git a/arch/powerpc/kernel/of_platform.c b/arch/powerpc/kernel/of_platform.c deleted file mode 100644 index adc76fa58d1e..000000000000 --- a/arch/powerpc/kernel/of_platform.c +++ /dev/null @@ -1,102 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Copyright (C) 2006 Benjamin Herrenschmidt, IBM Corp. - * <benh@kernel.crashing.org> - * and Arnd Bergmann, IBM Corp. - */ - -#undef DEBUG - -#include <linux/string.h> -#include <linux/kernel.h> -#include <linux/init.h> -#include <linux/export.h> -#include <linux/mod_devicetable.h> -#include <linux/pci.h> -#include <linux/platform_device.h> -#include <linux/atomic.h> - -#include <asm/errno.h> -#include <asm/topology.h> -#include <asm/pci-bridge.h> -#include <asm/ppc-pci.h> -#include <asm/eeh.h> - -#ifdef CONFIG_PPC_OF_PLATFORM_PCI - -/* The probing of PCI controllers from of_platform is currently - * 64 bits only, mostly due to gratuitous differences between - * the 32 and 64 bits PCI code on PowerPC and the 32 bits one - * lacking some bits needed here. - */ - -static int of_pci_phb_probe(struct platform_device *dev) -{ - struct pci_controller *phb; - - /* Check if we can do that ... */ - if (ppc_md.pci_setup_phb == NULL) - return -ENODEV; - - pr_info("Setting up PCI bus %pOF\n", dev->dev.of_node); - - /* Alloc and setup PHB data structure */ - phb = pcibios_alloc_controller(dev->dev.of_node); - if (!phb) - return -ENODEV; - - /* Setup parent in sysfs */ - phb->parent = &dev->dev; - - /* Setup the PHB using arch provided callback */ - if (ppc_md.pci_setup_phb(phb)) { - pcibios_free_controller(phb); - return -ENODEV; - } - - /* Process "ranges" property */ - pci_process_bridge_OF_ranges(phb, dev->dev.of_node, 0); - - /* Init pci_dn data structures */ - pci_devs_phb_init_dynamic(phb); - - /* Create EEH PE for the PHB */ - eeh_phb_pe_create(phb); - - /* Scan the bus */ - pcibios_scan_phb(phb); - if (phb->bus == NULL) - return -ENXIO; - - /* Claim resources. This might need some rework as well depending - * whether we are doing probe-only or not, like assigning unassigned - * resources etc... 
- */ - pcibios_claim_one_bus(phb->bus); - - /* Add probed PCI devices to the device model */ - pci_bus_add_devices(phb->bus); - - return 0; -} - -static const struct of_device_id of_pci_phb_ids[] = { - { .type = "pci", }, - { .type = "pcix", }, - { .type = "pcie", }, - { .type = "pciex", }, - { .type = "ht", }, - {} -}; - -static struct platform_driver of_pci_phb_driver = { - .probe = of_pci_phb_probe, - .driver = { - .name = "of-pci", - .of_match_table = of_pci_phb_ids, - }, -}; - -builtin_platform_driver(of_pci_phb_driver); - -#endif /* CONFIG_PPC_OF_PLATFORM_PCI */ diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c index 57082fac4668..827c958677f8 100644 --- a/arch/powerpc/kernel/prom_init.c +++ b/arch/powerpc/kernel/prom_init.c @@ -1061,7 +1061,7 @@ static const struct ibm_arch_vec ibm_architecture_vec_template __initconst = { .virt_base = cpu_to_be32(0xffffffff), .virt_size = cpu_to_be32(0xffffffff), .load_base = cpu_to_be32(0xffffffff), - .min_rma = cpu_to_be32(512), /* 512MB min RMA */ + .min_rma = cpu_to_be32(MIN_RMA), .min_load = cpu_to_be32(0xffffffff), /* full client load */ .min_rma_percent = 0, /* min RMA percentage of total RAM */ .max_pft_size = 48, /* max log_2(hash table size) */ @@ -2889,11 +2889,11 @@ static void __init fixup_device_tree_pmac(void) char type[8]; phandle node; - // Some pmacs are missing #size-cells on escc nodes + // Some pmacs are missing #size-cells on escc or i2s nodes for (node = 0; prom_next_node(&node); ) { type[0] = '\0'; prom_getprop(node, "device_type", type, sizeof(type)); - if (prom_strcmp(type, "escc")) + if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s")) continue; if (prom_getproplen(node, "#size-cells") != PROM_ERROR) diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c index d31c9799cab2..d7a738f1858d 100644 --- a/arch/powerpc/kernel/rtas.c +++ b/arch/powerpc/kernel/rtas.c @@ -798,66 +798,6 @@ void __init udbg_init_rtas_panel(void) udbg_putc = call_rtas_display_status_delay; } -#ifdef CONFIG_UDBG_RTAS_CONSOLE - -/* If you think you're dying before early_init_dt_scan_rtas() does its - * work, you can hard code the token values for your firmware here and - * hardcode rtas.base/entry etc. 
- */ -static unsigned int rtas_putchar_token = RTAS_UNKNOWN_SERVICE; -static unsigned int rtas_getchar_token = RTAS_UNKNOWN_SERVICE; - -static void udbg_rtascon_putc(char c) -{ - int tries; - - if (!rtas.base) - return; - - /* Add CRs before LFs */ - if (c == '\n') - udbg_rtascon_putc('\r'); - - /* if there is more than one character to be displayed, wait a bit */ - for (tries = 0; tries < 16; tries++) { - if (rtas_call(rtas_putchar_token, 1, 1, NULL, c) == 0) - break; - udelay(1000); - } -} - -static int udbg_rtascon_getc_poll(void) -{ - int c; - - if (!rtas.base) - return -1; - - if (rtas_call(rtas_getchar_token, 0, 2, &c)) - return -1; - - return c; -} - -static int udbg_rtascon_getc(void) -{ - int c; - - while ((c = udbg_rtascon_getc_poll()) == -1) - ; - - return c; -} - - -void __init udbg_init_rtas_console(void) -{ - udbg_putc = udbg_rtascon_putc; - udbg_getc = udbg_rtascon_getc; - udbg_getc_poll = udbg_rtascon_getc_poll; -} -#endif /* CONFIG_UDBG_RTAS_CONSOLE */ - void rtas_progress(char *s, unsigned short hex) { struct device_node *root; @@ -2135,21 +2075,6 @@ int __init early_init_dt_scan_rtas(unsigned long node, rtas.size = *sizep; } -#ifdef CONFIG_UDBG_RTAS_CONSOLE - basep = of_get_flat_dt_prop(node, "put-term-char", NULL); - if (basep) - rtas_putchar_token = *basep; - - basep = of_get_flat_dt_prop(node, "get-term-char", NULL); - if (basep) - rtas_getchar_token = *basep; - - if (rtas_putchar_token != RTAS_UNKNOWN_SERVICE && - rtas_getchar_token != RTAS_UNKNOWN_SERVICE) - udbg_init_rtas_console(); - -#endif - /* break now */ return 1; } diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c index e67f3048611f..7284c8021eeb 100644 --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -892,7 +892,7 @@ unsigned long memory_block_size_bytes(void) } #endif -#if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO) +#ifdef CONFIG_PPC_INDIRECT_PIO struct ppc_pci_io ppc_pci_io; EXPORT_SYMBOL(ppc_pci_io); #endif diff --git a/arch/powerpc/kernel/static_call.c b/arch/powerpc/kernel/static_call.c index 7cfd0710e757..ec3101f95e53 100644 --- a/arch/powerpc/kernel/static_call.c +++ b/arch/powerpc/kernel/static_call.c @@ -8,26 +8,54 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) { int err; bool is_ret0 = (func == __static_call_return0); - unsigned long target = (unsigned long)(is_ret0 ? tramp + PPC_SCT_RET0 : func); - bool is_short = is_offset_in_branch_range((long)target - (long)tramp); - - if (!tramp) - return; + unsigned long _tramp = (unsigned long)tramp; + unsigned long _func = (unsigned long)func; + unsigned long _ret0 = _tramp + PPC_SCT_RET0; + bool is_short = is_offset_in_branch_range((long)func - (long)(site ? 
: tramp)); mutex_lock(&text_mutex); - if (func && !is_short) { - err = patch_ulong(tramp + PPC_SCT_DATA, target); - if (err) - goto out; + if (site && tail) { + if (!func) + err = patch_instruction(site, ppc_inst(PPC_RAW_BLR())); + else if (is_ret0) + err = patch_branch(site, _ret0, 0); + else if (is_short) + err = patch_branch(site, _func, 0); + else if (tramp) + err = patch_branch(site, _tramp, 0); + else + err = 0; + } else if (site) { + if (!func) + err = patch_instruction(site, ppc_inst(PPC_RAW_NOP())); + else if (is_ret0) + err = patch_instruction(site, ppc_inst(PPC_RAW_LI(_R3, 0))); + else if (is_short) + err = patch_branch(site, _func, BRANCH_SET_LINK); + else if (tramp) + err = patch_branch(site, _tramp, BRANCH_SET_LINK); + else + err = 0; + } else if (tramp) { + if (func && !is_short) { + err = patch_ulong(tramp + PPC_SCT_DATA, _func); + if (err) + goto out; + } + + if (!func) + err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR())); + else if (is_ret0) + err = patch_branch(tramp, _ret0, 0); + else if (is_short) + err = patch_branch(tramp, _func, 0); + else + err = patch_instruction(tramp, ppc_inst(PPC_RAW_NOP())); + } else { + err = 0; } - if (!func) - err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR())); - else if (is_short) - err = patch_branch(tramp, target, 0); - else - err = patch_instruction(tramp, ppc_inst(PPC_RAW_NOP())); out: mutex_unlock(&text_mutex); diff --git a/arch/powerpc/kernel/switch.S b/arch/powerpc/kernel/switch.S index 608c0ce7cec6..59e3ee99db0e 100644 --- a/arch/powerpc/kernel/switch.S +++ b/arch/powerpc/kernel/switch.S @@ -39,7 +39,6 @@ flush_branch_caches: // Flush the link stack .rept 64 - ANNOTATE_INTRA_FUNCTION_CALL bl .+4 .endr b 1f diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 15784c5c95c7..8224381c1dba 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -901,6 +901,38 @@ void secondary_cpu_time_init(void) register_decrementer_clockevent(smp_processor_id()); } +/* + * Divide a 128-bit dividend by a 32-bit divisor, leaving a 128 bit + * result. + */ +static __init void div128_by_32(u64 dividend_high, u64 dividend_low, + unsigned int divisor, struct div_result *dr) +{ + unsigned long a, b, c, d; + unsigned long w, x, y, z; + u64 ra, rb, rc; + + a = dividend_high >> 32; + b = dividend_high & 0xffffffff; + c = dividend_low >> 32; + d = dividend_low & 0xffffffff; + + w = a / divisor; + ra = ((u64)(a - (w * divisor)) << 32) + b; + + rb = ((u64)do_div(ra, divisor) << 32) + c; + x = ra; + + rc = ((u64)do_div(rb, divisor) << 32) + d; + y = rb; + + do_div(rc, divisor); + z = rc; + + dr->result_high = ((u64)w << 32) + x; + dr->result_low = ((u64)y << 32) + z; +} + /* This function is only called on the boot processor */ void __init time_init(void) { @@ -974,39 +1006,6 @@ void __init time_init(void) enable_sched_clock_irqtime(); } -/* - * Divide a 128-bit dividend by a 32-bit divisor, leaving a 128 bit - * result. 
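div128_by_32(), now a static __init helper in time.c above, is schoolbook long division over four 32-bit limbs, with the remainder carried from one limb to the next and the final remainder discarded. A minimal standalone sketch of the same limb scheme, cross-checked against the compiler's 128-bit arithmetic, is shown below; the helper name and test values are invented, and only the limb-by-limb structure mirrors the kernel function.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Divide the 128-bit value hi:lo by a 32-bit divisor, one 32-bit limb
 * at a time, producing a 128-bit quotient q_hi:q_lo. */
static void div128_limbs(uint64_t hi, uint64_t lo, uint32_t divisor,
			 uint64_t *q_hi, uint64_t *q_lo)
{
	uint64_t a = hi >> 32, b = hi & 0xffffffffu;
	uint64_t c = lo >> 32, d = lo & 0xffffffffu;
	uint64_t w, x, y, z, r;

	w = a / divisor;		r = a % divisor;
	x = ((r << 32) + b) / divisor;	r = ((r << 32) + b) % divisor;
	y = ((r << 32) + c) / divisor;	r = ((r << 32) + c) % divisor;
	z = ((r << 32) + d) / divisor;	/* final remainder is dropped */

	*q_hi = (w << 32) + x;
	*q_lo = (y << 32) + z;
}

int main(void)
{
	uint64_t hi = 0x0123456789abcdefULL, lo = 0xfedcba9876543210ULL;
	uint32_t divisor = 1000000007u;
	uint64_t q_hi, q_lo;

	div128_limbs(hi, lo, divisor, &q_hi, &q_lo);

	/* Cross-check with the compiler's 128-bit division (GCC/Clang). */
	unsigned __int128 n = ((unsigned __int128)hi << 64) | lo;
	unsigned __int128 q = n / divisor;
	assert(q_hi == (uint64_t)(q >> 64) && q_lo == (uint64_t)q);

	printf("quotient = %016llx%016llx\n",
	       (unsigned long long)q_hi, (unsigned long long)q_lo);
	return 0;
}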
- */ -void div128_by_32(u64 dividend_high, u64 dividend_low, - unsigned divisor, struct div_result *dr) -{ - unsigned long a, b, c, d; - unsigned long w, x, y, z; - u64 ra, rb, rc; - - a = dividend_high >> 32; - b = dividend_high & 0xffffffff; - c = dividend_low >> 32; - d = dividend_low & 0xffffffff; - - w = a / divisor; - ra = ((u64)(a - (w * divisor)) << 32) + b; - - rb = ((u64) do_div(ra, divisor) << 32) + c; - x = ra; - - rc = ((u64) do_div(rb, divisor) << 32) + d; - y = rb; - - do_div(rc, divisor); - z = rc; - - dr->result_high = ((u64)w << 32) + x; - dr->result_low = ((u64)y << 32) + z; - -} - /* We don't need to calibrate delay, we use the CPU timebase for that */ void calibrate_delay(void) { diff --git a/arch/powerpc/kernel/udbg.c b/arch/powerpc/kernel/udbg.c index 0a72a537f879..862b22b2b616 100644 --- a/arch/powerpc/kernel/udbg.c +++ b/arch/powerpc/kernel/udbg.c @@ -36,9 +36,6 @@ void __init udbg_early_init(void) #elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_PANEL) /* RTAS panel debug */ udbg_init_rtas_panel(); -#elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_CONSOLE) - /* RTAS console debug */ - udbg_init_rtas_console(); #elif defined(CONFIG_PPC_EARLY_DEBUG_PAS_REALMODE) udbg_init_pas_realmode(); #elif defined(CONFIG_PPC_EARLY_DEBUG_BOOTX) diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S index b4c9decc7a75..de6ee7d35cff 100644 --- a/arch/powerpc/kernel/vmlinux.lds.S +++ b/arch/powerpc/kernel/vmlinux.lds.S @@ -1,10 +1,4 @@ /* SPDX-License-Identifier: GPL-2.0 */ -#ifdef CONFIG_PPC64 -#define PROVIDE32(x) PROVIDE(__unused__##x) -#else -#define PROVIDE32(x) PROVIDE(x) -#endif - #define BSS_FIRST_SECTIONS *(.bss.prominit) #define EMITS_PT_NOTE #define RO_EXCEPTION_TABLE_ALIGN 0 @@ -127,7 +121,6 @@ SECTIONS . = ALIGN(PAGE_SIZE); _etext = .; - PROVIDE32 (etext = .); /* Read-only data */ RO_DATA(PAGE_SIZE) @@ -394,7 +387,6 @@ SECTIONS . = ALIGN(PAGE_SIZE); _edata = .; - PROVIDE32 (edata = .); /* * And finally the bss @@ -404,7 +396,6 @@ SECTIONS . = ALIGN(PAGE_SIZE); _end = . ; - PROVIDE32 (end = .); DWARF_DEBUG ELF_DETAILS diff --git a/arch/powerpc/kexec/relocate_32.S b/arch/powerpc/kexec/relocate_32.S index 104c9911f406..dd86e338307d 100644 --- a/arch/powerpc/kexec/relocate_32.S +++ b/arch/powerpc/kexec/relocate_32.S @@ -348,16 +348,13 @@ write_utlb: rlwinm r10, r24, 0, 22, 27 cmpwi r10, PPC47x_TLB0_4K - bne 0f li r10, 0x1000 /* r10 = 4k */ - ANNOTATE_INTRA_FUNCTION_CALL - bl 1f + beq 0f -0: /* Defaults to 256M */ lis r10, 0x1000 - bcl 20,31,$+4 +0: bcl 20,31,$+4 1: mflr r4 addi r4, r4, (2f-1b) /* virtual address of 2f */ diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c index 5b7212edbb13..c7e4b62642ea 100644 --- a/arch/powerpc/kvm/book3s_32_mmu_host.c +++ b/arch/powerpc/kvm/book3s_32_mmu_host.c @@ -125,8 +125,6 @@ static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr, return (u32*)pteg; } -extern char etext[]; - int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte, bool iswrite) { diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index ea7ad200b330..83f7504349d2 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1524,14 +1524,12 @@ kvm_flush_link_stack: /* Flush the link stack. On Power8 it's up to 32 entries in size. */ .rept 32 - ANNOTATE_INTRA_FUNCTION_CALL bl .+4 .endr /* And on Power9 it's up to 64. 
*/ BEGIN_FTR_SECTION .rept 32 - ANNOTATE_INTRA_FUNCTION_CALL bl .+4 .endr END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index 61f2b7e007fa..153587741864 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -550,12 +550,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) #ifdef CONFIG_PPC_BOOK3S_64 case KVM_CAP_SPAPR_TCE: + fallthrough; case KVM_CAP_SPAPR_TCE_64: - r = 1; - break; case KVM_CAP_SPAPR_TCE_VFIO: - r = !!cpu_has_feature(CPU_FTR_HVMODE); - break; case KVM_CAP_PPC_RTAS: case KVM_CAP_PPC_FIXUP_HCALL: case KVM_CAP_PPC_ENABLE_HCALL: diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index c8b4fa71d4a7..734610052cf4 100644 --- a/arch/powerpc/mm/book3s64/hash_utils.c +++ b/arch/powerpc/mm/book3s64/hash_utils.c @@ -1358,18 +1358,6 @@ static void __init htab_initialize(void) } else { unsigned long limit = MEMBLOCK_ALLOC_ANYWHERE; -#ifdef CONFIG_PPC_CELL - /* - * Cell may require the hash table down low when using the - * Axon IOMMU in order to fit the dynamic region over it, see - * comments in cell/iommu.c - */ - if (fdt_subnode_offset(initial_boot_params, 0, "axon") > 0) { - limit = 0x80000000; - pr_info("Hash table forced below 2G for Axon IOMMU\n"); - } -#endif /* CONFIG_PPC_CELL */ - table = memblock_phys_alloc_range(htab_size_bytes, htab_size_bytes, 0, limit); diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index ce64abea9e3e..c0c45d033cba 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ b/arch/powerpc/mm/book3s64/pgtable.c @@ -587,7 +587,7 @@ int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl, /* * Does the CPU support tlbie? */ -bool tlbie_capable __read_mostly = true; +bool tlbie_capable __read_mostly = IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE); EXPORT_SYMBOL(tlbie_capable); /* @@ -595,7 +595,7 @@ EXPORT_SYMBOL(tlbie_capable); * address spaces? tlbie may still be used for nMMU accelerators, and for KVM * guest address spaces. 
*/ -bool tlbie_enabled __read_mostly = true; +bool tlbie_enabled __read_mostly = IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE); static int __init setup_disable_tlbie(char *str) { diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c index 7b0afcabd89f..70b08bf3dd1f 100644 --- a/arch/powerpc/mm/ioremap.c +++ b/arch/powerpc/mm/ioremap.c @@ -4,7 +4,6 @@ #include <linux/slab.h> #include <linux/mmzone.h> #include <linux/vmalloc.h> -#include <asm/io-workarounds.h> unsigned long ioremap_bot; EXPORT_SYMBOL(ioremap_bot); @@ -14,8 +13,6 @@ void __iomem *ioremap(phys_addr_t addr, unsigned long size) pgprot_t prot = pgprot_noncached(PAGE_KERNEL); void *caller = __builtin_return_address(0); - if (iowa_is_active()) - return iowa_ioremap(addr, size, prot, caller); return __ioremap_caller(addr, size, prot, caller); } EXPORT_SYMBOL(ioremap); @@ -25,8 +22,6 @@ void __iomem *ioremap_wc(phys_addr_t addr, unsigned long size) pgprot_t prot = pgprot_noncached_wc(PAGE_KERNEL); void *caller = __builtin_return_address(0); - if (iowa_is_active()) - return iowa_ioremap(addr, size, prot, caller); return __ioremap_caller(addr, size, prot, caller); } EXPORT_SYMBOL(ioremap_wc); @@ -36,8 +31,6 @@ void __iomem *ioremap_coherent(phys_addr_t addr, unsigned long size) pgprot_t prot = pgprot_cached(PAGE_KERNEL); void *caller = __builtin_return_address(0); - if (iowa_is_active()) - return iowa_ioremap(addr, size, prot, caller); return __ioremap_caller(addr, size, prot, caller); } @@ -50,8 +43,6 @@ void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long flags) if (pte_write(pte)) pte = pte_mkdirty(pte); - if (iowa_is_active()) - return iowa_ioremap(addr, size, pte_pgprot(pte), caller); return __ioremap_caller(addr, size, pte_pgprot(pte), caller); } EXPORT_SYMBOL(ioremap_prot); diff --git a/arch/powerpc/mm/ioremap_64.c b/arch/powerpc/mm/ioremap_64.c index d24e5f166723..fb8b55bd2cd5 100644 --- a/arch/powerpc/mm/ioremap_64.c +++ b/arch/powerpc/mm/ioremap_64.c @@ -52,6 +52,6 @@ void iounmap(volatile void __iomem *token) if (!slab_is_available()) return; - generic_iounmap(PCI_FIX_ADDR(token)); + generic_iounmap(token); } EXPORT_SYMBOL(iounmap); diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index c7708c8fad29..34806c858e54 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -319,28 +319,6 @@ void __init mem_init(void) per_cpu(next_tlbcam_idx, smp_processor_id()) = (mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) - 1; #endif - -#ifdef CONFIG_PPC32 - pr_info("Kernel virtual memory layout:\n"); -#ifdef CONFIG_KASAN - pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n", - KASAN_SHADOW_START, KASAN_SHADOW_END); -#endif - pr_info(" * 0x%08lx..0x%08lx : fixmap\n", FIXADDR_START, FIXADDR_TOP); -#ifdef CONFIG_HIGHMEM - pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n", - PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP)); -#endif /* CONFIG_HIGHMEM */ - if (ioremap_bot != IOREMAP_TOP) - pr_info(" * 0x%08lx..0x%08lx : early ioremap\n", - ioremap_bot, IOREMAP_TOP); - pr_info(" * 0x%08lx..0x%08lx : vmalloc & ioremap\n", - VMALLOC_START, VMALLOC_END); -#ifdef MODULES_VADDR - pr_info(" * 0x%08lx..0x%08lx : modules\n", - MODULES_VADDR, MODULES_END); -#endif -#endif /* CONFIG_PPC32 */ } void free_initmem(void) diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index 3c1da08304d0..603a0f652ba6 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -1336,7 +1336,7 @@ int hot_add_scn_to_nid(unsigned long scn_addr) return nid; } -static u64 hot_add_drconf_memory_max(void) +u64 
hot_add_drconf_memory_max(void) { struct device_node *memory = NULL; struct device_node *dn = NULL; diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index f4e03aaabb4c..b906d28f74fd 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -2226,6 +2226,10 @@ static struct pmu power_pmu = { #define PERF_SAMPLE_ADDR_TYPE (PERF_SAMPLE_ADDR | \ PERF_SAMPLE_PHYS_ADDR | \ PERF_SAMPLE_DATA_PAGE_SIZE) + +#define SIER_TYPE_SHIFT 15 +#define SIER_TYPE_MASK (0x7ull << SIER_TYPE_SHIFT) + /* * A counter has overflowed; update its count and record * things if requested. Note that interrupts are hard-disabled @@ -2295,6 +2299,22 @@ static void record_and_restart(struct perf_event *event, unsigned long val, record = 0; /* + * SIER[46-48] presents instruction type of the sampled instruction. + * In ISA v3.0 and before values "0" and "7" are considered reserved. + * In ISA v3.1, value "7" has been used to indicate "larx/stcx". + * Drop the sample if "type" has reserved values for this field with a + * ISA version check. + */ + if (event->attr.sample_type & PERF_SAMPLE_DATA_SRC && + ppmu->get_mem_data_src) { + val = (regs->dar & SIER_TYPE_MASK) >> SIER_TYPE_SHIFT; + if (val == 0 || (val == 7 && !cpu_has_feature(CPU_FTR_ARCH_31))) { + record = 0; + atomic64_inc(&event->lost_samples); + } + } + + /* * Finally record data if requested. */ if (record) { diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c index 56301b2bc8ae..2b3547fdba4a 100644 --- a/arch/powerpc/perf/isa207-common.c +++ b/arch/powerpc/perf/isa207-common.c @@ -319,10 +319,18 @@ void isa207_get_mem_data_src(union perf_mem_data_src *dsrc, u32 flags, return; } - sier = mfspr(SPRN_SIER); + /* + * Use regs-dar for SPRN_SIER which is saved + * during perf_read_regs at the beginning + * of the PMU interrupt handler to avoid multiple + * reads of SPRN_SIER + */ + sier = regs->dar; val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT; - if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31))) + if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31))) { + dsrc->val = 0; return; + } idx = (sier & ISA207_SIER_LDST_MASK) >> ISA207_SIER_LDST_SHIFT; sub_idx = (sier & ISA207_SIER_DATA_SRC_MASK) >> ISA207_SIER_DATA_SRC_SHIFT; @@ -338,8 +346,12 @@ void isa207_get_mem_data_src(union perf_mem_data_src *dsrc, u32 flags, * to determine the exact instruction type. If the sampling * criteria is neither load or store, set the type as default * to NA. 
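The new SIER_TYPE_SHIFT/SIER_TYPE_MASK check in record_and_restart() above discards samples whose SIER "type" field carries a reserved encoding: 0 is always reserved, and 7 is valid only on ISA v3.1, where it means larx/stcx. A small standalone sketch of that bit-field extraction and validity test follows; the shift and mask values come from the diff, while the helper and the sample SIER values are invented.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SIER_TYPE_SHIFT	15
#define SIER_TYPE_MASK	(0x7ull << SIER_TYPE_SHIFT)

/* Return false for the encodings the kernel drops as reserved. */
static bool sier_type_valid(uint64_t sier, bool isa_v3_1)
{
	uint64_t type = (sier & SIER_TYPE_MASK) >> SIER_TYPE_SHIFT;

	if (type == 0)
		return false;		/* reserved on all ISA versions */
	if (type == 7 && !isa_v3_1)
		return false;		/* larx/stcx only exists on v3.1 */
	return true;
}

int main(void)
{
	uint64_t sier_load = 1ull << SIER_TYPE_SHIFT;	/* type 1: load */
	uint64_t sier_larx = 7ull << SIER_TYPE_SHIFT;	/* type 7 */

	printf("load on v3.0: %d\n", sier_type_valid(sier_load, false));
	printf("larx on v3.0: %d\n", sier_type_valid(sier_larx, false));
	printf("larx on v3.1: %d\n", sier_type_valid(sier_larx, true));
	return 0;
}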
+ * + * Use regs->dsisr for MMCRA which is saved during perf_read_regs + * at the beginning of the PMU interrupt handler to avoid + * multiple reads of SPRN_MMCRA */ - mmcra = mfspr(SPRN_MMCRA); + mmcra = regs->dsisr; op_type = (mmcra >> MMCRA_SAMP_ELIG_SHIFT) & MMCRA_SAMP_ELIG_MASK; switch (op_type) { diff --git a/arch/powerpc/perf/vpa-pmu.c b/arch/powerpc/perf/vpa-pmu.c index 6a5bfd2a13b5..840733468959 100644 --- a/arch/powerpc/perf/vpa-pmu.c +++ b/arch/powerpc/perf/vpa-pmu.c @@ -156,6 +156,7 @@ static void vpa_pmu_del(struct perf_event *event, int flags) } static struct pmu vpa_pmu = { + .module = THIS_MODULE, .task_ctx_nr = perf_sw_context, .name = "vpa_pmu", .event_init = vpa_pmu_event_init, diff --git a/arch/powerpc/platforms/44x/uic.c b/arch/powerpc/platforms/44x/uic.c index e3e148b9dd18..8b03ae4cb3f6 100644 --- a/arch/powerpc/platforms/44x/uic.c +++ b/arch/powerpc/platforms/44x/uic.c @@ -37,7 +37,7 @@ #define UIC_VR 0x7 #define UIC_VCR 0x8 -struct uic *primary_uic; +static struct uic *primary_uic; struct uic { int index; diff --git a/arch/powerpc/platforms/Kconfig b/arch/powerpc/platforms/Kconfig index a454149ae02f..fea3766eac0f 100644 --- a/arch/powerpc/platforms/Kconfig +++ b/arch/powerpc/platforms/Kconfig @@ -70,10 +70,6 @@ config PPC_DT_CPU_FTRS firmware provides this binding. If you're not sure say Y. -config UDBG_RTAS_CONSOLE - bool "RTAS based debug console" - depends on PPC_RTAS - config PPC_SMP_MUXED_IPI bool help @@ -186,12 +182,6 @@ config PPC_INDIRECT_PIO bool select GENERIC_IOMAP -config PPC_INDIRECT_MMIO - bool - -config PPC_IO_WORKAROUNDS - bool - source "drivers/cpufreq/Kconfig" menu "CPUIdle driver" diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype index 1453ccc900c4..613b383ed8b3 100644 --- a/arch/powerpc/platforms/Kconfig.cputype +++ b/arch/powerpc/platforms/Kconfig.cputype @@ -449,6 +449,19 @@ config PPC_RADIX_MMU_DEFAULT If you're unsure, say Y. +config PPC_RADIX_BROADCAST_TLBIE + bool + depends on PPC_RADIX_MMU + help + Power ISA v3.0 and later implementations in the Linux Compliancy Subset + and lower are not required to implement broadcast TLBIE instructions. + Platforms with CPUs that do implement TLBIE broadcast, that is, where + a TLB invalidation instruction performed on one CPU operates on the + TLBs of all CPUs in the system, should select this option. If this + option is selected, the disable_tlbie kernel command line option can + be used to cause global TLB invalidations to be done via IPIs; without + it, IPIs will be used unconditionally. 
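PPC_RADIX_BROADCAST_TLBIE ties the Kconfig default to the runtime switch changed in the book3s64/pgtable.c hunk above: tlbie_capable and tlbie_enabled now default to IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE), and the existing disable_tlbie command line option can still force global invalidations over IPIs. A rough, self-contained sketch of that compile-time-default-plus-boot-override pattern follows; the CONFIG_BROADCAST_TLBIE macro, parse_cmdline() and flush_tlb_global() are invented stand-ins, not kernel interfaces.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE). */
#ifndef CONFIG_BROADCAST_TLBIE
#define CONFIG_BROADCAST_TLBIE 1
#endif

static bool tlbie_capable = CONFIG_BROADCAST_TLBIE;
static bool tlbie_enabled = CONFIG_BROADCAST_TLBIE;

/* Boot-time override, in the spirit of the disable_tlbie parameter. */
static void parse_cmdline(const char *cmdline)
{
	if (strstr(cmdline, "disable_tlbie"))
		tlbie_enabled = false;
}

static void flush_tlb_global(void)
{
	if (tlbie_capable && tlbie_enabled)
		puts("broadcast tlbie");	/* one instruction reaches all CPUs */
	else
		puts("IPI-based flush");	/* interrupt each CPU, invalidate locally */
}

int main(void)
{
	parse_cmdline("root=/dev/sda1 disable_tlbie");
	flush_tlb_global();			/* prints "IPI-based flush" */
	return 0;
}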
+ config PPC_KERNEL_PREFIXED depends on PPC_HAVE_PREFIXED_SUPPORT depends on CC_HAS_PREFIXED diff --git a/arch/powerpc/platforms/cell/Kconfig b/arch/powerpc/platforms/cell/Kconfig index 34669b060f36..db65bfcd1e74 100644 --- a/arch/powerpc/platforms/cell/Kconfig +++ b/arch/powerpc/platforms/cell/Kconfig @@ -3,42 +3,6 @@ config PPC_CELL select PPC_64S_HASH_MMU if PPC64 bool -config PPC_CELL_COMMON - bool - select PPC_CELL - select PPC_DCR_MMIO - select PPC_INDIRECT_PIO - select PPC_INDIRECT_MMIO - select PPC_HASH_MMU_NATIVE - select PPC_RTAS - select IRQ_EDGE_EOI_HANDLER - -config PPC_CELL_NATIVE - bool - select PPC_CELL_COMMON - select MPIC - select PPC_IO_WORKAROUNDS - select IBM_EMAC_EMAC4 if IBM_EMAC - select IBM_EMAC_RGMII if IBM_EMAC - select IBM_EMAC_ZMII if IBM_EMAC #test only - select IBM_EMAC_TAH if IBM_EMAC #test only - -config PPC_IBM_CELL_BLADE - bool "IBM Cell Blade" - depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN - select PPC_CELL_NATIVE - select PPC_OF_PLATFORM_PCI - select FORCE_PCI - select MMIO_NVRAM - select PPC_UDBG_16550 - select UDBG_RTAS_CONSOLE - -config AXON_MSI - bool - depends on PPC_IBM_CELL_BLADE && PCI_MSI - select IRQ_DOMAIN_NOMAP - default y - menu "Cell Broadband Engine options" depends on PPC_CELL @@ -57,48 +21,4 @@ config SPU_BASE bool select PPC_COPRO_BASE -config CBE_RAS - bool "RAS features for bare metal Cell BE" - depends on PPC_CELL_NATIVE - default y - -config PPC_IBM_CELL_RESETBUTTON - bool "IBM Cell Blade Pinhole reset button" - depends on CBE_RAS && PPC_IBM_CELL_BLADE - default y - help - Support Pinhole Resetbutton on IBM Cell blades. - This adds a method to trigger system reset via front panel pinhole button. - -config PPC_IBM_CELL_POWERBUTTON - tristate "IBM Cell Blade power button" - depends on PPC_IBM_CELL_BLADE && INPUT_EVDEV - default y - help - Support Powerbutton on IBM Cell blades. - This will enable the powerbutton as an input device. - -config CBE_THERM - tristate "CBE thermal support" - default m - depends on CBE_RAS && SPU_BASE - -config PPC_PMI - tristate - default y - depends on CPU_FREQ_CBE_PMI || PPC_IBM_CELL_POWERBUTTON - help - PMI (Platform Management Interrupt) is a way to - communicate with the BMC (Baseboard Management Controller). - It is used in some IBM Cell blades. - -config CBE_CPUFREQ_SPU_GOVERNOR - tristate "CBE frequency scaling based on SPU usage" - depends on SPU_FS && CPU_FREQ - default m - help - This governor checks for spu usage to adjust the cpu frequency. - If no spu is running on a given cpu, that cpu will be throttled to - the minimal possible frequency. 
- endmenu diff --git a/arch/powerpc/platforms/cell/Makefile b/arch/powerpc/platforms/cell/Makefile index 7ea6692f67e2..7e5ff239c376 100644 --- a/arch/powerpc/platforms/cell/Makefile +++ b/arch/powerpc/platforms/cell/Makefile @@ -1,27 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_PPC_CELL_COMMON) += cbe_regs.o interrupt.o pervasive.o - -obj-$(CONFIG_PPC_CELL_NATIVE) += iommu.o setup.o spider-pic.o \ - pmu.o spider-pci.o -obj-$(CONFIG_CBE_RAS) += ras.o - -obj-$(CONFIG_CBE_THERM) += cbe_thermal.o -obj-$(CONFIG_CBE_CPUFREQ_SPU_GOVERNOR) += cpufreq_spudemand.o - -obj-$(CONFIG_PPC_IBM_CELL_POWERBUTTON) += cbe_powerbutton.o - -ifdef CONFIG_SMP -obj-$(CONFIG_PPC_CELL_NATIVE) += smp.o -endif - -# needed only when building loadable spufs.ko -spu-priv1-$(CONFIG_PPC_CELL_COMMON) += spu_priv1_mmio.o -spu-manage-$(CONFIG_PPC_CELL_COMMON) += spu_manage.o - obj-$(CONFIG_SPU_BASE) += spu_callbacks.o spu_base.o \ spu_syscalls.o \ - $(spu-priv1-y) \ - $(spu-manage-y) \ spufs/ - -obj-$(CONFIG_AXON_MSI) += axon_msi.o diff --git a/arch/powerpc/platforms/cell/axon_msi.c b/arch/powerpc/platforms/cell/axon_msi.c deleted file mode 100644 index d243f7fd8982..000000000000 --- a/arch/powerpc/platforms/cell/axon_msi.c +++ /dev/null @@ -1,481 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Copyright 2007, Michael Ellerman, IBM Corporation. - */ - - -#include <linux/interrupt.h> -#include <linux/irq.h> -#include <linux/kernel.h> -#include <linux/pci.h> -#include <linux/msi.h> -#include <linux/export.h> -#include <linux/slab.h> -#include <linux/debugfs.h> -#include <linux/of.h> -#include <linux/of_irq.h> -#include <linux/platform_device.h> - -#include <asm/dcr.h> -#include <asm/machdep.h> - -#include "cell.h" - -/* - * MSIC registers, specified as offsets from dcr_base - */ -#define MSIC_CTRL_REG 0x0 - -/* Base Address registers specify FIFO location in BE memory */ -#define MSIC_BASE_ADDR_HI_REG 0x3 -#define MSIC_BASE_ADDR_LO_REG 0x4 - -/* Hold the read/write offsets into the FIFO */ -#define MSIC_READ_OFFSET_REG 0x5 -#define MSIC_WRITE_OFFSET_REG 0x6 - - -/* MSIC control register flags */ -#define MSIC_CTRL_ENABLE 0x0001 -#define MSIC_CTRL_FIFO_FULL_ENABLE 0x0002 -#define MSIC_CTRL_IRQ_ENABLE 0x0008 -#define MSIC_CTRL_FULL_STOP_ENABLE 0x0010 - -/* - * The MSIC can be configured to use a FIFO of 32KB, 64KB, 128KB or 256KB. - * Currently we're using a 64KB FIFO size. - */ -#define MSIC_FIFO_SIZE_SHIFT 16 -#define MSIC_FIFO_SIZE_BYTES (1 << MSIC_FIFO_SIZE_SHIFT) - -/* - * To configure the FIFO size as (1 << n) bytes, we write (n - 15) into bits - * 8-9 of the MSIC control reg. - */ -#define MSIC_CTRL_FIFO_SIZE (((MSIC_FIFO_SIZE_SHIFT - 15) << 8) & 0x300) - -/* - * We need to mask the read/write offsets to make sure they stay within - * the bounds of the FIFO. Also they should always be 16-byte aligned. 
- */ -#define MSIC_FIFO_SIZE_MASK ((MSIC_FIFO_SIZE_BYTES - 1) & ~0xFu) - -/* Each entry in the FIFO is 16 bytes, the first 4 bytes hold the irq # */ -#define MSIC_FIFO_ENTRY_SIZE 0x10 - - -struct axon_msic { - struct irq_domain *irq_domain; - __le32 *fifo_virt; - dma_addr_t fifo_phys; - dcr_host_t dcr_host; - u32 read_offset; -#ifdef DEBUG - u32 __iomem *trigger; -#endif -}; - -#ifdef DEBUG -void axon_msi_debug_setup(struct device_node *dn, struct axon_msic *msic); -#else -static inline void axon_msi_debug_setup(struct device_node *dn, - struct axon_msic *msic) { } -#endif - - -static void msic_dcr_write(struct axon_msic *msic, unsigned int dcr_n, u32 val) -{ - pr_devel("axon_msi: dcr_write(0x%x, 0x%x)\n", val, dcr_n); - - dcr_write(msic->dcr_host, dcr_n, val); -} - -static void axon_msi_cascade(struct irq_desc *desc) -{ - struct irq_chip *chip = irq_desc_get_chip(desc); - struct axon_msic *msic = irq_desc_get_handler_data(desc); - u32 write_offset, msi; - int idx; - int retry = 0; - - write_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG); - pr_devel("axon_msi: original write_offset 0x%x\n", write_offset); - - /* write_offset doesn't wrap properly, so we have to mask it */ - write_offset &= MSIC_FIFO_SIZE_MASK; - - while (msic->read_offset != write_offset && retry < 100) { - idx = msic->read_offset / sizeof(__le32); - msi = le32_to_cpu(msic->fifo_virt[idx]); - msi &= 0xFFFF; - - pr_devel("axon_msi: woff %x roff %x msi %x\n", - write_offset, msic->read_offset, msi); - - if (msi < irq_get_nr_irqs() && irq_get_chip_data(msi) == msic) { - generic_handle_irq(msi); - msic->fifo_virt[idx] = cpu_to_le32(0xffffffff); - } else { - /* - * Reading the MSIC_WRITE_OFFSET_REG does not - * reliably flush the outstanding DMA to the - * FIFO buffer. Here we were reading stale - * data, so we need to retry. 
- */ - udelay(1); - retry++; - pr_devel("axon_msi: invalid irq 0x%x!\n", msi); - continue; - } - - if (retry) { - pr_devel("axon_msi: late irq 0x%x, retry %d\n", - msi, retry); - retry = 0; - } - - msic->read_offset += MSIC_FIFO_ENTRY_SIZE; - msic->read_offset &= MSIC_FIFO_SIZE_MASK; - } - - if (retry) { - printk(KERN_WARNING "axon_msi: irq timed out\n"); - - msic->read_offset += MSIC_FIFO_ENTRY_SIZE; - msic->read_offset &= MSIC_FIFO_SIZE_MASK; - } - - chip->irq_eoi(&desc->irq_data); -} - -static struct axon_msic *find_msi_translator(struct pci_dev *dev) -{ - struct irq_domain *irq_domain; - struct device_node *dn, *tmp; - const phandle *ph; - struct axon_msic *msic = NULL; - - dn = of_node_get(pci_device_to_OF_node(dev)); - if (!dn) { - dev_dbg(&dev->dev, "axon_msi: no pci_dn found\n"); - return NULL; - } - - for (; dn; dn = of_get_next_parent(dn)) { - ph = of_get_property(dn, "msi-translator", NULL); - if (ph) - break; - } - - if (!ph) { - dev_dbg(&dev->dev, - "axon_msi: no msi-translator property found\n"); - goto out_error; - } - - tmp = dn; - dn = of_find_node_by_phandle(*ph); - of_node_put(tmp); - if (!dn) { - dev_dbg(&dev->dev, - "axon_msi: msi-translator doesn't point to a node\n"); - goto out_error; - } - - irq_domain = irq_find_host(dn); - if (!irq_domain) { - dev_dbg(&dev->dev, "axon_msi: no irq_domain found for node %pOF\n", - dn); - goto out_error; - } - - msic = irq_domain->host_data; - -out_error: - of_node_put(dn); - - return msic; -} - -static int setup_msi_msg_address(struct pci_dev *dev, struct msi_msg *msg) -{ - struct device_node *dn; - int len; - const u32 *prop; - - dn = of_node_get(pci_device_to_OF_node(dev)); - if (!dn) { - dev_dbg(&dev->dev, "axon_msi: no pci_dn found\n"); - return -ENODEV; - } - - for (; dn; dn = of_get_next_parent(dn)) { - if (!dev->no_64bit_msi) { - prop = of_get_property(dn, "msi-address-64", &len); - if (prop) - break; - } - - prop = of_get_property(dn, "msi-address-32", &len); - if (prop) - break; - } - - if (!prop) { - dev_dbg(&dev->dev, - "axon_msi: no msi-address-(32|64) properties found\n"); - of_node_put(dn); - return -ENOENT; - } - - switch (len) { - case 8: - msg->address_hi = prop[0]; - msg->address_lo = prop[1]; - break; - case 4: - msg->address_hi = 0; - msg->address_lo = prop[0]; - break; - default: - dev_dbg(&dev->dev, - "axon_msi: malformed msi-address-(32|64) property\n"); - of_node_put(dn); - return -EINVAL; - } - - of_node_put(dn); - - return 0; -} - -static int axon_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) -{ - unsigned int virq, rc; - struct msi_desc *entry; - struct msi_msg msg; - struct axon_msic *msic; - - msic = find_msi_translator(dev); - if (!msic) - return -ENODEV; - - rc = setup_msi_msg_address(dev, &msg); - if (rc) - return rc; - - msi_for_each_desc(entry, &dev->dev, MSI_DESC_NOTASSOCIATED) { - virq = irq_create_direct_mapping(msic->irq_domain); - if (!virq) { - dev_warn(&dev->dev, - "axon_msi: virq allocation failed!\n"); - return -1; - } - dev_dbg(&dev->dev, "axon_msi: allocated virq 0x%x\n", virq); - - irq_set_msi_desc(virq, entry); - msg.data = virq; - pci_write_msi_msg(virq, &msg); - } - - return 0; -} - -static void axon_msi_teardown_msi_irqs(struct pci_dev *dev) -{ - struct msi_desc *entry; - - dev_dbg(&dev->dev, "axon_msi: tearing down msi irqs\n"); - - msi_for_each_desc(entry, &dev->dev, MSI_DESC_ASSOCIATED) { - irq_set_msi_desc(entry->irq, NULL); - irq_dispose_mapping(entry->irq); - entry->irq = 0; - } -} - -static struct irq_chip msic_irq_chip = { - .irq_mask = pci_msi_mask_irq, - 
.irq_unmask = pci_msi_unmask_irq, - .irq_shutdown = pci_msi_mask_irq, - .name = "AXON-MSI", -}; - -static int msic_host_map(struct irq_domain *h, unsigned int virq, - irq_hw_number_t hw) -{ - irq_set_chip_data(virq, h->host_data); - irq_set_chip_and_handler(virq, &msic_irq_chip, handle_simple_irq); - - return 0; -} - -static const struct irq_domain_ops msic_host_ops = { - .map = msic_host_map, -}; - -static void axon_msi_shutdown(struct platform_device *device) -{ - struct axon_msic *msic = dev_get_drvdata(&device->dev); - u32 tmp; - - pr_devel("axon_msi: disabling %pOF\n", - irq_domain_get_of_node(msic->irq_domain)); - tmp = dcr_read(msic->dcr_host, MSIC_CTRL_REG); - tmp &= ~MSIC_CTRL_ENABLE & ~MSIC_CTRL_IRQ_ENABLE; - msic_dcr_write(msic, MSIC_CTRL_REG, tmp); -} - -static int axon_msi_probe(struct platform_device *device) -{ - struct device_node *dn = device->dev.of_node; - struct axon_msic *msic; - unsigned int virq; - int dcr_base, dcr_len; - - pr_devel("axon_msi: setting up dn %pOF\n", dn); - - msic = kzalloc(sizeof(*msic), GFP_KERNEL); - if (!msic) { - printk(KERN_ERR "axon_msi: couldn't allocate msic for %pOF\n", - dn); - goto out; - } - - dcr_base = dcr_resource_start(dn, 0); - dcr_len = dcr_resource_len(dn, 0); - - if (dcr_base == 0 || dcr_len == 0) { - printk(KERN_ERR - "axon_msi: couldn't parse dcr properties on %pOF\n", - dn); - goto out_free_msic; - } - - msic->dcr_host = dcr_map(dn, dcr_base, dcr_len); - if (!DCR_MAP_OK(msic->dcr_host)) { - printk(KERN_ERR "axon_msi: dcr_map failed for %pOF\n", - dn); - goto out_free_msic; - } - - msic->fifo_virt = dma_alloc_coherent(&device->dev, MSIC_FIFO_SIZE_BYTES, - &msic->fifo_phys, GFP_KERNEL); - if (!msic->fifo_virt) { - printk(KERN_ERR "axon_msi: couldn't allocate fifo for %pOF\n", - dn); - goto out_free_msic; - } - - virq = irq_of_parse_and_map(dn, 0); - if (!virq) { - printk(KERN_ERR "axon_msi: irq parse and map failed for %pOF\n", - dn); - goto out_free_fifo; - } - memset(msic->fifo_virt, 0xff, MSIC_FIFO_SIZE_BYTES); - - /* We rely on being able to stash a virq in a u16, so limit irqs to < 65536 */ - msic->irq_domain = irq_domain_add_nomap(dn, 65536, &msic_host_ops, msic); - if (!msic->irq_domain) { - printk(KERN_ERR "axon_msi: couldn't allocate irq_domain for %pOF\n", - dn); - goto out_free_fifo; - } - - irq_set_handler_data(virq, msic); - irq_set_chained_handler(virq, axon_msi_cascade); - pr_devel("axon_msi: irq 0x%x setup for axon_msi\n", virq); - - /* Enable the MSIC hardware */ - msic_dcr_write(msic, MSIC_BASE_ADDR_HI_REG, msic->fifo_phys >> 32); - msic_dcr_write(msic, MSIC_BASE_ADDR_LO_REG, - msic->fifo_phys & 0xFFFFFFFF); - msic_dcr_write(msic, MSIC_CTRL_REG, - MSIC_CTRL_IRQ_ENABLE | MSIC_CTRL_ENABLE | - MSIC_CTRL_FIFO_SIZE); - - msic->read_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG) - & MSIC_FIFO_SIZE_MASK; - - dev_set_drvdata(&device->dev, msic); - - cell_pci_controller_ops.setup_msi_irqs = axon_msi_setup_msi_irqs; - cell_pci_controller_ops.teardown_msi_irqs = axon_msi_teardown_msi_irqs; - - axon_msi_debug_setup(dn, msic); - - printk(KERN_DEBUG "axon_msi: setup MSIC on %pOF\n", dn); - - return 0; - -out_free_fifo: - dma_free_coherent(&device->dev, MSIC_FIFO_SIZE_BYTES, msic->fifo_virt, - msic->fifo_phys); -out_free_msic: - kfree(msic); -out: - - return -1; -} - -static const struct of_device_id axon_msi_device_id[] = { - { - .compatible = "ibm,axon-msic" - }, - {} -}; - -static struct platform_driver axon_msi_driver = { - .probe = axon_msi_probe, - .shutdown = axon_msi_shutdown, - .driver = { - .name = 
"axon-msi", - .of_match_table = axon_msi_device_id, - }, -}; - -static int __init axon_msi_init(void) -{ - return platform_driver_register(&axon_msi_driver); -} -subsys_initcall(axon_msi_init); - - -#ifdef DEBUG -static int msic_set(void *data, u64 val) -{ - struct axon_msic *msic = data; - out_le32(msic->trigger, val); - return 0; -} - -static int msic_get(void *data, u64 *val) -{ - *val = 0; - return 0; -} - -DEFINE_SIMPLE_ATTRIBUTE(fops_msic, msic_get, msic_set, "%llu\n"); - -void axon_msi_debug_setup(struct device_node *dn, struct axon_msic *msic) -{ - char name[8]; - struct resource res; - - if (of_address_to_resource(dn, 0, &res)) { - pr_devel("axon_msi: couldn't get reg property\n"); - return; - } - - msic->trigger = ioremap(res.start, 0x4); - if (!msic->trigger) { - pr_devel("axon_msi: ioremap failed\n"); - return; - } - - snprintf(name, sizeof(name), "msic_%d", of_node_to_nid(dn)); - - debugfs_create_file(name, 0600, arch_debugfs_dir, msic, &fops_msic); -} -#endif /* DEBUG */ diff --git a/arch/powerpc/platforms/cell/cbe_powerbutton.c b/arch/powerpc/platforms/cell/cbe_powerbutton.c deleted file mode 100644 index 3d121acdf69b..000000000000 --- a/arch/powerpc/platforms/cell/cbe_powerbutton.c +++ /dev/null @@ -1,106 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * driver for powerbutton on IBM cell blades - * - * (C) Copyright IBM Corp. 2005-2008 - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#include <linux/input.h> -#include <linux/module.h> -#include <linux/of.h> -#include <linux/platform_device.h> -#include <asm/pmi.h> - -static struct input_dev *button_dev; -static struct platform_device *button_pdev; - -static void cbe_powerbutton_handle_pmi(pmi_message_t pmi_msg) -{ - BUG_ON(pmi_msg.type != PMI_TYPE_POWER_BUTTON); - - input_report_key(button_dev, KEY_POWER, 1); - input_sync(button_dev); - input_report_key(button_dev, KEY_POWER, 0); - input_sync(button_dev); -} - -static struct pmi_handler cbe_pmi_handler = { - .type = PMI_TYPE_POWER_BUTTON, - .handle_pmi_message = cbe_powerbutton_handle_pmi, -}; - -static int __init cbe_powerbutton_init(void) -{ - int ret = 0; - struct input_dev *dev; - - if (!of_machine_is_compatible("IBM,CBPLUS-1.0")) { - printk(KERN_ERR "%s: Not a cell blade.\n", __func__); - ret = -ENODEV; - goto out; - } - - dev = input_allocate_device(); - if (!dev) { - ret = -ENOMEM; - printk(KERN_ERR "%s: Not enough memory.\n", __func__); - goto out; - } - - set_bit(EV_KEY, dev->evbit); - set_bit(KEY_POWER, dev->keybit); - - dev->name = "Power Button"; - dev->id.bustype = BUS_HOST; - - /* this makes the button look like an acpi power button - * no clue whether anyone relies on that though */ - dev->id.product = 0x02; - dev->phys = "LNXPWRBN/button/input0"; - - button_pdev = platform_device_register_simple("power_button", 0, NULL, 0); - if (IS_ERR(button_pdev)) { - ret = PTR_ERR(button_pdev); - goto out_free_input; - } - - dev->dev.parent = &button_pdev->dev; - ret = input_register_device(dev); - if (ret) { - printk(KERN_ERR "%s: Failed to register device\n", __func__); - goto out_free_pdev; - } - - button_dev = dev; - - ret = pmi_register_handler(&cbe_pmi_handler); - if (ret) { - printk(KERN_ERR "%s: Failed to register with pmi.\n", __func__); - goto out_free_pdev; - } - - goto out; - -out_free_pdev: - platform_device_unregister(button_pdev); -out_free_input: - input_free_device(dev); -out: - return ret; -} - -static void __exit cbe_powerbutton_exit(void) -{ - pmi_unregister_handler(&cbe_pmi_handler); - platform_device_unregister(button_pdev); 
- input_free_device(button_dev); -} - -module_init(cbe_powerbutton_init); -module_exit(cbe_powerbutton_exit); - -MODULE_DESCRIPTION("Driver for powerbutton on IBM cell blades"); -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>"); diff --git a/arch/powerpc/platforms/cell/cbe_regs.c b/arch/powerpc/platforms/cell/cbe_regs.c deleted file mode 100644 index 99b3558753e9..000000000000 --- a/arch/powerpc/platforms/cell/cbe_regs.c +++ /dev/null @@ -1,298 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * cbe_regs.c - * - * Accessor routines for the various MMIO register blocks of the CBE - * - * (c) 2006 Benjamin Herrenschmidt <benh@kernel.crashing.org>, IBM Corp. - */ - -#include <linux/percpu.h> -#include <linux/types.h> -#include <linux/export.h> -#include <linux/of.h> -#include <linux/of_address.h> -#include <linux/pgtable.h> - -#include <asm/io.h> -#include <asm/ptrace.h> -#include <asm/cell-regs.h> - -/* - * Current implementation uses "cpu" nodes. We build our own mapping - * array of cpu numbers to cpu nodes locally for now to allow interrupt - * time code to have a fast path rather than call of_get_cpu_node(). If - * we implement cpu hotplug, we'll have to install an appropriate notifier - * in order to release references to the cpu going away - */ -static struct cbe_regs_map -{ - struct device_node *cpu_node; - struct device_node *be_node; - struct cbe_pmd_regs __iomem *pmd_regs; - struct cbe_iic_regs __iomem *iic_regs; - struct cbe_mic_tm_regs __iomem *mic_tm_regs; - struct cbe_pmd_shadow_regs pmd_shadow_regs; -} cbe_regs_maps[MAX_CBE]; -static int cbe_regs_map_count; - -static struct cbe_thread_map -{ - struct device_node *cpu_node; - struct device_node *be_node; - struct cbe_regs_map *regs; - unsigned int thread_id; - unsigned int cbe_id; -} cbe_thread_map[NR_CPUS]; - -static cpumask_t cbe_local_mask[MAX_CBE] = { [0 ... 
MAX_CBE-1] = {CPU_BITS_NONE} }; -static cpumask_t cbe_first_online_cpu = { CPU_BITS_NONE }; - -static struct cbe_regs_map *cbe_find_map(struct device_node *np) -{ - int i; - struct device_node *tmp_np; - - if (!of_node_is_type(np, "spe")) { - for (i = 0; i < cbe_regs_map_count; i++) - if (cbe_regs_maps[i].cpu_node == np || - cbe_regs_maps[i].be_node == np) - return &cbe_regs_maps[i]; - return NULL; - } - - if (np->data) - return np->data; - - /* walk up path until cpu or be node was found */ - tmp_np = np; - do { - tmp_np = tmp_np->parent; - /* on a correct devicetree we wont get up to root */ - BUG_ON(!tmp_np); - } while (!of_node_is_type(tmp_np, "cpu") || - !of_node_is_type(tmp_np, "be")); - - np->data = cbe_find_map(tmp_np); - - return np->data; -} - -struct cbe_pmd_regs __iomem *cbe_get_pmd_regs(struct device_node *np) -{ - struct cbe_regs_map *map = cbe_find_map(np); - if (map == NULL) - return NULL; - return map->pmd_regs; -} -EXPORT_SYMBOL_GPL(cbe_get_pmd_regs); - -struct cbe_pmd_regs __iomem *cbe_get_cpu_pmd_regs(int cpu) -{ - struct cbe_regs_map *map = cbe_thread_map[cpu].regs; - if (map == NULL) - return NULL; - return map->pmd_regs; -} -EXPORT_SYMBOL_GPL(cbe_get_cpu_pmd_regs); - -struct cbe_pmd_shadow_regs *cbe_get_pmd_shadow_regs(struct device_node *np) -{ - struct cbe_regs_map *map = cbe_find_map(np); - if (map == NULL) - return NULL; - return &map->pmd_shadow_regs; -} - -struct cbe_pmd_shadow_regs *cbe_get_cpu_pmd_shadow_regs(int cpu) -{ - struct cbe_regs_map *map = cbe_thread_map[cpu].regs; - if (map == NULL) - return NULL; - return &map->pmd_shadow_regs; -} - -struct cbe_iic_regs __iomem *cbe_get_iic_regs(struct device_node *np) -{ - struct cbe_regs_map *map = cbe_find_map(np); - if (map == NULL) - return NULL; - return map->iic_regs; -} - -struct cbe_iic_regs __iomem *cbe_get_cpu_iic_regs(int cpu) -{ - struct cbe_regs_map *map = cbe_thread_map[cpu].regs; - if (map == NULL) - return NULL; - return map->iic_regs; -} - -struct cbe_mic_tm_regs __iomem *cbe_get_mic_tm_regs(struct device_node *np) -{ - struct cbe_regs_map *map = cbe_find_map(np); - if (map == NULL) - return NULL; - return map->mic_tm_regs; -} - -struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu) -{ - struct cbe_regs_map *map = cbe_thread_map[cpu].regs; - if (map == NULL) - return NULL; - return map->mic_tm_regs; -} -EXPORT_SYMBOL_GPL(cbe_get_cpu_mic_tm_regs); - -u32 cbe_get_hw_thread_id(int cpu) -{ - return cbe_thread_map[cpu].thread_id; -} -EXPORT_SYMBOL_GPL(cbe_get_hw_thread_id); - -u32 cbe_cpu_to_node(int cpu) -{ - return cbe_thread_map[cpu].cbe_id; -} -EXPORT_SYMBOL_GPL(cbe_cpu_to_node); - -u32 cbe_node_to_cpu(int node) -{ - return cpumask_first(&cbe_local_mask[node]); - -} -EXPORT_SYMBOL_GPL(cbe_node_to_cpu); - -static struct device_node *__init cbe_get_be_node(int cpu_id) -{ - struct device_node *np; - - for_each_node_by_type (np, "be") { - int len,i; - const phandle *cpu_handle; - - cpu_handle = of_get_property(np, "cpus", &len); - - /* - * the CAB SLOF tree is non compliant, so we just assume - * there is only one node - */ - if (WARN_ON_ONCE(!cpu_handle)) - return np; - - for (i = 0; i < len; i++) { - struct device_node *ch_np = of_find_node_by_phandle(cpu_handle[i]); - struct device_node *ci_np = of_get_cpu_node(cpu_id, NULL); - - of_node_put(ch_np); - of_node_put(ci_np); - - if (ch_np == ci_np) - return np; - } - } - - return NULL; -} - -static void __init cbe_fill_regs_map(struct cbe_regs_map *map) -{ - if(map->be_node) { - struct device_node *be, *np, *parent_np; - - be = map->be_node; 
- - for_each_node_by_type(np, "pervasive") { - parent_np = of_get_parent(np); - if (parent_np == be) - map->pmd_regs = of_iomap(np, 0); - of_node_put(parent_np); - } - - for_each_node_by_type(np, "CBEA-Internal-Interrupt-Controller") { - parent_np = of_get_parent(np); - if (parent_np == be) - map->iic_regs = of_iomap(np, 2); - of_node_put(parent_np); - } - - for_each_node_by_type(np, "mic-tm") { - parent_np = of_get_parent(np); - if (parent_np == be) - map->mic_tm_regs = of_iomap(np, 0); - of_node_put(parent_np); - } - } else { - struct device_node *cpu; - /* That hack must die die die ! */ - const struct address_prop { - unsigned long address; - unsigned int len; - } __attribute__((packed)) *prop; - - cpu = map->cpu_node; - - prop = of_get_property(cpu, "pervasive", NULL); - if (prop != NULL) - map->pmd_regs = ioremap(prop->address, prop->len); - - prop = of_get_property(cpu, "iic", NULL); - if (prop != NULL) - map->iic_regs = ioremap(prop->address, prop->len); - - prop = of_get_property(cpu, "mic-tm", NULL); - if (prop != NULL) - map->mic_tm_regs = ioremap(prop->address, prop->len); - } -} - - -void __init cbe_regs_init(void) -{ - int i; - unsigned int thread_id; - struct device_node *cpu; - - /* Build local fast map of CPUs */ - for_each_possible_cpu(i) { - cbe_thread_map[i].cpu_node = of_get_cpu_node(i, &thread_id); - cbe_thread_map[i].be_node = cbe_get_be_node(i); - cbe_thread_map[i].thread_id = thread_id; - } - - /* Find maps for each device tree CPU */ - for_each_node_by_type(cpu, "cpu") { - struct cbe_regs_map *map; - unsigned int cbe_id; - - cbe_id = cbe_regs_map_count++; - map = &cbe_regs_maps[cbe_id]; - - if (cbe_regs_map_count > MAX_CBE) { - printk(KERN_ERR "cbe_regs: More BE chips than supported" - "!\n"); - cbe_regs_map_count--; - of_node_put(cpu); - return; - } - of_node_put(map->cpu_node); - map->cpu_node = of_node_get(cpu); - - for_each_possible_cpu(i) { - struct cbe_thread_map *thread = &cbe_thread_map[i]; - - if (thread->cpu_node == cpu) { - thread->regs = map; - thread->cbe_id = cbe_id; - map->be_node = thread->be_node; - cpumask_set_cpu(i, &cbe_local_mask[cbe_id]); - if(thread->thread_id == 0) - cpumask_set_cpu(i, &cbe_first_online_cpu); - } - } - - cbe_fill_regs_map(map); - } -} - diff --git a/arch/powerpc/platforms/cell/cbe_thermal.c b/arch/powerpc/platforms/cell/cbe_thermal.c deleted file mode 100644 index c295c6714f9b..000000000000 --- a/arch/powerpc/platforms/cell/cbe_thermal.c +++ /dev/null @@ -1,387 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * thermal support for the cell processor - * - * This module adds some sysfs attributes to cpu and spu nodes. - * Base for measurements are the digital thermal sensors (DTS) - * located on the chip. - * The accuracy is 2 degrees, starting from 65 up to 125 degrees celsius - * The attributes can be found under - * /sys/devices/system/cpu/cpuX/thermal - * /sys/devices/system/spu/spuX/thermal - * - * The following attributes are added for each node: - * temperature: - * contains the current temperature measured by the DTS - * throttle_begin: - * throttling begins when temperature is greater or equal to - * throttle_begin. Setting this value to 125 prevents throttling. - * throttle_end: - * throttling is being ceased, if the temperature is lower than - * throttle_end. Due to a delay between applying throttling and - * a reduced temperature this value should be less than throttle_begin. - * A value equal to throttle_begin provides only a very little hysteresis. 
- * throttle_full_stop: - * If the temperatrue is greater or equal to throttle_full_stop, - * full throttling is applied to the cpu or spu. This value should be - * greater than throttle_begin and throttle_end. Setting this value to - * 65 prevents the unit from running code at all. - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#include <linux/module.h> -#include <linux/device.h> -#include <linux/kernel.h> -#include <linux/cpu.h> -#include <linux/stringify.h> -#include <asm/spu.h> -#include <asm/io.h> -#include <asm/cell-regs.h> - -#include "spu_priv1_mmio.h" - -#define TEMP_MIN 65 -#define TEMP_MAX 125 - -#define DEVICE_PREFIX_ATTR(_prefix,_name,_mode) \ -struct device_attribute attr_ ## _prefix ## _ ## _name = { \ - .attr = { .name = __stringify(_name), .mode = _mode }, \ - .show = _prefix ## _show_ ## _name, \ - .store = _prefix ## _store_ ## _name, \ -}; - -static inline u8 reg_to_temp(u8 reg_value) -{ - return ((reg_value & 0x3f) << 1) + TEMP_MIN; -} - -static inline u8 temp_to_reg(u8 temp) -{ - return ((temp - TEMP_MIN) >> 1) & 0x3f; -} - -static struct cbe_pmd_regs __iomem *get_pmd_regs(struct device *dev) -{ - struct spu *spu; - - spu = container_of(dev, struct spu, dev); - - return cbe_get_pmd_regs(spu_devnode(spu)); -} - -/* returns the value for a given spu in a given register */ -static u8 spu_read_register_value(struct device *dev, union spe_reg __iomem *reg) -{ - union spe_reg value; - struct spu *spu; - - spu = container_of(dev, struct spu, dev); - value.val = in_be64(®->val); - - return value.spe[spu->spe_id]; -} - -static ssize_t spu_show_temp(struct device *dev, struct device_attribute *attr, - char *buf) -{ - u8 value; - struct cbe_pmd_regs __iomem *pmd_regs; - - pmd_regs = get_pmd_regs(dev); - - value = spu_read_register_value(dev, &pmd_regs->ts_ctsr1); - - return sprintf(buf, "%d\n", reg_to_temp(value)); -} - -static ssize_t show_throttle(struct cbe_pmd_regs __iomem *pmd_regs, char *buf, int pos) -{ - u64 value; - - value = in_be64(&pmd_regs->tm_tpr.val); - /* access the corresponding byte */ - value >>= pos; - value &= 0x3F; - - return sprintf(buf, "%d\n", reg_to_temp(value)); -} - -static ssize_t store_throttle(struct cbe_pmd_regs __iomem *pmd_regs, const char *buf, size_t size, int pos) -{ - u64 reg_value; - unsigned int temp; - u64 new_value; - int ret; - - ret = sscanf(buf, "%u", &temp); - - if (ret != 1 || temp < TEMP_MIN || temp > TEMP_MAX) - return -EINVAL; - - new_value = temp_to_reg(temp); - - reg_value = in_be64(&pmd_regs->tm_tpr.val); - - /* zero out bits for new value */ - reg_value &= ~(0xffull << pos); - /* set bits to new value */ - reg_value |= new_value << pos; - - out_be64(&pmd_regs->tm_tpr.val, reg_value); - return size; -} - -static ssize_t spu_show_throttle_end(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return show_throttle(get_pmd_regs(dev), buf, 0); -} - -static ssize_t spu_show_throttle_begin(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return show_throttle(get_pmd_regs(dev), buf, 8); -} - -static ssize_t spu_show_throttle_full_stop(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return show_throttle(get_pmd_regs(dev), buf, 16); -} - -static ssize_t spu_store_throttle_end(struct device *dev, - struct device_attribute *attr, const char *buf, size_t size) -{ - return store_throttle(get_pmd_regs(dev), buf, size, 0); -} - -static ssize_t spu_store_throttle_begin(struct device *dev, - struct 
device_attribute *attr, const char *buf, size_t size) -{ - return store_throttle(get_pmd_regs(dev), buf, size, 8); -} - -static ssize_t spu_store_throttle_full_stop(struct device *dev, - struct device_attribute *attr, const char *buf, size_t size) -{ - return store_throttle(get_pmd_regs(dev), buf, size, 16); -} - -static ssize_t ppe_show_temp(struct device *dev, char *buf, int pos) -{ - struct cbe_pmd_regs __iomem *pmd_regs; - u64 value; - - pmd_regs = cbe_get_cpu_pmd_regs(dev->id); - value = in_be64(&pmd_regs->ts_ctsr2); - - value = (value >> pos) & 0x3f; - - return sprintf(buf, "%d\n", reg_to_temp(value)); -} - - -/* shows the temperature of the DTS on the PPE, - * located near the linear thermal sensor */ -static ssize_t ppe_show_temp0(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return ppe_show_temp(dev, buf, 32); -} - -/* shows the temperature of the second DTS on the PPE */ -static ssize_t ppe_show_temp1(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return ppe_show_temp(dev, buf, 0); -} - -static ssize_t ppe_show_throttle_end(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 32); -} - -static ssize_t ppe_show_throttle_begin(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 40); -} - -static ssize_t ppe_show_throttle_full_stop(struct device *dev, - struct device_attribute *attr, char *buf) -{ - return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 48); -} - -static ssize_t ppe_store_throttle_end(struct device *dev, - struct device_attribute *attr, const char *buf, size_t size) -{ - return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 32); -} - -static ssize_t ppe_store_throttle_begin(struct device *dev, - struct device_attribute *attr, const char *buf, size_t size) -{ - return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 40); -} - -static ssize_t ppe_store_throttle_full_stop(struct device *dev, - struct device_attribute *attr, const char *buf, size_t size) -{ - return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 48); -} - - -static struct device_attribute attr_spu_temperature = { - .attr = {.name = "temperature", .mode = 0400 }, - .show = spu_show_temp, -}; - -static DEVICE_PREFIX_ATTR(spu, throttle_end, 0600); -static DEVICE_PREFIX_ATTR(spu, throttle_begin, 0600); -static DEVICE_PREFIX_ATTR(spu, throttle_full_stop, 0600); - - -static struct attribute *spu_attributes[] = { - &attr_spu_temperature.attr, - &attr_spu_throttle_end.attr, - &attr_spu_throttle_begin.attr, - &attr_spu_throttle_full_stop.attr, - NULL, -}; - -static const struct attribute_group spu_attribute_group = { - .name = "thermal", - .attrs = spu_attributes, -}; - -static struct device_attribute attr_ppe_temperature0 = { - .attr = {.name = "temperature0", .mode = 0400 }, - .show = ppe_show_temp0, -}; - -static struct device_attribute attr_ppe_temperature1 = { - .attr = {.name = "temperature1", .mode = 0400 }, - .show = ppe_show_temp1, -}; - -static DEVICE_PREFIX_ATTR(ppe, throttle_end, 0600); -static DEVICE_PREFIX_ATTR(ppe, throttle_begin, 0600); -static DEVICE_PREFIX_ATTR(ppe, throttle_full_stop, 0600); - -static struct attribute *ppe_attributes[] = { - &attr_ppe_temperature0.attr, - &attr_ppe_temperature1.attr, - &attr_ppe_throttle_end.attr, - &attr_ppe_throttle_begin.attr, - &attr_ppe_throttle_full_stop.attr, - NULL, -}; - -static struct attribute_group ppe_attribute_group = { - .name 
= "thermal", - .attrs = ppe_attributes, -}; - -/* - * initialize throttling with default values - */ -static int __init init_default_values(void) -{ - int cpu; - struct cbe_pmd_regs __iomem *pmd_regs; - struct device *dev; - union ppe_spe_reg tpr; - union spe_reg str1; - u64 str2; - union spe_reg cr1; - u64 cr2; - - /* TPR defaults */ - /* ppe - * 1F - no full stop - * 08 - dynamic throttling starts if over 80 degrees - * 03 - dynamic throttling ceases if below 70 degrees */ - tpr.ppe = 0x1F0803; - /* spe - * 10 - full stopped when over 96 degrees - * 08 - dynamic throttling starts if over 80 degrees - * 03 - dynamic throttling ceases if below 70 degrees - */ - tpr.spe = 0x100803; - - /* STR defaults */ - /* str1 - * 10 - stop 16 of 32 cycles - */ - str1.val = 0x1010101010101010ull; - /* str2 - * 10 - stop 16 of 32 cycles - */ - str2 = 0x10; - - /* CR defaults */ - /* cr1 - * 4 - normal operation - */ - cr1.val = 0x0404040404040404ull; - /* cr2 - * 4 - normal operation - */ - cr2 = 0x04; - - for_each_possible_cpu (cpu) { - pr_debug("processing cpu %d\n", cpu); - dev = get_cpu_device(cpu); - - if (!dev) { - pr_info("invalid dev pointer for cbe_thermal\n"); - return -EINVAL; - } - - pmd_regs = cbe_get_cpu_pmd_regs(dev->id); - - if (!pmd_regs) { - pr_info("invalid CBE regs pointer for cbe_thermal\n"); - return -EINVAL; - } - - out_be64(&pmd_regs->tm_str2, str2); - out_be64(&pmd_regs->tm_str1.val, str1.val); - out_be64(&pmd_regs->tm_tpr.val, tpr.val); - out_be64(&pmd_regs->tm_cr1.val, cr1.val); - out_be64(&pmd_regs->tm_cr2, cr2); - } - - return 0; -} - - -static int __init thermal_init(void) -{ - int rc = init_default_values(); - - if (rc == 0) { - spu_add_dev_attr_group(&spu_attribute_group); - cpu_add_dev_attr_group(&ppe_attribute_group); - } - - return rc; -} -module_init(thermal_init); - -static void __exit thermal_exit(void) -{ - spu_remove_dev_attr_group(&spu_attribute_group); - cpu_remove_dev_attr_group(&ppe_attribute_group); -} -module_exit(thermal_exit); - -MODULE_DESCRIPTION("Cell processor thermal driver"); -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>"); - diff --git a/arch/powerpc/platforms/cell/cell.h b/arch/powerpc/platforms/cell/cell.h deleted file mode 100644 index d5142e905ab3..000000000000 --- a/arch/powerpc/platforms/cell/cell.h +++ /dev/null @@ -1,15 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * Cell Platform common data structures - * - * Copyright 2015, Daniel Axtens, IBM Corporation - */ - -#ifndef CELL_H -#define CELL_H - -#include <asm/pci-bridge.h> - -extern struct pci_controller_ops cell_pci_controller_ops; - -#endif diff --git a/arch/powerpc/platforms/cell/cpufreq_spudemand.c b/arch/powerpc/platforms/cell/cpufreq_spudemand.c deleted file mode 100644 index 79172ba36eca..000000000000 --- a/arch/powerpc/platforms/cell/cpufreq_spudemand.c +++ /dev/null @@ -1,134 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * spu aware cpufreq governor for the cell processor - * - * © Copyright IBM Corporation 2006-2008 - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#include <linux/cpufreq.h> -#include <linux/sched.h> -#include <linux/sched/loadavg.h> -#include <linux/module.h> -#include <linux/timer.h> -#include <linux/workqueue.h> -#include <linux/atomic.h> -#include <asm/machdep.h> -#include <asm/spu.h> - -#define POLL_TIME 100000 /* in µs */ -#define EXP 753 /* exp(-1) in fixed-point */ - -struct spu_gov_info_struct { - unsigned long busy_spus; /* fixed-point */ - struct cpufreq_policy *policy; - struct 
delayed_work work; - unsigned int poll_int; /* µs */ -}; -static DEFINE_PER_CPU(struct spu_gov_info_struct, spu_gov_info); - -static int calc_freq(struct spu_gov_info_struct *info) -{ - int cpu; - int busy_spus; - - cpu = info->policy->cpu; - busy_spus = atomic_read(&cbe_spu_info[cpu_to_node(cpu)].busy_spus); - - info->busy_spus = calc_load(info->busy_spus, EXP, busy_spus * FIXED_1); - pr_debug("cpu %d: busy_spus=%d, info->busy_spus=%ld\n", - cpu, busy_spus, info->busy_spus); - - return info->policy->max * info->busy_spus / FIXED_1; -} - -static void spu_gov_work(struct work_struct *work) -{ - struct spu_gov_info_struct *info; - int delay; - unsigned long target_freq; - - info = container_of(work, struct spu_gov_info_struct, work.work); - - /* after cancel_delayed_work_sync we unset info->policy */ - BUG_ON(info->policy == NULL); - - target_freq = calc_freq(info); - __cpufreq_driver_target(info->policy, target_freq, CPUFREQ_RELATION_H); - - delay = usecs_to_jiffies(info->poll_int); - schedule_delayed_work_on(info->policy->cpu, &info->work, delay); -} - -static void spu_gov_init_work(struct spu_gov_info_struct *info) -{ - int delay = usecs_to_jiffies(info->poll_int); - INIT_DEFERRABLE_WORK(&info->work, spu_gov_work); - schedule_delayed_work_on(info->policy->cpu, &info->work, delay); -} - -static void spu_gov_cancel_work(struct spu_gov_info_struct *info) -{ - cancel_delayed_work_sync(&info->work); -} - -static int spu_gov_start(struct cpufreq_policy *policy) -{ - unsigned int cpu = policy->cpu; - struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu); - struct spu_gov_info_struct *affected_info; - int i; - - if (!cpu_online(cpu)) { - printk(KERN_ERR "cpu %d is not online\n", cpu); - return -EINVAL; - } - - if (!policy->cur) { - printk(KERN_ERR "no cpu specified in policy\n"); - return -EINVAL; - } - - /* initialize spu_gov_info for all affected cpus */ - for_each_cpu(i, policy->cpus) { - affected_info = &per_cpu(spu_gov_info, i); - affected_info->policy = policy; - } - - info->poll_int = POLL_TIME; - - /* setup timer */ - spu_gov_init_work(info); - - return 0; -} - -static void spu_gov_stop(struct cpufreq_policy *policy) -{ - unsigned int cpu = policy->cpu; - struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu); - int i; - - /* cancel timer */ - spu_gov_cancel_work(info); - - /* clean spu_gov_info for all affected cpus */ - for_each_cpu (i, policy->cpus) { - info = &per_cpu(spu_gov_info, i); - info->policy = NULL; - } -} - -static struct cpufreq_governor spu_governor = { - .name = "spudemand", - .start = spu_gov_start, - .stop = spu_gov_stop, - .owner = THIS_MODULE, -}; -cpufreq_governor_init(spu_governor); -cpufreq_governor_exit(spu_governor); - -MODULE_DESCRIPTION("SPU-aware cpufreq governor for the cell processor"); -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>"); diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c deleted file mode 100644 index 03ee8152ee97..000000000000 --- a/arch/powerpc/platforms/cell/interrupt.c +++ /dev/null @@ -1,390 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Cell Internal Interrupt Controller - * - * Copyright (C) 2006 Benjamin Herrenschmidt (benh@kernel.crashing.org) - * IBM, Corp. - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 - * - * Author: Arnd Bergmann <arndb@de.ibm.com> - * - * TODO: - * - Fix various assumptions related to HW CPU numbers vs. 
linux CPU numbers - * vs node numbers in the setup code - * - Implement proper handling of maxcpus=1/2 (that is, routing of irqs from - * a non-active node to the active node) - */ - -#include <linux/interrupt.h> -#include <linux/irq.h> -#include <linux/irqdomain.h> -#include <linux/export.h> -#include <linux/percpu.h> -#include <linux/types.h> -#include <linux/ioport.h> -#include <linux/kernel_stat.h> -#include <linux/pgtable.h> -#include <linux/of_address.h> - -#include <asm/io.h> -#include <asm/ptrace.h> -#include <asm/machdep.h> -#include <asm/cell-regs.h> - -#include "interrupt.h" - -struct iic { - struct cbe_iic_thread_regs __iomem *regs; - u8 target_id; - u8 eoi_stack[16]; - int eoi_ptr; - struct device_node *node; -}; - -static DEFINE_PER_CPU(struct iic, cpu_iic); -#define IIC_NODE_COUNT 2 -static struct irq_domain *iic_host; - -/* Convert between "pending" bits and hw irq number */ -static irq_hw_number_t iic_pending_to_hwnum(struct cbe_iic_pending_bits bits) -{ - unsigned char unit = bits.source & 0xf; - unsigned char node = bits.source >> 4; - unsigned char class = bits.class & 3; - - /* Decode IPIs */ - if (bits.flags & CBE_IIC_IRQ_IPI) - return IIC_IRQ_TYPE_IPI | (bits.prio >> 4); - else - return (node << IIC_IRQ_NODE_SHIFT) | (class << 4) | unit; -} - -static void iic_mask(struct irq_data *d) -{ -} - -static void iic_unmask(struct irq_data *d) -{ -} - -static void iic_eoi(struct irq_data *d) -{ - struct iic *iic = this_cpu_ptr(&cpu_iic); - out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]); - BUG_ON(iic->eoi_ptr < 0); -} - -static struct irq_chip iic_chip = { - .name = "CELL-IIC", - .irq_mask = iic_mask, - .irq_unmask = iic_unmask, - .irq_eoi = iic_eoi, -}; - - -static void iic_ioexc_eoi(struct irq_data *d) -{ -} - -static void iic_ioexc_cascade(struct irq_desc *desc) -{ - struct irq_chip *chip = irq_desc_get_chip(desc); - struct cbe_iic_regs __iomem *node_iic = - (void __iomem *)irq_desc_get_handler_data(desc); - unsigned int irq = irq_desc_get_irq(desc); - unsigned int base = (irq & 0xffffff00) | IIC_IRQ_TYPE_IOEXC; - unsigned long bits, ack; - int cascade; - - for (;;) { - bits = in_be64(&node_iic->iic_is); - if (bits == 0) - break; - /* pre-ack edge interrupts */ - ack = bits & IIC_ISR_EDGE_MASK; - if (ack) - out_be64(&node_iic->iic_is, ack); - /* handle them */ - for (cascade = 63; cascade >= 0; cascade--) - if (bits & (0x8000000000000000UL >> cascade)) - generic_handle_domain_irq(iic_host, - base | cascade); - /* post-ack level interrupts */ - ack = bits & ~IIC_ISR_EDGE_MASK; - if (ack) - out_be64(&node_iic->iic_is, ack); - } - chip->irq_eoi(&desc->irq_data); -} - - -static struct irq_chip iic_ioexc_chip = { - .name = "CELL-IOEX", - .irq_mask = iic_mask, - .irq_unmask = iic_unmask, - .irq_eoi = iic_ioexc_eoi, -}; - -/* Get an IRQ number from the pending state register of the IIC */ -static unsigned int iic_get_irq(void) -{ - struct cbe_iic_pending_bits pending; - struct iic *iic; - unsigned int virq; - - iic = this_cpu_ptr(&cpu_iic); - *(unsigned long *) &pending = - in_be64((u64 __iomem *) &iic->regs->pending_destr); - if (!(pending.flags & CBE_IIC_IRQ_VALID)) - return 0; - virq = irq_linear_revmap(iic_host, iic_pending_to_hwnum(pending)); - if (!virq) - return 0; - iic->eoi_stack[++iic->eoi_ptr] = pending.prio; - BUG_ON(iic->eoi_ptr > 15); - return virq; -} - -void iic_setup_cpu(void) -{ - out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff); -} - -u8 iic_get_target_id(int cpu) -{ - return per_cpu(cpu_iic, cpu).target_id; -} - 
-EXPORT_SYMBOL_GPL(iic_get_target_id); - -#ifdef CONFIG_SMP - -/* Use the highest interrupt priorities for IPI */ -static inline int iic_msg_to_irq(int msg) -{ - return IIC_IRQ_TYPE_IPI + 0xf - msg; -} - -void iic_message_pass(int cpu, int msg) -{ - out_be64(&per_cpu(cpu_iic, cpu).regs->generate, (0xf - msg) << 4); -} - -static void iic_request_ipi(int msg) -{ - int virq; - - virq = irq_create_mapping(iic_host, iic_msg_to_irq(msg)); - if (!virq) { - printk(KERN_ERR - "iic: failed to map IPI %s\n", smp_ipi_name[msg]); - return; - } - - /* - * If smp_request_message_ipi encounters an error it will notify - * the error. If a message is not needed it will return non-zero. - */ - if (smp_request_message_ipi(virq, msg)) - irq_dispose_mapping(virq); -} - -void iic_request_IPIs(void) -{ - iic_request_ipi(PPC_MSG_CALL_FUNCTION); - iic_request_ipi(PPC_MSG_RESCHEDULE); - iic_request_ipi(PPC_MSG_TICK_BROADCAST); - iic_request_ipi(PPC_MSG_NMI_IPI); -} - -#endif /* CONFIG_SMP */ - - -static int iic_host_match(struct irq_domain *h, struct device_node *node, - enum irq_domain_bus_token bus_token) -{ - return of_device_is_compatible(node, - "IBM,CBEA-Internal-Interrupt-Controller"); -} - -static int iic_host_map(struct irq_domain *h, unsigned int virq, - irq_hw_number_t hw) -{ - switch (hw & IIC_IRQ_TYPE_MASK) { - case IIC_IRQ_TYPE_IPI: - irq_set_chip_and_handler(virq, &iic_chip, handle_percpu_irq); - break; - case IIC_IRQ_TYPE_IOEXC: - irq_set_chip_and_handler(virq, &iic_ioexc_chip, - handle_edge_eoi_irq); - break; - default: - irq_set_chip_and_handler(virq, &iic_chip, handle_edge_eoi_irq); - } - return 0; -} - -static int iic_host_xlate(struct irq_domain *h, struct device_node *ct, - const u32 *intspec, unsigned int intsize, - irq_hw_number_t *out_hwirq, unsigned int *out_flags) - -{ - unsigned int node, ext, unit, class; - const u32 *val; - - if (!of_device_is_compatible(ct, - "IBM,CBEA-Internal-Interrupt-Controller")) - return -ENODEV; - if (intsize != 1) - return -ENODEV; - val = of_get_property(ct, "#interrupt-cells", NULL); - if (val == NULL || *val != 1) - return -ENODEV; - - node = intspec[0] >> 24; - ext = (intspec[0] >> 16) & 0xff; - class = (intspec[0] >> 8) & 0xff; - unit = intspec[0] & 0xff; - - /* Check if node is in supported range */ - if (node > 1) - return -EINVAL; - - /* Build up interrupt number, special case for IO exceptions */ - *out_hwirq = (node << IIC_IRQ_NODE_SHIFT); - if (unit == IIC_UNIT_IIC && class == 1) - *out_hwirq |= IIC_IRQ_TYPE_IOEXC | ext; - else - *out_hwirq |= IIC_IRQ_TYPE_NORMAL | - (class << IIC_IRQ_CLASS_SHIFT) | unit; - - /* Dummy flags, ignored by iic code */ - *out_flags = IRQ_TYPE_EDGE_RISING; - - return 0; -} - -static const struct irq_domain_ops iic_host_ops = { - .match = iic_host_match, - .map = iic_host_map, - .xlate = iic_host_xlate, -}; - -static void __init init_one_iic(unsigned int hw_cpu, unsigned long addr, - struct device_node *node) -{ - /* XXX FIXME: should locate the linux CPU number from the HW cpu - * number properly. We are lucky for now - */ - struct iic *iic = &per_cpu(cpu_iic, hw_cpu); - - iic->regs = ioremap(addr, sizeof(struct cbe_iic_thread_regs)); - BUG_ON(iic->regs == NULL); - - iic->target_id = ((hw_cpu & 2) << 3) | ((hw_cpu & 1) ? 
0xf : 0xe); - iic->eoi_stack[0] = 0xff; - iic->node = of_node_get(node); - out_be64(&iic->regs->prio, 0); - - printk(KERN_INFO "IIC for CPU %d target id 0x%x : %pOF\n", - hw_cpu, iic->target_id, node); -} - -static int __init setup_iic(void) -{ - struct device_node *dn; - struct resource r0, r1; - unsigned int node, cascade, found = 0; - struct cbe_iic_regs __iomem *node_iic; - const u32 *np; - - for_each_node_by_name(dn, "interrupt-controller") { - if (!of_device_is_compatible(dn, - "IBM,CBEA-Internal-Interrupt-Controller")) - continue; - np = of_get_property(dn, "ibm,interrupt-server-ranges", NULL); - if (np == NULL) { - printk(KERN_WARNING "IIC: CPU association not found\n"); - of_node_put(dn); - return -ENODEV; - } - if (of_address_to_resource(dn, 0, &r0) || - of_address_to_resource(dn, 1, &r1)) { - printk(KERN_WARNING "IIC: Can't resolve addresses\n"); - of_node_put(dn); - return -ENODEV; - } - found++; - init_one_iic(np[0], r0.start, dn); - init_one_iic(np[1], r1.start, dn); - - /* Setup cascade for IO exceptions. XXX cleanup tricks to get - * node vs CPU etc... - * Note that we configure the IIC_IRR here with a hard coded - * priority of 1. We might want to improve that later. - */ - node = np[0] >> 1; - node_iic = cbe_get_cpu_iic_regs(np[0]); - cascade = node << IIC_IRQ_NODE_SHIFT; - cascade |= 1 << IIC_IRQ_CLASS_SHIFT; - cascade |= IIC_UNIT_IIC; - cascade = irq_create_mapping(iic_host, cascade); - if (!cascade) - continue; - /* - * irq_data is a generic pointer that gets passed back - * to us later, so the forced cast is fine. - */ - irq_set_handler_data(cascade, (void __force *)node_iic); - irq_set_chained_handler(cascade, iic_ioexc_cascade); - out_be64(&node_iic->iic_ir, - (1 << 12) /* priority */ | - (node << 4) /* dest node */ | - IIC_UNIT_THREAD_0 /* route them to thread 0 */); - /* Flush pending (make sure it triggers if there is - * anything pending - */ - out_be64(&node_iic->iic_is, 0xfffffffffffffffful); - } - - if (found) - return 0; - else - return -ENODEV; -} - -void __init iic_init_IRQ(void) -{ - /* Setup an irq host data structure */ - iic_host = irq_domain_add_linear(NULL, IIC_SOURCE_COUNT, &iic_host_ops, - NULL); - BUG_ON(iic_host == NULL); - irq_set_default_host(iic_host); - - /* Discover and initialize iics */ - if (setup_iic() < 0) - panic("IIC: Failed to initialize !\n"); - - /* Set master interrupt handling function */ - ppc_md.get_irq = iic_get_irq; - - /* Enable on current CPU */ - iic_setup_cpu(); -} - -void iic_set_interrupt_routing(int cpu, int thread, int priority) -{ - struct cbe_iic_regs __iomem *iic_regs = cbe_get_cpu_iic_regs(cpu); - u64 iic_ir = 0; - int node = cpu >> 1; - - /* Set which node and thread will handle the next interrupt */ - iic_ir |= CBE_IIC_IR_PRIO(priority) | - CBE_IIC_IR_DEST_NODE(node); - if (thread == 0) - iic_ir |= CBE_IIC_IR_DEST_UNIT(CBE_IIC_IR_PT_0); - else - iic_ir |= CBE_IIC_IR_DEST_UNIT(CBE_IIC_IR_PT_1); - out_be64(&iic_regs->iic_ir, iic_ir); -} diff --git a/arch/powerpc/platforms/cell/interrupt.h b/arch/powerpc/platforms/cell/interrupt.h deleted file mode 100644 index a47902248541..000000000000 --- a/arch/powerpc/platforms/cell/interrupt.h +++ /dev/null @@ -1,90 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef ASM_CELL_PIC_H -#define ASM_CELL_PIC_H -#ifdef __KERNEL__ -/* - * Mapping of IIC pending bits into per-node interrupt numbers. - * - * Interrupt numbers are in the range 0...0x1ff where the top bit - * (0x100) represent the source node. 
Only 2 nodes are supported with - * the current code though it's trivial to extend that if necessary using - * higher level bits - * - * The bottom 8 bits are split into 2 type bits and 6 data bits that - * depend on the type: - * - * 00 (0x00 | data) : normal interrupt. data is (class << 4) | source - * 01 (0x40 | data) : IO exception. data is the exception number as - * defined by bit numbers in IIC_SR - * 10 (0x80 | data) : IPI. data is the IPI number (obtained from the priority) - * and node is always 0 (IPIs are per-cpu, their source is - * not relevant) - * 11 (0xc0 | data) : reserved - * - * In addition, interrupt number 0x80000000 is defined as always invalid - * (that is the node field is expected to never extend to move than 23 bits) - * - */ - -enum { - IIC_IRQ_INVALID = 0x80000000u, - IIC_IRQ_NODE_MASK = 0x100, - IIC_IRQ_NODE_SHIFT = 8, - IIC_IRQ_MAX = 0x1ff, - IIC_IRQ_TYPE_MASK = 0xc0, - IIC_IRQ_TYPE_NORMAL = 0x00, - IIC_IRQ_TYPE_IOEXC = 0x40, - IIC_IRQ_TYPE_IPI = 0x80, - IIC_IRQ_CLASS_SHIFT = 4, - IIC_IRQ_CLASS_0 = 0x00, - IIC_IRQ_CLASS_1 = 0x10, - IIC_IRQ_CLASS_2 = 0x20, - IIC_SOURCE_COUNT = 0x200, - - /* Here are defined the various source/dest units. Avoid using those - * definitions if you can, they are mostly here for reference - */ - IIC_UNIT_SPU_0 = 0x4, - IIC_UNIT_SPU_1 = 0x7, - IIC_UNIT_SPU_2 = 0x3, - IIC_UNIT_SPU_3 = 0x8, - IIC_UNIT_SPU_4 = 0x2, - IIC_UNIT_SPU_5 = 0x9, - IIC_UNIT_SPU_6 = 0x1, - IIC_UNIT_SPU_7 = 0xa, - IIC_UNIT_IOC_0 = 0x0, - IIC_UNIT_IOC_1 = 0xb, - IIC_UNIT_THREAD_0 = 0xe, /* target only */ - IIC_UNIT_THREAD_1 = 0xf, /* target only */ - IIC_UNIT_IIC = 0xe, /* source only (IO exceptions) */ - - /* Base numbers for the external interrupts */ - IIC_IRQ_EXT_IOIF0 = - IIC_IRQ_TYPE_NORMAL | IIC_IRQ_CLASS_2 | IIC_UNIT_IOC_0, - IIC_IRQ_EXT_IOIF1 = - IIC_IRQ_TYPE_NORMAL | IIC_IRQ_CLASS_2 | IIC_UNIT_IOC_1, - - /* Base numbers for the IIC_ISR interrupts */ - IIC_IRQ_IOEX_TMI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 63, - IIC_IRQ_IOEX_PMI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 62, - IIC_IRQ_IOEX_ATI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 61, - IIC_IRQ_IOEX_MATBFI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 60, - IIC_IRQ_IOEX_ELDI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 59, - - /* Which bits in IIC_ISR are edge sensitive */ - IIC_ISR_EDGE_MASK = 0x4ul, -}; - -extern void iic_init_IRQ(void); -extern void iic_message_pass(int cpu, int msg); -extern void iic_request_IPIs(void); -extern void iic_setup_cpu(void); - -extern u8 iic_get_target_id(int cpu); - -extern void spider_init_IRQ(void); - -extern void iic_set_interrupt_routing(int cpu, int thread, int priority); - -#endif -#endif /* ASM_CELL_PIC_H */ diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c deleted file mode 100644 index 62c9679b8ca3..000000000000 --- a/arch/powerpc/platforms/cell/iommu.c +++ /dev/null @@ -1,1060 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * IOMMU implementation for Cell Broadband Processor Architecture - * - * (C) Copyright IBM Corporation 2006-2008 - * - * Author: Jeremy Kerr <jk@ozlabs.org> - */ - -#undef DEBUG - -#include <linux/kernel.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/irqdomain.h> -#include <linux/notifier.h> -#include <linux/of.h> -#include <linux/of_address.h> -#include <linux/platform_device.h> -#include <linux/slab.h> -#include <linux/memblock.h> - -#include <asm/prom.h> -#include <asm/iommu.h> -#include <asm/machdep.h> -#include <asm/pci-bridge.h> -#include <asm/udbg.h> -#include 
<asm/firmware.h> -#include <asm/cell-regs.h> - -#include "cell.h" -#include "interrupt.h" - -/* Define CELL_IOMMU_REAL_UNMAP to actually unmap non-used pages - * instead of leaving them mapped to some dummy page. This can be - * enabled once the appropriate workarounds for spider bugs have - * been enabled - */ -#define CELL_IOMMU_REAL_UNMAP - -/* Define CELL_IOMMU_STRICT_PROTECTION to enforce protection of - * IO PTEs based on the transfer direction. That can be enabled - * once spider-net has been fixed to pass the correct direction - * to the DMA mapping functions - */ -#define CELL_IOMMU_STRICT_PROTECTION - - -#define NR_IOMMUS 2 - -/* IOC mmap registers */ -#define IOC_Reg_Size 0x2000 - -#define IOC_IOPT_CacheInvd 0x908 -#define IOC_IOPT_CacheInvd_NE_Mask 0xffe0000000000000ul -#define IOC_IOPT_CacheInvd_IOPTE_Mask 0x000003fffffffff8ul -#define IOC_IOPT_CacheInvd_Busy 0x0000000000000001ul - -#define IOC_IOST_Origin 0x918 -#define IOC_IOST_Origin_E 0x8000000000000000ul -#define IOC_IOST_Origin_HW 0x0000000000000800ul -#define IOC_IOST_Origin_HL 0x0000000000000400ul - -#define IOC_IO_ExcpStat 0x920 -#define IOC_IO_ExcpStat_V 0x8000000000000000ul -#define IOC_IO_ExcpStat_SPF_Mask 0x6000000000000000ul -#define IOC_IO_ExcpStat_SPF_S 0x6000000000000000ul -#define IOC_IO_ExcpStat_SPF_P 0x2000000000000000ul -#define IOC_IO_ExcpStat_ADDR_Mask 0x00000007fffff000ul -#define IOC_IO_ExcpStat_RW_Mask 0x0000000000000800ul -#define IOC_IO_ExcpStat_IOID_Mask 0x00000000000007fful - -#define IOC_IO_ExcpMask 0x928 -#define IOC_IO_ExcpMask_SFE 0x4000000000000000ul -#define IOC_IO_ExcpMask_PFE 0x2000000000000000ul - -#define IOC_IOCmd_Offset 0x1000 - -#define IOC_IOCmd_Cfg 0xc00 -#define IOC_IOCmd_Cfg_TE 0x0000800000000000ul - - -/* Segment table entries */ -#define IOSTE_V 0x8000000000000000ul /* valid */ -#define IOSTE_H 0x4000000000000000ul /* cache hint */ -#define IOSTE_PT_Base_RPN_Mask 0x3ffffffffffff000ul /* base RPN of IOPT */ -#define IOSTE_NPPT_Mask 0x0000000000000fe0ul /* no. 
pages in IOPT */ -#define IOSTE_PS_Mask 0x0000000000000007ul /* page size */ -#define IOSTE_PS_4K 0x0000000000000001ul /* - 4kB */ -#define IOSTE_PS_64K 0x0000000000000003ul /* - 64kB */ -#define IOSTE_PS_1M 0x0000000000000005ul /* - 1MB */ -#define IOSTE_PS_16M 0x0000000000000007ul /* - 16MB */ - - -/* IOMMU sizing */ -#define IO_SEGMENT_SHIFT 28 -#define IO_PAGENO_BITS(shift) (IO_SEGMENT_SHIFT - (shift)) - -/* The high bit needs to be set on every DMA address */ -#define SPIDER_DMA_OFFSET 0x80000000ul - -struct iommu_window { - struct list_head list; - struct cbe_iommu *iommu; - unsigned long offset; - unsigned long size; - unsigned int ioid; - struct iommu_table table; -}; - -#define NAMESIZE 8 -struct cbe_iommu { - int nid; - char name[NAMESIZE]; - void __iomem *xlate_regs; - void __iomem *cmd_regs; - unsigned long *stab; - unsigned long *ptab; - void *pad_page; - struct list_head windows; -}; - -/* Static array of iommus, one per node - * each contains a list of windows, keyed from dma_window property - * - on bus setup, look for a matching window, or create one - * - on dev setup, assign iommu_table ptr - */ -static struct cbe_iommu iommus[NR_IOMMUS]; -static int cbe_nr_iommus; - -static void invalidate_tce_cache(struct cbe_iommu *iommu, unsigned long *pte, - long n_ptes) -{ - u64 __iomem *reg; - u64 val; - long n; - - reg = iommu->xlate_regs + IOC_IOPT_CacheInvd; - - while (n_ptes > 0) { - /* we can invalidate up to 1 << 11 PTEs at once */ - n = min(n_ptes, 1l << 11); - val = (((n /*- 1*/) << 53) & IOC_IOPT_CacheInvd_NE_Mask) - | (__pa(pte) & IOC_IOPT_CacheInvd_IOPTE_Mask) - | IOC_IOPT_CacheInvd_Busy; - - out_be64(reg, val); - while (in_be64(reg) & IOC_IOPT_CacheInvd_Busy) - ; - - n_ptes -= n; - pte += n; - } -} - -static int tce_build_cell(struct iommu_table *tbl, long index, long npages, - unsigned long uaddr, enum dma_data_direction direction, - unsigned long attrs) -{ - int i; - unsigned long *io_pte, base_pte; - struct iommu_window *window = - container_of(tbl, struct iommu_window, table); - - /* implementing proper protection causes problems with the spidernet - * driver - check mapping directions later, but allow read & write by - * default for now.*/ -#ifdef CELL_IOMMU_STRICT_PROTECTION - /* to avoid referencing a global, we use a trick here to setup the - * protection bit. "prot" is setup to be 3 fields of 4 bits appended - * together for each of the 3 supported direction values. It is then - * shifted left so that the fields matching the desired direction - * lands on the appropriate bits, and other bits are masked out. 
- */ - const unsigned long prot = 0xc48; - base_pte = - ((prot << (52 + 4 * direction)) & - (CBE_IOPTE_PP_W | CBE_IOPTE_PP_R)) | - CBE_IOPTE_M | CBE_IOPTE_SO_RW | - (window->ioid & CBE_IOPTE_IOID_Mask); -#else - base_pte = CBE_IOPTE_PP_W | CBE_IOPTE_PP_R | CBE_IOPTE_M | - CBE_IOPTE_SO_RW | (window->ioid & CBE_IOPTE_IOID_Mask); -#endif - if (unlikely(attrs & DMA_ATTR_WEAK_ORDERING)) - base_pte &= ~CBE_IOPTE_SO_RW; - - io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset); - - for (i = 0; i < npages; i++, uaddr += (1 << tbl->it_page_shift)) - io_pte[i] = base_pte | (__pa(uaddr) & CBE_IOPTE_RPN_Mask); - - mb(); - - invalidate_tce_cache(window->iommu, io_pte, npages); - - pr_debug("tce_build_cell(index=%lx,n=%lx,dir=%d,base_pte=%lx)\n", - index, npages, direction, base_pte); - return 0; -} - -static void tce_free_cell(struct iommu_table *tbl, long index, long npages) -{ - - int i; - unsigned long *io_pte, pte; - struct iommu_window *window = - container_of(tbl, struct iommu_window, table); - - pr_debug("tce_free_cell(index=%lx,n=%lx)\n", index, npages); - -#ifdef CELL_IOMMU_REAL_UNMAP - pte = 0; -#else - /* spider bridge does PCI reads after freeing - insert a mapping - * to a scratch page instead of an invalid entry */ - pte = CBE_IOPTE_PP_R | CBE_IOPTE_M | CBE_IOPTE_SO_RW | - __pa(window->iommu->pad_page) | - (window->ioid & CBE_IOPTE_IOID_Mask); -#endif - - io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset); - - for (i = 0; i < npages; i++) - io_pte[i] = pte; - - mb(); - - invalidate_tce_cache(window->iommu, io_pte, npages); -} - -static irqreturn_t ioc_interrupt(int irq, void *data) -{ - unsigned long stat, spf; - struct cbe_iommu *iommu = data; - - stat = in_be64(iommu->xlate_regs + IOC_IO_ExcpStat); - spf = stat & IOC_IO_ExcpStat_SPF_Mask; - - /* Might want to rate limit it */ - printk(KERN_ERR "iommu: DMA exception 0x%016lx\n", stat); - printk(KERN_ERR " V=%d, SPF=[%c%c], RW=%s, IOID=0x%04x\n", - !!(stat & IOC_IO_ExcpStat_V), - (spf == IOC_IO_ExcpStat_SPF_S) ? 'S' : ' ', - (spf == IOC_IO_ExcpStat_SPF_P) ? 'P' : ' ', - (stat & IOC_IO_ExcpStat_RW_Mask) ? 
"Read" : "Write", - (unsigned int)(stat & IOC_IO_ExcpStat_IOID_Mask)); - printk(KERN_ERR " page=0x%016lx\n", - stat & IOC_IO_ExcpStat_ADDR_Mask); - - /* clear interrupt */ - stat &= ~IOC_IO_ExcpStat_V; - out_be64(iommu->xlate_regs + IOC_IO_ExcpStat, stat); - - return IRQ_HANDLED; -} - -static int __init cell_iommu_find_ioc(int nid, unsigned long *base) -{ - struct device_node *np; - struct resource r; - - *base = 0; - - /* First look for new style /be nodes */ - for_each_node_by_name(np, "ioc") { - if (of_node_to_nid(np) != nid) - continue; - if (of_address_to_resource(np, 0, &r)) { - printk(KERN_ERR "iommu: can't get address for %pOF\n", - np); - continue; - } - *base = r.start; - of_node_put(np); - return 0; - } - - /* Ok, let's try the old way */ - for_each_node_by_type(np, "cpu") { - const unsigned int *nidp; - const unsigned long *tmp; - - nidp = of_get_property(np, "node-id", NULL); - if (nidp && *nidp == nid) { - tmp = of_get_property(np, "ioc-translation", NULL); - if (tmp) { - *base = *tmp; - of_node_put(np); - return 0; - } - } - } - - return -ENODEV; -} - -static void __init cell_iommu_setup_stab(struct cbe_iommu *iommu, - unsigned long dbase, unsigned long dsize, - unsigned long fbase, unsigned long fsize) -{ - struct page *page; - unsigned long segments, stab_size; - - segments = max(dbase + dsize, fbase + fsize) >> IO_SEGMENT_SHIFT; - - pr_debug("%s: iommu[%d]: segments: %lu\n", - __func__, iommu->nid, segments); - - /* set up the segment table */ - stab_size = segments * sizeof(unsigned long); - page = alloc_pages_node(iommu->nid, GFP_KERNEL, get_order(stab_size)); - BUG_ON(!page); - iommu->stab = page_address(page); - memset(iommu->stab, 0, stab_size); -} - -static unsigned long *__init cell_iommu_alloc_ptab(struct cbe_iommu *iommu, - unsigned long base, unsigned long size, unsigned long gap_base, - unsigned long gap_size, unsigned long page_shift) -{ - struct page *page; - int i; - unsigned long reg, segments, pages_per_segment, ptab_size, - n_pte_pages, start_seg, *ptab; - - start_seg = base >> IO_SEGMENT_SHIFT; - segments = size >> IO_SEGMENT_SHIFT; - pages_per_segment = 1ull << IO_PAGENO_BITS(page_shift); - /* PTEs for each segment must start on a 4K boundary */ - pages_per_segment = max(pages_per_segment, - (1 << 12) / sizeof(unsigned long)); - - ptab_size = segments * pages_per_segment * sizeof(unsigned long); - pr_debug("%s: iommu[%d]: ptab_size: %lu, order: %d\n", __func__, - iommu->nid, ptab_size, get_order(ptab_size)); - page = alloc_pages_node(iommu->nid, GFP_KERNEL, get_order(ptab_size)); - BUG_ON(!page); - - ptab = page_address(page); - memset(ptab, 0, ptab_size); - - /* number of 4K pages needed for a page table */ - n_pte_pages = (pages_per_segment * sizeof(unsigned long)) >> 12; - - pr_debug("%s: iommu[%d]: stab at %p, ptab at %p, n_pte_pages: %lu\n", - __func__, iommu->nid, iommu->stab, ptab, - n_pte_pages); - - /* initialise the STEs */ - reg = IOSTE_V | ((n_pte_pages - 1) << 5); - - switch (page_shift) { - case 12: reg |= IOSTE_PS_4K; break; - case 16: reg |= IOSTE_PS_64K; break; - case 20: reg |= IOSTE_PS_1M; break; - case 24: reg |= IOSTE_PS_16M; break; - default: BUG(); - } - - gap_base = gap_base >> IO_SEGMENT_SHIFT; - gap_size = gap_size >> IO_SEGMENT_SHIFT; - - pr_debug("Setting up IOMMU stab:\n"); - for (i = start_seg; i < (start_seg + segments); i++) { - if (i >= gap_base && i < (gap_base + gap_size)) { - pr_debug("\toverlap at %d, skipping\n", i); - continue; - } - iommu->stab[i] = reg | (__pa(ptab) + (n_pte_pages << 12) * - (i - start_seg)); - 
pr_debug("\t[%d] 0x%016lx\n", i, iommu->stab[i]); - } - - return ptab; -} - -static void __init cell_iommu_enable_hardware(struct cbe_iommu *iommu) -{ - int ret; - unsigned long reg, xlate_base; - unsigned int virq; - - if (cell_iommu_find_ioc(iommu->nid, &xlate_base)) - panic("%s: missing IOC register mappings for node %d\n", - __func__, iommu->nid); - - iommu->xlate_regs = ioremap(xlate_base, IOC_Reg_Size); - iommu->cmd_regs = iommu->xlate_regs + IOC_IOCmd_Offset; - - /* ensure that the STEs have updated */ - mb(); - - /* setup interrupts for the iommu. */ - reg = in_be64(iommu->xlate_regs + IOC_IO_ExcpStat); - out_be64(iommu->xlate_regs + IOC_IO_ExcpStat, - reg & ~IOC_IO_ExcpStat_V); - out_be64(iommu->xlate_regs + IOC_IO_ExcpMask, - IOC_IO_ExcpMask_PFE | IOC_IO_ExcpMask_SFE); - - virq = irq_create_mapping(NULL, - IIC_IRQ_IOEX_ATI | (iommu->nid << IIC_IRQ_NODE_SHIFT)); - BUG_ON(!virq); - - ret = request_irq(virq, ioc_interrupt, 0, iommu->name, iommu); - BUG_ON(ret); - - /* set the IOC segment table origin register (and turn on the iommu) */ - reg = IOC_IOST_Origin_E | __pa(iommu->stab) | IOC_IOST_Origin_HW; - out_be64(iommu->xlate_regs + IOC_IOST_Origin, reg); - in_be64(iommu->xlate_regs + IOC_IOST_Origin); - - /* turn on IO translation */ - reg = in_be64(iommu->cmd_regs + IOC_IOCmd_Cfg) | IOC_IOCmd_Cfg_TE; - out_be64(iommu->cmd_regs + IOC_IOCmd_Cfg, reg); -} - -static void __init cell_iommu_setup_hardware(struct cbe_iommu *iommu, - unsigned long base, unsigned long size) -{ - cell_iommu_setup_stab(iommu, base, size, 0, 0); - iommu->ptab = cell_iommu_alloc_ptab(iommu, base, size, 0, 0, - IOMMU_PAGE_SHIFT_4K); - cell_iommu_enable_hardware(iommu); -} - -static inline u32 cell_iommu_get_ioid(struct device_node *np) -{ - const u32 *ioid; - - ioid = of_get_property(np, "ioid", NULL); - if (ioid == NULL) { - printk(KERN_WARNING "iommu: missing ioid for %pOF using 0\n", - np); - return 0; - } - - return *ioid; -} - -static struct iommu_table_ops cell_iommu_ops = { - .set = tce_build_cell, - .clear = tce_free_cell -}; - -static struct iommu_window * __init -cell_iommu_setup_window(struct cbe_iommu *iommu, struct device_node *np, - unsigned long offset, unsigned long size, - unsigned long pte_offset) -{ - struct iommu_window *window; - struct page *page; - u32 ioid; - - ioid = cell_iommu_get_ioid(np); - - window = kzalloc_node(sizeof(*window), GFP_KERNEL, iommu->nid); - BUG_ON(window == NULL); - - window->offset = offset; - window->size = size; - window->ioid = ioid; - window->iommu = iommu; - - window->table.it_blocksize = 16; - window->table.it_base = (unsigned long)iommu->ptab; - window->table.it_index = iommu->nid; - window->table.it_page_shift = IOMMU_PAGE_SHIFT_4K; - window->table.it_offset = - (offset >> window->table.it_page_shift) + pte_offset; - window->table.it_size = size >> window->table.it_page_shift; - window->table.it_ops = &cell_iommu_ops; - - if (!iommu_init_table(&window->table, iommu->nid, 0, 0)) - panic("Failed to initialize iommu table"); - - pr_debug("\tioid %d\n", window->ioid); - pr_debug("\tblocksize %ld\n", window->table.it_blocksize); - pr_debug("\tbase 0x%016lx\n", window->table.it_base); - pr_debug("\toffset 0x%lx\n", window->table.it_offset); - pr_debug("\tsize %ld\n", window->table.it_size); - - list_add(&window->list, &iommu->windows); - - if (offset != 0) - return window; - - /* We need to map and reserve the first IOMMU page since it's used - * by the spider workaround. In theory, we only need to do that when - * running on spider but it doesn't really matter. 
- * - * This code also assumes that we have a window that starts at 0, - * which is the case on all spider based blades. - */ - page = alloc_pages_node(iommu->nid, GFP_KERNEL, 0); - BUG_ON(!page); - iommu->pad_page = page_address(page); - clear_page(iommu->pad_page); - - __set_bit(0, window->table.it_map); - tce_build_cell(&window->table, window->table.it_offset, 1, - (unsigned long)iommu->pad_page, DMA_TO_DEVICE, 0); - - return window; -} - -static struct cbe_iommu *cell_iommu_for_node(int nid) -{ - int i; - - for (i = 0; i < cbe_nr_iommus; i++) - if (iommus[i].nid == nid) - return &iommus[i]; - return NULL; -} - -static unsigned long cell_dma_nommu_offset; - -static unsigned long dma_iommu_fixed_base; -static bool cell_iommu_enabled; - -/* iommu_fixed_is_weak is set if booted with iommu_fixed=weak */ -bool iommu_fixed_is_weak; - -static struct iommu_table *cell_get_iommu_table(struct device *dev) -{ - struct iommu_window *window; - struct cbe_iommu *iommu; - - /* Current implementation uses the first window available in that - * node's iommu. We -might- do something smarter later though it may - * never be necessary - */ - iommu = cell_iommu_for_node(dev_to_node(dev)); - if (iommu == NULL || list_empty(&iommu->windows)) { - dev_err(dev, "iommu: missing iommu for %pOF (node %d)\n", - dev->of_node, dev_to_node(dev)); - return NULL; - } - window = list_entry(iommu->windows.next, struct iommu_window, list); - - return &window->table; -} - -static u64 cell_iommu_get_fixed_address(struct device *dev); - -static void cell_dma_dev_setup(struct device *dev) -{ - if (cell_iommu_enabled) { - u64 addr = cell_iommu_get_fixed_address(dev); - - if (addr != OF_BAD_ADDR) - dev->archdata.dma_offset = addr + dma_iommu_fixed_base; - set_iommu_table_base(dev, cell_get_iommu_table(dev)); - } else { - dev->archdata.dma_offset = cell_dma_nommu_offset; - } -} - -static void cell_pci_dma_dev_setup(struct pci_dev *dev) -{ - cell_dma_dev_setup(&dev->dev); -} - -static int cell_of_bus_notify(struct notifier_block *nb, unsigned long action, - void *data) -{ - struct device *dev = data; - - /* We are only interested in device addition */ - if (action != BUS_NOTIFY_ADD_DEVICE) - return 0; - - if (cell_iommu_enabled) - dev->dma_ops = &dma_iommu_ops; - cell_dma_dev_setup(dev); - return 0; -} - -static struct notifier_block cell_of_bus_notifier = { - .notifier_call = cell_of_bus_notify -}; - -static int __init cell_iommu_get_window(struct device_node *np, - unsigned long *base, - unsigned long *size) -{ - const __be32 *dma_window; - unsigned long index; - - /* Use ibm,dma-window if available, else, hard code ! */ - dma_window = of_get_property(np, "ibm,dma-window", NULL); - if (dma_window == NULL) { - *base = 0; - *size = 0x80000000u; - return -ENODEV; - } - - of_parse_dma_window(np, dma_window, &index, base, size); - return 0; -} - -static struct cbe_iommu * __init cell_iommu_alloc(struct device_node *np) -{ - struct cbe_iommu *iommu; - int nid, i; - - /* Get node ID */ - nid = of_node_to_nid(np); - if (nid < 0) { - printk(KERN_ERR "iommu: failed to get node for %pOF\n", - np); - return NULL; - } - pr_debug("iommu: setting up iommu for node %d (%pOF)\n", - nid, np); - - /* XXX todo: If we can have multiple windows on the same IOMMU, which - * isn't the case today, we probably want here to check whether the - * iommu for that node is already setup. - * However, there might be issue with getting the size right so let's - * ignore that for now. 
We might want to completely get rid of the - * multiple window support since the cell iommu supports per-page ioids - */ - - if (cbe_nr_iommus >= NR_IOMMUS) { - printk(KERN_ERR "iommu: too many IOMMUs detected ! (%pOF)\n", - np); - return NULL; - } - - /* Init base fields */ - i = cbe_nr_iommus++; - iommu = &iommus[i]; - iommu->stab = NULL; - iommu->nid = nid; - snprintf(iommu->name, sizeof(iommu->name), "iommu%d", i); - INIT_LIST_HEAD(&iommu->windows); - - return iommu; -} - -static void __init cell_iommu_init_one(struct device_node *np, - unsigned long offset) -{ - struct cbe_iommu *iommu; - unsigned long base, size; - - iommu = cell_iommu_alloc(np); - if (!iommu) - return; - - /* Obtain a window for it */ - cell_iommu_get_window(np, &base, &size); - - pr_debug("\ttranslating window 0x%lx...0x%lx\n", - base, base + size - 1); - - /* Initialize the hardware */ - cell_iommu_setup_hardware(iommu, base, size); - - /* Setup the iommu_table */ - cell_iommu_setup_window(iommu, np, base, size, - offset >> IOMMU_PAGE_SHIFT_4K); -} - -static void __init cell_disable_iommus(void) -{ - int node; - unsigned long base, val; - void __iomem *xregs, *cregs; - - /* Make sure IOC translation is disabled on all nodes */ - for_each_online_node(node) { - if (cell_iommu_find_ioc(node, &base)) - continue; - xregs = ioremap(base, IOC_Reg_Size); - if (xregs == NULL) - continue; - cregs = xregs + IOC_IOCmd_Offset; - - pr_debug("iommu: cleaning up iommu on node %d\n", node); - - out_be64(xregs + IOC_IOST_Origin, 0); - (void)in_be64(xregs + IOC_IOST_Origin); - val = in_be64(cregs + IOC_IOCmd_Cfg); - val &= ~IOC_IOCmd_Cfg_TE; - out_be64(cregs + IOC_IOCmd_Cfg, val); - (void)in_be64(cregs + IOC_IOCmd_Cfg); - - iounmap(xregs); - } -} - -static int __init cell_iommu_init_disabled(void) -{ - struct device_node *np = NULL; - unsigned long base = 0, size; - - /* When no iommu is present, we use direct DMA ops */ - - /* First make sure all IOC translation is turned off */ - cell_disable_iommus(); - - /* If we have no Axon, we set up the spider DMA magic offset */ - np = of_find_node_by_name(NULL, "axon"); - if (!np) - cell_dma_nommu_offset = SPIDER_DMA_OFFSET; - of_node_put(np); - - /* Now we need to check to see where the memory is mapped - * in PCI space. We assume that all busses use the same dma - * window which is always the case so far on Cell, thus we - * pick up the first pci-internal node we can find and check - * the DMA window from there. - */ - for_each_node_by_name(np, "axon") { - if (np->parent == NULL || np->parent->parent != NULL) - continue; - if (cell_iommu_get_window(np, &base, &size) == 0) - break; - } - if (np == NULL) { - for_each_node_by_name(np, "pci-internal") { - if (np->parent == NULL || np->parent->parent != NULL) - continue; - if (cell_iommu_get_window(np, &base, &size) == 0) - break; - } - } - of_node_put(np); - - /* If we found a DMA window, we check if it's big enough to enclose - * all of physical memory. 
If not, we force enable IOMMU - */ - if (np && size < memblock_end_of_DRAM()) { - printk(KERN_WARNING "iommu: force-enabled, dma window" - " (%ldMB) smaller than total memory (%lldMB)\n", - size >> 20, memblock_end_of_DRAM() >> 20); - return -ENODEV; - } - - cell_dma_nommu_offset += base; - - if (cell_dma_nommu_offset != 0) - cell_pci_controller_ops.dma_dev_setup = cell_pci_dma_dev_setup; - - printk("iommu: disabled, direct DMA offset is 0x%lx\n", - cell_dma_nommu_offset); - - return 0; -} - -/* - * Fixed IOMMU mapping support - * - * This code adds support for setting up a fixed IOMMU mapping on certain - * cell machines. For 64-bit devices this avoids the performance overhead of - * mapping and unmapping pages at runtime. 32-bit devices are unable to use - * the fixed mapping. - * - * The fixed mapping is established at boot, and maps all of physical memory - * 1:1 into device space at some offset. On machines with < 30 GB of memory - * we setup the fixed mapping immediately above the normal IOMMU window. - * - * For example a machine with 4GB of memory would end up with the normal - * IOMMU window from 0-2GB and the fixed mapping window from 2GB to 6GB. In - * this case a 64-bit device wishing to DMA to 1GB would be told to DMA to - * 3GB, plus any offset required by firmware. The firmware offset is encoded - * in the "dma-ranges" property. - * - * On machines with 30GB or more of memory, we are unable to place the fixed - * mapping above the normal IOMMU window as we would run out of address space. - * Instead we move the normal IOMMU window to coincide with the hash page - * table, this region does not need to be part of the fixed mapping as no - * device should ever be DMA'ing to it. We then setup the fixed mapping - * from 0 to 32GB. - */ - -static u64 cell_iommu_get_fixed_address(struct device *dev) -{ - u64 best_size, dev_addr = OF_BAD_ADDR; - struct device_node *np; - struct of_range_parser parser; - struct of_range range; - - /* We can be called for platform devices that have no of_node */ - np = of_node_get(dev->of_node); - if (!np) - goto out; - - while ((np = of_get_next_parent(np))) { - if (of_pci_dma_range_parser_init(&parser, np)) - continue; - - if (of_range_count(&parser)) - break; - } - - if (!np) { - dev_dbg(dev, "iommu: no dma-ranges found\n"); - goto out; - } - - best_size = 0; - for_each_of_range(&parser, &range) { - if (!range.cpu_addr) - continue; - - if (range.size > best_size) { - best_size = range.size; - dev_addr = range.bus_addr; - } - } - - if (!best_size) - dev_dbg(dev, "iommu: no suitable range found!\n"); - -out: - of_node_put(np); - - return dev_addr; -} - -static bool cell_pci_iommu_bypass_supported(struct pci_dev *pdev, u64 mask) -{ - return mask == DMA_BIT_MASK(64) && - cell_iommu_get_fixed_address(&pdev->dev) != OF_BAD_ADDR; -} - -static void __init insert_16M_pte(unsigned long addr, unsigned long *ptab, - unsigned long base_pte) -{ - unsigned long segment, offset; - - segment = addr >> IO_SEGMENT_SHIFT; - offset = (addr >> 24) - (segment << IO_PAGENO_BITS(24)); - ptab = ptab + (segment * (1 << 12) / sizeof(unsigned long)); - - pr_debug("iommu: addr %lx ptab %p segment %lx offset %lx\n", - addr, ptab, segment, offset); - - ptab[offset] = base_pte | (__pa(addr) & CBE_IOPTE_RPN_Mask); -} - -static void __init cell_iommu_setup_fixed_ptab(struct cbe_iommu *iommu, - struct device_node *np, unsigned long dbase, unsigned long dsize, - unsigned long fbase, unsigned long fsize) -{ - unsigned long base_pte, uaddr, ioaddr, *ptab; - - ptab = 
cell_iommu_alloc_ptab(iommu, fbase, fsize, dbase, dsize, 24); - - dma_iommu_fixed_base = fbase; - - pr_debug("iommu: mapping 0x%lx pages from 0x%lx\n", fsize, fbase); - - base_pte = CBE_IOPTE_PP_W | CBE_IOPTE_PP_R | CBE_IOPTE_M | - (cell_iommu_get_ioid(np) & CBE_IOPTE_IOID_Mask); - - if (iommu_fixed_is_weak) - pr_info("IOMMU: Using weak ordering for fixed mapping\n"); - else { - pr_info("IOMMU: Using strong ordering for fixed mapping\n"); - base_pte |= CBE_IOPTE_SO_RW; - } - - for (uaddr = 0; uaddr < fsize; uaddr += (1 << 24)) { - /* Don't touch the dynamic region */ - ioaddr = uaddr + fbase; - if (ioaddr >= dbase && ioaddr < (dbase + dsize)) { - pr_debug("iommu: fixed/dynamic overlap, skipping\n"); - continue; - } - - insert_16M_pte(uaddr, ptab, base_pte); - } - - mb(); -} - -static int __init cell_iommu_fixed_mapping_init(void) -{ - unsigned long dbase, dsize, fbase, fsize, hbase, hend; - struct cbe_iommu *iommu; - struct device_node *np; - - /* The fixed mapping is only supported on axon machines */ - np = of_find_node_by_name(NULL, "axon"); - of_node_put(np); - - if (!np) { - pr_debug("iommu: fixed mapping disabled, no axons found\n"); - return -1; - } - - /* We must have dma-ranges properties for fixed mapping to work */ - np = of_find_node_with_property(NULL, "dma-ranges"); - of_node_put(np); - - if (!np) { - pr_debug("iommu: no dma-ranges found, no fixed mapping\n"); - return -1; - } - - /* The default setup is to have the fixed mapping sit after the - * dynamic region, so find the top of the largest IOMMU window - * on any axon, then add the size of RAM and that's our max value. - * If that is > 32GB we have to do other shennanigans. - */ - fbase = 0; - for_each_node_by_name(np, "axon") { - cell_iommu_get_window(np, &dbase, &dsize); - fbase = max(fbase, dbase + dsize); - } - - fbase = ALIGN(fbase, 1 << IO_SEGMENT_SHIFT); - fsize = memblock_phys_mem_size(); - - if ((fbase + fsize) <= 0x800000000ul) - hbase = 0; /* use the device tree window */ - else { - /* If we're over 32 GB we need to cheat. We can't map all of - * RAM with the fixed mapping, and also fit the dynamic - * region. So try to place the dynamic region where the hash - * table sits, drivers never need to DMA to it, we don't - * need a fixed mapping for that area. - */ - if (!htab_address) { - pr_debug("iommu: htab is NULL, on LPAR? 
Huh?\n"); - return -1; - } - hbase = __pa(htab_address); - hend = hbase + htab_size_bytes; - - /* The window must start and end on a segment boundary */ - if ((hbase != ALIGN(hbase, 1 << IO_SEGMENT_SHIFT)) || - (hend != ALIGN(hend, 1 << IO_SEGMENT_SHIFT))) { - pr_debug("iommu: hash window not segment aligned\n"); - return -1; - } - - /* Check the hash window fits inside the real DMA window */ - for_each_node_by_name(np, "axon") { - cell_iommu_get_window(np, &dbase, &dsize); - - if (hbase < dbase || (hend > (dbase + dsize))) { - pr_debug("iommu: hash window doesn't fit in" - "real DMA window\n"); - of_node_put(np); - return -1; - } - } - - fbase = 0; - } - - /* Setup the dynamic regions */ - for_each_node_by_name(np, "axon") { - iommu = cell_iommu_alloc(np); - BUG_ON(!iommu); - - if (hbase == 0) - cell_iommu_get_window(np, &dbase, &dsize); - else { - dbase = hbase; - dsize = htab_size_bytes; - } - - printk(KERN_DEBUG "iommu: node %d, dynamic window 0x%lx-0x%lx " - "fixed window 0x%lx-0x%lx\n", iommu->nid, dbase, - dbase + dsize, fbase, fbase + fsize); - - cell_iommu_setup_stab(iommu, dbase, dsize, fbase, fsize); - iommu->ptab = cell_iommu_alloc_ptab(iommu, dbase, dsize, 0, 0, - IOMMU_PAGE_SHIFT_4K); - cell_iommu_setup_fixed_ptab(iommu, np, dbase, dsize, - fbase, fsize); - cell_iommu_enable_hardware(iommu); - cell_iommu_setup_window(iommu, np, dbase, dsize, 0); - } - - cell_pci_controller_ops.iommu_bypass_supported = - cell_pci_iommu_bypass_supported; - return 0; -} - -static int iommu_fixed_disabled; - -static int __init setup_iommu_fixed(char *str) -{ - struct device_node *pciep; - - if (strcmp(str, "off") == 0) - iommu_fixed_disabled = 1; - - /* If we can find a pcie-endpoint in the device tree assume that - * we're on a triblade or a CAB so by default the fixed mapping - * should be set to be weakly ordered; but only if the boot - * option WASN'T set for strong ordering - */ - pciep = of_find_node_by_type(NULL, "pcie-endpoint"); - - if (strcmp(str, "weak") == 0 || (pciep && strcmp(str, "strong") != 0)) - iommu_fixed_is_weak = true; - - of_node_put(pciep); - - return 1; -} -__setup("iommu_fixed=", setup_iommu_fixed); - -static int __init cell_iommu_init(void) -{ - struct device_node *np; - - /* If IOMMU is disabled or we have little enough RAM to not need - * to enable it, we setup a direct mapping. - * - * Note: should we make sure we have the IOMMU actually disabled ? - */ - if (iommu_is_off || - (!iommu_force_on && memblock_end_of_DRAM() <= 0x80000000ull)) - if (cell_iommu_init_disabled() == 0) - goto bail; - - /* Setup various callbacks */ - cell_pci_controller_ops.dma_dev_setup = cell_pci_dma_dev_setup; - - if (!iommu_fixed_disabled && cell_iommu_fixed_mapping_init() == 0) - goto done; - - /* Create an iommu for each /axon node. 
*/ - for_each_node_by_name(np, "axon") { - if (np->parent == NULL || np->parent->parent != NULL) - continue; - cell_iommu_init_one(np, 0); - } - - /* Create an iommu for each toplevel /pci-internal node for - * old hardware/firmware - */ - for_each_node_by_name(np, "pci-internal") { - if (np->parent == NULL || np->parent->parent != NULL) - continue; - cell_iommu_init_one(np, SPIDER_DMA_OFFSET); - } - done: - /* Setup default PCI iommu ops */ - set_pci_dma_ops(&dma_iommu_ops); - cell_iommu_enabled = true; - bail: - /* Register callbacks on OF platform device addition/removal - * to handle linking them to the right DMA operations - */ - bus_register_notifier(&platform_bus_type, &cell_of_bus_notifier); - - return 0; -} -machine_arch_initcall(cell, cell_iommu_init); diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c deleted file mode 100644 index 58d967ee38b3..000000000000 --- a/arch/powerpc/platforms/cell/pervasive.c +++ /dev/null @@ -1,125 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * CBE Pervasive Monitor and Debug - * - * (C) Copyright IBM Corporation 2005 - * - * Authors: Maximino Aguilar (maguilar@us.ibm.com) - * Michael N. Day (mnday@us.ibm.com) - */ - -#undef DEBUG - -#include <linux/interrupt.h> -#include <linux/irq.h> -#include <linux/percpu.h> -#include <linux/types.h> -#include <linux/kallsyms.h> -#include <linux/pgtable.h> - -#include <asm/io.h> -#include <asm/machdep.h> -#include <asm/reg.h> -#include <asm/cell-regs.h> -#include <asm/cpu_has_feature.h> - -#include "pervasive.h" -#include "ras.h" - -static void cbe_power_save(void) -{ - unsigned long ctrl, thread_switch_control; - - /* Ensure our interrupt state is properly tracked */ - if (!prep_irq_for_idle()) - return; - - ctrl = mfspr(SPRN_CTRLF); - - /* Enable DEC and EE interrupt request */ - thread_switch_control = mfspr(SPRN_TSC_CELL); - thread_switch_control |= TSC_CELL_EE_ENABLE | TSC_CELL_EE_BOOST; - - switch (ctrl & CTRL_CT) { - case CTRL_CT0: - thread_switch_control |= TSC_CELL_DEC_ENABLE_0; - break; - case CTRL_CT1: - thread_switch_control |= TSC_CELL_DEC_ENABLE_1; - break; - default: - printk(KERN_WARNING "%s: unknown configuration\n", - __func__); - break; - } - mtspr(SPRN_TSC_CELL, thread_switch_control); - - /* - * go into low thread priority, medium priority will be - * restored for us after wake-up. - */ - HMT_low(); - - /* - * atomically disable thread execution and runlatch. - * External and Decrementer exceptions are still handled when the - * thread is disabled but now enter in cbe_system_reset_exception() - */ - ctrl &= ~(CTRL_RUNLATCH | CTRL_TE); - mtspr(SPRN_CTRLT, ctrl); - - /* Re-enable interrupts in MSR */ - __hard_irq_enable(); -} - -static int cbe_system_reset_exception(struct pt_regs *regs) -{ - switch (regs->msr & SRR1_WAKEMASK) { - case SRR1_WAKEDEC: - set_dec(1); - break; - case SRR1_WAKEEE: - /* - * Handle these when interrupts get re-enabled and we take - * them as regular exceptions. We are in an NMI context - * and can't handle these here. 
- */ - break; - case SRR1_WAKEMT: - return cbe_sysreset_hack(); -#ifdef CONFIG_CBE_RAS - case SRR1_WAKESYSERR: - cbe_system_error_exception(regs); - break; - case SRR1_WAKETHERM: - cbe_thermal_exception(regs); - break; -#endif /* CONFIG_CBE_RAS */ - default: - /* do system reset */ - return 0; - } - /* everything handled */ - return 1; -} - -void __init cbe_pervasive_init(void) -{ - int cpu; - - if (!cpu_has_feature(CPU_FTR_PAUSE_ZERO)) - return; - - for_each_possible_cpu(cpu) { - struct cbe_pmd_regs __iomem *regs = cbe_get_cpu_pmd_regs(cpu); - if (!regs) - continue; - - /* Enable Pause(0) control bit */ - out_be64(®s->pmcr, in_be64(®s->pmcr) | - CBE_PMD_PAUSE_ZERO_CONTROL); - } - - ppc_md.power_save = cbe_power_save; - ppc_md.system_reset_exception = cbe_system_reset_exception; -} diff --git a/arch/powerpc/platforms/cell/pervasive.h b/arch/powerpc/platforms/cell/pervasive.h deleted file mode 100644 index 0da74ab10716..000000000000 --- a/arch/powerpc/platforms/cell/pervasive.h +++ /dev/null @@ -1,26 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * Cell Pervasive Monitor and Debug interface and HW structures - * - * (C) Copyright IBM Corporation 2005 - * - * Authors: Maximino Aguilar (maguilar@us.ibm.com) - * David J. Erb (djerb@us.ibm.com) - */ - - -#ifndef PERVASIVE_H -#define PERVASIVE_H - -extern void cbe_pervasive_init(void); - -#ifdef CONFIG_PPC_IBM_CELL_RESETBUTTON -extern int cbe_sysreset_hack(void); -#else -static inline int cbe_sysreset_hack(void) -{ - return 1; -} -#endif /* CONFIG_PPC_IBM_CELL_RESETBUTTON */ - -#endif diff --git a/arch/powerpc/platforms/cell/pmu.c b/arch/powerpc/platforms/cell/pmu.c deleted file mode 100644 index b207a7f99be5..000000000000 --- a/arch/powerpc/platforms/cell/pmu.c +++ /dev/null @@ -1,412 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Cell Broadband Engine Performance Monitor - * - * (C) Copyright IBM Corporation 2001,2006 - * - * Author: - * David Erb (djerb@us.ibm.com) - * Kevin Corry (kevcorry@us.ibm.com) - */ - -#include <linux/interrupt.h> -#include <linux/irqdomain.h> -#include <linux/types.h> -#include <linux/export.h> -#include <asm/io.h> -#include <asm/irq_regs.h> -#include <asm/machdep.h> -#include <asm/pmc.h> -#include <asm/reg.h> -#include <asm/spu.h> -#include <asm/cell-regs.h> - -#include "interrupt.h" - -/* - * When writing to write-only mmio addresses, save a shadow copy. All of the - * registers are 32-bit, but stored in the upper-half of a 64-bit field in - * pmd_regs. - */ - -#define WRITE_WO_MMIO(reg, x) \ - do { \ - u32 _x = (x); \ - struct cbe_pmd_regs __iomem *pmd_regs; \ - struct cbe_pmd_shadow_regs *shadow_regs; \ - pmd_regs = cbe_get_cpu_pmd_regs(cpu); \ - shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu); \ - out_be64(&(pmd_regs->reg), (((u64)_x) << 32)); \ - shadow_regs->reg = _x; \ - } while (0) - -#define READ_SHADOW_REG(val, reg) \ - do { \ - struct cbe_pmd_shadow_regs *shadow_regs; \ - shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu); \ - (val) = shadow_regs->reg; \ - } while (0) - -#define READ_MMIO_UPPER32(val, reg) \ - do { \ - struct cbe_pmd_regs __iomem *pmd_regs; \ - pmd_regs = cbe_get_cpu_pmd_regs(cpu); \ - (val) = (u32)(in_be64(&pmd_regs->reg) >> 32); \ - } while (0) - -/* - * Physical counter registers. - * Each physical counter can act as one 32-bit counter or two 16-bit counters. 
- */ - -u32 cbe_read_phys_ctr(u32 cpu, u32 phys_ctr) -{ - u32 val_in_latch, val = 0; - - if (phys_ctr < NR_PHYS_CTRS) { - READ_SHADOW_REG(val_in_latch, counter_value_in_latch); - - /* Read the latch or the actual counter, whichever is newer. */ - if (val_in_latch & (1 << phys_ctr)) { - READ_SHADOW_REG(val, pm_ctr[phys_ctr]); - } else { - READ_MMIO_UPPER32(val, pm_ctr[phys_ctr]); - } - } - - return val; -} -EXPORT_SYMBOL_GPL(cbe_read_phys_ctr); - -void cbe_write_phys_ctr(u32 cpu, u32 phys_ctr, u32 val) -{ - struct cbe_pmd_shadow_regs *shadow_regs; - u32 pm_ctrl; - - if (phys_ctr < NR_PHYS_CTRS) { - /* Writing to a counter only writes to a hardware latch. - * The new value is not propagated to the actual counter - * until the performance monitor is enabled. - */ - WRITE_WO_MMIO(pm_ctr[phys_ctr], val); - - pm_ctrl = cbe_read_pm(cpu, pm_control); - if (pm_ctrl & CBE_PM_ENABLE_PERF_MON) { - /* The counters are already active, so we need to - * rewrite the pm_control register to "re-enable" - * the PMU. - */ - cbe_write_pm(cpu, pm_control, pm_ctrl); - } else { - shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu); - shadow_regs->counter_value_in_latch |= (1 << phys_ctr); - } - } -} -EXPORT_SYMBOL_GPL(cbe_write_phys_ctr); - -/* - * "Logical" counter registers. - * These will read/write 16-bits or 32-bits depending on the - * current size of the counter. Counters 4 - 7 are always 16-bit. - */ - -u32 cbe_read_ctr(u32 cpu, u32 ctr) -{ - u32 val; - u32 phys_ctr = ctr & (NR_PHYS_CTRS - 1); - - val = cbe_read_phys_ctr(cpu, phys_ctr); - - if (cbe_get_ctr_size(cpu, phys_ctr) == 16) - val = (ctr < NR_PHYS_CTRS) ? (val >> 16) : (val & 0xffff); - - return val; -} -EXPORT_SYMBOL_GPL(cbe_read_ctr); - -void cbe_write_ctr(u32 cpu, u32 ctr, u32 val) -{ - u32 phys_ctr; - u32 phys_val; - - phys_ctr = ctr & (NR_PHYS_CTRS - 1); - - if (cbe_get_ctr_size(cpu, phys_ctr) == 16) { - phys_val = cbe_read_phys_ctr(cpu, phys_ctr); - - if (ctr < NR_PHYS_CTRS) - val = (val << 16) | (phys_val & 0xffff); - else - val = (val & 0xffff) | (phys_val & 0xffff0000); - } - - cbe_write_phys_ctr(cpu, phys_ctr, val); -} -EXPORT_SYMBOL_GPL(cbe_write_ctr); - -/* - * Counter-control registers. - * Each "logical" counter has a corresponding control register. - */ - -u32 cbe_read_pm07_control(u32 cpu, u32 ctr) -{ - u32 pm07_control = 0; - - if (ctr < NR_CTRS) - READ_SHADOW_REG(pm07_control, pm07_control[ctr]); - - return pm07_control; -} -EXPORT_SYMBOL_GPL(cbe_read_pm07_control); - -void cbe_write_pm07_control(u32 cpu, u32 ctr, u32 val) -{ - if (ctr < NR_CTRS) - WRITE_WO_MMIO(pm07_control[ctr], val); -} -EXPORT_SYMBOL_GPL(cbe_write_pm07_control); - -/* - * Other PMU control registers. Most of these are write-only. 
- */ - -u32 cbe_read_pm(u32 cpu, enum pm_reg_name reg) -{ - u32 val = 0; - - switch (reg) { - case group_control: - READ_SHADOW_REG(val, group_control); - break; - - case debug_bus_control: - READ_SHADOW_REG(val, debug_bus_control); - break; - - case trace_address: - READ_MMIO_UPPER32(val, trace_address); - break; - - case ext_tr_timer: - READ_SHADOW_REG(val, ext_tr_timer); - break; - - case pm_status: - READ_MMIO_UPPER32(val, pm_status); - break; - - case pm_control: - READ_SHADOW_REG(val, pm_control); - break; - - case pm_interval: - READ_MMIO_UPPER32(val, pm_interval); - break; - - case pm_start_stop: - READ_SHADOW_REG(val, pm_start_stop); - break; - } - - return val; -} -EXPORT_SYMBOL_GPL(cbe_read_pm); - -void cbe_write_pm(u32 cpu, enum pm_reg_name reg, u32 val) -{ - switch (reg) { - case group_control: - WRITE_WO_MMIO(group_control, val); - break; - - case debug_bus_control: - WRITE_WO_MMIO(debug_bus_control, val); - break; - - case trace_address: - WRITE_WO_MMIO(trace_address, val); - break; - - case ext_tr_timer: - WRITE_WO_MMIO(ext_tr_timer, val); - break; - - case pm_status: - WRITE_WO_MMIO(pm_status, val); - break; - - case pm_control: - WRITE_WO_MMIO(pm_control, val); - break; - - case pm_interval: - WRITE_WO_MMIO(pm_interval, val); - break; - - case pm_start_stop: - WRITE_WO_MMIO(pm_start_stop, val); - break; - } -} -EXPORT_SYMBOL_GPL(cbe_write_pm); - -/* - * Get/set the size of a physical counter to either 16 or 32 bits. - */ - -u32 cbe_get_ctr_size(u32 cpu, u32 phys_ctr) -{ - u32 pm_ctrl, size = 0; - - if (phys_ctr < NR_PHYS_CTRS) { - pm_ctrl = cbe_read_pm(cpu, pm_control); - size = (pm_ctrl & CBE_PM_16BIT_CTR(phys_ctr)) ? 16 : 32; - } - - return size; -} -EXPORT_SYMBOL_GPL(cbe_get_ctr_size); - -void cbe_set_ctr_size(u32 cpu, u32 phys_ctr, u32 ctr_size) -{ - u32 pm_ctrl; - - if (phys_ctr < NR_PHYS_CTRS) { - pm_ctrl = cbe_read_pm(cpu, pm_control); - switch (ctr_size) { - case 16: - pm_ctrl |= CBE_PM_16BIT_CTR(phys_ctr); - break; - - case 32: - pm_ctrl &= ~CBE_PM_16BIT_CTR(phys_ctr); - break; - } - cbe_write_pm(cpu, pm_control, pm_ctrl); - } -} -EXPORT_SYMBOL_GPL(cbe_set_ctr_size); - -/* - * Enable/disable the entire performance monitoring unit. - * When we enable the PMU, all pending writes to counters get committed. - */ - -void cbe_enable_pm(u32 cpu) -{ - struct cbe_pmd_shadow_regs *shadow_regs; - u32 pm_ctrl; - - shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu); - shadow_regs->counter_value_in_latch = 0; - - pm_ctrl = cbe_read_pm(cpu, pm_control) | CBE_PM_ENABLE_PERF_MON; - cbe_write_pm(cpu, pm_control, pm_ctrl); -} -EXPORT_SYMBOL_GPL(cbe_enable_pm); - -void cbe_disable_pm(u32 cpu) -{ - u32 pm_ctrl; - pm_ctrl = cbe_read_pm(cpu, pm_control) & ~CBE_PM_ENABLE_PERF_MON; - cbe_write_pm(cpu, pm_control, pm_ctrl); -} -EXPORT_SYMBOL_GPL(cbe_disable_pm); - -/* - * Reading from the trace_buffer. - * The trace buffer is two 64-bit registers. Reading from - * the second half automatically increments the trace_address. - */ - -void cbe_read_trace_buffer(u32 cpu, u64 *buf) -{ - struct cbe_pmd_regs __iomem *pmd_regs = cbe_get_cpu_pmd_regs(cpu); - - *buf++ = in_be64(&pmd_regs->trace_buffer_0_63); - *buf++ = in_be64(&pmd_regs->trace_buffer_64_127); -} -EXPORT_SYMBOL_GPL(cbe_read_trace_buffer); - -/* - * Enabling/disabling interrupts for the entire performance monitoring unit. - */ - -u32 cbe_get_and_clear_pm_interrupts(u32 cpu) -{ - /* Reading pm_status clears the interrupt bits. 
*/ - return cbe_read_pm(cpu, pm_status); -} -EXPORT_SYMBOL_GPL(cbe_get_and_clear_pm_interrupts); - -void cbe_enable_pm_interrupts(u32 cpu, u32 thread, u32 mask) -{ - /* Set which node and thread will handle the next interrupt. */ - iic_set_interrupt_routing(cpu, thread, 0); - - /* Enable the interrupt bits in the pm_status register. */ - if (mask) - cbe_write_pm(cpu, pm_status, mask); -} -EXPORT_SYMBOL_GPL(cbe_enable_pm_interrupts); - -void cbe_disable_pm_interrupts(u32 cpu) -{ - cbe_get_and_clear_pm_interrupts(cpu); - cbe_write_pm(cpu, pm_status, 0); -} -EXPORT_SYMBOL_GPL(cbe_disable_pm_interrupts); - -static irqreturn_t cbe_pm_irq(int irq, void *dev_id) -{ - perf_irq(get_irq_regs()); - return IRQ_HANDLED; -} - -static int __init cbe_init_pm_irq(void) -{ - unsigned int irq; - int rc, node; - - for_each_online_node(node) { - irq = irq_create_mapping(NULL, IIC_IRQ_IOEX_PMI | - (node << IIC_IRQ_NODE_SHIFT)); - if (!irq) { - printk("ERROR: Unable to allocate irq for node %d\n", - node); - return -EINVAL; - } - - rc = request_irq(irq, cbe_pm_irq, - 0, "cbe-pmu-0", NULL); - if (rc) { - printk("ERROR: Request for irq on node %d failed\n", - node); - return rc; - } - } - - return 0; -} -machine_arch_initcall(cell, cbe_init_pm_irq); - -void cbe_sync_irq(int node) -{ - unsigned int irq; - - irq = irq_find_mapping(NULL, - IIC_IRQ_IOEX_PMI - | (node << IIC_IRQ_NODE_SHIFT)); - - if (!irq) { - printk(KERN_WARNING "ERROR, unable to get existing irq %d " \ - "for node %d\n", irq, node); - return; - } - - synchronize_irq(irq); -} -EXPORT_SYMBOL_GPL(cbe_sync_irq); - diff --git a/arch/powerpc/platforms/cell/ras.c b/arch/powerpc/platforms/cell/ras.c deleted file mode 100644 index f6b87926530c..000000000000 --- a/arch/powerpc/platforms/cell/ras.c +++ /dev/null @@ -1,352 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Copyright 2006-2008, IBM Corporation. 
- */ - -#undef DEBUG - -#include <linux/types.h> -#include <linux/kernel.h> -#include <linux/slab.h> -#include <linux/smp.h> -#include <linux/reboot.h> -#include <linux/kexec.h> -#include <linux/crash_dump.h> -#include <linux/of.h> - -#include <asm/kexec.h> -#include <asm/reg.h> -#include <asm/io.h> -#include <asm/machdep.h> -#include <asm/rtas.h> -#include <asm/cell-regs.h> - -#include "ras.h" -#include "pervasive.h" - -static void dump_fir(int cpu) -{ - struct cbe_pmd_regs __iomem *pregs = cbe_get_cpu_pmd_regs(cpu); - struct cbe_iic_regs __iomem *iregs = cbe_get_cpu_iic_regs(cpu); - - if (pregs == NULL) - return; - - /* Todo: do some nicer parsing of bits and based on them go down - * to other sub-units FIRs and not only IIC - */ - printk(KERN_ERR "Global Checkstop FIR : 0x%016llx\n", - in_be64(&pregs->checkstop_fir)); - printk(KERN_ERR "Global Recoverable FIR : 0x%016llx\n", - in_be64(&pregs->checkstop_fir)); - printk(KERN_ERR "Global MachineCheck FIR : 0x%016llx\n", - in_be64(&pregs->spec_att_mchk_fir)); - - if (iregs == NULL) - return; - printk(KERN_ERR "IOC FIR : 0x%016llx\n", - in_be64(&iregs->ioc_fir)); - -} - -DEFINE_INTERRUPT_HANDLER(cbe_system_error_exception) -{ - int cpu = smp_processor_id(); - - printk(KERN_ERR "System Error Interrupt on CPU %d !\n", cpu); - dump_fir(cpu); - dump_stack(); -} - -DEFINE_INTERRUPT_HANDLER(cbe_maintenance_exception) -{ - int cpu = smp_processor_id(); - - /* - * Nothing implemented for the maintenance interrupt at this point - */ - - printk(KERN_ERR "Unhandled Maintenance interrupt on CPU %d !\n", cpu); - dump_stack(); -} - -DEFINE_INTERRUPT_HANDLER(cbe_thermal_exception) -{ - int cpu = smp_processor_id(); - - /* - * Nothing implemented for the thermal interrupt at this point - */ - - printk(KERN_ERR "Unhandled Thermal interrupt on CPU %d !\n", cpu); - dump_stack(); -} - -static int cbe_machine_check_handler(struct pt_regs *regs) -{ - int cpu = smp_processor_id(); - - printk(KERN_ERR "Machine Check Interrupt on CPU %d !\n", cpu); - dump_fir(cpu); - - /* No recovery from this code now, lets continue */ - return 0; -} - -struct ptcal_area { - struct list_head list; - int nid; - int order; - struct page *pages; -}; - -static LIST_HEAD(ptcal_list); - -static int ptcal_start_tok, ptcal_stop_tok; - -static int __init cbe_ptcal_enable_on_node(int nid, int order) -{ - struct ptcal_area *area; - int ret = -ENOMEM; - unsigned long addr; - - if (is_kdump_kernel()) - rtas_call(ptcal_stop_tok, 1, 1, NULL, nid); - - area = kmalloc(sizeof(*area), GFP_KERNEL); - if (!area) - goto out_err; - - area->nid = nid; - area->order = order; - area->pages = __alloc_pages_node(area->nid, - GFP_KERNEL|__GFP_THISNODE, - area->order); - - if (!area->pages) { - printk(KERN_WARNING "%s: no page on node %d\n", - __func__, area->nid); - goto out_free_area; - } - - /* - * We move the ptcal area to the middle of the allocated - * page, in order to avoid prefetches in memcpy and similar - * functions stepping on it. 
- */ - addr = __pa(page_address(area->pages)) + (PAGE_SIZE >> 1); - printk(KERN_DEBUG "%s: enabling PTCAL on node %d address=0x%016lx\n", - __func__, area->nid, addr); - - ret = -EIO; - if (rtas_call(ptcal_start_tok, 3, 1, NULL, area->nid, - (unsigned int)(addr >> 32), - (unsigned int)(addr & 0xffffffff))) { - printk(KERN_ERR "%s: error enabling PTCAL on node %d!\n", - __func__, nid); - goto out_free_pages; - } - - list_add(&area->list, &ptcal_list); - - return 0; - -out_free_pages: - __free_pages(area->pages, area->order); -out_free_area: - kfree(area); -out_err: - return ret; -} - -static int __init cbe_ptcal_enable(void) -{ - const u32 *size; - struct device_node *np; - int order, found_mic = 0; - - np = of_find_node_by_path("/rtas"); - if (!np) - return -ENODEV; - - size = of_get_property(np, "ibm,cbe-ptcal-size", NULL); - if (!size) { - of_node_put(np); - return -ENODEV; - } - - pr_debug("%s: enabling PTCAL, size = 0x%x\n", __func__, *size); - order = get_order(*size); - of_node_put(np); - - /* support for malta device trees, with be@/mic@ nodes */ - for_each_node_by_type(np, "mic-tm") { - cbe_ptcal_enable_on_node(of_node_to_nid(np), order); - found_mic = 1; - } - - if (found_mic) - return 0; - - /* support for older device tree - use cpu nodes */ - for_each_node_by_type(np, "cpu") { - const u32 *nid = of_get_property(np, "node-id", NULL); - if (!nid) { - printk(KERN_ERR "%s: node %pOF is missing node-id?\n", - __func__, np); - continue; - } - cbe_ptcal_enable_on_node(*nid, order); - found_mic = 1; - } - - return found_mic ? 0 : -ENODEV; -} - -static int cbe_ptcal_disable(void) -{ - struct ptcal_area *area, *tmp; - int ret = 0; - - pr_debug("%s: disabling PTCAL\n", __func__); - - list_for_each_entry_safe(area, tmp, &ptcal_list, list) { - /* disable ptcal on this node */ - if (rtas_call(ptcal_stop_tok, 1, 1, NULL, area->nid)) { - printk(KERN_ERR "%s: error disabling PTCAL " - "on node %d!\n", __func__, - area->nid); - ret = -EIO; - continue; - } - - /* ensure we can access the PTCAL area */ - memset(page_address(area->pages), 0, - 1 << (area->order + PAGE_SHIFT)); - - /* clean up */ - list_del(&area->list); - __free_pages(area->pages, area->order); - kfree(area); - } - - return ret; -} - -static int cbe_ptcal_notify_reboot(struct notifier_block *nb, - unsigned long code, void *data) -{ - return cbe_ptcal_disable(); -} - -static void cbe_ptcal_crash_shutdown(void) -{ - cbe_ptcal_disable(); -} - -static struct notifier_block cbe_ptcal_reboot_notifier = { - .notifier_call = cbe_ptcal_notify_reboot -}; - -#ifdef CONFIG_PPC_IBM_CELL_RESETBUTTON -static int sysreset_hack; - -static int __init cbe_sysreset_init(void) -{ - struct cbe_pmd_regs __iomem *regs; - - sysreset_hack = of_machine_is_compatible("IBM,CBPLUS-1.0"); - if (!sysreset_hack) - return 0; - - regs = cbe_get_cpu_pmd_regs(0); - if (!regs) - return 0; - - /* Enable JTAG system-reset hack */ - out_be32(®s->fir_mode_reg, - in_be32(®s->fir_mode_reg) | - CBE_PMD_FIR_MODE_M8); - - return 0; -} -device_initcall(cbe_sysreset_init); - -int cbe_sysreset_hack(void) -{ - struct cbe_pmd_regs __iomem *regs; - - /* - * The BMC can inject user triggered system reset exceptions, - * but cannot set the system reset reason in srr1, - * so check an extra register here. 
- */ - if (sysreset_hack && (smp_processor_id() == 0)) { - regs = cbe_get_cpu_pmd_regs(0); - if (!regs) - return 0; - if (in_be64(®s->ras_esc_0) & 0x0000ffff) { - out_be64(®s->ras_esc_0, 0); - return 0; - } - } - return 1; -} -#endif /* CONFIG_PPC_IBM_CELL_RESETBUTTON */ - -static int __init cbe_ptcal_init(void) -{ - int ret; - ptcal_start_tok = rtas_function_token(RTAS_FN_IBM_CBE_START_PTCAL); - ptcal_stop_tok = rtas_function_token(RTAS_FN_IBM_CBE_STOP_PTCAL); - - if (ptcal_start_tok == RTAS_UNKNOWN_SERVICE - || ptcal_stop_tok == RTAS_UNKNOWN_SERVICE) - return -ENODEV; - - ret = register_reboot_notifier(&cbe_ptcal_reboot_notifier); - if (ret) - goto out1; - - ret = crash_shutdown_register(&cbe_ptcal_crash_shutdown); - if (ret) - goto out2; - - return cbe_ptcal_enable(); - -out2: - unregister_reboot_notifier(&cbe_ptcal_reboot_notifier); -out1: - printk(KERN_ERR "Can't disable PTCAL, so not enabling\n"); - return ret; -} - -arch_initcall(cbe_ptcal_init); - -void __init cbe_ras_init(void) -{ - unsigned long hid0; - - /* - * Enable System Error & thermal interrupts and wakeup conditions - */ - - hid0 = mfspr(SPRN_HID0); - hid0 |= HID0_CBE_THERM_INT_EN | HID0_CBE_THERM_WAKEUP | - HID0_CBE_SYSERR_INT_EN | HID0_CBE_SYSERR_WAKEUP; - mtspr(SPRN_HID0, hid0); - mb(); - - /* - * Install machine check handler. Leave setting of precise mode to - * what the firmware did for now - */ - ppc_md.machine_check_exception = cbe_machine_check_handler; - mb(); - - /* - * For now, we assume that IOC_FIR is already set to forward some - * error conditions to the System Error handler. If that is not true - * then it will have to be fixed up here. - */ -} diff --git a/arch/powerpc/platforms/cell/ras.h b/arch/powerpc/platforms/cell/ras.h deleted file mode 100644 index 226dbd48efad..000000000000 --- a/arch/powerpc/platforms/cell/ras.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef RAS_H -#define RAS_H - -#include <asm/interrupt.h> - -DECLARE_INTERRUPT_HANDLER(cbe_system_error_exception); -DECLARE_INTERRUPT_HANDLER(cbe_maintenance_exception); -DECLARE_INTERRUPT_HANDLER(cbe_thermal_exception); - -extern void cbe_ras_init(void); - -#endif /* RAS_H */ diff --git a/arch/powerpc/platforms/cell/setup.c b/arch/powerpc/platforms/cell/setup.c deleted file mode 100644 index f64a1ef98aa8..000000000000 --- a/arch/powerpc/platforms/cell/setup.c +++ /dev/null @@ -1,274 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * linux/arch/powerpc/platforms/cell/cell_setup.c - * - * Copyright (C) 1995 Linus Torvalds - * Adapted from 'alpha' version by Gary Thomas - * Modified by Cort Dougan (cort@cs.nmt.edu) - * Modified by PPC64 Team, IBM Corp - * Modified by Cell Team, IBM Deutschland Entwicklung GmbH - */ -#undef DEBUG - -#include <linux/sched.h> -#include <linux/kernel.h> -#include <linux/mm.h> -#include <linux/stddef.h> -#include <linux/export.h> -#include <linux/unistd.h> -#include <linux/user.h> -#include <linux/reboot.h> -#include <linux/init.h> -#include <linux/delay.h> -#include <linux/irq.h> -#include <linux/seq_file.h> -#include <linux/root_dev.h> -#include <linux/console.h> -#include <linux/mutex.h> -#include <linux/memory_hotplug.h> -#include <linux/of_platform.h> -#include <linux/platform_device.h> - -#include <asm/mmu.h> -#include <asm/processor.h> -#include <asm/io.h> -#include <asm/rtas.h> -#include <asm/pci-bridge.h> -#include <asm/iommu.h> -#include <asm/dma.h> -#include <asm/machdep.h> -#include <asm/time.h> -#include <asm/nvram.h> -#include <asm/cputable.h> -#include 
<asm/ppc-pci.h> -#include <asm/irq.h> -#include <asm/spu.h> -#include <asm/spu_priv1.h> -#include <asm/udbg.h> -#include <asm/mpic.h> -#include <asm/cell-regs.h> -#include <asm/io-workarounds.h> - -#include "cell.h" -#include "interrupt.h" -#include "pervasive.h" -#include "ras.h" - -#ifdef DEBUG -#define DBG(fmt...) udbg_printf(fmt) -#else -#define DBG(fmt...) -#endif - -static void cell_show_cpuinfo(struct seq_file *m) -{ - struct device_node *root; - const char *model = ""; - - root = of_find_node_by_path("/"); - if (root) - model = of_get_property(root, "model", NULL); - seq_printf(m, "machine\t\t: CHRP %s\n", model); - of_node_put(root); -} - -static void cell_progress(char *s, unsigned short hex) -{ - printk("*** %04x : %s\n", hex, s ? s : ""); -} - -static void cell_fixup_pcie_rootcomplex(struct pci_dev *dev) -{ - struct pci_controller *hose; - const char *s; - int i; - - if (!machine_is(cell)) - return; - - /* We're searching for a direct child of the PHB */ - if (dev->bus->self != NULL || dev->devfn != 0) - return; - - hose = pci_bus_to_host(dev->bus); - if (hose == NULL) - return; - - /* Only on PCIE */ - if (!of_device_is_compatible(hose->dn, "pciex")) - return; - - /* And only on axon */ - s = of_get_property(hose->dn, "model", NULL); - if (!s || strcmp(s, "Axon") != 0) - return; - - for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) { - dev->resource[i].start = dev->resource[i].end = 0; - dev->resource[i].flags = 0; - } - - printk(KERN_DEBUG "PCI: Hiding resources on Axon PCIE RC %s\n", - pci_name(dev)); -} -DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, cell_fixup_pcie_rootcomplex); - -static int cell_setup_phb(struct pci_controller *phb) -{ - const char *model; - struct device_node *np; - - int rc = rtas_setup_phb(phb); - if (rc) - return rc; - - phb->controller_ops = cell_pci_controller_ops; - - np = phb->dn; - model = of_get_property(np, "model", NULL); - if (model == NULL || !of_node_name_eq(np, "pci")) - return 0; - - /* Setup workarounds for spider */ - if (strcmp(model, "Spider")) - return 0; - - iowa_register_bus(phb, &spiderpci_ops, &spiderpci_iowa_init, - (void *)SPIDER_PCI_REG_BASE); - return 0; -} - -static const struct of_device_id cell_bus_ids[] __initconst = { - { .type = "soc", }, - { .compatible = "soc", }, - { .type = "spider", }, - { .type = "axon", }, - { .type = "plb5", }, - { .type = "plb4", }, - { .type = "opb", }, - { .type = "ebc", }, - {}, -}; - -static int __init cell_publish_devices(void) -{ - struct device_node *root = of_find_node_by_path("/"); - struct device_node *np; - int node; - - /* Publish OF platform devices for southbridge IOs */ - of_platform_bus_probe(NULL, cell_bus_ids, NULL); - - /* On spider based blades, we need to manually create the OF - * platform devices for the PCI host bridges - */ - for_each_child_of_node(root, np) { - if (!of_node_is_type(np, "pci") && !of_node_is_type(np, "pciex")) - continue; - of_platform_device_create(np, NULL, NULL); - } - - of_node_put(root); - - /* There is no device for the MIC memory controller, thus we create - * a platform device for it to attach the EDAC driver to. 
- */ - for_each_online_node(node) { - if (cbe_get_cpu_mic_tm_regs(cbe_node_to_cpu(node)) == NULL) - continue; - platform_device_register_simple("cbe-mic", node, NULL, 0); - } - - return 0; -} -machine_subsys_initcall(cell, cell_publish_devices); - -static void __init mpic_init_IRQ(void) -{ - struct device_node *dn; - struct mpic *mpic; - - for_each_node_by_name(dn, "interrupt-controller") { - if (!of_device_is_compatible(dn, "CBEA,platform-open-pic")) - continue; - - /* The MPIC driver will get everything it needs from the - * device-tree, just pass 0 to all arguments - */ - mpic = mpic_alloc(dn, 0, MPIC_SECONDARY | MPIC_NO_RESET, - 0, 0, " MPIC "); - if (mpic == NULL) - continue; - mpic_init(mpic); - } -} - - -static void __init cell_init_irq(void) -{ - iic_init_IRQ(); - spider_init_IRQ(); - mpic_init_IRQ(); -} - -static void __init cell_set_dabrx(void) -{ - mtspr(SPRN_DABRX, DABRX_KERNEL | DABRX_USER); -} - -static void __init cell_setup_arch(void) -{ -#ifdef CONFIG_SPU_BASE - spu_priv1_ops = &spu_priv1_mmio_ops; - spu_management_ops = &spu_management_of_ops; -#endif - - cbe_regs_init(); - - cell_set_dabrx(); - -#ifdef CONFIG_CBE_RAS - cbe_ras_init(); -#endif - -#ifdef CONFIG_SMP - smp_init_cell(); -#endif - /* init to some ~sane value until calibrate_delay() runs */ - loops_per_jiffy = 50000000; - - /* Find and initialize PCI host bridges */ - init_pci_config_tokens(); - - cbe_pervasive_init(); - - mmio_nvram_init(); -} - -static int __init cell_probe(void) -{ - if (!of_machine_is_compatible("IBM,CBEA") && - !of_machine_is_compatible("IBM,CPBW-1.0")) - return 0; - - pm_power_off = rtas_power_off; - - return 1; -} - -define_machine(cell) { - .name = "Cell", - .probe = cell_probe, - .setup_arch = cell_setup_arch, - .show_cpuinfo = cell_show_cpuinfo, - .restart = rtas_restart, - .halt = rtas_halt, - .get_boot_time = rtas_get_boot_time, - .get_rtc_time = rtas_get_rtc_time, - .set_rtc_time = rtas_set_rtc_time, - .progress = cell_progress, - .init_IRQ = cell_init_irq, - .pci_setup_phb = cell_setup_phb, -}; - -struct pci_controller_ops cell_pci_controller_ops; diff --git a/arch/powerpc/platforms/cell/smp.c b/arch/powerpc/platforms/cell/smp.c deleted file mode 100644 index 0e8f20ecca08..000000000000 --- a/arch/powerpc/platforms/cell/smp.c +++ /dev/null @@ -1,162 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * SMP support for BPA machines. - * - * Dave Engebretsen, Peter Bergner, and - * Mike Corrigan {engebret|bergner|mikec}@us.ibm.com - * - * Plus various changes from other IBM teams... - */ - -#undef DEBUG - -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/smp.h> -#include <linux/interrupt.h> -#include <linux/delay.h> -#include <linux/init.h> -#include <linux/spinlock.h> -#include <linux/cache.h> -#include <linux/err.h> -#include <linux/device.h> -#include <linux/cpu.h> -#include <linux/pgtable.h> - -#include <asm/ptrace.h> -#include <linux/atomic.h> -#include <asm/irq.h> -#include <asm/page.h> -#include <asm/io.h> -#include <asm/smp.h> -#include <asm/paca.h> -#include <asm/machdep.h> -#include <asm/cputable.h> -#include <asm/firmware.h> -#include <asm/rtas.h> -#include <asm/cputhreads.h> -#include <asm/text-patching.h> - -#include "interrupt.h" -#include <asm/udbg.h> - -#ifdef DEBUG -#define DBG(fmt...) udbg_printf(fmt) -#else -#define DBG(fmt...) -#endif - -/* - * The Primary thread of each non-boot processor was started from the OF client - * interface by prom_hold_cpus and is spinning on secondary_hold_spinloop. 
- */ -static cpumask_t of_spin_map; - -/** - * smp_startup_cpu() - start the given cpu - * @lcpu: Logical CPU ID of the CPU to be started. - * - * At boot time, there is nothing to do for primary threads which were - * started from Open Firmware. For anything else, call RTAS with the - * appropriate start location. - * - * Returns: - * 0 - failure - * 1 - success - */ -static inline int smp_startup_cpu(unsigned int lcpu) -{ - int status; - unsigned long start_here = - __pa(ppc_function_entry(generic_secondary_smp_init)); - unsigned int pcpu; - int start_cpu; - - if (cpumask_test_cpu(lcpu, &of_spin_map)) - /* Already started by OF and sitting in spin loop */ - return 1; - - pcpu = get_hard_smp_processor_id(lcpu); - - /* - * If the RTAS start-cpu token does not exist then presume the - * cpu is already spinning. - */ - start_cpu = rtas_function_token(RTAS_FN_START_CPU); - if (start_cpu == RTAS_UNKNOWN_SERVICE) - return 1; - - status = rtas_call(start_cpu, 3, 1, NULL, pcpu, start_here, lcpu); - if (status != 0) { - printk(KERN_ERR "start-cpu failed: %i\n", status); - return 0; - } - - return 1; -} - -static void smp_cell_setup_cpu(int cpu) -{ - if (cpu != boot_cpuid) - iic_setup_cpu(); - - /* - * change default DABRX to allow user watchpoints - */ - mtspr(SPRN_DABRX, DABRX_KERNEL | DABRX_USER); -} - -static int smp_cell_kick_cpu(int nr) -{ - if (nr < 0 || nr >= nr_cpu_ids) - return -EINVAL; - - if (!smp_startup_cpu(nr)) - return -ENOENT; - - /* - * The processor is currently spinning, waiting for the - * cpu_start field to become non-zero After we set cpu_start, - * the processor will continue on to secondary_start - */ - paca_ptrs[nr]->cpu_start = 1; - - return 0; -} - -static struct smp_ops_t bpa_iic_smp_ops = { - .message_pass = iic_message_pass, - .probe = iic_request_IPIs, - .kick_cpu = smp_cell_kick_cpu, - .setup_cpu = smp_cell_setup_cpu, - .cpu_bootable = smp_generic_cpu_bootable, -}; - -/* This is called very early */ -void __init smp_init_cell(void) -{ - int i; - - DBG(" -> smp_init_cell()\n"); - - smp_ops = &bpa_iic_smp_ops; - - /* Mark threads which are still spinning in hold loops. 
*/ - if (cpu_has_feature(CPU_FTR_SMT)) { - for_each_present_cpu(i) { - if (cpu_thread_in_core(i) == 0) - cpumask_set_cpu(i, &of_spin_map); - } - } else - cpumask_copy(&of_spin_map, cpu_present_mask); - - cpumask_clear_cpu(boot_cpuid, &of_spin_map); - - /* Non-lpar has additional take/give timebase */ - if (rtas_function_token(RTAS_FN_FREEZE_TIME_BASE) != RTAS_UNKNOWN_SERVICE) { - smp_ops->give_timebase = rtas_give_timebase; - smp_ops->take_timebase = rtas_take_timebase; - } - - DBG(" <- smp_init_cell()\n"); -} diff --git a/arch/powerpc/platforms/cell/spider-pci.c b/arch/powerpc/platforms/cell/spider-pci.c deleted file mode 100644 index 68439445b1c3..000000000000 --- a/arch/powerpc/platforms/cell/spider-pci.c +++ /dev/null @@ -1,170 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * IO workarounds for PCI on Celleb/Cell platform - * - * (C) Copyright 2006-2007 TOSHIBA CORPORATION - */ - -#undef DEBUG - -#include <linux/kernel.h> -#include <linux/of_address.h> -#include <linux/slab.h> -#include <linux/io.h> - -#include <asm/ppc-pci.h> -#include <asm/pci-bridge.h> -#include <asm/io-workarounds.h> - -#define SPIDER_PCI_DISABLE_PREFETCH - -struct spiderpci_iowa_private { - void __iomem *regs; -}; - -static void spiderpci_io_flush(struct iowa_bus *bus) -{ - struct spiderpci_iowa_private *priv; - - priv = bus->private; - in_be32(priv->regs + SPIDER_PCI_DUMMY_READ); - iosync(); -} - -#define SPIDER_PCI_MMIO_READ(name, ret) \ -static ret spiderpci_##name(const PCI_IO_ADDR addr) \ -{ \ - ret val = __do_##name(addr); \ - spiderpci_io_flush(iowa_mem_find_bus(addr)); \ - return val; \ -} - -#define SPIDER_PCI_MMIO_READ_STR(name) \ -static void spiderpci_##name(const PCI_IO_ADDR addr, void *buf, \ - unsigned long count) \ -{ \ - __do_##name(addr, buf, count); \ - spiderpci_io_flush(iowa_mem_find_bus(addr)); \ -} - -SPIDER_PCI_MMIO_READ(readb, u8) -SPIDER_PCI_MMIO_READ(readw, u16) -SPIDER_PCI_MMIO_READ(readl, u32) -SPIDER_PCI_MMIO_READ(readq, u64) -SPIDER_PCI_MMIO_READ(readw_be, u16) -SPIDER_PCI_MMIO_READ(readl_be, u32) -SPIDER_PCI_MMIO_READ(readq_be, u64) -SPIDER_PCI_MMIO_READ_STR(readsb) -SPIDER_PCI_MMIO_READ_STR(readsw) -SPIDER_PCI_MMIO_READ_STR(readsl) - -static void spiderpci_memcpy_fromio(void *dest, const PCI_IO_ADDR src, - unsigned long n) -{ - __do_memcpy_fromio(dest, src, n); - spiderpci_io_flush(iowa_mem_find_bus(src)); -} - -static int __init spiderpci_pci_setup_chip(struct pci_controller *phb, - void __iomem *regs) -{ - void *dummy_page_va; - dma_addr_t dummy_page_da; - -#ifdef SPIDER_PCI_DISABLE_PREFETCH - u32 val = in_be32(regs + SPIDER_PCI_VCI_CNTL_STAT); - pr_debug("SPIDER_IOWA:PVCI_Control_Status was 0x%08x\n", val); - out_be32(regs + SPIDER_PCI_VCI_CNTL_STAT, val | 0x8); -#endif /* SPIDER_PCI_DISABLE_PREFETCH */ - - /* setup dummy read */ - /* - * On CellBlade, we can't know which XDR memory is used by - * kmalloc() to allocate dummy_page_va. - * In order to improve performance, the XDR used to allocate - * dummy_page_va should be the one nearest to the spider-pci. - * We have to select the CBE nearest to the spider-pci to - * allocate memory from the best XDR, but I don't know how - * to do that. - * - * Celleb does not have this problem, because it has only one XDR.
- */ - dummy_page_va = kmalloc(PAGE_SIZE, GFP_KERNEL); - if (!dummy_page_va) { - pr_err("SPIDERPCI-IOWA:Alloc dummy_page_va failed.\n"); - return -1; - } - - dummy_page_da = dma_map_single(phb->parent, dummy_page_va, - PAGE_SIZE, DMA_FROM_DEVICE); - if (dma_mapping_error(phb->parent, dummy_page_da)) { - pr_err("SPIDER-IOWA:Map dummy page failed.\n"); - kfree(dummy_page_va); - return -1; - } - - out_be32(regs + SPIDER_PCI_DUMMY_READ_BASE, dummy_page_da); - - return 0; -} - -int __init spiderpci_iowa_init(struct iowa_bus *bus, void *data) -{ - void __iomem *regs = NULL; - struct spiderpci_iowa_private *priv; - struct device_node *np = bus->phb->dn; - struct resource r; - unsigned long offset = (unsigned long)data; - - pr_debug("SPIDERPCI-IOWA:Bus initialize for spider(%pOF)\n", - np); - - priv = kzalloc(sizeof(*priv), GFP_KERNEL); - if (!priv) { - pr_err("SPIDERPCI-IOWA:" - "Can't allocate struct spiderpci_iowa_private"); - return -1; - } - - if (of_address_to_resource(np, 0, &r)) { - pr_err("SPIDERPCI-IOWA:Can't get resource.\n"); - goto error; - } - - regs = ioremap(r.start + offset, SPIDER_PCI_REG_SIZE); - if (!regs) { - pr_err("SPIDERPCI-IOWA:ioremap failed.\n"); - goto error; - } - priv->regs = regs; - bus->private = priv; - - if (spiderpci_pci_setup_chip(bus->phb, regs)) - goto error; - - return 0; - -error: - kfree(priv); - bus->private = NULL; - - if (regs) - iounmap(regs); - - return -1; -} - -struct ppc_pci_io spiderpci_ops = { - .readb = spiderpci_readb, - .readw = spiderpci_readw, - .readl = spiderpci_readl, - .readq = spiderpci_readq, - .readw_be = spiderpci_readw_be, - .readl_be = spiderpci_readl_be, - .readq_be = spiderpci_readq_be, - .readsb = spiderpci_readsb, - .readsw = spiderpci_readsw, - .readsl = spiderpci_readsl, - .memcpy_fromio = spiderpci_memcpy_fromio, -}; - diff --git a/arch/powerpc/platforms/cell/spider-pic.c b/arch/powerpc/platforms/cell/spider-pic.c deleted file mode 100644 index 11df737c8c6a..000000000000 --- a/arch/powerpc/platforms/cell/spider-pic.c +++ /dev/null @@ -1,344 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * External Interrupt Controller on Spider South Bridge - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 - * - * Author: Arnd Bergmann <arndb@de.ibm.com> - */ - -#include <linux/interrupt.h> -#include <linux/irq.h> -#include <linux/ioport.h> -#include <linux/of_address.h> -#include <linux/of_irq.h> -#include <linux/pgtable.h> - -#include <asm/io.h> - -#include "interrupt.h" - -/* register layout taken from Spider spec, table 7.4-4 */ -enum { - TIR_DEN = 0x004, /* Detection Enable Register */ - TIR_MSK = 0x084, /* Mask Level Register */ - TIR_EDC = 0x0c0, /* Edge Detection Clear Register */ - TIR_PNDA = 0x100, /* Pending Register A */ - TIR_PNDB = 0x104, /* Pending Register B */ - TIR_CS = 0x144, /* Current Status Register */ - TIR_LCSA = 0x150, /* Level Current Status Register A */ - TIR_LCSB = 0x154, /* Level Current Status Register B */ - TIR_LCSC = 0x158, /* Level Current Status Register C */ - TIR_LCSD = 0x15c, /* Level Current Status Register D */ - TIR_CFGA = 0x200, /* Setting Register A0 */ - TIR_CFGB = 0x204, /* Setting Register B0 */ - /* 0x208 ...
0x3ff Setting Register An/Bn */ - TIR_PPNDA = 0x400, /* Packet Pending Register A */ - TIR_PPNDB = 0x404, /* Packet Pending Register B */ - TIR_PIERA = 0x408, /* Packet Output Error Register A */ - TIR_PIERB = 0x40c, /* Packet Output Error Register B */ - TIR_PIEN = 0x444, /* Packet Output Enable Register */ - TIR_PIPND = 0x454, /* Packet Output Pending Register */ - TIRDID = 0x484, /* Spider Device ID Register */ - REISTIM = 0x500, /* Reissue Command Timeout Time Setting */ - REISTIMEN = 0x504, /* Reissue Command Timeout Setting */ - REISWAITEN = 0x508, /* Reissue Wait Control*/ -}; - -#define SPIDER_CHIP_COUNT 4 -#define SPIDER_SRC_COUNT 64 -#define SPIDER_IRQ_INVALID 63 - -struct spider_pic { - struct irq_domain *host; - void __iomem *regs; - unsigned int node_id; -}; -static struct spider_pic spider_pics[SPIDER_CHIP_COUNT]; - -static struct spider_pic *spider_irq_data_to_pic(struct irq_data *d) -{ - return irq_data_get_irq_chip_data(d); -} - -static void __iomem *spider_get_irq_config(struct spider_pic *pic, - unsigned int src) -{ - return pic->regs + TIR_CFGA + 8 * src; -} - -static void spider_unmask_irq(struct irq_data *d) -{ - struct spider_pic *pic = spider_irq_data_to_pic(d); - void __iomem *cfg = spider_get_irq_config(pic, irqd_to_hwirq(d)); - - out_be32(cfg, in_be32(cfg) | 0x30000000u); -} - -static void spider_mask_irq(struct irq_data *d) -{ - struct spider_pic *pic = spider_irq_data_to_pic(d); - void __iomem *cfg = spider_get_irq_config(pic, irqd_to_hwirq(d)); - - out_be32(cfg, in_be32(cfg) & ~0x30000000u); -} - -static void spider_ack_irq(struct irq_data *d) -{ - struct spider_pic *pic = spider_irq_data_to_pic(d); - unsigned int src = irqd_to_hwirq(d); - - /* Reset edge detection logic if necessary - */ - if (irqd_is_level_type(d)) - return; - - /* Only interrupts 47 to 50 can be set to edge */ - if (src < 47 || src > 50) - return; - - /* Perform the clear of the edge logic */ - out_be32(pic->regs + TIR_EDC, 0x100 | (src & 0xf)); -} - -static int spider_set_irq_type(struct irq_data *d, unsigned int type) -{ - unsigned int sense = type & IRQ_TYPE_SENSE_MASK; - struct spider_pic *pic = spider_irq_data_to_pic(d); - unsigned int hw = irqd_to_hwirq(d); - void __iomem *cfg = spider_get_irq_config(pic, hw); - u32 old_mask; - u32 ic; - - /* Note that only level high is supported for most interrupts */ - if (sense != IRQ_TYPE_NONE && sense != IRQ_TYPE_LEVEL_HIGH && - (hw < 47 || hw > 50)) - return -EINVAL; - - /* Decode sense type */ - switch(sense) { - case IRQ_TYPE_EDGE_RISING: - ic = 0x3; - break; - case IRQ_TYPE_EDGE_FALLING: - ic = 0x2; - break; - case IRQ_TYPE_LEVEL_LOW: - ic = 0x0; - break; - case IRQ_TYPE_LEVEL_HIGH: - case IRQ_TYPE_NONE: - ic = 0x1; - break; - default: - return -EINVAL; - } - - /* Configure the source. One gross hack that was there before and - * that I've kept around is the priority to the BE which I set to - * be the same as the interrupt source number. I don't know whether - * that's supposed to make any kind of sense however, we'll have to - * decide that, but for now, I'm not changing the behaviour. 
- */ - old_mask = in_be32(cfg) & 0x30000000u; - out_be32(cfg, old_mask | (ic << 24) | (0x7 << 16) | - (pic->node_id << 4) | 0xe); - out_be32(cfg + 4, (0x2 << 16) | (hw & 0xff)); - - return 0; -} - -static struct irq_chip spider_pic = { - .name = "SPIDER", - .irq_unmask = spider_unmask_irq, - .irq_mask = spider_mask_irq, - .irq_ack = spider_ack_irq, - .irq_set_type = spider_set_irq_type, -}; - -static int spider_host_map(struct irq_domain *h, unsigned int virq, - irq_hw_number_t hw) -{ - irq_set_chip_data(virq, h->host_data); - irq_set_chip_and_handler(virq, &spider_pic, handle_level_irq); - - /* Set default irq type */ - irq_set_irq_type(virq, IRQ_TYPE_NONE); - - return 0; -} - -static int spider_host_xlate(struct irq_domain *h, struct device_node *ct, - const u32 *intspec, unsigned int intsize, - irq_hw_number_t *out_hwirq, unsigned int *out_flags) - -{ - /* Spider interrupts have 2 cells, first is the interrupt source, - * second, well, I don't know for sure yet ... We mask the top bits - * because old device-trees encode a node number in there - */ - *out_hwirq = intspec[0] & 0x3f; - *out_flags = IRQ_TYPE_LEVEL_HIGH; - return 0; -} - -static const struct irq_domain_ops spider_host_ops = { - .map = spider_host_map, - .xlate = spider_host_xlate, -}; - -static void spider_irq_cascade(struct irq_desc *desc) -{ - struct irq_chip *chip = irq_desc_get_chip(desc); - struct spider_pic *pic = irq_desc_get_handler_data(desc); - unsigned int cs; - - cs = in_be32(pic->regs + TIR_CS) >> 24; - if (cs != SPIDER_IRQ_INVALID) - generic_handle_domain_irq(pic->host, cs); - - chip->irq_eoi(&desc->irq_data); -} - -/* For hooking up the cascade we have a problem. Our device-tree is - * crap and we don't know on which BE iic interrupt we are hooked on at - * least not the "standard" way. We can reconstitute it based on two - * informations though: which BE node we are connected to and whether - * we are connected to IOIF0 or IOIF1. Right now, we really only care - * about the IBM cell blade and we know that its firmware gives us an - * interrupt-map property which is pretty strange. - */ -static unsigned int __init spider_find_cascade_and_node(struct spider_pic *pic) -{ - unsigned int virq; - const u32 *imap, *tmp; - int imaplen, intsize, unit; - struct device_node *iic; - struct device_node *of_node; - - of_node = irq_domain_get_of_node(pic->host); - - /* First, we check whether we have a real "interrupts" in the device - * tree in case the device-tree is ever fixed - */ - virq = irq_of_parse_and_map(of_node, 0); - if (virq) - return virq; - - /* Now do the horrible hacks */ - tmp = of_get_property(of_node, "#interrupt-cells", NULL); - if (tmp == NULL) - return 0; - intsize = *tmp; - imap = of_get_property(of_node, "interrupt-map", &imaplen); - if (imap == NULL || imaplen < (intsize + 1)) - return 0; - iic = of_find_node_by_phandle(imap[intsize]); - if (iic == NULL) - return 0; - imap += intsize + 1; - tmp = of_get_property(iic, "#interrupt-cells", NULL); - if (tmp == NULL) { - of_node_put(iic); - return 0; - } - intsize = *tmp; - /* Assume unit is last entry of interrupt specifier */ - unit = imap[intsize - 1]; - /* Ok, we have a unit, now let's try to get the node */ - tmp = of_get_property(iic, "ibm,interrupt-server-ranges", NULL); - if (tmp == NULL) { - of_node_put(iic); - return 0; - } - /* ugly as hell but works for now */ - pic->node_id = (*tmp) >> 1; - of_node_put(iic); - - /* Ok, now let's get cracking. 
You may ask me why I just didn't match - * the iic host from the iic OF node, but that way I'm still compatible - * with really really old old firmwares for which we don't have a node - */ - /* Manufacture an IIC interrupt number of class 2 */ - virq = irq_create_mapping(NULL, - (pic->node_id << IIC_IRQ_NODE_SHIFT) | - (2 << IIC_IRQ_CLASS_SHIFT) | - unit); - if (!virq) - printk(KERN_ERR "spider_pic: failed to map cascade !"); - return virq; -} - - -static void __init spider_init_one(struct device_node *of_node, int chip, - unsigned long addr) -{ - struct spider_pic *pic = &spider_pics[chip]; - int i, virq; - - /* Map registers */ - pic->regs = ioremap(addr, 0x1000); - if (pic->regs == NULL) - panic("spider_pic: can't map registers !"); - - /* Allocate a host */ - pic->host = irq_domain_add_linear(of_node, SPIDER_SRC_COUNT, - &spider_host_ops, pic); - if (pic->host == NULL) - panic("spider_pic: can't allocate irq host !"); - - /* Go through all sources and disable them */ - for (i = 0; i < SPIDER_SRC_COUNT; i++) { - void __iomem *cfg = pic->regs + TIR_CFGA + 8 * i; - out_be32(cfg, in_be32(cfg) & ~0x30000000u); - } - - /* do not mask any interrupts because of level */ - out_be32(pic->regs + TIR_MSK, 0x0); - - /* enable interrupt packets to be output */ - out_be32(pic->regs + TIR_PIEN, in_be32(pic->regs + TIR_PIEN) | 0x1); - - /* Hook up the cascade interrupt to the iic and nodeid */ - virq = spider_find_cascade_and_node(pic); - if (!virq) - return; - irq_set_handler_data(virq, pic); - irq_set_chained_handler(virq, spider_irq_cascade); - - printk(KERN_INFO "spider_pic: node %d, addr: 0x%lx %pOF\n", - pic->node_id, addr, of_node); - - /* Enable the interrupt detection enable bit. Do this last! */ - out_be32(pic->regs + TIR_DEN, in_be32(pic->regs + TIR_DEN) | 0x1); -} - -void __init spider_init_IRQ(void) -{ - struct resource r; - struct device_node *dn; - int chip = 0; - - /* XXX node numbers are totally bogus. We _hope_ we get the device - * nodes in the right order here but that's definitely not guaranteed, - * we need to get the node from the device tree instead. 
- * There is currently no proper property for it (but our whole - * device-tree is bogus anyway) so all we can do is pray or maybe test - * the address and deduce the node-id - */ - for_each_node_by_name(dn, "interrupt-controller") { - if (of_device_is_compatible(dn, "CBEA,platform-spider-pic")) { - if (of_address_to_resource(dn, 0, &r)) { - printk(KERN_WARNING "spider-pic: Failed\n"); - continue; - } - } else if (of_device_is_compatible(dn, "sti,platform-spider-pic") - && (chip < 2)) { - static long hard_coded_pics[] = - { 0x24000008000ul, 0x34000008000ul}; - r.start = hard_coded_pics[chip]; - } else - continue; - spider_init_one(dn, chip++, r.start); - } -} diff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c index dea6f0f25897..2c07387201d0 100644 --- a/arch/powerpc/platforms/cell/spu_base.c +++ b/arch/powerpc/platforms/cell/spu_base.c @@ -23,7 +23,6 @@ #include <asm/spu.h> #include <asm/spu_priv1.h> #include <asm/spu_csa.h> -#include <asm/xmon.h> #include <asm/kexec.h> const struct spu_management_ops *spu_management_ops; @@ -772,7 +771,6 @@ static int __init init_spu_base(void) fb_append_extra_logo(&logo_spe_clut224, ret); mutex_lock(&spu_full_list_mutex); - xmon_register_spus(&spu_full_list); crash_register_spus(&spu_full_list); mutex_unlock(&spu_full_list_mutex); spu_add_dev_attr(&dev_attr_stat); diff --git a/arch/powerpc/platforms/cell/spu_manage.c b/arch/powerpc/platforms/cell/spu_manage.c deleted file mode 100644 index f464a1f2e568..000000000000 --- a/arch/powerpc/platforms/cell/spu_manage.c +++ /dev/null @@ -1,530 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * spu management operations for of based platforms - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 - * Copyright 2006 Sony Corp. 
- * (C) Copyright 2007 TOSHIBA CORPORATION - */ - -#include <linux/interrupt.h> -#include <linux/list.h> -#include <linux/export.h> -#include <linux/ptrace.h> -#include <linux/wait.h> -#include <linux/mm.h> -#include <linux/io.h> -#include <linux/mutex.h> -#include <linux/device.h> -#include <linux/of_address.h> -#include <linux/of_irq.h> - -#include <asm/spu.h> -#include <asm/spu_priv1.h> -#include <asm/firmware.h> - -#include "spufs/spufs.h" -#include "interrupt.h" -#include "spu_priv1_mmio.h" - -struct device_node *spu_devnode(struct spu *spu) -{ - return spu->devnode; -} - -EXPORT_SYMBOL_GPL(spu_devnode); - -static u64 __init find_spu_unit_number(struct device_node *spe) -{ - const unsigned int *prop; - int proplen; - - /* new device trees should provide the physical-id attribute */ - prop = of_get_property(spe, "physical-id", &proplen); - if (proplen == 4) - return (u64)*prop; - - /* celleb device tree provides the unit-id */ - prop = of_get_property(spe, "unit-id", &proplen); - if (proplen == 4) - return (u64)*prop; - - /* legacy device trees provide the id in the reg attribute */ - prop = of_get_property(spe, "reg", &proplen); - if (proplen == 4) - return (u64)*prop; - - return 0; -} - -static void spu_unmap(struct spu *spu) -{ - if (!firmware_has_feature(FW_FEATURE_LPAR)) - iounmap(spu->priv1); - iounmap(spu->priv2); - iounmap(spu->problem); - iounmap((__force u8 __iomem *)spu->local_store); -} - -static int __init spu_map_interrupts_old(struct spu *spu, - struct device_node *np) -{ - unsigned int isrc; - const u32 *tmp; - int nid; - - /* Get the interrupt source unit from the device-tree */ - tmp = of_get_property(np, "isrc", NULL); - if (!tmp) - return -ENODEV; - isrc = tmp[0]; - - tmp = of_get_property(np->parent->parent, "node-id", NULL); - if (!tmp) { - printk(KERN_WARNING "%s: can't find node-id\n", __func__); - nid = spu->node; - } else - nid = tmp[0]; - - /* Add the node number */ - isrc |= nid << IIC_IRQ_NODE_SHIFT; - - /* Now map interrupts of all 3 classes */ - spu->irqs[0] = irq_create_mapping(NULL, IIC_IRQ_CLASS_0 | isrc); - spu->irqs[1] = irq_create_mapping(NULL, IIC_IRQ_CLASS_1 | isrc); - spu->irqs[2] = irq_create_mapping(NULL, IIC_IRQ_CLASS_2 | isrc); - - /* Right now, we only fail if class 2 failed */ - if (!spu->irqs[2]) - return -EINVAL; - - return 0; -} - -static void __iomem * __init spu_map_prop_old(struct spu *spu, - struct device_node *n, - const char *name) -{ - const struct address_prop { - unsigned long address; - unsigned int len; - } __attribute__((packed)) *prop; - int proplen; - - prop = of_get_property(n, name, &proplen); - if (prop == NULL || proplen != sizeof (struct address_prop)) - return NULL; - - return ioremap(prop->address, prop->len); -} - -static int __init spu_map_device_old(struct spu *spu) -{ - struct device_node *node = spu->devnode; - const char *prop; - int ret; - - ret = -ENODEV; - spu->name = of_get_property(node, "name", NULL); - if (!spu->name) - goto out; - - prop = of_get_property(node, "local-store", NULL); - if (!prop) - goto out; - spu->local_store_phys = *(unsigned long *)prop; - - /* we use local store as ram, not io memory */ - spu->local_store = (void __force *) - spu_map_prop_old(spu, node, "local-store"); - if (!spu->local_store) - goto out; - - prop = of_get_property(node, "problem", NULL); - if (!prop) - goto out_unmap; - spu->problem_phys = *(unsigned long *)prop; - - spu->problem = spu_map_prop_old(spu, node, "problem"); - if (!spu->problem) - goto out_unmap; - - spu->priv2 = spu_map_prop_old(spu, node, "priv2"); - 
if (!spu->priv2) - goto out_unmap; - - if (!firmware_has_feature(FW_FEATURE_LPAR)) { - spu->priv1 = spu_map_prop_old(spu, node, "priv1"); - if (!spu->priv1) - goto out_unmap; - } - - ret = 0; - goto out; - -out_unmap: - spu_unmap(spu); -out: - return ret; -} - -static int __init spu_map_interrupts(struct spu *spu, struct device_node *np) -{ - int i; - - for (i=0; i < 3; i++) { - spu->irqs[i] = irq_of_parse_and_map(np, i); - if (!spu->irqs[i]) - goto err; - } - return 0; - -err: - pr_debug("failed to map irq %x for spu %s\n", i, spu->name); - for (; i >= 0; i--) { - if (spu->irqs[i]) - irq_dispose_mapping(spu->irqs[i]); - } - return -EINVAL; -} - -static int __init spu_map_resource(struct spu *spu, int nr, - void __iomem** virt, unsigned long *phys) -{ - struct device_node *np = spu->devnode; - struct resource resource = { }; - unsigned long len; - int ret; - - ret = of_address_to_resource(np, nr, &resource); - if (ret) - return ret; - if (phys) - *phys = resource.start; - len = resource_size(&resource); - *virt = ioremap(resource.start, len); - if (!*virt) - return -EINVAL; - return 0; -} - -static int __init spu_map_device(struct spu *spu) -{ - struct device_node *np = spu->devnode; - int ret = -ENODEV; - - spu->name = of_get_property(np, "name", NULL); - if (!spu->name) - goto out; - - ret = spu_map_resource(spu, 0, (void __iomem**)&spu->local_store, - &spu->local_store_phys); - if (ret) { - pr_debug("spu_new: failed to map %pOF resource 0\n", - np); - goto out; - } - ret = spu_map_resource(spu, 1, (void __iomem**)&spu->problem, - &spu->problem_phys); - if (ret) { - pr_debug("spu_new: failed to map %pOF resource 1\n", - np); - goto out_unmap; - } - ret = spu_map_resource(spu, 2, (void __iomem**)&spu->priv2, NULL); - if (ret) { - pr_debug("spu_new: failed to map %pOF resource 2\n", - np); - goto out_unmap; - } - if (!firmware_has_feature(FW_FEATURE_LPAR)) - ret = spu_map_resource(spu, 3, - (void __iomem**)&spu->priv1, NULL); - if (ret) { - pr_debug("spu_new: failed to map %pOF resource 3\n", - np); - goto out_unmap; - } - pr_debug("spu_new: %pOF maps:\n", np); - pr_debug(" local store : 0x%016lx -> 0x%p\n", - spu->local_store_phys, spu->local_store); - pr_debug(" problem state : 0x%016lx -> 0x%p\n", - spu->problem_phys, spu->problem); - pr_debug(" priv2 : 0x%p\n", spu->priv2); - pr_debug(" priv1 : 0x%p\n", spu->priv1); - - return 0; - -out_unmap: - spu_unmap(spu); -out: - pr_debug("failed to map spe %s: %d\n", spu->name, ret); - return ret; -} - -static int __init of_enumerate_spus(int (*fn)(void *data)) -{ - int ret; - struct device_node *node; - unsigned int n = 0; - - ret = -ENODEV; - for_each_node_by_type(node, "spe") { - ret = fn(node); - if (ret) { - printk(KERN_WARNING "%s: Error initializing %pOFn\n", - __func__, node); - of_node_put(node); - break; - } - n++; - } - return ret ? 
ret : n; -} - -static int __init of_create_spu(struct spu *spu, void *data) -{ - int ret; - struct device_node *spe = (struct device_node *)data; - static int legacy_map = 0, legacy_irq = 0; - - spu->devnode = of_node_get(spe); - spu->spe_id = find_spu_unit_number(spe); - - spu->node = of_node_to_nid(spe); - if (spu->node >= MAX_NUMNODES) { - printk(KERN_WARNING "SPE %pOF on node %d ignored," - " node number too big\n", spe, spu->node); - printk(KERN_WARNING "Check if CONFIG_NUMA is enabled.\n"); - ret = -ENODEV; - goto out; - } - - ret = spu_map_device(spu); - if (ret) { - if (!legacy_map) { - legacy_map = 1; - printk(KERN_WARNING "%s: Legacy device tree found, " - "trying to map old style\n", __func__); - } - ret = spu_map_device_old(spu); - if (ret) { - printk(KERN_ERR "Unable to map %s\n", - spu->name); - goto out; - } - } - - ret = spu_map_interrupts(spu, spe); - if (ret) { - if (!legacy_irq) { - legacy_irq = 1; - printk(KERN_WARNING "%s: Legacy device tree found, " - "trying old style irq\n", __func__); - } - ret = spu_map_interrupts_old(spu, spe); - if (ret) { - printk(KERN_ERR "%s: could not map interrupts\n", - spu->name); - goto out_unmap; - } - } - - pr_debug("Using SPE %s %p %p %p %p %d\n", spu->name, - spu->local_store, spu->problem, spu->priv1, - spu->priv2, spu->number); - goto out; - -out_unmap: - spu_unmap(spu); -out: - return ret; -} - -static int of_destroy_spu(struct spu *spu) -{ - spu_unmap(spu); - of_node_put(spu->devnode); - return 0; -} - -static void enable_spu_by_master_run(struct spu_context *ctx) -{ - ctx->ops->master_start(ctx); -} - -static void disable_spu_by_master_run(struct spu_context *ctx) -{ - ctx->ops->master_stop(ctx); -} - -/* Hardcoded affinity idxs for qs20 */ -#define QS20_SPES_PER_BE 8 -static int qs20_reg_idxs[QS20_SPES_PER_BE] = { 0, 2, 4, 6, 7, 5, 3, 1 }; -static int qs20_reg_memory[QS20_SPES_PER_BE] = { 1, 1, 0, 0, 0, 0, 0, 0 }; - -static struct spu *__init spu_lookup_reg(int node, u32 reg) -{ - struct spu *spu; - const u32 *spu_reg; - - list_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) { - spu_reg = of_get_property(spu_devnode(spu), "reg", NULL); - if (*spu_reg == reg) - return spu; - } - return NULL; -} - -static void __init init_affinity_qs20_harcoded(void) -{ - int node, i; - struct spu *last_spu, *spu; - u32 reg; - - for (node = 0; node < MAX_NUMNODES; node++) { - last_spu = NULL; - for (i = 0; i < QS20_SPES_PER_BE; i++) { - reg = qs20_reg_idxs[i]; - spu = spu_lookup_reg(node, reg); - if (!spu) - continue; - spu->has_mem_affinity = qs20_reg_memory[reg]; - if (last_spu) - list_add_tail(&spu->aff_list, - &last_spu->aff_list); - last_spu = spu; - } - } -} - -static int __init of_has_vicinity(void) -{ - struct device_node *dn; - - for_each_node_by_type(dn, "spe") { - if (of_property_present(dn, "vicinity")) { - of_node_put(dn); - return 1; - } - } - return 0; -} - -static struct spu *__init devnode_spu(int cbe, struct device_node *dn) -{ - struct spu *spu; - - list_for_each_entry(spu, &cbe_spu_info[cbe].spus, cbe_list) - if (spu_devnode(spu) == dn) - return spu; - return NULL; -} - -static struct spu * __init -neighbour_spu(int cbe, struct device_node *target, struct device_node *avoid) -{ - struct spu *spu; - struct device_node *spu_dn; - const phandle *vic_handles; - int lenp, i; - - list_for_each_entry(spu, &cbe_spu_info[cbe].spus, cbe_list) { - spu_dn = spu_devnode(spu); - if (spu_dn == avoid) - continue; - vic_handles = of_get_property(spu_dn, "vicinity", &lenp); - for (i=0; i < (lenp / sizeof(phandle)); i++) { - if 
(vic_handles[i] == target->phandle) - return spu; - } - } - return NULL; -} - -static void __init init_affinity_node(int cbe) -{ - struct spu *spu, *last_spu; - struct device_node *vic_dn, *last_spu_dn; - phandle avoid_ph; - const phandle *vic_handles; - int lenp, i, added; - - last_spu = list_first_entry(&cbe_spu_info[cbe].spus, struct spu, - cbe_list); - avoid_ph = 0; - for (added = 1; added < cbe_spu_info[cbe].n_spus; added++) { - last_spu_dn = spu_devnode(last_spu); - vic_handles = of_get_property(last_spu_dn, "vicinity", &lenp); - - /* - * Walk through each phandle in vicinity property of the spu - * (typically two vicinity phandles per spe node) - */ - for (i = 0; i < (lenp / sizeof(phandle)); i++) { - if (vic_handles[i] == avoid_ph) - continue; - - vic_dn = of_find_node_by_phandle(vic_handles[i]); - if (!vic_dn) - continue; - - if (of_node_name_eq(vic_dn, "spe") ) { - spu = devnode_spu(cbe, vic_dn); - avoid_ph = last_spu_dn->phandle; - } else { - /* - * "mic-tm" and "bif0" nodes do not have - * vicinity property. So we need to find the - * spe which has vic_dn as neighbour, but - * skipping the one we came from (last_spu_dn) - */ - spu = neighbour_spu(cbe, vic_dn, last_spu_dn); - if (!spu) - continue; - if (of_node_name_eq(vic_dn, "mic-tm")) { - last_spu->has_mem_affinity = 1; - spu->has_mem_affinity = 1; - } - avoid_ph = vic_dn->phandle; - } - - of_node_put(vic_dn); - - list_add_tail(&spu->aff_list, &last_spu->aff_list); - last_spu = spu; - break; - } - } -} - -static void __init init_affinity_fw(void) -{ - int cbe; - - for (cbe = 0; cbe < MAX_NUMNODES; cbe++) - init_affinity_node(cbe); -} - -static int __init init_affinity(void) -{ - if (of_has_vicinity()) { - init_affinity_fw(); - } else { - if (of_machine_is_compatible("IBM,CPBW-1.0")) - init_affinity_qs20_harcoded(); - else - printk("No affinity configuration found\n"); - } - - return 0; -} - -const struct spu_management_ops spu_management_of_ops = { - .enumerate_spus = of_enumerate_spus, - .create_spu = of_create_spu, - .destroy_spu = of_destroy_spu, - .enable_spu = enable_spu_by_master_run, - .disable_spu = disable_spu_by_master_run, - .init_affinity = init_affinity, -}; diff --git a/arch/powerpc/platforms/cell/spu_priv1_mmio.c b/arch/powerpc/platforms/cell/spu_priv1_mmio.c deleted file mode 100644 index d150e3987304..000000000000 --- a/arch/powerpc/platforms/cell/spu_priv1_mmio.c +++ /dev/null @@ -1,167 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * spu hypervisor abstraction for direct hardware access. - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 - * Copyright 2006 Sony Corp. 
- */ - -#include <linux/interrupt.h> -#include <linux/list.h> -#include <linux/ptrace.h> -#include <linux/wait.h> -#include <linux/mm.h> -#include <linux/io.h> -#include <linux/mutex.h> -#include <linux/device.h> -#include <linux/sched.h> - -#include <asm/spu.h> -#include <asm/spu_priv1.h> -#include <asm/firmware.h> - -#include "interrupt.h" -#include "spu_priv1_mmio.h" - -static void int_mask_and(struct spu *spu, int class, u64 mask) -{ - u64 old_mask; - - old_mask = in_be64(&spu->priv1->int_mask_RW[class]); - out_be64(&spu->priv1->int_mask_RW[class], old_mask & mask); -} - -static void int_mask_or(struct spu *spu, int class, u64 mask) -{ - u64 old_mask; - - old_mask = in_be64(&spu->priv1->int_mask_RW[class]); - out_be64(&spu->priv1->int_mask_RW[class], old_mask | mask); -} - -static void int_mask_set(struct spu *spu, int class, u64 mask) -{ - out_be64(&spu->priv1->int_mask_RW[class], mask); -} - -static u64 int_mask_get(struct spu *spu, int class) -{ - return in_be64(&spu->priv1->int_mask_RW[class]); -} - -static void int_stat_clear(struct spu *spu, int class, u64 stat) -{ - out_be64(&spu->priv1->int_stat_RW[class], stat); -} - -static u64 int_stat_get(struct spu *spu, int class) -{ - return in_be64(&spu->priv1->int_stat_RW[class]); -} - -static void cpu_affinity_set(struct spu *spu, int cpu) -{ - u64 target; - u64 route; - - if (nr_cpus_node(spu->node)) { - const struct cpumask *spumask = cpumask_of_node(spu->node), - *cpumask = cpumask_of_node(cpu_to_node(cpu)); - - if (!cpumask_intersects(spumask, cpumask)) - return; - } - - target = iic_get_target_id(cpu); - route = target << 48 | target << 32 | target << 16; - out_be64(&spu->priv1->int_route_RW, route); -} - -static u64 mfc_dar_get(struct spu *spu) -{ - return in_be64(&spu->priv1->mfc_dar_RW); -} - -static u64 mfc_dsisr_get(struct spu *spu) -{ - return in_be64(&spu->priv1->mfc_dsisr_RW); -} - -static void mfc_dsisr_set(struct spu *spu, u64 dsisr) -{ - out_be64(&spu->priv1->mfc_dsisr_RW, dsisr); -} - -static void mfc_sdr_setup(struct spu *spu) -{ - out_be64(&spu->priv1->mfc_sdr_RW, mfspr(SPRN_SDR1)); -} - -static void mfc_sr1_set(struct spu *spu, u64 sr1) -{ - out_be64(&spu->priv1->mfc_sr1_RW, sr1); -} - -static u64 mfc_sr1_get(struct spu *spu) -{ - return in_be64(&spu->priv1->mfc_sr1_RW); -} - -static void mfc_tclass_id_set(struct spu *spu, u64 tclass_id) -{ - out_be64(&spu->priv1->mfc_tclass_id_RW, tclass_id); -} - -static u64 mfc_tclass_id_get(struct spu *spu) -{ - return in_be64(&spu->priv1->mfc_tclass_id_RW); -} - -static void tlb_invalidate(struct spu *spu) -{ - out_be64(&spu->priv1->tlb_invalidate_entry_W, 0ul); -} - -static void resource_allocation_groupID_set(struct spu *spu, u64 id) -{ - out_be64(&spu->priv1->resource_allocation_groupID_RW, id); -} - -static u64 resource_allocation_groupID_get(struct spu *spu) -{ - return in_be64(&spu->priv1->resource_allocation_groupID_RW); -} - -static void resource_allocation_enable_set(struct spu *spu, u64 enable) -{ - out_be64(&spu->priv1->resource_allocation_enable_RW, enable); -} - -static u64 resource_allocation_enable_get(struct spu *spu) -{ - return in_be64(&spu->priv1->resource_allocation_enable_RW); -} - -const struct spu_priv1_ops spu_priv1_mmio_ops = -{ - .int_mask_and = int_mask_and, - .int_mask_or = int_mask_or, - .int_mask_set = int_mask_set, - .int_mask_get = int_mask_get, - .int_stat_clear = int_stat_clear, - .int_stat_get = int_stat_get, - .cpu_affinity_set = cpu_affinity_set, - .mfc_dar_get = mfc_dar_get, - .mfc_dsisr_get = mfc_dsisr_get, - .mfc_dsisr_set = 
mfc_dsisr_set, - .mfc_sdr_setup = mfc_sdr_setup, - .mfc_sr1_set = mfc_sr1_set, - .mfc_sr1_get = mfc_sr1_get, - .mfc_tclass_id_set = mfc_tclass_id_set, - .mfc_tclass_id_get = mfc_tclass_id_get, - .tlb_invalidate = tlb_invalidate, - .resource_allocation_groupID_set = resource_allocation_groupID_set, - .resource_allocation_groupID_get = resource_allocation_groupID_get, - .resource_allocation_enable_set = resource_allocation_enable_set, - .resource_allocation_enable_get = resource_allocation_enable_get, -}; diff --git a/arch/powerpc/platforms/cell/spu_priv1_mmio.h b/arch/powerpc/platforms/cell/spu_priv1_mmio.h deleted file mode 100644 index 04f0db339dc1..000000000000 --- a/arch/powerpc/platforms/cell/spu_priv1_mmio.h +++ /dev/null @@ -1,14 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* - * spu hypervisor abstraction for direct hardware access. - * - * Copyright (C) 2006 Sony Computer Entertainment Inc. - * Copyright 2006 Sony Corp. - */ - -#ifndef SPU_PRIV1_MMIO_H -#define SPU_PRIV1_MMIO_H - -struct device_node *spu_devnode(struct spu *spu); - -#endif /* SPU_PRIV1_MMIO_H */ diff --git a/arch/powerpc/platforms/microwatt/Kconfig b/arch/powerpc/platforms/microwatt/Kconfig index 6af443a1db99..cb2aff635bb0 100644 --- a/arch/powerpc/platforms/microwatt/Kconfig +++ b/arch/powerpc/platforms/microwatt/Kconfig @@ -1,11 +1,12 @@ # SPDX-License-Identifier: GPL-2.0 config PPC_MICROWATT - depends on PPC_BOOK3S_64 && !SMP + depends on PPC_BOOK3S_64 bool "Microwatt SoC platform" select PPC_XICS select PPC_ICS_NATIVE select PPC_ICP_NATIVE select PPC_UDBG_16550 + select COMMON_CLK help This option enables support for FPGA-based Microwatt implementations. diff --git a/arch/powerpc/platforms/microwatt/Makefile b/arch/powerpc/platforms/microwatt/Makefile index 116d6d3ad3f0..d973b2ab4042 100644 --- a/arch/powerpc/platforms/microwatt/Makefile +++ b/arch/powerpc/platforms/microwatt/Makefile @@ -1 +1,2 @@ obj-y += setup.o rng.o +obj-$(CONFIG_SMP) += smp.o diff --git a/arch/powerpc/platforms/microwatt/microwatt.h b/arch/powerpc/platforms/microwatt/microwatt.h index 335417e95e66..891aa2800768 100644 --- a/arch/powerpc/platforms/microwatt/microwatt.h +++ b/arch/powerpc/platforms/microwatt/microwatt.h @@ -3,5 +3,6 @@ #define _MICROWATT_H void microwatt_rng_init(void); +void microwatt_init_smp(void); #endif /* _MICROWATT_H */ diff --git a/arch/powerpc/platforms/microwatt/setup.c b/arch/powerpc/platforms/microwatt/setup.c index 5e1c0997170d..6af2ccef736c 100644 --- a/arch/powerpc/platforms/microwatt/setup.c +++ b/arch/powerpc/platforms/microwatt/setup.c @@ -29,15 +29,33 @@ static int __init microwatt_populate(void) } machine_arch_initcall(microwatt, microwatt_populate); +static int __init microwatt_probe(void) +{ + /* Main reason for having this is to start the other CPU(s) */ + if (IS_ENABLED(CONFIG_SMP)) + microwatt_init_smp(); + return 1; +} + static void __init microwatt_setup_arch(void) { microwatt_rng_init(); } +static void microwatt_idle(void) +{ + if (!prep_irq_for_idle_irqsoff()) + return; + + __asm__ __volatile__ ("wait"); +} + define_machine(microwatt) { .name = "microwatt", .compatible = "microwatt-soc", + .probe = microwatt_probe, .init_IRQ = microwatt_init_IRQ, .setup_arch = microwatt_setup_arch, .progress = udbg_progress, + .power_save = microwatt_idle, }; diff --git a/arch/powerpc/platforms/microwatt/smp.c b/arch/powerpc/platforms/microwatt/smp.c new file mode 100644 index 000000000000..7dbf2ca73d47 --- /dev/null +++ b/arch/powerpc/platforms/microwatt/smp.c @@ -0,0 +1,80 @@ +// 
SPDX-License-Identifier: GPL-2.0-or-later + +/* + * SMP support functions for Microwatt + * Copyright 2025 Paul Mackerras <paulus@ozlabs.org> + */ + +#include <linux/kernel.h> +#include <linux/smp.h> +#include <linux/io.h> +#include <asm/early_ioremap.h> +#include <asm/ppc-opcode.h> +#include <asm/reg.h> +#include <asm/smp.h> +#include <asm/xics.h> + +#include "microwatt.h" + +static void __init microwatt_smp_probe(void) +{ + xics_smp_probe(); +} + +static void microwatt_smp_setup_cpu(int cpu) +{ + if (cpu != 0) + xics_setup_cpu(); +} + +static struct smp_ops_t microwatt_smp_ops = { + .probe = microwatt_smp_probe, + .message_pass = NULL, /* Use smp_muxed_ipi_message_pass */ + .kick_cpu = smp_generic_kick_cpu, + .setup_cpu = microwatt_smp_setup_cpu, +}; + +/* XXX get from device tree */ +#define SYSCON_BASE 0xc0000000 +#define SYSCON_LENGTH 0x100 + +#define SYSCON_CPU_CTRL 0x58 + +void __init microwatt_init_smp(void) +{ + volatile unsigned char __iomem *syscon; + int ncpus; + int timeout; + + syscon = early_ioremap(SYSCON_BASE, SYSCON_LENGTH); + if (syscon == NULL) { + pr_err("Failed to map SYSCON\n"); + return; + } + ncpus = (readl(syscon + SYSCON_CPU_CTRL) >> 8) & 0xff; + if (ncpus < 2) + goto out; + + smp_ops = µwatt_smp_ops; + + /* + * Write two instructions at location 0: + * mfspr r3, PIR + * b __secondary_hold + */ + *(unsigned int *)KERNELBASE = PPC_RAW_MFSPR(3, SPRN_PIR); + *(unsigned int *)(KERNELBASE+4) = PPC_RAW_BRANCH(&__secondary_hold - (char *)(KERNELBASE+4)); + + /* enable the other CPUs, they start at location 0 */ + writel((1ul << ncpus) - 1, syscon + SYSCON_CPU_CTRL); + + timeout = 10000; + while (!__secondary_hold_acknowledge) { + if (--timeout == 0) + break; + barrier(); + } + + out: + early_iounmap((void *)syscon, SYSCON_LENGTH); +} diff --git a/arch/powerpc/platforms/powernv/Kconfig b/arch/powerpc/platforms/powernv/Kconfig index 70a46acc70d6..3fbe0295ce14 100644 --- a/arch/powerpc/platforms/powernv/Kconfig +++ b/arch/powerpc/platforms/powernv/Kconfig @@ -17,6 +17,7 @@ config PPC_POWERNV select MMU_NOTIFIER select FORCE_SMP select ARCH_SUPPORTS_PER_VMA_LOCK + select PPC_RADIX_BROADCAST_TLBIE default y config OPAL_PRD diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig index 42fc66e97539..a934c2a262f6 100644 --- a/arch/powerpc/platforms/pseries/Kconfig +++ b/arch/powerpc/platforms/pseries/Kconfig @@ -23,6 +23,7 @@ config PPC_PSERIES select FORCE_SMP select SWIOTLB select ARCH_SUPPORTS_PER_VMA_LOCK + select PPC_RADIX_BROADCAST_TLBIE default y config PARAVIRT @@ -128,6 +129,15 @@ config CMM will be reused for other LPARs. The interface allows firmware to balance memory across many LPARs. +config HTMDUMP + tristate "PowerVM data dumper" + depends on PPC_PSERIES && DEBUG_FS + default m + help + Select this option, if you want to enable the kernel debugfs + interface to dump the Hardware Trace Macro (HTM) function data + in the LPAR. 
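The HTMDUMP help text above describes a debugfs interface for pulling Hardware Trace Macro data out of a PowerVM LPAR; the module implementing it (htmdump.c) is added further below. As a rough illustration of how that interface could be driven from userspace, here is a minimal sketch in C. It assumes debugfs is mounted at /sys/kernel/debug (so the files live under /sys/kernel/debug/powerpc/htmdump/), and the index and type values written are placeholders, not values taken from real hardware.

/* Minimal sketch: select an HTM buffer via the debugfs knobs created by
 * htmdump.c, then stream the trace file to stdout. Paths assume debugfs
 * is mounted at /sys/kernel/debug; all numeric values are placeholders. */
#include <stdio.h>
#include <stdlib.h>

static void set_u32(const char *path, unsigned int val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%u", val);
	fclose(f);
}

int main(void)
{
	char buf[4096];
	size_t n;
	FILE *trace;

	set_u32("/sys/kernel/debug/powerpc/htmdump/nodeindex", 0);
	set_u32("/sys/kernel/debug/powerpc/htmdump/nodalchipindex", 0);
	set_u32("/sys/kernel/debug/powerpc/htmdump/coreindexonchip", 0);
	set_u32("/sys/kernel/debug/powerpc/htmdump/htmtype", 1);

	trace = fopen("/sys/kernel/debug/powerpc/htmdump/trace", "r");
	if (!trace) {
		perror("trace");
		exit(1);
	}
	while ((n = fread(buf, 1, sizeof(buf), trace)) > 0)
		fwrite(buf, 1, n, stdout);
	fclose(trace);
	return 0;
}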
+ config HV_PERF_CTRS bool "Hypervisor supplied PMU events (24x7 & GPCI)" default y diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile index 7bf506f6b8c8..3f3e3492e436 100644 --- a/arch/powerpc/platforms/pseries/Makefile +++ b/arch/powerpc/platforms/pseries/Makefile @@ -19,6 +19,7 @@ obj-$(CONFIG_HVC_CONSOLE) += hvconsole.o obj-$(CONFIG_HVCS) += hvcserver.o obj-$(CONFIG_HCALL_STATS) += hvCall_inst.o obj-$(CONFIG_CMM) += cmm.o +obj-$(CONFIG_HTMDUMP) += htmdump.o obj-$(CONFIG_IO_EVENT_IRQ) += io_event_irq.o obj-$(CONFIG_LPARCFG) += lparcfg.o obj-$(CONFIG_IBMVIO) += vio.o diff --git a/arch/powerpc/platforms/pseries/htmdump.c b/arch/powerpc/platforms/pseries/htmdump.c new file mode 100644 index 000000000000..57fc1700f604 --- /dev/null +++ b/arch/powerpc/platforms/pseries/htmdump.c @@ -0,0 +1,121 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright (C) IBM Corporation, 2024 + */ + +#define pr_fmt(fmt) "htmdump: " fmt + +#include <linux/debugfs.h> +#include <linux/module.h> +#include <asm/io.h> +#include <asm/machdep.h> +#include <asm/plpar_wrappers.h> + +static void *htm_buf; +static u32 nodeindex; +static u32 nodalchipindex; +static u32 coreindexonchip; +static u32 htmtype; +static struct dentry *htmdump_debugfs_dir; + +static ssize_t htmdump_read(struct file *filp, char __user *ubuf, + size_t count, loff_t *ppos) +{ + void *htm_buf = filp->private_data; + unsigned long page, read_size, available; + loff_t offset; + long rc; + + page = ALIGN_DOWN(*ppos, PAGE_SIZE); + offset = (*ppos) % PAGE_SIZE; + + rc = htm_get_dump_hardware(nodeindex, nodalchipindex, coreindexonchip, + htmtype, virt_to_phys(htm_buf), PAGE_SIZE, page); + + switch (rc) { + case H_SUCCESS: + /* H_PARTIAL for the case where all available data can't be + * returned due to buffer size constraint. + */ + case H_PARTIAL: + break; + /* H_NOT_AVAILABLE indicates reading from an offset outside the range, + * i.e. past end of file. 
+ */ + case H_NOT_AVAILABLE: + return 0; + case H_BUSY: + case H_LONG_BUSY_ORDER_1_MSEC: + case H_LONG_BUSY_ORDER_10_MSEC: + case H_LONG_BUSY_ORDER_100_MSEC: + case H_LONG_BUSY_ORDER_1_SEC: + case H_LONG_BUSY_ORDER_10_SEC: + case H_LONG_BUSY_ORDER_100_SEC: + return -EBUSY; + case H_PARAMETER: + case H_P2: + case H_P3: + case H_P4: + case H_P5: + case H_P6: + return -EINVAL; + case H_STATE: + return -EIO; + case H_AUTHORITY: + return -EPERM; + } + + available = PAGE_SIZE; + read_size = min(count, available); + *ppos += read_size; + return simple_read_from_buffer(ubuf, count, &offset, htm_buf, available); +} + +static const struct file_operations htmdump_fops = { + .llseek = NULL, + .read = htmdump_read, + .open = simple_open, +}; + +static int htmdump_init_debugfs(void) +{ + htm_buf = kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!htm_buf) { + pr_err("Failed to allocate htmdump buf\n"); + return -ENOMEM; + } + + htmdump_debugfs_dir = debugfs_create_dir("htmdump", + arch_debugfs_dir); + + debugfs_create_u32("nodeindex", 0600, + htmdump_debugfs_dir, &nodeindex); + debugfs_create_u32("nodalchipindex", 0600, + htmdump_debugfs_dir, &nodalchipindex); + debugfs_create_u32("coreindexonchip", 0600, + htmdump_debugfs_dir, &coreindexonchip); + debugfs_create_u32("htmtype", 0600, + htmdump_debugfs_dir, &htmtype); + debugfs_create_file("trace", 0400, htmdump_debugfs_dir, htm_buf, &htmdump_fops); + + return 0; +} + +static int __init htmdump_init(void) +{ + if (htmdump_init_debugfs()) + return -ENOMEM; + + return 0; +} + +static void __exit htmdump_exit(void) +{ + debugfs_remove_recursive(htmdump_debugfs_dir); + kfree(htm_buf); +} + +module_init(htmdump_init); +module_exit(htmdump_exit); +MODULE_DESCRIPTION("PHYP Hardware Trace Macro (HTM) data dumper"); +MODULE_LICENSE("GPL"); diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c index ae6f7a235d8b..d6ebc19fb99c 100644 --- a/arch/powerpc/platforms/pseries/iommu.c +++ b/arch/powerpc/platforms/pseries/iommu.c @@ -52,7 +52,8 @@ enum { enum { DDW_EXT_SIZE = 0, DDW_EXT_RESET_DMA_WIN = 1, - DDW_EXT_QUERY_OUT_SIZE = 2 + DDW_EXT_QUERY_OUT_SIZE = 2, + DDW_EXT_LIMITED_ADDR_MODE = 3 }; static struct iommu_table *iommu_pseries_alloc_table(int node) @@ -1284,17 +1285,13 @@ static LIST_HEAD(failed_ddw_pdn_list); static phys_addr_t ddw_memory_hotplug_max(void) { - resource_size_t max_addr = memory_hotplug_max(); - struct device_node *memory; + resource_size_t max_addr; - for_each_node_by_type(memory, "memory") { - struct resource res; - - if (of_address_to_resource(memory, 0, &res)) - continue; - - max_addr = max_t(resource_size_t, max_addr, res.end + 1); - } +#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG) + max_addr = hot_add_drconf_memory_max(); +#else + max_addr = memblock_end_of_DRAM(); +#endif return max_addr; } @@ -1331,6 +1328,54 @@ static void reset_dma_window(struct pci_dev *dev, struct device_node *par_dn) ret); } +/* + * Platforms support placing PHB in limited address mode starting with LoPAR + * level 2.13 implement. In this mode, the DMA address returned by DDW is over + * 4GB but, less than 64-bits. This benefits IO adapters that don't support + * 64-bits for DMA addresses. 
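The comment above introduces limited address mode for adapters that cannot drive a full 64-bit DMA address. From the device driver's side nothing new is required beyond declaring the narrower mask the hardware already supports; a minimal, hypothetical sketch follows (example_probe and the 43-bit width are illustrative assumptions, not values mandated by the platform):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* Hypothetical probe path: the device only decodes 43 DMA address bits.
 * With this change, a pseries mask wider than 32 bits but narrower than
 * 64 bits can still be backed by a dynamic DMA window, with the PHB put
 * into limited address mode. */
static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(43));
}

Previously such a request fell back to the default DMA window; with limited address mode the same mask can now be satisfied by a dynamic DMA window sized to fit it, as the enable_ddw() changes below show.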
+ */ +static int limited_dma_window(struct pci_dev *dev, struct device_node *par_dn) +{ + int ret; + u32 cfg_addr, reset_dma_win, las_supported; + u64 buid; + struct device_node *dn; + struct pci_dn *pdn; + + ret = ddw_read_ext(par_dn, DDW_EXT_RESET_DMA_WIN, &reset_dma_win); + if (ret) + goto out; + + ret = ddw_read_ext(par_dn, DDW_EXT_LIMITED_ADDR_MODE, &las_supported); + + /* Limited Address Space extension available on the platform but DDW in + * limited addressing mode not supported + */ + if (!ret && !las_supported) + ret = -EPROTO; + + if (ret) { + dev_info(&dev->dev, "Limited Address Space for DDW not Supported, err: %d", ret); + goto out; + } + + dn = pci_device_to_OF_node(dev); + pdn = PCI_DN(dn); + buid = pdn->phb->buid; + cfg_addr = (pdn->busno << 16) | (pdn->devfn << 8); + + ret = rtas_call(reset_dma_win, 4, 1, NULL, cfg_addr, BUID_HI(buid), + BUID_LO(buid), 1); + if (ret) + dev_info(&dev->dev, + "ibm,reset-pe-dma-windows(%x) for Limited Addr Support: %x %x %x returned %d ", + reset_dma_win, cfg_addr, BUID_HI(buid), BUID_LO(buid), + ret); + +out: + return ret; +} + /* Return largest page shift based on "IO Page Sizes" output of ibm,query-pe-dma-window. */ static int iommu_get_page_shift(u32 query_page_size) { @@ -1398,7 +1443,7 @@ static struct property *ddw_property_create(const char *propname, u32 liobn, u64 * * returns true if can map all pages (direct mapping), false otherwise.. */ -static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) +static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn, u64 dma_mask) { int len = 0, ret; int max_ram_len = order_base_2(ddw_memory_hotplug_max()); @@ -1417,6 +1462,9 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) bool pmem_present; struct pci_dn *pci = PCI_DN(pdn); struct property *default_win = NULL; + bool limited_addr_req = false, limited_addr_enabled = false; + int dev_max_ddw; + int ddw_sz; dn = of_find_node_by_type(NULL, "ibm,pmemory"); pmem_present = dn != NULL; @@ -1443,7 +1491,6 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) * the ibm,ddw-applicable property holds the tokens for: * ibm,query-pe-dma-window * ibm,create-pe-dma-window - * ibm,remove-pe-dma-window * for the given node in that order. * the property is actually in the parent, not the PE */ @@ -1463,6 +1510,20 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) if (ret != 0) goto out_failed; + /* DMA Limited Addressing required? This is when the driver has + * requested to create DDW but supports mask which is less than 64-bits + */ + limited_addr_req = (dma_mask != DMA_BIT_MASK(64)); + + /* place the PHB in Limited Addressing mode */ + if (limited_addr_req) { + if (limited_dma_window(dev, pdn)) + goto out_failed; + + /* PHB is in Limited address mode */ + limited_addr_enabled = true; + } + /* * If there is no window available, remove the default DMA window, * if it's present. This will make all the resources available to the @@ -1509,6 +1570,15 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) goto out_failed; } + /* Maximum DMA window size that the device can address (in log2) */ + dev_max_ddw = fls64(dma_mask); + + /* If the device DMA mask is less than 64-bits, make sure the DMA window + * size is not bigger than what the device can access + */ + ddw_sz = min(order_base_2(query.largest_available_block << page_shift), + dev_max_ddw); + /* * The "ibm,pmemory" can appear anywhere in the address space. 
* Assuming it is still backed by page structs, try MAX_PHYSMEM_BITS @@ -1517,23 +1587,21 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) */ len = max_ram_len; if (pmem_present) { - if (query.largest_available_block >= - (1ULL << (MAX_PHYSMEM_BITS - page_shift))) + if (ddw_sz >= MAX_PHYSMEM_BITS) len = MAX_PHYSMEM_BITS; else dev_info(&dev->dev, "Skipping ibm,pmemory"); } /* check if the available block * number of ptes will map everything */ - if (query.largest_available_block < (1ULL << (len - page_shift))) { + if (ddw_sz < len) { dev_dbg(&dev->dev, "can't map partition max 0x%llx with %llu %llu-sized pages\n", 1ULL << len, query.largest_available_block, 1ULL << page_shift); - len = order_base_2(query.largest_available_block << page_shift); - + len = ddw_sz; dynamic_mapping = true; } else { direct_mapping = !default_win_removed || @@ -1547,8 +1615,9 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) */ if (default_win_removed && pmem_present && !direct_mapping) { /* DDW is big enough to be split */ - if ((query.largest_available_block << page_shift) >= - MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) { + if ((1ULL << ddw_sz) >= + MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) { + direct_mapping = true; /* offset of the Dynamic part of DDW */ @@ -1559,8 +1628,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) dynamic_mapping = true; /* create max size DDW possible */ - len = order_base_2(query.largest_available_block - << page_shift); + len = ddw_sz; } } @@ -1600,7 +1668,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) if (direct_mapping) { /* DDW maps the whole partition, so enable direct DMA mapping */ - ret = walk_system_ram_range(0, memblock_end_of_DRAM() >> PAGE_SHIFT, + ret = walk_system_ram_range(0, ddw_memory_hotplug_max() >> PAGE_SHIFT, win64->value, tce_setrange_multi_pSeriesLP_walk); if (ret) { dev_info(&dev->dev, "failed to map DMA window for %pOF: %d\n", @@ -1689,7 +1757,7 @@ out_remove_win: __remove_dma_window(pdn, ddw_avail, create.liobn); out_failed: - if (default_win_removed) + if (default_win_removed || limited_addr_enabled) reset_dma_window(dev, pdn); fpdn = kzalloc(sizeof(*fpdn), GFP_KERNEL); @@ -1708,6 +1776,9 @@ out_unlock: dev->dev.bus_dma_limit = dev->dev.archdata.dma_offset + (1ULL << max_ram_len); + dev_info(&dev->dev, "lsa_required: %x, lsa_enabled: %x, direct mapping: %x\n", + limited_addr_req, limited_addr_enabled, direct_mapping); + return direct_mapping; } @@ -1833,8 +1904,11 @@ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask) { struct device_node *dn = pci_device_to_OF_node(pdev), *pdn; - /* only attempt to use a new window if 64-bit DMA is requested */ - if (dma_mask < DMA_BIT_MASK(64)) + /* For DDW, DMA mask should be more than 32-bits. For mask more then + * 32-bits but less then 64-bits, DMA addressing is supported in + * Limited Addressing mode. + */ + if (dma_mask <= DMA_BIT_MASK(32)) return false; dev_dbg(&pdev->dev, "node is %pOF\n", dn); @@ -1847,7 +1921,7 @@ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask) */ pdn = pci_dma_find(dn, NULL); if (pdn && PCI_DN(pdn)) - return enable_ddw(pdev, pdn); + return enable_ddw(pdev, pdn, dma_mask); return false; } @@ -2349,11 +2423,17 @@ static int iommu_mem_notifier(struct notifier_block *nb, unsigned long action, struct memory_notify *arg = data; int ret = 0; + /* This notifier can get called when onlining persistent memory as well. 
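To make the window-sizing clamp in enable_ddw() above concrete, here is a small userspace sketch with stand-ins for the kernel helpers; every number in it is an assumed example, not a measured value:

#include <stdio.h>
#include <stdint.h>

/* Rough userspace stand-ins for the kernel's fls64() and order_base_2(). */
static int fls64_(uint64_t x)        { return x ? 64 - __builtin_clzll(x) : 0; }
static int order_base_2_(uint64_t x) { return x <= 1 ? 0 : fls64_(x - 1); }

int main(void)
{
	/* Assumed inputs: a 43-bit DMA mask, and a hypervisor reply of
	 * 2^30 available TCEs of 64 KiB each (page_shift = 16). */
	uint64_t dma_mask = (1ULL << 43) - 1;              /* DMA_BIT_MASK(43) */
	uint64_t largest_available_block = 1ULL << 30;     /* in TCEs */
	int page_shift = 16;

	int dev_max_ddw = fls64_(dma_mask);                               /* 43 */
	int ddw_sz = order_base_2_(largest_available_block << page_shift); /* 46 */

	if (ddw_sz > dev_max_ddw)                          /* min(), as in the patch */
		ddw_sz = dev_max_ddw;

	printf("DDW limited to 2^%d bytes\n", ddw_sz);     /* 2^43 = 8 TiB */
	return 0;
}

Even though the hypervisor could offer a 2^46-byte window in this example, the device mask caps it at 2^43 bytes, which is what the new dev_max_ddw/ddw_sz logic expresses.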
+ * TCEs are not pre-mapped for persistent memory. Persistent memory will + * always be above ddw_memory_hotplug_max() + */ + switch (action) { case MEM_GOING_ONLINE: spin_lock(&dma_win_list_lock); list_for_each_entry(window, &dma_win_list, list) { - if (window->direct) { + if (window->direct && (arg->start_pfn << PAGE_SHIFT) < + ddw_memory_hotplug_max()) { ret |= tce_setrange_multi_pSeriesLP(arg->start_pfn, arg->nr_pages, window->prop); } @@ -2365,7 +2445,8 @@ static int iommu_mem_notifier(struct notifier_block *nb, unsigned long action, case MEM_OFFLINE: spin_lock(&dma_win_list_lock); list_for_each_entry(window, &dma_win_list, list) { - if (window->direct) { + if (window->direct && (arg->start_pfn << PAGE_SHIFT) < + ddw_memory_hotplug_max()) { ret |= tce_clearrange_multi_pSeriesLP(arg->start_pfn, arg->nr_pages, window->prop); } diff --git a/arch/powerpc/sysdev/Makefile b/arch/powerpc/sysdev/Makefile index 24a177d164f1..0834a9a12600 100644 --- a/arch/powerpc/sysdev/Makefile +++ b/arch/powerpc/sysdev/Makefile @@ -12,7 +12,6 @@ obj-$(CONFIG_PPC_MSI_BITMAP) += msi_bitmap.o obj-$(CONFIG_PPC_MPC106) += grackle.o obj-$(CONFIG_PPC_DCR_NATIVE) += dcr-low.o -obj-$(CONFIG_PPC_PMI) += pmi.o obj-$(CONFIG_U3_DART) += dart_iommu.o obj-$(CONFIG_MMIO_NVRAM) += mmio_nvram.o obj-$(CONFIG_FSL_SOC) += fsl_soc.o fsl_mpic_err.o diff --git a/arch/powerpc/sysdev/dcr.c b/arch/powerpc/sysdev/dcr.c index 70ce66eadff1..cb44a69958e7 100644 --- a/arch/powerpc/sysdev/dcr.c +++ b/arch/powerpc/sysdev/dcr.c @@ -11,107 +11,6 @@ #include <linux/of_address.h> #include <asm/dcr.h> -#ifdef CONFIG_PPC_DCR_MMIO -static struct device_node *find_dcr_parent(struct device_node *node) -{ - struct device_node *par, *tmp; - const u32 *p; - - for (par = of_node_get(node); par;) { - if (of_property_read_bool(par, "dcr-controller")) - break; - p = of_get_property(par, "dcr-parent", NULL); - tmp = par; - if (p == NULL) - par = of_get_parent(par); - else - par = of_find_node_by_phandle(*p); - of_node_put(tmp); - } - return par; -} -#endif - -#if defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) - -bool dcr_map_ok_generic(dcr_host_t host) -{ - if (host.type == DCR_HOST_NATIVE) - return dcr_map_ok_native(host.host.native); - else if (host.type == DCR_HOST_MMIO) - return dcr_map_ok_mmio(host.host.mmio); - else - return false; -} -EXPORT_SYMBOL_GPL(dcr_map_ok_generic); - -dcr_host_t dcr_map_generic(struct device_node *dev, - unsigned int dcr_n, - unsigned int dcr_c) -{ - dcr_host_t host; - struct device_node *dp; - const char *prop; - - host.type = DCR_HOST_INVALID; - - dp = find_dcr_parent(dev); - if (dp == NULL) - return host; - - prop = of_get_property(dp, "dcr-access-method", NULL); - - pr_debug("dcr_map_generic(dcr-access-method = %s)\n", prop); - - if (!strcmp(prop, "native")) { - host.type = DCR_HOST_NATIVE; - host.host.native = dcr_map_native(dev, dcr_n, dcr_c); - } else if (!strcmp(prop, "mmio")) { - host.type = DCR_HOST_MMIO; - host.host.mmio = dcr_map_mmio(dev, dcr_n, dcr_c); - } - - of_node_put(dp); - return host; -} -EXPORT_SYMBOL_GPL(dcr_map_generic); - -void dcr_unmap_generic(dcr_host_t host, unsigned int dcr_c) -{ - if (host.type == DCR_HOST_NATIVE) - dcr_unmap_native(host.host.native, dcr_c); - else if (host.type == DCR_HOST_MMIO) - dcr_unmap_mmio(host.host.mmio, dcr_c); - else /* host.type == DCR_HOST_INVALID */ - WARN_ON(true); -} -EXPORT_SYMBOL_GPL(dcr_unmap_generic); - -u32 dcr_read_generic(dcr_host_t host, unsigned int dcr_n) -{ - if (host.type == DCR_HOST_NATIVE) - return dcr_read_native(host.host.native, 
dcr_n); - else if (host.type == DCR_HOST_MMIO) - return dcr_read_mmio(host.host.mmio, dcr_n); - else /* host.type == DCR_HOST_INVALID */ - WARN_ON(true); - return 0; -} -EXPORT_SYMBOL_GPL(dcr_read_generic); - -void dcr_write_generic(dcr_host_t host, unsigned int dcr_n, u32 value) -{ - if (host.type == DCR_HOST_NATIVE) - dcr_write_native(host.host.native, dcr_n, value); - else if (host.type == DCR_HOST_MMIO) - dcr_write_mmio(host.host.mmio, dcr_n, value); - else /* host.type == DCR_HOST_INVALID */ - WARN_ON(true); -} -EXPORT_SYMBOL_GPL(dcr_write_generic); - -#endif /* defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) */ - unsigned int dcr_resource_start(const struct device_node *np, unsigned int index) { @@ -137,86 +36,5 @@ unsigned int dcr_resource_len(const struct device_node *np, unsigned int index) } EXPORT_SYMBOL_GPL(dcr_resource_len); -#ifdef CONFIG_PPC_DCR_MMIO - -static u64 of_translate_dcr_address(struct device_node *dev, - unsigned int dcr_n, - unsigned int *out_stride) -{ - struct device_node *dp; - const u32 *p; - unsigned int stride; - u64 ret = OF_BAD_ADDR; - - dp = find_dcr_parent(dev); - if (dp == NULL) - return OF_BAD_ADDR; - - /* Stride is not properly defined yet, default to 0x10 for Axon */ - p = of_get_property(dp, "dcr-mmio-stride", NULL); - stride = (p == NULL) ? 0x10 : *p; - - /* XXX FIXME: Which property name is to use of the 2 following ? */ - p = of_get_property(dp, "dcr-mmio-range", NULL); - if (p == NULL) - p = of_get_property(dp, "dcr-mmio-space", NULL); - if (p == NULL) - goto done; - - /* Maybe could do some better range checking here */ - ret = of_translate_address(dp, p); - if (ret != OF_BAD_ADDR) - ret += (u64)(stride) * (u64)dcr_n; - if (out_stride) - *out_stride = stride; - - done: - of_node_put(dp); - return ret; -} - -dcr_host_mmio_t dcr_map_mmio(struct device_node *dev, - unsigned int dcr_n, - unsigned int dcr_c) -{ - dcr_host_mmio_t ret = { .token = NULL, .stride = 0, .base = dcr_n }; - u64 addr; - - pr_debug("dcr_map(%pOF, 0x%x, 0x%x)\n", - dev, dcr_n, dcr_c); - - addr = of_translate_dcr_address(dev, dcr_n, &ret.stride); - pr_debug("translates to addr: 0x%llx, stride: 0x%x\n", - (unsigned long long) addr, ret.stride); - if (addr == OF_BAD_ADDR) - return ret; - pr_debug("mapping 0x%x bytes\n", dcr_c * ret.stride); - ret.token = ioremap(addr, dcr_c * ret.stride); - if (ret.token == NULL) - return ret; - pr_debug("mapped at 0x%p -> base is 0x%p\n", - ret.token, ret.token - dcr_n * ret.stride); - ret.token -= dcr_n * ret.stride; - return ret; -} -EXPORT_SYMBOL_GPL(dcr_map_mmio); - -void dcr_unmap_mmio(dcr_host_mmio_t host, unsigned int dcr_c) -{ - dcr_host_mmio_t h = host; - - if (h.token == NULL) - return; - h.token += host.base * h.stride; - iounmap(h.token); - h.token = NULL; -} -EXPORT_SYMBOL_GPL(dcr_unmap_mmio); - -#endif /* defined(CONFIG_PPC_DCR_MMIO) */ - -#ifdef CONFIG_PPC_DCR_NATIVE DEFINE_SPINLOCK(dcr_ind_lock); EXPORT_SYMBOL_GPL(dcr_ind_lock); -#endif /* defined(CONFIG_PPC_DCR_NATIVE) */ - diff --git a/arch/powerpc/sysdev/ipic.c b/arch/powerpc/sysdev/ipic.c index 5f69e2d50f26..037b04bf9a9f 100644 --- a/arch/powerpc/sysdev/ipic.c +++ b/arch/powerpc/sysdev/ipic.c @@ -762,8 +762,7 @@ struct ipic * __init ipic_init(struct device_node *node, unsigned int flags) ipic_write(ipic->regs, IPIC_SIMSR_H, 0); ipic_write(ipic->regs, IPIC_SIMSR_L, 0); - printk ("IPIC (%d IRQ sources) at %p\n", NR_IPIC_INTS, - primary_ipic->regs); + pr_info("IPIC (%d IRQ sources) at MMIO %pa\n", NR_IPIC_INTS, &res.start); return ipic; } diff --git 
a/arch/powerpc/sysdev/pmi.c b/arch/powerpc/sysdev/pmi.c deleted file mode 100644 index 2511e586fe31..000000000000 --- a/arch/powerpc/sysdev/pmi.c +++ /dev/null @@ -1,267 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * pmi driver - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 - * - * PMI (Platform Management Interrupt) is a way to communicate - * with the BMC (Baseboard Management Controller) via interrupts. - * Unlike IPMI it is bidirectional and has a low latency. - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#include <linux/interrupt.h> -#include <linux/slab.h> -#include <linux/completion.h> -#include <linux/spinlock.h> -#include <linux/module.h> -#include <linux/mod_devicetable.h> -#include <linux/workqueue.h> -#include <linux/of_address.h> -#include <linux/of_irq.h> -#include <linux/platform_device.h> - -#include <asm/io.h> -#include <asm/pmi.h> - -struct pmi_data { - struct list_head handler; - spinlock_t handler_spinlock; - spinlock_t pmi_spinlock; - struct mutex msg_mutex; - pmi_message_t msg; - struct completion *completion; - struct platform_device *dev; - int irq; - u8 __iomem *pmi_reg; - struct work_struct work; -}; - -static struct pmi_data *data; - -static irqreturn_t pmi_irq_handler(int irq, void *dev_id) -{ - u8 type; - int rc; - - spin_lock(&data->pmi_spinlock); - - type = ioread8(data->pmi_reg + PMI_READ_TYPE); - pr_debug("pmi: got message of type %d\n", type); - - if (type & PMI_ACK && !data->completion) { - printk(KERN_WARNING "pmi: got unexpected ACK message.\n"); - rc = -EIO; - goto unlock; - } - - if (data->completion && !(type & PMI_ACK)) { - printk(KERN_WARNING "pmi: expected ACK, but got %d\n", type); - rc = -EIO; - goto unlock; - } - - data->msg.type = type; - data->msg.data0 = ioread8(data->pmi_reg + PMI_READ_DATA0); - data->msg.data1 = ioread8(data->pmi_reg + PMI_READ_DATA1); - data->msg.data2 = ioread8(data->pmi_reg + PMI_READ_DATA2); - rc = 0; -unlock: - spin_unlock(&data->pmi_spinlock); - - if (rc == -EIO) { - rc = IRQ_HANDLED; - goto out; - } - - if (data->msg.type & PMI_ACK) { - complete(data->completion); - rc = IRQ_HANDLED; - goto out; - } - - schedule_work(&data->work); - - rc = IRQ_HANDLED; -out: - return rc; -} - - -static const struct of_device_id pmi_match[] = { - { .type = "ibm,pmi", .name = "ibm,pmi" }, - { .type = "ibm,pmi" }, - {}, -}; - -MODULE_DEVICE_TABLE(of, pmi_match); - -static void pmi_notify_handlers(struct work_struct *work) -{ - struct pmi_handler *handler; - - spin_lock(&data->handler_spinlock); - list_for_each_entry(handler, &data->handler, node) { - pr_debug("pmi: notifying handler %p\n", handler); - if (handler->type == data->msg.type) - handler->handle_pmi_message(data->msg); - } - spin_unlock(&data->handler_spinlock); -} - -static int pmi_of_probe(struct platform_device *dev) -{ - struct device_node *np = dev->dev.of_node; - int rc; - - if (data) { - printk(KERN_ERR "pmi: driver has already been initialized.\n"); - rc = -EBUSY; - goto out; - } - - data = kzalloc(sizeof(struct pmi_data), GFP_KERNEL); - if (!data) { - printk(KERN_ERR "pmi: could not allocate memory.\n"); - rc = -ENOMEM; - goto out; - } - - data->pmi_reg = of_iomap(np, 0); - if (!data->pmi_reg) { - printk(KERN_ERR "pmi: invalid register address.\n"); - rc = -EFAULT; - goto error_cleanup_data; - } - - INIT_LIST_HEAD(&data->handler); - - mutex_init(&data->msg_mutex); - spin_lock_init(&data->pmi_spinlock); - spin_lock_init(&data->handler_spinlock); - - INIT_WORK(&data->work, pmi_notify_handlers); - - data->dev = dev; - - 
data->irq = irq_of_parse_and_map(np, 0); - if (!data->irq) { - printk(KERN_ERR "pmi: invalid interrupt.\n"); - rc = -EFAULT; - goto error_cleanup_iomap; - } - - rc = request_irq(data->irq, pmi_irq_handler, 0, "pmi", NULL); - if (rc) { - printk(KERN_ERR "pmi: can't request IRQ %d: returned %d\n", - data->irq, rc); - goto error_cleanup_iomap; - } - - printk(KERN_INFO "pmi: found pmi device at addr %p.\n", data->pmi_reg); - - goto out; - -error_cleanup_iomap: - iounmap(data->pmi_reg); - -error_cleanup_data: - kfree(data); - -out: - return rc; -} - -static void pmi_of_remove(struct platform_device *dev) -{ - struct pmi_handler *handler, *tmp; - - free_irq(data->irq, NULL); - iounmap(data->pmi_reg); - - spin_lock(&data->handler_spinlock); - - list_for_each_entry_safe(handler, tmp, &data->handler, node) - list_del(&handler->node); - - spin_unlock(&data->handler_spinlock); - - kfree(data); - data = NULL; -} - -static struct platform_driver pmi_of_platform_driver = { - .probe = pmi_of_probe, - .remove = pmi_of_remove, - .driver = { - .name = "pmi", - .of_match_table = pmi_match, - }, -}; -module_platform_driver(pmi_of_platform_driver); - -int pmi_send_message(pmi_message_t msg) -{ - unsigned long flags; - DECLARE_COMPLETION_ONSTACK(completion); - - if (!data) - return -ENODEV; - - mutex_lock(&data->msg_mutex); - - data->msg = msg; - pr_debug("pmi_send_message: msg is %08x\n", *(u32*)&msg); - - data->completion = &completion; - - spin_lock_irqsave(&data->pmi_spinlock, flags); - iowrite8(msg.data0, data->pmi_reg + PMI_WRITE_DATA0); - iowrite8(msg.data1, data->pmi_reg + PMI_WRITE_DATA1); - iowrite8(msg.data2, data->pmi_reg + PMI_WRITE_DATA2); - iowrite8(msg.type, data->pmi_reg + PMI_WRITE_TYPE); - spin_unlock_irqrestore(&data->pmi_spinlock, flags); - - pr_debug("pmi_send_message: wait for completion\n"); - - wait_for_completion_interruptible_timeout(data->completion, - PMI_TIMEOUT); - - data->completion = NULL; - - mutex_unlock(&data->msg_mutex); - - return 0; -} -EXPORT_SYMBOL_GPL(pmi_send_message); - -int pmi_register_handler(struct pmi_handler *handler) -{ - if (!data) - return -ENODEV; - - spin_lock(&data->handler_spinlock); - list_add_tail(&handler->node, &data->handler); - spin_unlock(&data->handler_spinlock); - - return 0; -} -EXPORT_SYMBOL_GPL(pmi_register_handler); - -void pmi_unregister_handler(struct pmi_handler *handler) -{ - if (!data) - return; - - pr_debug("pmi: unregistering handler %p\n", handler); - - spin_lock(&data->handler_spinlock); - list_del(&handler->node); - spin_unlock(&data->handler_spinlock); -} -EXPORT_SYMBOL_GPL(pmi_unregister_handler); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>"); -MODULE_DESCRIPTION("IBM Platform Management Interrupt driver"); diff --git a/arch/powerpc/sysdev/xics/icp-native.c b/arch/powerpc/sysdev/xics/icp-native.c index 700b67476a7d..4e89158a577c 100644 --- a/arch/powerpc/sysdev/xics/icp-native.c +++ b/arch/powerpc/sysdev/xics/icp-native.c @@ -145,27 +145,6 @@ static void icp_native_cause_ipi(int cpu) icp_native_set_qirr(cpu, IPI_PRIORITY); } -#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE -void icp_native_cause_ipi_rm(int cpu) -{ - /* - * Currently not used to send IPIs to another CPU - * on the same core. Only caller is KVM real mode. - * Need the physical address of the XICS to be - * previously saved in kvm_hstate in the paca. - */ - void __iomem *xics_phys; - - /* - * Just like the cause_ipi functions, it is required to - * include a full barrier before causing the IPI. 
- */ - xics_phys = paca_ptrs[cpu]->kvm_hstate.xics_phys; - mb(); - __raw_rm_writeb(IPI_PRIORITY, xics_phys + XICS_MFRR); -} -#endif - /* * Called when an interrupt is received on an off-line CPU to * clear the interrupt, so that the CPU can go back to nap mode. diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile index d778011060a8..d74b147126b7 100644 --- a/arch/powerpc/xmon/Makefile +++ b/arch/powerpc/xmon/Makefile @@ -16,7 +16,4 @@ ccflags-$(CONFIG_CC_IS_CLANG) += -Wframe-larger-than=4096 obj-y += xmon.o nonstdio.o spr_access.o xmon_bpts.o -ifdef CONFIG_XMON_DISASSEMBLY -obj-y += ppc-dis.o ppc-opc.o -obj-$(CONFIG_SPU_BASE) += spu-dis.o spu-opc.o -endif +obj-$(CONFIG_XMON_DISASSEMBLY) += ppc-dis.o ppc-opc.o diff --git a/arch/powerpc/xmon/spu-dis.c b/arch/powerpc/xmon/spu-dis.c deleted file mode 100644 index 4b0a4e640f08..000000000000 --- a/arch/powerpc/xmon/spu-dis.c +++ /dev/null @@ -1,237 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* Disassemble SPU instructions - - Copyright 2006 Free Software Foundation, Inc. - - This file is part of GDB, GAS, and the GNU binutils. - - */ - -#include <linux/string.h> -#include "nonstdio.h" -#include "ansidecl.h" -#include "spu.h" -#include "dis-asm.h" - -/* This file provides a disassembler function which uses - the disassembler interface defined in dis-asm.h. */ - -extern const struct spu_opcode spu_opcodes[]; -extern const int spu_num_opcodes; - -#define SPU_DISASM_TBL_SIZE (1 << 11) -static const struct spu_opcode *spu_disassemble_table[SPU_DISASM_TBL_SIZE]; - -static void -init_spu_disassemble (void) -{ - int i; - - /* If two instructions have the same opcode then we prefer the first - * one. In most cases it is just an alternate mnemonic. */ - for (i = 0; i < spu_num_opcodes; i++) - { - int o = spu_opcodes[i].opcode; - if (o >= SPU_DISASM_TBL_SIZE) - continue; /* abort (); */ - if (spu_disassemble_table[o] == 0) - spu_disassemble_table[o] = &spu_opcodes[i]; - } -} - -/* Determine the instruction from the 10 least significant bits. */ -static const struct spu_opcode * -get_index_for_opcode (unsigned int insn) -{ - const struct spu_opcode *index; - unsigned int opcode = insn >> (32-11); - - /* Init the table. This assumes that element 0/opcode 0 (currently - * NOP) is always used */ - if (spu_disassemble_table[0] == 0) - init_spu_disassemble (); - - if ((index = spu_disassemble_table[opcode & 0x780]) != 0 - && index->insn_type == RRR) - return index; - - if ((index = spu_disassemble_table[opcode & 0x7f0]) != 0 - && (index->insn_type == RI18 || index->insn_type == LBT)) - return index; - - if ((index = spu_disassemble_table[opcode & 0x7f8]) != 0 - && index->insn_type == RI10) - return index; - - if ((index = spu_disassemble_table[opcode & 0x7fc]) != 0 - && (index->insn_type == RI16)) - return index; - - if ((index = spu_disassemble_table[opcode & 0x7fe]) != 0 - && (index->insn_type == RI8)) - return index; - - if ((index = spu_disassemble_table[opcode & 0x7ff]) != 0) - return index; - - return NULL; -} - -/* Print a Spu instruction. 
*/ - -int -print_insn_spu (unsigned long insn, unsigned long memaddr) -{ - int value; - int hex_value; - const struct spu_opcode *index; - enum spu_insns tag; - - index = get_index_for_opcode (insn); - - if (index == 0) - { - printf(".long 0x%lx", insn); - } - else - { - int i; - int paren = 0; - tag = (enum spu_insns)(index - spu_opcodes); - printf("%s", index->mnemonic); - if (tag == M_BI || tag == M_BISL || tag == M_IRET || tag == M_BISLED - || tag == M_BIHNZ || tag == M_BIHZ || tag == M_BINZ || tag == M_BIZ - || tag == M_SYNC || tag == M_HBR) - { - int fb = (insn >> (32-18)) & 0x7f; - if (fb & 0x40) - printf(tag == M_SYNC ? "c" : "p"); - if (fb & 0x20) - printf("d"); - if (fb & 0x10) - printf("e"); - } - if (index->arg[0] != 0) - printf("\t"); - hex_value = 0; - for (i = 1; i <= index->arg[0]; i++) - { - int arg = index->arg[i]; - if (arg != A_P && !paren && i > 1) - printf(","); - - switch (arg) - { - case A_T: - printf("$%lu", - DECODE_INSN_RT (insn)); - break; - case A_A: - printf("$%lu", - DECODE_INSN_RA (insn)); - break; - case A_B: - printf("$%lu", - DECODE_INSN_RB (insn)); - break; - case A_C: - printf("$%lu", - DECODE_INSN_RC (insn)); - break; - case A_S: - printf("$sp%lu", - DECODE_INSN_RA (insn)); - break; - case A_H: - printf("$ch%lu", - DECODE_INSN_RA (insn)); - break; - case A_P: - paren++; - printf("("); - break; - case A_U7A: - printf("%lu", - 173 - DECODE_INSN_U8 (insn)); - break; - case A_U7B: - printf("%lu", - 155 - DECODE_INSN_U8 (insn)); - break; - case A_S3: - case A_S6: - case A_S7: - case A_S7N: - case A_U3: - case A_U5: - case A_U6: - case A_U7: - hex_value = DECODE_INSN_I7 (insn); - printf("%d", hex_value); - break; - case A_S11: - print_address(memaddr + DECODE_INSN_I9a (insn) * 4); - break; - case A_S11I: - print_address(memaddr + DECODE_INSN_I9b (insn) * 4); - break; - case A_S10: - case A_S10B: - hex_value = DECODE_INSN_I10 (insn); - printf("%d", hex_value); - break; - case A_S14: - hex_value = DECODE_INSN_I10 (insn) * 16; - printf("%d", hex_value); - break; - case A_S16: - hex_value = DECODE_INSN_I16 (insn); - printf("%d", hex_value); - break; - case A_X16: - hex_value = DECODE_INSN_U16 (insn); - printf("%u", hex_value); - break; - case A_R18: - value = DECODE_INSN_I16 (insn) * 4; - if (value == 0) - printf("%d", value); - else - { - hex_value = memaddr + value; - print_address(hex_value & 0x3ffff); - } - break; - case A_S18: - value = DECODE_INSN_U16 (insn) * 4; - if (value == 0) - printf("%d", value); - else - print_address(value); - break; - case A_U18: - value = DECODE_INSN_U18 (insn); - if (value == 0 || 1) - { - hex_value = value; - printf("%u", value); - } - else - print_address(value); - break; - case A_U14: - hex_value = DECODE_INSN_U14 (insn); - printf("%u", hex_value); - break; - } - if (arg != A_P && paren) - { - printf(")"); - paren--; - } - } - if (hex_value > 16) - printf("\t# %x", hex_value); - } - return 4; -} diff --git a/arch/powerpc/xmon/spu-insns.h b/arch/powerpc/xmon/spu-insns.h deleted file mode 100644 index 7e1126a19909..000000000000 --- a/arch/powerpc/xmon/spu-insns.h +++ /dev/null @@ -1,399 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* SPU ELF support for BFD. - - Copyright 2006 Free Software Foundation, Inc. - - This file is part of BFD, the Binary File Descriptor library. 
- - */ - -/* SPU Opcode Table - --=-=-= FORMAT =-=-=- - - +----+-------+-------+-------+-------+ +------------+-------+-------+-------+ -RRR | op | RC | RB | RA | RT | RI7 | op | I7 | RA | RT | - +----+-------+-------+-------+-------+ +------------+-------+-------+-------+ - 0 3 1 1 2 3 0 1 1 2 3 - 0 7 4 1 0 7 4 1 - - +-----------+--------+-------+-------+ +---------+----------+-------+-------+ -RI8 | op | I8 | RA | RT | RI10 | op | I10 | RA | RT | - +-----------+--------+-------+-------+ +---------+----------+-------+-------+ - 0 9 1 2 3 0 7 1 2 3 - 7 4 1 7 4 1 - - +----------+-----------------+-------+ +--------+-------------------+-------+ -RI16 | op | I16 | RT | RI18 | op | I18 | RT | - +----------+-----------------+-------+ +--------+-------------------+-------+ - 0 8 2 3 0 6 2 3 - 4 1 4 1 - - +------------+-------+-------+-------+ +-------+--+-----------------+-------+ -RR | op | RB | RA | RT | LBT | op |RO| I16 | RO | - +------------+-------+-------+-------+ +-------+--+-----------------+-------+ - 0 1 1 2 3 0 6 8 2 3 - 0 7 4 1 4 1 - - +------------+----+--+-------+-------+ - LBTI | op | // |RO| RA | RO | - +------------+----+--+-------+-------+ - 0 1 1 1 2 3 - 0 5 7 4 1 - --=-=-= OPCODE =-=-=- - -OPCODE field specifies the most significant 11bit of the instruction. Some formats don't have 11bits for opcode field, and in this -case, bit field other than op are defined as 0s. For example, opcode of fma instruction which is RRR format is defined as 0x700, -since 0x700 -> 11'b11100000000, this means opcode is 4'b1110, and other 7bits are defined as 7'b0000000. - --=-=-= ASM_FORMAT =-=-=- - -RRR category RI7 category - ASM_RRR mnemonic RC, RA, RB, RT ASM_RI4 mnemonic RT, RA, I4 - ASM_RI7 mnemonic RT, RA, I7 - -RI8 category RI10 category - ASM_RUI8 mnemonic RT, RA, UI8 ASM_AI10 mnemonic RA, I10 - ASM_RI10 mnemonic RT, RA, R10 - ASM_RI10IDX mnemonic RT, I10(RA) - -RI16 category RI18 category - ASM_I16W mnemonic I16W ASM_RI18 mnemonic RT, I18 - ASM_RI16 mnemonic RT, I16 - ASM_RI16W mnemonic RT, I16W - -RR category LBT category - ASM_MFSPR mnemonic RT, SA ASM_LBT mnemonic brinst, brtarg - ASM_MTSPR mnemonic SA, RT - ASM_NOOP mnemonic LBTI category - ASM_RA mnemonic RA ASM_LBTI mnemonic brinst, RA - ASM_RAB mnemonic RA, RB - ASM_RDCH mnemonic RT, CA - ASM_RR mnemonic RT, RA, RB - ASM_RT mnemonic RT - ASM_RTA mnemonic RT, RA - ASM_WRCH mnemonic CA, RT - -Note that RRR instructions have the names for RC and RT reversed from -what's in the ISA, in order to put RT in the same position it appears -for other formats. - --=-=-= DEPENDENCY =-=-=- - -DEPENDENCY filed consists of 5 digits. This represents which register is used as source and which register is used as target. -The first(most significant) digit is always 0. Then it is followd by RC, RB, RA and RT digits. -If the digit is 0, this means the corresponding register is not used in the instruction. -If the digit is 1, this means the corresponding register is used as a source in the instruction. -If the digit is 2, this means the corresponding register is used as a target in the instruction. -If the digit is 3, this means the corresponding register is used as both source and target in the instruction. -For example, fms instruction has 00113 as the DEPENDENCY field. This means RC is not used in this operation, RB and RA are -used as sources and RT is the target. 
- --=-=-= PIPE =-=-=- - -This field shows which execution pipe is used for the instruction - -pipe0 execution pipelines: - FP6 SP floating pipeline - FP7 integer operations executed in SP floating pipeline - FPD DP floating pipeline - FX2 FXU pipeline - FX3 Rotate/Shift pipeline - FXB Byte pipeline - NOP No pipeline - -pipe1 execution pipelines: - BR Branch pipeline - LNOP No pipeline - LS Load/Store pipeline - SHUF Shuffle pipeline - SPR SPR/CH pipeline - -*/ - -#define _A0() {0} -#define _A1(a) {1,a} -#define _A2(a,b) {2,a,b} -#define _A3(a,b,c) {3,a,b,c} -#define _A4(a,b,c,d) {4,a,b,c,d} - -/* TAG FORMAT OPCODE MNEMONIC ASM_FORMAT DEPENDENCY PIPE COMMENT */ -/* 0[RC][RB][RA][RT] */ -/* 1:src, 2:target */ - -APUOP(M_BR, RI16, 0x190, "br", _A1(A_R18), 00000, BR) /* BRel IP<-IP+I16 */ -APUOP(M_BRSL, RI16, 0x198, "brsl", _A2(A_T,A_R18), 00002, BR) /* BRelSetLink RT,IP<-IP,IP+I16 */ -APUOP(M_BRA, RI16, 0x180, "bra", _A1(A_S18), 00000, BR) /* BRAbs IP<-I16 */ -APUOP(M_BRASL, RI16, 0x188, "brasl", _A2(A_T,A_S18), 00002, BR) /* BRAbsSetLink RT,IP<-IP,I16 */ -APUOP(M_FSMBI, RI16, 0x194, "fsmbi", _A2(A_T,A_X16), 00002, SHUF) /* FormSelMask%I RT<-fsm(I16) */ -APUOP(M_LQA, RI16, 0x184, "lqa", _A2(A_T,A_S18), 00002, LS) /* LoadQAbs RT<-M[I16] */ -APUOP(M_LQR, RI16, 0x19C, "lqr", _A2(A_T,A_R18), 00002, LS) /* LoadQRel RT<-M[IP+I16] */ -APUOP(M_STOP, RR, 0x000, "stop", _A0(), 00000, BR) /* STOP stop */ -APUOP(M_STOP2, RR, 0x000, "stop", _A1(A_U14), 00000, BR) /* STOP stop */ -APUOP(M_STOPD, RR, 0x140, "stopd", _A3(A_T,A_A,A_B), 00111, BR) /* STOPD stop (with register dependencies) */ -APUOP(M_LNOP, RR, 0x001, "lnop", _A0(), 00000, LNOP) /* LNOP no_operation */ -APUOP(M_SYNC, RR, 0x002, "sync", _A0(), 00000, BR) /* SYNC flush_pipe */ -APUOP(M_DSYNC, RR, 0x003, "dsync", _A0(), 00000, BR) /* DSYNC flush_store_queue */ -APUOP(M_MFSPR, RR, 0x00c, "mfspr", _A2(A_T,A_S), 00002, SPR) /* MFSPR RT<-SA */ -APUOP(M_RDCH, RR, 0x00d, "rdch", _A2(A_T,A_H), 00002, SPR) /* ReaDCHannel RT<-CA:data */ -APUOP(M_RCHCNT, RR, 0x00f, "rchcnt", _A2(A_T,A_H), 00002, SPR) /* ReaDCHanCouNT RT<-CA:count */ -APUOP(M_HBRA, LBT, 0x080, "hbra", _A2(A_S11,A_S18), 00000, LS) /* HBRA BTB[B9]<-M[I16] */ -APUOP(M_HBRR, LBT, 0x090, "hbrr", _A2(A_S11,A_R18), 00000, LS) /* HBRR BTB[B9]<-M[IP+I16] */ -APUOP(M_BRZ, RI16, 0x100, "brz", _A2(A_T,A_R18), 00001, BR) /* BRZ IP<-IP+I16_if(RT) */ -APUOP(M_BRNZ, RI16, 0x108, "brnz", _A2(A_T,A_R18), 00001, BR) /* BRNZ IP<-IP+I16_if(RT) */ -APUOP(M_BRHZ, RI16, 0x110, "brhz", _A2(A_T,A_R18), 00001, BR) /* BRHZ IP<-IP+I16_if(RT) */ -APUOP(M_BRHNZ, RI16, 0x118, "brhnz", _A2(A_T,A_R18), 00001, BR) /* BRHNZ IP<-IP+I16_if(RT) */ -APUOP(M_STQA, RI16, 0x104, "stqa", _A2(A_T,A_S18), 00001, LS) /* SToreQAbs M[I16]<-RT */ -APUOP(M_STQR, RI16, 0x11C, "stqr", _A2(A_T,A_R18), 00001, LS) /* SToreQRel M[IP+I16]<-RT */ -APUOP(M_MTSPR, RR, 0x10c, "mtspr", _A2(A_S,A_T), 00001, SPR) /* MTSPR SA<-RT */ -APUOP(M_WRCH, RR, 0x10d, "wrch", _A2(A_H,A_T), 00001, SPR) /* ChanWRite CA<-RT */ -APUOP(M_LQD, RI10, 0x1a0, "lqd", _A4(A_T,A_S14,A_P,A_A), 00012, LS) /* LoadQDisp RT<-M[Ra+I10] */ -APUOP(M_BI, RR, 0x1a8, "bi", _A1(A_A), 00010, BR) /* BI IP<-RA */ -APUOP(M_BISL, RR, 0x1a9, "bisl", _A2(A_T,A_A), 00012, BR) /* BISL RT,IP<-IP,RA */ -APUOP(M_IRET, RR, 0x1aa, "iret", _A1(A_A), 00010, BR) /* IRET IP<-SRR0 */ -APUOP(M_IRET2, RR, 0x1aa, "iret", _A0(), 00010, BR) /* IRET IP<-SRR0 */ -APUOP(M_BISLED, RR, 0x1ab, "bisled", _A2(A_T,A_A), 00012, BR) /* BISLED RT,IP<-IP,RA_if(ext) */ -APUOP(M_HBR, LBTI, 0x1ac, "hbr", _A2(A_S11I,A_A), 00010, 
LS) /* HBR BTB[B9]<-M[Ra] */ -APUOP(M_FREST, RR, 0x1b8, "frest", _A2(A_T,A_A), 00012, SHUF) /* FREST RT<-recip(RA) */ -APUOP(M_FRSQEST, RR, 0x1b9, "frsqest", _A2(A_T,A_A), 00012, SHUF) /* FRSQEST RT<-rsqrt(RA) */ -APUOP(M_FSM, RR, 0x1b4, "fsm", _A2(A_T,A_A), 00012, SHUF) /* FormSelMask% RT<-expand(Ra) */ -APUOP(M_FSMH, RR, 0x1b5, "fsmh", _A2(A_T,A_A), 00012, SHUF) /* FormSelMask% RT<-expand(Ra) */ -APUOP(M_FSMB, RR, 0x1b6, "fsmb", _A2(A_T,A_A), 00012, SHUF) /* FormSelMask% RT<-expand(Ra) */ -APUOP(M_GB, RR, 0x1b0, "gb", _A2(A_T,A_A), 00012, SHUF) /* GatherBits% RT<-gather(RA) */ -APUOP(M_GBH, RR, 0x1b1, "gbh", _A2(A_T,A_A), 00012, SHUF) /* GatherBits% RT<-gather(RA) */ -APUOP(M_GBB, RR, 0x1b2, "gbb", _A2(A_T,A_A), 00012, SHUF) /* GatherBits% RT<-gather(RA) */ -APUOP(M_CBD, RI7, 0x1f4, "cbd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */ -APUOP(M_CHD, RI7, 0x1f5, "chd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */ -APUOP(M_CWD, RI7, 0x1f6, "cwd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */ -APUOP(M_CDD, RI7, 0x1f7, "cdd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */ -APUOP(M_ROTQBII, RI7, 0x1f8, "rotqbii", _A3(A_T,A_A,A_U3), 00012, SHUF) /* ROTQBII RT<-RA<<<I7 */ -APUOP(M_ROTQBYI, RI7, 0x1fc, "rotqbyi", _A3(A_T,A_A,A_S7N), 00012, SHUF) /* ROTQBYI RT<-RA<<<(I7*8) */ -APUOP(M_ROTQMBII, RI7, 0x1f9, "rotqmbii", _A3(A_T,A_A,A_S3), 00012, SHUF) /* ROTQMBII RT<-RA<<I7 */ -APUOP(M_ROTQMBYI, RI7, 0x1fd, "rotqmbyi", _A3(A_T,A_A,A_S6), 00012, SHUF) /* ROTQMBYI RT<-RA<<I7 */ -APUOP(M_SHLQBII, RI7, 0x1fb, "shlqbii", _A3(A_T,A_A,A_U3), 00012, SHUF) /* SHLQBII RT<-RA<<I7 */ -APUOP(M_SHLQBYI, RI7, 0x1ff, "shlqbyi", _A3(A_T,A_A,A_U5), 00012, SHUF) /* SHLQBYI RT<-RA<<I7 */ -APUOP(M_STQD, RI10, 0x120, "stqd", _A4(A_T,A_S14,A_P,A_A), 00011, LS) /* SToreQDisp M[Ra+I10]<-RT */ -APUOP(M_BIHNZ, RR, 0x12b, "bihnz", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */ -APUOP(M_BIHZ, RR, 0x12a, "bihz", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */ -APUOP(M_BINZ, RR, 0x129, "binz", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */ -APUOP(M_BIZ, RR, 0x128, "biz", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */ -APUOP(M_CBX, RR, 0x1d4, "cbx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */ -APUOP(M_CHX, RR, 0x1d5, "chx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */ -APUOP(M_CWX, RR, 0x1d6, "cwx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */ -APUOP(M_CDX, RR, 0x1d7, "cdx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */ -APUOP(M_LQX, RR, 0x1c4, "lqx", _A3(A_T,A_A,A_B), 00112, LS) /* LoadQindeX RT<-M[Ra+Rb] */ -APUOP(M_ROTQBI, RR, 0x1d8, "rotqbi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQBI RT<-RA<<<Rb */ -APUOP(M_ROTQMBI, RR, 0x1d9, "rotqmbi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQMBI RT<-RA<<Rb */ -APUOP(M_SHLQBI, RR, 0x1db, "shlqbi", _A3(A_T,A_A,A_B), 00112, SHUF) /* SHLQBI RT<-RA<<Rb */ -APUOP(M_ROTQBY, RR, 0x1dc, "rotqby", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQBY RT<-RA<<<(Rb*8) */ -APUOP(M_ROTQMBY, RR, 0x1dd, "rotqmby", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQMBY RT<-RA<<Rb */ -APUOP(M_SHLQBY, RR, 0x1df, "shlqby", _A3(A_T,A_A,A_B), 00112, SHUF) /* SHLQBY RT<-RA<<Rb */ -APUOP(M_ROTQBYBI, RR, 0x1cc, "rotqbybi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQBYBI RT<-RA<<Rb */ -APUOP(M_ROTQMBYBI, RR, 0x1cd, "rotqmbybi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQMBYBI RT<-RA<<Rb */ -APUOP(M_SHLQBYBI, RR, 0x1cf, "shlqbybi", 
_A3(A_T,A_A,A_B), 00112, SHUF) /* SHLQBYBI RT<-RA<<Rb */ -APUOP(M_STQX, RR, 0x144, "stqx", _A3(A_T,A_A,A_B), 00111, LS) /* SToreQindeX M[Ra+Rb]<-RT */ -APUOP(M_SHUFB, RRR, 0x580, "shufb", _A4(A_C,A_A,A_B,A_T), 02111, SHUF) /* SHUFfleBytes RC<-f(RA,RB,RT) */ -APUOP(M_IL, RI16, 0x204, "il", _A2(A_T,A_S16), 00002, FX2) /* ImmLoad RT<-sxt(I16) */ -APUOP(M_ILH, RI16, 0x20c, "ilh", _A2(A_T,A_X16), 00002, FX2) /* ImmLoadH RT<-I16 */ -APUOP(M_ILHU, RI16, 0x208, "ilhu", _A2(A_T,A_X16), 00002, FX2) /* ImmLoadHUpper RT<-I16<<16 */ -APUOP(M_ILA, RI18, 0x210, "ila", _A2(A_T,A_U18), 00002, FX2) /* ImmLoadAddr RT<-zxt(I18) */ -APUOP(M_NOP, RR, 0x201, "nop", _A1(A_T), 00000, NOP) /* XNOP no_operation */ -APUOP(M_NOP2, RR, 0x201, "nop", _A0(), 00000, NOP) /* XNOP no_operation */ -APUOP(M_IOHL, RI16, 0x304, "iohl", _A2(A_T,A_X16), 00003, FX2) /* AddImmeXt RT<-RT+sxt(I16) */ -APUOP(M_ANDBI, RI10, 0x0b0, "andbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* AND%I RT<-RA&I10 */ -APUOP(M_ANDHI, RI10, 0x0a8, "andhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* AND%I RT<-RA&I10 */ -APUOP(M_ANDI, RI10, 0x0a0, "andi", _A3(A_T,A_A,A_S10), 00012, FX2) /* AND%I RT<-RA&I10 */ -APUOP(M_ORBI, RI10, 0x030, "orbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* OR%I RT<-RA|I10 */ -APUOP(M_ORHI, RI10, 0x028, "orhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* OR%I RT<-RA|I10 */ -APUOP(M_ORI, RI10, 0x020, "ori", _A3(A_T,A_A,A_S10), 00012, FX2) /* OR%I RT<-RA|I10 */ -APUOP(M_ORX, RR, 0x1f0, "orx", _A2(A_T,A_A), 00012, BR) /* ORX RT<-RA.w0|RA.w1|RA.w2|RA.w3 */ -APUOP(M_XORBI, RI10, 0x230, "xorbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* XOR%I RT<-RA^I10 */ -APUOP(M_XORHI, RI10, 0x228, "xorhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* XOR%I RT<-RA^I10 */ -APUOP(M_XORI, RI10, 0x220, "xori", _A3(A_T,A_A,A_S10), 00012, FX2) /* XOR%I RT<-RA^I10 */ -APUOP(M_AHI, RI10, 0x0e8, "ahi", _A3(A_T,A_A,A_S10), 00012, FX2) /* Add%Immed RT<-RA+I10 */ -APUOP(M_AI, RI10, 0x0e0, "ai", _A3(A_T,A_A,A_S10), 00012, FX2) /* Add%Immed RT<-RA+I10 */ -APUOP(M_SFHI, RI10, 0x068, "sfhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* SubFrom%Imm RT<-I10-RA */ -APUOP(M_SFI, RI10, 0x060, "sfi", _A3(A_T,A_A,A_S10), 00012, FX2) /* SubFrom%Imm RT<-I10-RA */ -APUOP(M_CGTBI, RI10, 0x270, "cgtbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* CGT%I RT<-(RA>I10) */ -APUOP(M_CGTHI, RI10, 0x268, "cgthi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CGT%I RT<-(RA>I10) */ -APUOP(M_CGTI, RI10, 0x260, "cgti", _A3(A_T,A_A,A_S10), 00012, FX2) /* CGT%I RT<-(RA>I10) */ -APUOP(M_CLGTBI, RI10, 0x2f0, "clgtbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* CLGT%I RT<-(RA>I10) */ -APUOP(M_CLGTHI, RI10, 0x2e8, "clgthi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CLGT%I RT<-(RA>I10) */ -APUOP(M_CLGTI, RI10, 0x2e0, "clgti", _A3(A_T,A_A,A_S10), 00012, FX2) /* CLGT%I RT<-(RA>I10) */ -APUOP(M_CEQBI, RI10, 0x3f0, "ceqbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* CEQ%I RT<-(RA=I10) */ -APUOP(M_CEQHI, RI10, 0x3e8, "ceqhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CEQ%I RT<-(RA=I10) */ -APUOP(M_CEQI, RI10, 0x3e0, "ceqi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CEQ%I RT<-(RA=I10) */ -APUOP(M_HGTI, RI10, 0x278, "hgti", _A3(A_T,A_A,A_S10), 00010, FX2) /* HaltGTI halt_if(RA>I10) */ -APUOP(M_HGTI2, RI10, 0x278, "hgti", _A2(A_A,A_S10), 00010, FX2) /* HaltGTI halt_if(RA>I10) */ -APUOP(M_HLGTI, RI10, 0x2f8, "hlgti", _A3(A_T,A_A,A_S10), 00010, FX2) /* HaltLGTI halt_if(RA>I10) */ -APUOP(M_HLGTI2, RI10, 0x2f8, "hlgti", _A2(A_A,A_S10), 00010, FX2) /* HaltLGTI halt_if(RA>I10) */ -APUOP(M_HEQI, RI10, 0x3f8, "heqi", _A3(A_T,A_A,A_S10), 00010, FX2) /* HaltEQImm halt_if(RA=I10) */ -APUOP(M_HEQI2, RI10, 0x3f8, "heqi", 
_A2(A_A,A_S10), 00010, FX2) /* HaltEQImm halt_if(RA=I10) */ -APUOP(M_MPYI, RI10, 0x3a0, "mpyi", _A3(A_T,A_A,A_S10), 00012, FP7) /* MPYI RT<-RA*I10 */ -APUOP(M_MPYUI, RI10, 0x3a8, "mpyui", _A3(A_T,A_A,A_S10), 00012, FP7) /* MPYUI RT<-RA*I10 */ -APUOP(M_CFLTS, RI8, 0x3b0, "cflts", _A3(A_T,A_A,A_U7A), 00012, FP7) /* CFLTS RT<-int(RA,I8) */ -APUOP(M_CFLTU, RI8, 0x3b2, "cfltu", _A3(A_T,A_A,A_U7A), 00012, FP7) /* CFLTU RT<-int(RA,I8) */ -APUOP(M_CSFLT, RI8, 0x3b4, "csflt", _A3(A_T,A_A,A_U7B), 00012, FP7) /* CSFLT RT<-flt(RA,I8) */ -APUOP(M_CUFLT, RI8, 0x3b6, "cuflt", _A3(A_T,A_A,A_U7B), 00012, FP7) /* CUFLT RT<-flt(RA,I8) */ -APUOP(M_FESD, RR, 0x3b8, "fesd", _A2(A_T,A_A), 00012, FPD) /* FESD RT<-double(RA) */ -APUOP(M_FRDS, RR, 0x3b9, "frds", _A2(A_T,A_A), 00012, FPD) /* FRDS RT<-single(RA) */ -APUOP(M_FSCRRD, RR, 0x398, "fscrrd", _A1(A_T), 00002, FPD) /* FSCRRD RT<-FP_status */ -APUOP(M_FSCRWR, RR, 0x3ba, "fscrwr", _A2(A_T,A_A), 00010, FP7) /* FSCRWR FP_status<-RA */ -APUOP(M_FSCRWR2, RR, 0x3ba, "fscrwr", _A1(A_A), 00010, FP7) /* FSCRWR FP_status<-RA */ -APUOP(M_CLZ, RR, 0x2a5, "clz", _A2(A_T,A_A), 00012, FX2) /* CLZ RT<-clz(RA) */ -APUOP(M_CNTB, RR, 0x2b4, "cntb", _A2(A_T,A_A), 00012, FXB) /* CNT RT<-pop(RA) */ -APUOP(M_XSBH, RR, 0x2b6, "xsbh", _A2(A_T,A_A), 00012, FX2) /* eXtSignBtoH RT<-sign_ext(RA) */ -APUOP(M_XSHW, RR, 0x2ae, "xshw", _A2(A_T,A_A), 00012, FX2) /* eXtSignHtoW RT<-sign_ext(RA) */ -APUOP(M_XSWD, RR, 0x2a6, "xswd", _A2(A_T,A_A), 00012, FX2) /* eXtSignWtoD RT<-sign_ext(RA) */ -APUOP(M_ROTI, RI7, 0x078, "roti", _A3(A_T,A_A,A_S7N), 00012, FX3) /* ROT%I RT<-RA<<<I7 */ -APUOP(M_ROTMI, RI7, 0x079, "rotmi", _A3(A_T,A_A,A_S7), 00012, FX3) /* ROT%MI RT<-RA<<I7 */ -APUOP(M_ROTMAI, RI7, 0x07a, "rotmai", _A3(A_T,A_A,A_S7), 00012, FX3) /* ROTMA%I RT<-RA<<I7 */ -APUOP(M_SHLI, RI7, 0x07b, "shli", _A3(A_T,A_A,A_U6), 00012, FX3) /* SHL%I RT<-RA<<I7 */ -APUOP(M_ROTHI, RI7, 0x07c, "rothi", _A3(A_T,A_A,A_S7N), 00012, FX3) /* ROT%I RT<-RA<<<I7 */ -APUOP(M_ROTHMI, RI7, 0x07d, "rothmi", _A3(A_T,A_A,A_S6), 00012, FX3) /* ROT%MI RT<-RA<<I7 */ -APUOP(M_ROTMAHI, RI7, 0x07e, "rotmahi", _A3(A_T,A_A,A_S6), 00012, FX3) /* ROTMA%I RT<-RA<<I7 */ -APUOP(M_SHLHI, RI7, 0x07f, "shlhi", _A3(A_T,A_A,A_U5), 00012, FX3) /* SHL%I RT<-RA<<I7 */ -APUOP(M_A, RR, 0x0c0, "a", _A3(A_T,A_A,A_B), 00112, FX2) /* Add% RT<-RA+RB */ -APUOP(M_AH, RR, 0x0c8, "ah", _A3(A_T,A_A,A_B), 00112, FX2) /* Add% RT<-RA+RB */ -APUOP(M_SF, RR, 0x040, "sf", _A3(A_T,A_A,A_B), 00112, FX2) /* SubFrom% RT<-RB-RA */ -APUOP(M_SFH, RR, 0x048, "sfh", _A3(A_T,A_A,A_B), 00112, FX2) /* SubFrom% RT<-RB-RA */ -APUOP(M_CGT, RR, 0x240, "cgt", _A3(A_T,A_A,A_B), 00112, FX2) /* CGT% RT<-(RA>RB) */ -APUOP(M_CGTB, RR, 0x250, "cgtb", _A3(A_T,A_A,A_B), 00112, FX2) /* CGT% RT<-(RA>RB) */ -APUOP(M_CGTH, RR, 0x248, "cgth", _A3(A_T,A_A,A_B), 00112, FX2) /* CGT% RT<-(RA>RB) */ -APUOP(M_CLGT, RR, 0x2c0, "clgt", _A3(A_T,A_A,A_B), 00112, FX2) /* CLGT% RT<-(RA>RB) */ -APUOP(M_CLGTB, RR, 0x2d0, "clgtb", _A3(A_T,A_A,A_B), 00112, FX2) /* CLGT% RT<-(RA>RB) */ -APUOP(M_CLGTH, RR, 0x2c8, "clgth", _A3(A_T,A_A,A_B), 00112, FX2) /* CLGT% RT<-(RA>RB) */ -APUOP(M_CEQ, RR, 0x3c0, "ceq", _A3(A_T,A_A,A_B), 00112, FX2) /* CEQ% RT<-(RA=RB) */ -APUOP(M_CEQB, RR, 0x3d0, "ceqb", _A3(A_T,A_A,A_B), 00112, FX2) /* CEQ% RT<-(RA=RB) */ -APUOP(M_CEQH, RR, 0x3c8, "ceqh", _A3(A_T,A_A,A_B), 00112, FX2) /* CEQ% RT<-(RA=RB) */ -APUOP(M_HGT, RR, 0x258, "hgt", _A3(A_T,A_A,A_B), 00110, FX2) /* HaltGT halt_if(RA>RB) */ -APUOP(M_HGT2, RR, 0x258, "hgt", _A2(A_A,A_B), 00110, FX2) /* HaltGT halt_if(RA>RB) */ 
-APUOP(M_HLGT, RR, 0x2d8, "hlgt", _A3(A_T,A_A,A_B), 00110, FX2) /* HaltLGT halt_if(RA>RB) */ -APUOP(M_HLGT2, RR, 0x2d8, "hlgt", _A2(A_A,A_B), 00110, FX2) /* HaltLGT halt_if(RA>RB) */ -APUOP(M_HEQ, RR, 0x3d8, "heq", _A3(A_T,A_A,A_B), 00110, FX2) /* HaltEQ halt_if(RA=RB) */ -APUOP(M_HEQ2, RR, 0x3d8, "heq", _A2(A_A,A_B), 00110, FX2) /* HaltEQ halt_if(RA=RB) */ -APUOP(M_FCEQ, RR, 0x3c2, "fceq", _A3(A_T,A_A,A_B), 00112, FX2) /* FCEQ RT<-(RA=RB) */ -APUOP(M_FCMEQ, RR, 0x3ca, "fcmeq", _A3(A_T,A_A,A_B), 00112, FX2) /* FCMEQ RT<-(|RA|=|RB|) */ -APUOP(M_FCGT, RR, 0x2c2, "fcgt", _A3(A_T,A_A,A_B), 00112, FX2) /* FCGT RT<-(RA<RB) */ -APUOP(M_FCMGT, RR, 0x2ca, "fcmgt", _A3(A_T,A_A,A_B), 00112, FX2) /* FCMGT RT<-(|RA|<|RB|) */ -APUOP(M_AND, RR, 0x0c1, "and", _A3(A_T,A_A,A_B), 00112, FX2) /* AND RT<-RA&RB */ -APUOP(M_NAND, RR, 0x0c9, "nand", _A3(A_T,A_A,A_B), 00112, FX2) /* NAND RT<-!(RA&RB) */ -APUOP(M_OR, RR, 0x041, "or", _A3(A_T,A_A,A_B), 00112, FX2) /* OR RT<-RA|RB */ -APUOP(M_NOR, RR, 0x049, "nor", _A3(A_T,A_A,A_B), 00112, FX2) /* NOR RT<-!(RA&RB) */ -APUOP(M_XOR, RR, 0x241, "xor", _A3(A_T,A_A,A_B), 00112, FX2) /* XOR RT<-RA^RB */ -APUOP(M_EQV, RR, 0x249, "eqv", _A3(A_T,A_A,A_B), 00112, FX2) /* EQuiValent RT<-!(RA^RB) */ -APUOP(M_ANDC, RR, 0x2c1, "andc", _A3(A_T,A_A,A_B), 00112, FX2) /* ANDComplement RT<-RA&!RB */ -APUOP(M_ORC, RR, 0x2c9, "orc", _A3(A_T,A_A,A_B), 00112, FX2) /* ORComplement RT<-RA|!RB */ -APUOP(M_ABSDB, RR, 0x053, "absdb", _A3(A_T,A_A,A_B), 00112, FXB) /* ABSoluteDiff RT<-|RA-RB| */ -APUOP(M_AVGB, RR, 0x0d3, "avgb", _A3(A_T,A_A,A_B), 00112, FXB) /* AVG% RT<-(RA+RB+1)/2 */ -APUOP(M_SUMB, RR, 0x253, "sumb", _A3(A_T,A_A,A_B), 00112, FXB) /* SUM% RT<-f(RA,RB) */ -APUOP(M_DFA, RR, 0x2cc, "dfa", _A3(A_T,A_A,A_B), 00112, FPD) /* DFAdd RT<-RA+RB */ -APUOP(M_DFM, RR, 0x2ce, "dfm", _A3(A_T,A_A,A_B), 00112, FPD) /* DFMul RT<-RA*RB */ -APUOP(M_DFS, RR, 0x2cd, "dfs", _A3(A_T,A_A,A_B), 00112, FPD) /* DFSub RT<-RA-RB */ -APUOP(M_FA, RR, 0x2c4, "fa", _A3(A_T,A_A,A_B), 00112, FP6) /* FAdd RT<-RA+RB */ -APUOP(M_FM, RR, 0x2c6, "fm", _A3(A_T,A_A,A_B), 00112, FP6) /* FMul RT<-RA*RB */ -APUOP(M_FS, RR, 0x2c5, "fs", _A3(A_T,A_A,A_B), 00112, FP6) /* FSub RT<-RA-RB */ -APUOP(M_MPY, RR, 0x3c4, "mpy", _A3(A_T,A_A,A_B), 00112, FP7) /* MPY RT<-RA*RB */ -APUOP(M_MPYH, RR, 0x3c5, "mpyh", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYH RT<-(RAh*RB)<<16 */ -APUOP(M_MPYHH, RR, 0x3c6, "mpyhh", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYHH RT<-RAh*RBh */ -APUOP(M_MPYHHU, RR, 0x3ce, "mpyhhu", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYHHU RT<-RAh*RBh */ -APUOP(M_MPYS, RR, 0x3c7, "mpys", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYS RT<-(RA*RB)>>16 */ -APUOP(M_MPYU, RR, 0x3cc, "mpyu", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYU RT<-RA*RB */ -APUOP(M_FI, RR, 0x3d4, "fi", _A3(A_T,A_A,A_B), 00112, FP7) /* FInterpolate RT<-f(RA,RB) */ -APUOP(M_ROT, RR, 0x058, "rot", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT% RT<-RA<<<RB */ -APUOP(M_ROTM, RR, 0x059, "rotm", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT%M RT<-RA<<Rb */ -APUOP(M_ROTMA, RR, 0x05a, "rotma", _A3(A_T,A_A,A_B), 00112, FX3) /* ROTMA% RT<-RA<<Rb */ -APUOP(M_SHL, RR, 0x05b, "shl", _A3(A_T,A_A,A_B), 00112, FX3) /* SHL% RT<-RA<<Rb */ -APUOP(M_ROTH, RR, 0x05c, "roth", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT% RT<-RA<<<RB */ -APUOP(M_ROTHM, RR, 0x05d, "rothm", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT%M RT<-RA<<Rb */ -APUOP(M_ROTMAH, RR, 0x05e, "rotmah", _A3(A_T,A_A,A_B), 00112, FX3) /* ROTMA% RT<-RA<<Rb */ -APUOP(M_SHLH, RR, 0x05f, "shlh", _A3(A_T,A_A,A_B), 00112, FX3) /* SHL% RT<-RA<<Rb */ -APUOP(M_MPYHHA, RR, 0x346, "mpyhha", 
_A3(A_T,A_A,A_B), 00113, FP7) /* MPYHHA RT<-RAh*RBh+RT */ -APUOP(M_MPYHHAU, RR, 0x34e, "mpyhhau", _A3(A_T,A_A,A_B), 00113, FP7) /* MPYHHAU RT<-RAh*RBh+RT */ -APUOP(M_DFMA, RR, 0x35c, "dfma", _A3(A_T,A_A,A_B), 00113, FPD) /* DFMAdd RT<-RT+RA*RB */ -APUOP(M_DFMS, RR, 0x35d, "dfms", _A3(A_T,A_A,A_B), 00113, FPD) /* DFMSub RT<-RA*RB-RT */ -APUOP(M_DFNMS, RR, 0x35e, "dfnms", _A3(A_T,A_A,A_B), 00113, FPD) /* DFNMSub RT<-RT-RA*RB */ -APUOP(M_DFNMA, RR, 0x35f, "dfnma", _A3(A_T,A_A,A_B), 00113, FPD) /* DFNMAdd RT<-(-RT)-RA*RB */ -APUOP(M_FMA, RRR, 0x700, "fma", _A4(A_C,A_A,A_B,A_T), 02111, FP6) /* FMAdd RC<-RT+RA*RB */ -APUOP(M_FMS, RRR, 0x780, "fms", _A4(A_C,A_A,A_B,A_T), 02111, FP6) /* FMSub RC<-RA*RB-RT */ -APUOP(M_FNMS, RRR, 0x680, "fnms", _A4(A_C,A_A,A_B,A_T), 02111, FP6) /* FNMSub RC<-RT-RA*RB */ -APUOP(M_MPYA, RRR, 0x600, "mpya", _A4(A_C,A_A,A_B,A_T), 02111, FP7) /* MPYA RC<-RA*RB+RT */ -APUOP(M_SELB, RRR, 0x400, "selb", _A4(A_C,A_A,A_B,A_T), 02111, FX2) /* SELectBits RC<-RA&RT|RB&!RT */ -/* for system function call, this uses op-code of mtspr */ -APUOP(M_SYSCALL, RI7, 0x10c, "syscall", _A3(A_T,A_A,A_S7N), 00002, SPR) /* System Call */ -/* -pseudo instruction: -system call -value of I9 operation -0 halt -1 rt[0] = open(MEM[ra[0]], ra[1]) -2 rt[0] = close(ra[0]) -3 rt[0] = read(ra[0], MEM[ra[1]], ra[2]) -4 rt[0] = write(ra[0], MEM[ra[1]], ra[2]) -5 printf(MEM[ra[0]], ra[1], ra[2], ra[3]) -42 rt[0] = clock() -52 rt[0] = lseek(ra0, ra1, ra2) - -*/ - - -/* new multiprecision add/sub */ -APUOP(M_ADDX, RR, 0x340, "addx", _A3(A_T,A_A,A_B), 00113, FX2) /* Add_eXtended RT<-RA+RB+RT */ -APUOP(M_CG, RR, 0x0c2, "cg", _A3(A_T,A_A,A_B), 00112, FX2) /* CarryGenerate RT<-cout(RA+RB) */ -APUOP(M_CGX, RR, 0x342, "cgx", _A3(A_T,A_A,A_B), 00113, FX2) /* CarryGen_eXtd RT<-cout(RA+RB+RT) */ -APUOP(M_SFX, RR, 0x341, "sfx", _A3(A_T,A_A,A_B), 00113, FX2) /* Add_eXtended RT<-RA+RB+RT */ -APUOP(M_BG, RR, 0x042, "bg", _A3(A_T,A_A,A_B), 00112, FX2) /* CarryGenerate RT<-cout(RA+RB) */ -APUOP(M_BGX, RR, 0x343, "bgx", _A3(A_T,A_A,A_B), 00113, FX2) /* CarryGen_eXtd RT<-cout(RA+RB+RT) */ - -/* - -The following ops are a subset of above except with feature bits set. 
-Feature bits are bits 11-17 of the instruction: - - 11 - C & P feature bit - 12 - disable interrupts - 13 - enable interrupts - -*/ -APUOPFB(M_BID, RR, 0x1a8, 0x20, "bid", _A1(A_A), 00010, BR) /* BI IP<-RA */ -APUOPFB(M_BIE, RR, 0x1a8, 0x10, "bie", _A1(A_A), 00010, BR) /* BI IP<-RA */ -APUOPFB(M_BISLD, RR, 0x1a9, 0x20, "bisld", _A2(A_T,A_A), 00012, BR) /* BISL RT,IP<-IP,RA */ -APUOPFB(M_BISLE, RR, 0x1a9, 0x10, "bisle", _A2(A_T,A_A), 00012, BR) /* BISL RT,IP<-IP,RA */ -APUOPFB(M_IRETD, RR, 0x1aa, 0x20, "iretd", _A1(A_A), 00010, BR) /* IRET IP<-SRR0 */ -APUOPFB(M_IRETD2, RR, 0x1aa, 0x20, "iretd", _A0(), 00010, BR) /* IRET IP<-SRR0 */ -APUOPFB(M_IRETE, RR, 0x1aa, 0x10, "irete", _A1(A_A), 00010, BR) /* IRET IP<-SRR0 */ -APUOPFB(M_IRETE2, RR, 0x1aa, 0x10, "irete", _A0(), 00010, BR) /* IRET IP<-SRR0 */ -APUOPFB(M_BISLEDD, RR, 0x1ab, 0x20, "bisledd", _A2(A_T,A_A), 00012, BR) /* BISLED RT,IP<-IP,RA_if(ext) */ -APUOPFB(M_BISLEDE, RR, 0x1ab, 0x10, "bislede", _A2(A_T,A_A), 00012, BR) /* BISLED RT,IP<-IP,RA_if(ext) */ -APUOPFB(M_BIHNZD, RR, 0x12b, 0x20, "bihnzd", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */ -APUOPFB(M_BIHNZE, RR, 0x12b, 0x10, "bihnze", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */ -APUOPFB(M_BIHZD, RR, 0x12a, 0x20, "bihzd", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */ -APUOPFB(M_BIHZE, RR, 0x12a, 0x10, "bihze", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */ -APUOPFB(M_BINZD, RR, 0x129, 0x20, "binzd", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */ -APUOPFB(M_BINZE, RR, 0x129, 0x10, "binze", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */ -APUOPFB(M_BIZD, RR, 0x128, 0x20, "bizd", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */ -APUOPFB(M_BIZE, RR, 0x128, 0x10, "bize", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */ -APUOPFB(M_SYNCC, RR, 0x002, 0x40, "syncc", _A0(), 00000, BR) /* SYNCC flush_pipe */ -APUOPFB(M_HBRP, LBTI, 0x1ac, 0x40, "hbrp", _A0(), 00010, LS) /* HBR BTB[B9]<-M[Ra] */ - -/* Synonyms required by the AS manual. */ -APUOP(M_LR, RI10, 0x020, "lr", _A2(A_T,A_A), 00012, FX2) /* OR%I RT<-RA|I10 */ -APUOP(M_BIHT, RR, 0x12b, "biht", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */ -APUOP(M_BIHF, RR, 0x12a, "bihf", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */ -APUOP(M_BIT, RR, 0x129, "bit", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */ -APUOP(M_BIF, RR, 0x128, "bif", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */ -APUOPFB(M_BIHTD, RR, 0x12b, 0x20, "bihtd", _A2(A_T,A_A), 00011, BR) /* BIHNF IP<-RA_if(RT) */ -APUOPFB(M_BIHTE, RR, 0x12b, 0x10, "bihte", _A2(A_T,A_A), 00011, BR) /* BIHNF IP<-RA_if(RT) */ -APUOPFB(M_BIHFD, RR, 0x12a, 0x20, "bihfd", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */ -APUOPFB(M_BIHFE, RR, 0x12a, 0x10, "bihfe", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */ -APUOPFB(M_BITD, RR, 0x129, 0x20, "bitd", _A2(A_T,A_A), 00011, BR) /* BINF IP<-RA_if(RT) */ -APUOPFB(M_BITE, RR, 0x129, 0x10, "bite", _A2(A_T,A_A), 00011, BR) /* BINF IP<-RA_if(RT) */ -APUOPFB(M_BIFD, RR, 0x128, 0x20, "bifd", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */ -APUOPFB(M_BIFE, RR, 0x128, 0x10, "bife", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */ - -#undef _A0 -#undef _A1 -#undef _A2 -#undef _A3 -#undef _A4 diff --git a/arch/powerpc/xmon/spu-opc.c b/arch/powerpc/xmon/spu-opc.c deleted file mode 100644 index 6d8197cc540b..000000000000 --- a/arch/powerpc/xmon/spu-opc.c +++ /dev/null @@ -1,34 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* SPU opcode list - - Copyright 2006 Free Software Foundation, Inc. 
- - This file is part of GDB, GAS, and the GNU binutils. - - */ - -#include <linux/kernel.h> -#include <linux/bug.h> -#include "spu.h" - -/* This file holds the Spu opcode table */ - - -/* - Example contents of spu-insn.h - id_tag mode mode type opcode mnemonic asmtype dependency FPU L/S? branch? instruction - QUAD WORD (0,RC,RB,RA,RT) latency - APUOP(M_LQD, 1, 0, RI9, 0x1f8, "lqd", ASM_RI9IDX, 00012, FXU, 1, 0) Load Quadword d-form - */ - -const struct spu_opcode spu_opcodes[] = { -#define APUOP(TAG,MACFORMAT,OPCODE,MNEMONIC,ASMFORMAT,DEP,PIPE) \ - { MACFORMAT, OPCODE, MNEMONIC, ASMFORMAT }, -#define APUOPFB(TAG,MACFORMAT,OPCODE,FB,MNEMONIC,ASMFORMAT,DEP,PIPE) \ - { MACFORMAT, OPCODE, MNEMONIC, ASMFORMAT }, -#include "spu-insns.h" -#undef APUOP -#undef APUOPFB -}; - -const int spu_num_opcodes = ARRAY_SIZE(spu_opcodes); diff --git a/arch/powerpc/xmon/spu.h b/arch/powerpc/xmon/spu.h deleted file mode 100644 index 2d13b1a5fa87..000000000000 --- a/arch/powerpc/xmon/spu.h +++ /dev/null @@ -1,115 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* SPU ELF support for BFD. - - Copyright 2006 Free Software Foundation, Inc. - - This file is part of GDB, GAS, and the GNU binutils. - - */ - - -/* These two enums are from rel_apu/common/spu_asm_format.h */ -/* definition of instruction format */ -typedef enum { - RRR, - RI18, - RI16, - RI10, - RI8, - RI7, - RR, - LBT, - LBTI, - IDATA, - UNKNOWN_IFORMAT -} spu_iformat; - -/* These values describe assembly instruction arguments. They indicate - * how to encode, range checking and which relocation to use. */ -typedef enum { - A_T, /* register at pos 0 */ - A_A, /* register at pos 7 */ - A_B, /* register at pos 14 */ - A_C, /* register at pos 21 */ - A_S, /* special purpose register at pos 7 */ - A_H, /* channel register at pos 7 */ - A_P, /* parenthesis, this has to separate regs from immediates */ - A_S3, - A_S6, - A_S7N, - A_S7, - A_U7A, - A_U7B, - A_S10B, - A_S10, - A_S11, - A_S11I, - A_S14, - A_S16, - A_S18, - A_R18, - A_U3, - A_U5, - A_U6, - A_U7, - A_U14, - A_X16, - A_U18, - A_MAX -} spu_aformat; - -enum spu_insns { -#define APUOP(TAG,MACFORMAT,OPCODE,MNEMONIC,ASMFORMAT,DEP,PIPE) \ - TAG, -#define APUOPFB(TAG,MACFORMAT,OPCODE,FB,MNEMONIC,ASMFORMAT,DEP,PIPE) \ - TAG, -#include "spu-insns.h" -#undef APUOP -#undef APUOPFB - M_SPU_MAX -}; - -struct spu_opcode -{ - spu_iformat insn_type; - unsigned int opcode; - char *mnemonic; - int arg[5]; -}; - -#define SIGNED_EXTRACT(insn,size,pos) (((int)((insn) << (32-size-pos))) >> (32-size)) -#define UNSIGNED_EXTRACT(insn,size,pos) (((insn) >> pos) & ((1 << size)-1)) - -#define DECODE_INSN_RT(insn) (insn & 0x7f) -#define DECODE_INSN_RA(insn) ((insn >> 7) & 0x7f) -#define DECODE_INSN_RB(insn) ((insn >> 14) & 0x7f) -#define DECODE_INSN_RC(insn) ((insn >> 21) & 0x7f) - -#define DECODE_INSN_I10(insn) SIGNED_EXTRACT(insn,10,14) -#define DECODE_INSN_U10(insn) UNSIGNED_EXTRACT(insn,10,14) - -/* For branching, immediate loads, hbr and lqa/stqa. 
*/ -#define DECODE_INSN_I16(insn) SIGNED_EXTRACT(insn,16,7) -#define DECODE_INSN_U16(insn) UNSIGNED_EXTRACT(insn,16,7) - -/* for stop */ -#define DECODE_INSN_U14(insn) UNSIGNED_EXTRACT(insn,14,0) - -/* For ila */ -#define DECODE_INSN_I18(insn) SIGNED_EXTRACT(insn,18,7) -#define DECODE_INSN_U18(insn) UNSIGNED_EXTRACT(insn,18,7) - -/* For rotate and shift and generate control mask */ -#define DECODE_INSN_I7(insn) SIGNED_EXTRACT(insn,7,14) -#define DECODE_INSN_U7(insn) UNSIGNED_EXTRACT(insn,7,14) - -/* For float <-> int conversion */ -#define DECODE_INSN_I8(insn) SIGNED_EXTRACT(insn,8,14) -#define DECODE_INSN_U8(insn) UNSIGNED_EXTRACT(insn,8,14) - -/* For hbr */ -#define DECODE_INSN_I9a(insn) ((SIGNED_EXTRACT(insn,2,23) << 7) | UNSIGNED_EXTRACT(insn,7,0)) -#define DECODE_INSN_I9b(insn) ((SIGNED_EXTRACT(insn,2,14) << 7) | UNSIGNED_EXTRACT(insn,7,0)) -#define DECODE_INSN_U9a(insn) ((UNSIGNED_EXTRACT(insn,2,23) << 7) | UNSIGNED_EXTRACT(insn,7,0)) -#define DECODE_INSN_U9b(insn) ((UNSIGNED_EXTRACT(insn,2,14) << 7) | UNSIGNED_EXTRACT(insn,7,0)) - diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c index 1acb53aab252..88abffa8b54c 100644 --- a/arch/powerpc/xmon/xmon.c +++ b/arch/powerpc/xmon/xmon.c @@ -41,8 +41,6 @@ #include <asm/rtas.h> #include <asm/sstep.h> #include <asm/irq_regs.h> -#include <asm/spu.h> -#include <asm/spu_priv1.h> #include <asm/setjmp.h> #include <asm/reg.h> #include <asm/debug.h> @@ -188,8 +186,6 @@ static void xmon_print_symbol(unsigned long address, const char *mid, const char *after); static const char *getvecname(unsigned long vec); -static int do_spu_cmd(void); - #ifdef CONFIG_44x static void dump_tlb_44x(void); #endif @@ -272,13 +268,6 @@ Commands:\n\ P list processes/tasks\n\ r print registers\n\ s single step\n" -#ifdef CONFIG_SPU_BASE -" ss stop execution on all spus\n\ - sr restore execution on stopped spus\n\ - sf # dump spu fields for spu # (in hex)\n\ - sd # dump spu local store for spu # (in hex)\n\ - sdi # disassemble spu local store for spu # (in hex)\n" -#endif " S print special registers\n\ Sa print all SPRs\n\ Sr # read SPR #\n\ @@ -1112,8 +1101,6 @@ cmds(struct pt_regs *excp) cacheflush(); break; case 's': - if (do_spu_cmd() == 0) - break; if (do_step(excp)) return cmd; break; @@ -4103,263 +4090,3 @@ void __init xmon_setup(void) if (xmon_early) debugger(NULL); } - -#ifdef CONFIG_SPU_BASE - -struct spu_info { - struct spu *spu; - u64 saved_mfc_sr1_RW; - u32 saved_spu_runcntl_RW; - unsigned long dump_addr; - u8 stopped_ok; -}; - -#define XMON_NUM_SPUS 16 /* Enough for current hardware */ - -static struct spu_info spu_info[XMON_NUM_SPUS]; - -void __init xmon_register_spus(struct list_head *list) -{ - struct spu *spu; - - list_for_each_entry(spu, list, full_list) { - if (spu->number >= XMON_NUM_SPUS) { - WARN_ON(1); - continue; - } - - spu_info[spu->number].spu = spu; - spu_info[spu->number].stopped_ok = 0; - spu_info[spu->number].dump_addr = (unsigned long) - spu_info[spu->number].spu->local_store; - } -} - -static void stop_spus(void) -{ - struct spu *spu; - volatile int i; - u64 tmp; - - for (i = 0; i < XMON_NUM_SPUS; i++) { - if (!spu_info[i].spu) - continue; - - if (setjmp(bus_error_jmp) == 0) { - catch_memory_errors = 1; - sync(); - - spu = spu_info[i].spu; - - spu_info[i].saved_spu_runcntl_RW = - in_be32(&spu->problem->spu_runcntl_RW); - - tmp = spu_mfc_sr1_get(spu); - spu_info[i].saved_mfc_sr1_RW = tmp; - - tmp &= ~MFC_STATE1_MASTER_RUN_CONTROL_MASK; - spu_mfc_sr1_set(spu, tmp); - - sync(); - __delay(200); - - spu_info[i].stopped_ok = 1; - - 
printf("Stopped spu %.2d (was %s)\n", i, - spu_info[i].saved_spu_runcntl_RW ? - "running" : "stopped"); - } else { - catch_memory_errors = 0; - printf("*** Error stopping spu %.2d\n", i); - } - catch_memory_errors = 0; - } -} - -static void restart_spus(void) -{ - struct spu *spu; - volatile int i; - - for (i = 0; i < XMON_NUM_SPUS; i++) { - if (!spu_info[i].spu) - continue; - - if (!spu_info[i].stopped_ok) { - printf("*** Error, spu %d was not successfully stopped" - ", not restarting\n", i); - continue; - } - - if (setjmp(bus_error_jmp) == 0) { - catch_memory_errors = 1; - sync(); - - spu = spu_info[i].spu; - spu_mfc_sr1_set(spu, spu_info[i].saved_mfc_sr1_RW); - out_be32(&spu->problem->spu_runcntl_RW, - spu_info[i].saved_spu_runcntl_RW); - - sync(); - __delay(200); - - printf("Restarted spu %.2d\n", i); - } else { - catch_memory_errors = 0; - printf("*** Error restarting spu %.2d\n", i); - } - catch_memory_errors = 0; - } -} - -#define DUMP_WIDTH 23 -#define DUMP_VALUE(format, field, value) \ -do { \ - if (setjmp(bus_error_jmp) == 0) { \ - catch_memory_errors = 1; \ - sync(); \ - printf(" %-*s = "format"\n", DUMP_WIDTH, \ - #field, value); \ - sync(); \ - __delay(200); \ - } else { \ - catch_memory_errors = 0; \ - printf(" %-*s = *** Error reading field.\n", \ - DUMP_WIDTH, #field); \ - } \ - catch_memory_errors = 0; \ -} while (0) - -#define DUMP_FIELD(obj, format, field) \ - DUMP_VALUE(format, field, obj->field) - -static void dump_spu_fields(struct spu *spu) -{ - printf("Dumping spu fields at address %p:\n", spu); - - DUMP_FIELD(spu, "0x%x", number); - DUMP_FIELD(spu, "%s", name); - DUMP_FIELD(spu, "0x%lx", local_store_phys); - DUMP_FIELD(spu, "0x%p", local_store); - DUMP_FIELD(spu, "0x%lx", ls_size); - DUMP_FIELD(spu, "0x%x", node); - DUMP_FIELD(spu, "0x%lx", flags); - DUMP_FIELD(spu, "%llu", class_0_pending); - DUMP_FIELD(spu, "0x%llx", class_0_dar); - DUMP_FIELD(spu, "0x%llx", class_1_dar); - DUMP_FIELD(spu, "0x%llx", class_1_dsisr); - DUMP_FIELD(spu, "0x%x", irqs[0]); - DUMP_FIELD(spu, "0x%x", irqs[1]); - DUMP_FIELD(spu, "0x%x", irqs[2]); - DUMP_FIELD(spu, "0x%x", slb_replace); - DUMP_FIELD(spu, "%d", pid); - DUMP_FIELD(spu, "0x%p", mm); - DUMP_FIELD(spu, "0x%p", ctx); - DUMP_FIELD(spu, "0x%p", rq); - DUMP_FIELD(spu, "0x%llx", timestamp); - DUMP_FIELD(spu, "0x%lx", problem_phys); - DUMP_FIELD(spu, "0x%p", problem); - DUMP_VALUE("0x%x", problem->spu_runcntl_RW, - in_be32(&spu->problem->spu_runcntl_RW)); - DUMP_VALUE("0x%x", problem->spu_status_R, - in_be32(&spu->problem->spu_status_R)); - DUMP_VALUE("0x%x", problem->spu_npc_RW, - in_be32(&spu->problem->spu_npc_RW)); - DUMP_FIELD(spu, "0x%p", priv2); - DUMP_FIELD(spu, "0x%p", pdata); -} - -static int spu_inst_dump(unsigned long adr, long count, int praddr) -{ - return generic_inst_dump(adr, count, praddr, print_insn_spu); -} - -static void dump_spu_ls(unsigned long num, int subcmd) -{ - unsigned long offset, addr, ls_addr; - - if (setjmp(bus_error_jmp) == 0) { - catch_memory_errors = 1; - sync(); - ls_addr = (unsigned long)spu_info[num].spu->local_store; - sync(); - __delay(200); - } else { - catch_memory_errors = 0; - printf("*** Error: accessing spu info for spu %ld\n", num); - return; - } - catch_memory_errors = 0; - - if (scanhex(&offset)) - addr = ls_addr + offset; - else - addr = spu_info[num].dump_addr; - - if (addr >= ls_addr + LS_SIZE) { - printf("*** Error: address outside of local store\n"); - return; - } - - switch (subcmd) { - case 'i': - addr += spu_inst_dump(addr, 16, 1); - last_cmd = "sdi\n"; - break; - default: - 
prdump(addr, 64); - addr += 64; - last_cmd = "sd\n"; - break; - } - - spu_info[num].dump_addr = addr; -} - -static int do_spu_cmd(void) -{ - static unsigned long num = 0; - int cmd, subcmd = 0; - - cmd = inchar(); - switch (cmd) { - case 's': - stop_spus(); - break; - case 'r': - restart_spus(); - break; - case 'd': - subcmd = inchar(); - if (isxdigit(subcmd) || subcmd == '\n') - termch = subcmd; - fallthrough; - case 'f': - scanhex(&num); - if (num >= XMON_NUM_SPUS || !spu_info[num].spu) { - printf("*** Error: invalid spu number\n"); - return 0; - } - - switch (cmd) { - case 'f': - dump_spu_fields(spu_info[num].spu); - break; - default: - dump_spu_ls(num, subcmd); - break; - } - - break; - default: - return -1; - } - - return 0; -} -#else /* ! CONFIG_SPU_BASE */ -static int do_spu_cmd(void) -{ - return -1; -} -#endif diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c index 9e51242ed125..a59c72e77645 100644 --- a/arch/x86/kernel/static_call.c +++ b/arch/x86/kernel/static_call.c @@ -158,7 +158,7 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) { mutex_lock(&text_mutex); - if (tramp) { + if (tramp && !site) { __static_call_validate(tramp, true, true); __static_call_transform(tramp, __sc_insn(!func, true), func, false); } diff --git a/drivers/cpufreq/Kconfig.powerpc b/drivers/cpufreq/Kconfig.powerpc index eb678fa5260a..551e65d35a1d 100644 --- a/drivers/cpufreq/Kconfig.powerpc +++ b/drivers/cpufreq/Kconfig.powerpc @@ -1,22 +1,4 @@ # SPDX-License-Identifier: GPL-2.0-only -config CPU_FREQ_CBE - tristate "CBE frequency scaling" - depends on CBE_RAS && PPC_CELL - default m - help - This adds the cpufreq driver for Cell BE processors. - For details, take a look at <file:Documentation/cpu-freq/>. - If you don't have such processor, say N - -config CPU_FREQ_CBE_PMI - bool "CBE frequency scaling using PMI interface" - depends on CPU_FREQ_CBE - default n - help - Select this, if you want to use the PMI interface to switch - frequencies. Using PMI, the processor will not only be able to run at - lower speed, but also at lower core voltage. 
- config CPU_FREQ_PMAC bool "Support for Apple PowerBooks" depends on ADB_PMU && PPC32 diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile index 890fff99f37d..22ab45209f9b 100644 --- a/drivers/cpufreq/Makefile +++ b/drivers/cpufreq/Makefile @@ -91,9 +91,6 @@ obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ) += vexpress-spc-cpufreq.o ################################################################################## # PowerPC platform drivers -obj-$(CONFIG_CPU_FREQ_CBE) += ppc-cbe-cpufreq.o -ppc-cbe-cpufreq-y += ppc_cbe_cpufreq_pervasive.o ppc_cbe_cpufreq.o -obj-$(CONFIG_CPU_FREQ_CBE_PMI) += ppc_cbe_cpufreq_pmi.o obj-$(CONFIG_QORIQ_CPUFREQ) += qoriq-cpufreq.o obj-$(CONFIG_CPU_FREQ_PMAC) += pmac32-cpufreq.o obj-$(CONFIG_CPU_FREQ_PMAC64) += pmac64-cpufreq.o diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.c b/drivers/cpufreq/ppc_cbe_cpufreq.c deleted file mode 100644 index 98595b3ea13f..000000000000 --- a/drivers/cpufreq/ppc_cbe_cpufreq.c +++ /dev/null @@ -1,173 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * cpufreq driver for the cell processor - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#include <linux/cpufreq.h> -#include <linux/module.h> -#include <linux/of.h> - -#include <asm/machdep.h> -#include <asm/cell-regs.h> - -#include "ppc_cbe_cpufreq.h" - -/* the CBE supports an 8 step frequency scaling */ -static struct cpufreq_frequency_table cbe_freqs[] = { - {0, 1, 0}, - {0, 2, 0}, - {0, 3, 0}, - {0, 4, 0}, - {0, 5, 0}, - {0, 6, 0}, - {0, 8, 0}, - {0, 10, 0}, - {0, 0, CPUFREQ_TABLE_END}, -}; - -/* - * hardware specific functions - */ - -static int set_pmode(unsigned int cpu, unsigned int slow_mode) -{ - int rc; - - if (cbe_cpufreq_has_pmi) - rc = cbe_cpufreq_set_pmode_pmi(cpu, slow_mode); - else - rc = cbe_cpufreq_set_pmode(cpu, slow_mode); - - pr_debug("register contains slow mode %d\n", cbe_cpufreq_get_pmode(cpu)); - - return rc; -} - -/* - * cpufreq functions - */ - -static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy) -{ - struct cpufreq_frequency_table *pos; - const u32 *max_freqp; - u32 max_freq; - int cur_pmode; - struct device_node *cpu; - - cpu = of_get_cpu_node(policy->cpu, NULL); - - if (!cpu) - return -ENODEV; - - pr_debug("init cpufreq on CPU %d\n", policy->cpu); - - /* - * Let's check we can actually get to the CELL regs - */ - if (!cbe_get_cpu_pmd_regs(policy->cpu) || - !cbe_get_cpu_mic_tm_regs(policy->cpu)) { - pr_info("invalid CBE regs pointers for cpufreq\n"); - of_node_put(cpu); - return -EINVAL; - } - - max_freqp = of_get_property(cpu, "clock-frequency", NULL); - - of_node_put(cpu); - - if (!max_freqp) - return -EINVAL; - - /* we need the freq in kHz */ - max_freq = *max_freqp / 1000; - - pr_debug("max clock-frequency is at %u kHz\n", max_freq); - pr_debug("initializing frequency table\n"); - - /* initialize frequency table */ - cpufreq_for_each_entry(pos, cbe_freqs) { - pos->frequency = max_freq / pos->driver_data; - pr_debug("%d: %d\n", (int)(pos - cbe_freqs), pos->frequency); - } - - /* if DEBUG is enabled set_pmode() measures the latency - * of a transition */ - policy->cpuinfo.transition_latency = 25000; - - cur_pmode = cbe_cpufreq_get_pmode(policy->cpu); - pr_debug("current pmode is at %d\n",cur_pmode); - - policy->cur = cbe_freqs[cur_pmode].frequency; - -#ifdef CONFIG_SMP - cpumask_copy(policy->cpus, cpu_sibling_mask(policy->cpu)); -#endif - - policy->freq_table = cbe_freqs; - cbe_cpufreq_pmi_policy_init(policy); - return 0; -} - -static void 
cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy) -{ - cbe_cpufreq_pmi_policy_exit(policy); -} - -static int cbe_cpufreq_target(struct cpufreq_policy *policy, - unsigned int cbe_pmode_new) -{ - pr_debug("setting frequency for cpu %d to %d kHz, " \ - "1/%d of max frequency\n", - policy->cpu, - cbe_freqs[cbe_pmode_new].frequency, - cbe_freqs[cbe_pmode_new].driver_data); - - return set_pmode(policy->cpu, cbe_pmode_new); -} - -static struct cpufreq_driver cbe_cpufreq_driver = { - .verify = cpufreq_generic_frequency_table_verify, - .target_index = cbe_cpufreq_target, - .init = cbe_cpufreq_cpu_init, - .exit = cbe_cpufreq_cpu_exit, - .name = "cbe-cpufreq", - .flags = CPUFREQ_CONST_LOOPS, -}; - -/* - * module init and destoy - */ - -static int __init cbe_cpufreq_init(void) -{ - int ret; - - if (!machine_is(cell)) - return -ENODEV; - - cbe_cpufreq_pmi_init(); - - ret = cpufreq_register_driver(&cbe_cpufreq_driver); - if (ret) - cbe_cpufreq_pmi_exit(); - - return ret; -} - -static void __exit cbe_cpufreq_exit(void) -{ - cpufreq_unregister_driver(&cbe_cpufreq_driver); - cbe_cpufreq_pmi_exit(); -} - -module_init(cbe_cpufreq_init); -module_exit(cbe_cpufreq_exit); - -MODULE_DESCRIPTION("cpufreq driver for Cell BE processors"); -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>"); diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.h b/drivers/cpufreq/ppc_cbe_cpufreq.h deleted file mode 100644 index 00cd8633b0d9..000000000000 --- a/drivers/cpufreq/ppc_cbe_cpufreq.h +++ /dev/null @@ -1,33 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * ppc_cbe_cpufreq.h - * - * This file contains the definitions used by the cbe_cpufreq driver. - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 - * - * Author: Christian Krafft <krafft@de.ibm.com> - * - */ - -#include <linux/cpufreq.h> -#include <linux/types.h> - -int cbe_cpufreq_set_pmode(int cpu, unsigned int pmode); -int cbe_cpufreq_get_pmode(int cpu); - -int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode); - -#if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI) -extern bool cbe_cpufreq_has_pmi; -void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy); -void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy); -void cbe_cpufreq_pmi_init(void); -void cbe_cpufreq_pmi_exit(void); -#else -#define cbe_cpufreq_has_pmi (0) -static inline void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy) {} -static inline void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy) {} -static inline void cbe_cpufreq_pmi_init(void) {} -static inline void cbe_cpufreq_pmi_exit(void) {} -#endif diff --git a/drivers/cpufreq/ppc_cbe_cpufreq_pervasive.c b/drivers/cpufreq/ppc_cbe_cpufreq_pervasive.c deleted file mode 100644 index 04830cd95333..000000000000 --- a/drivers/cpufreq/ppc_cbe_cpufreq_pervasive.c +++ /dev/null @@ -1,102 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * pervasive backend for the cbe_cpufreq driver - * - * This driver makes use of the pervasive unit to - * engage the desired frequency. - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#include <linux/io.h> -#include <linux/kernel.h> -#include <linux/time.h> -#include <asm/machdep.h> -#include <asm/hw_irq.h> -#include <asm/cell-regs.h> - -#include "ppc_cbe_cpufreq.h" - -/* to write to MIC register */ -static u64 MIC_Slow_Fast_Timer_table[] = { - [0 ... 
7] = 0x007fc00000000000ull, -}; - -/* more values for the MIC */ -static u64 MIC_Slow_Next_Timer_table[] = { - 0x0000240000000000ull, - 0x0000268000000000ull, - 0x000029C000000000ull, - 0x00002D0000000000ull, - 0x0000300000000000ull, - 0x0000334000000000ull, - 0x000039C000000000ull, - 0x00003FC000000000ull, -}; - - -int cbe_cpufreq_set_pmode(int cpu, unsigned int pmode) -{ - struct cbe_pmd_regs __iomem *pmd_regs; - struct cbe_mic_tm_regs __iomem *mic_tm_regs; - unsigned long flags; - u64 value; -#ifdef DEBUG - long time; -#endif - - local_irq_save(flags); - - mic_tm_regs = cbe_get_cpu_mic_tm_regs(cpu); - pmd_regs = cbe_get_cpu_pmd_regs(cpu); - -#ifdef DEBUG - time = jiffies; -#endif - - out_be64(&mic_tm_regs->slow_fast_timer_0, MIC_Slow_Fast_Timer_table[pmode]); - out_be64(&mic_tm_regs->slow_fast_timer_1, MIC_Slow_Fast_Timer_table[pmode]); - - out_be64(&mic_tm_regs->slow_next_timer_0, MIC_Slow_Next_Timer_table[pmode]); - out_be64(&mic_tm_regs->slow_next_timer_1, MIC_Slow_Next_Timer_table[pmode]); - - value = in_be64(&pmd_regs->pmcr); - /* set bits to zero */ - value &= 0xFFFFFFFFFFFFFFF8ull; - /* set bits to next pmode */ - value |= pmode; - - out_be64(&pmd_regs->pmcr, value); - -#ifdef DEBUG - /* wait until new pmode appears in status register */ - value = in_be64(&pmd_regs->pmsr) & 0x07; - while (value != pmode) { - cpu_relax(); - value = in_be64(&pmd_regs->pmsr) & 0x07; - } - - time = jiffies - time; - time = jiffies_to_msecs(time); - pr_debug("had to wait %lu ms for a transition using " \ - "pervasive unit\n", time); -#endif - local_irq_restore(flags); - - return 0; -} - - -int cbe_cpufreq_get_pmode(int cpu) -{ - int ret; - struct cbe_pmd_regs __iomem *pmd_regs; - - pmd_regs = cbe_get_cpu_pmd_regs(cpu); - ret = in_be64(&pmd_regs->pmsr) & 0x07; - - return ret; -} - diff --git a/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c b/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c deleted file mode 100644 index 6f0c32592416..000000000000 --- a/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c +++ /dev/null @@ -1,150 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * pmi backend for the cbe_cpufreq driver - * - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 - * - * Author: Christian Krafft <krafft@de.ibm.com> - */ - -#include <linux/kernel.h> -#include <linux/types.h> -#include <linux/timer.h> -#include <linux/init.h> -#include <linux/pm_qos.h> -#include <linux/slab.h> - -#include <asm/processor.h> -#include <asm/pmi.h> -#include <asm/cell-regs.h> - -#ifdef DEBUG -#include <asm/time.h> -#endif - -#include "ppc_cbe_cpufreq.h" - -bool cbe_cpufreq_has_pmi = false; -EXPORT_SYMBOL_GPL(cbe_cpufreq_has_pmi); - -/* - * hardware specific functions - */ - -int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode) -{ - int ret; - pmi_message_t pmi_msg; -#ifdef DEBUG - long time; -#endif - pmi_msg.type = PMI_TYPE_FREQ_CHANGE; - pmi_msg.data1 = cbe_cpu_to_node(cpu); - pmi_msg.data2 = pmode; - -#ifdef DEBUG - time = jiffies; -#endif - pmi_send_message(pmi_msg); - -#ifdef DEBUG - time = jiffies - time; - time = jiffies_to_msecs(time); - pr_debug("had to wait %lu ms for a transition using " \ - "PMI\n", time); -#endif - ret = pmi_msg.data2; - pr_debug("PMI returned slow mode %d\n", ret); - - return ret; -} -EXPORT_SYMBOL_GPL(cbe_cpufreq_set_pmode_pmi); - - -static void cbe_cpufreq_handle_pmi(pmi_message_t pmi_msg) -{ - struct cpufreq_policy *policy; - struct freq_qos_request *req; - u8 node, slow_mode; - int cpu, ret; - - BUG_ON(pmi_msg.type != PMI_TYPE_FREQ_CHANGE); - - node = pmi_msg.data1; - slow_mode = 
pmi_msg.data2; - - cpu = cbe_node_to_cpu(node); - - pr_debug("cbe_handle_pmi: node: %d max_freq: %d\n", node, slow_mode); - - policy = cpufreq_cpu_get(cpu); - if (!policy) { - pr_warn("cpufreq policy not found cpu%d\n", cpu); - return; - } - - req = policy->driver_data; - - ret = freq_qos_update_request(req, - policy->freq_table[slow_mode].frequency); - if (ret < 0) - pr_warn("Failed to update freq constraint: %d\n", ret); - else - pr_debug("limiting node %d to slow mode %d\n", node, slow_mode); - - cpufreq_cpu_put(policy); -} - -static struct pmi_handler cbe_pmi_handler = { - .type = PMI_TYPE_FREQ_CHANGE, - .handle_pmi_message = cbe_cpufreq_handle_pmi, -}; - -void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy) -{ - struct freq_qos_request *req; - int ret; - - if (!cbe_cpufreq_has_pmi) - return; - - req = kzalloc(sizeof(*req), GFP_KERNEL); - if (!req) - return; - - ret = freq_qos_add_request(&policy->constraints, req, FREQ_QOS_MAX, - policy->freq_table[0].frequency); - if (ret < 0) { - pr_err("Failed to add freq constraint (%d)\n", ret); - kfree(req); - return; - } - - policy->driver_data = req; -} -EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_init); - -void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy) -{ - struct freq_qos_request *req = policy->driver_data; - - if (cbe_cpufreq_has_pmi) { - freq_qos_remove_request(req); - kfree(req); - } -} -EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_exit); - -void cbe_cpufreq_pmi_init(void) -{ - if (!pmi_register_handler(&cbe_pmi_handler)) - cbe_cpufreq_has_pmi = true; -} -EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_init); - -void cbe_cpufreq_pmi_exit(void) -{ - pmi_unregister_handler(&cbe_pmi_handler); - cbe_cpufreq_has_pmi = false; -} -EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_exit); diff --git a/drivers/net/ethernet/toshiba/Kconfig b/drivers/net/ethernet/toshiba/Kconfig index 701e9b7c1c3b..b1e27e3b99eb 100644 --- a/drivers/net/ethernet/toshiba/Kconfig +++ b/drivers/net/ethernet/toshiba/Kconfig @@ -6,7 +6,7 @@ config NET_VENDOR_TOSHIBA bool "Toshiba devices" default y - depends on PCI && (PPC_IBM_CELL_BLADE || MIPS) || PPC_PS3 + depends on PCI && MIPS || PPC_PS3 help If you have a network (Ethernet) card belonging to this class, say Y. @@ -39,15 +39,6 @@ config GELIC_WIRELESS the driver automatically distinguishes the models, you can safely enable this option even if you have a wireless-less model. -config SPIDER_NET - tristate "Spider Gigabit Ethernet driver" - depends on PCI && PPC_IBM_CELL_BLADE - select FW_LOADER - select SUNGEM_PHY - help - This driver supports the Gigabit Ethernet chips present on the - Cell Processor-Based Blades from IBM. 
- config TC35815 tristate "TOSHIBA TC35815 Ethernet support" depends on PCI && MIPS diff --git a/drivers/net/ethernet/toshiba/Makefile b/drivers/net/ethernet/toshiba/Makefile index f434fd0f429e..27e2164cf7e9 100644 --- a/drivers/net/ethernet/toshiba/Makefile +++ b/drivers/net/ethernet/toshiba/Makefile @@ -6,6 +6,4 @@ obj-$(CONFIG_GELIC_NET) += ps3_gelic.o gelic_wireless-$(CONFIG_GELIC_WIRELESS) += ps3_gelic_wireless.o ps3_gelic-objs += ps3_gelic_net.o $(gelic_wireless-y) -spidernet-y += spider_net.o spider_net_ethtool.o -obj-$(CONFIG_SPIDER_NET) += spidernet.o obj-$(CONFIG_TC35815) += tc35815.o diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c deleted file mode 100644 index a4937c18d7cb..000000000000 --- a/drivers/net/ethernet/toshiba/spider_net.c +++ /dev/null @@ -1,2556 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Network device driver for Cell Processor-Based Blade and Celleb platform - * - * (C) Copyright IBM Corp. 2005 - * (C) Copyright 2006 TOSHIBA CORPORATION - * - * Authors : Utz Bacher <utz.bacher@de.ibm.com> - * Jens Osterkamp <Jens.Osterkamp@de.ibm.com> - */ - -#include <linux/compiler.h> -#include <linux/crc32.h> -#include <linux/delay.h> -#include <linux/etherdevice.h> -#include <linux/ethtool.h> -#include <linux/firmware.h> -#include <linux/if_vlan.h> -#include <linux/in.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/gfp.h> -#include <linux/ioport.h> -#include <linux/ip.h> -#include <linux/kernel.h> -#include <linux/mii.h> -#include <linux/module.h> -#include <linux/netdevice.h> -#include <linux/device.h> -#include <linux/pci.h> -#include <linux/skbuff.h> -#include <linux/tcp.h> -#include <linux/types.h> -#include <linux/vmalloc.h> -#include <linux/wait.h> -#include <linux/workqueue.h> -#include <linux/bitops.h> -#include <linux/of.h> -#include <net/checksum.h> - -#include "spider_net.h" - -MODULE_AUTHOR("Utz Bacher <utz.bacher@de.ibm.com> and Jens Osterkamp " \ - "<Jens.Osterkamp@de.ibm.com>"); -MODULE_DESCRIPTION("Spider Southbridge Gigabit Ethernet driver"); -MODULE_LICENSE("GPL"); -MODULE_VERSION(VERSION); -MODULE_FIRMWARE(SPIDER_NET_FIRMWARE_NAME); - -static int rx_descriptors = SPIDER_NET_RX_DESCRIPTORS_DEFAULT; -static int tx_descriptors = SPIDER_NET_TX_DESCRIPTORS_DEFAULT; - -module_param(rx_descriptors, int, 0444); -module_param(tx_descriptors, int, 0444); - -MODULE_PARM_DESC(rx_descriptors, "number of descriptors used " \ - "in rx chains"); -MODULE_PARM_DESC(tx_descriptors, "number of descriptors used " \ - "in tx chain"); - -char spider_net_driver_name[] = "spidernet"; - -static const struct pci_device_id spider_net_pci_tbl[] = { - { PCI_VENDOR_ID_TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_SPIDER_NET, - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL }, - { 0, } -}; - -MODULE_DEVICE_TABLE(pci, spider_net_pci_tbl); - -/** - * spider_net_read_reg - reads an SMMIO register of a card - * @card: device structure - * @reg: register to read from - * - * returns the content of the specified SMMIO register. - */ -static inline u32 -spider_net_read_reg(struct spider_net_card *card, u32 reg) -{ - /* We use the powerpc specific variants instead of readl_be() because - * we know spidernet is not a real PCI device and we can thus avoid the - * performance hit caused by the PCI workarounds. 
- */ - return in_be32(card->regs + reg); -} - -/** - * spider_net_write_reg - writes to an SMMIO register of a card - * @card: device structure - * @reg: register to write to - * @value: value to write into the specified SMMIO register - */ -static inline void -spider_net_write_reg(struct spider_net_card *card, u32 reg, u32 value) -{ - /* We use the powerpc specific variants instead of writel_be() because - * we know spidernet is not a real PCI device and we can thus avoid the - * performance hit caused by the PCI workarounds. - */ - out_be32(card->regs + reg, value); -} - -/** - * spider_net_write_phy - write to phy register - * @netdev: adapter to be written to - * @mii_id: id of MII - * @reg: PHY register - * @val: value to be written to phy register - * - * spider_net_write_phy_register writes to an arbitrary PHY - * register via the spider GPCWOPCMD register. We assume the queue does - * not run full (not more than 15 commands outstanding). - **/ -static void -spider_net_write_phy(struct net_device *netdev, int mii_id, - int reg, int val) -{ - struct spider_net_card *card = netdev_priv(netdev); - u32 writevalue; - - writevalue = ((u32)mii_id << 21) | - ((u32)reg << 16) | ((u32)val); - - spider_net_write_reg(card, SPIDER_NET_GPCWOPCMD, writevalue); -} - -/** - * spider_net_read_phy - read from phy register - * @netdev: network device to be read from - * @mii_id: id of MII - * @reg: PHY register - * - * Returns value read from PHY register - * - * spider_net_write_phy reads from an arbitrary PHY - * register via the spider GPCROPCMD register - **/ -static int -spider_net_read_phy(struct net_device *netdev, int mii_id, int reg) -{ - struct spider_net_card *card = netdev_priv(netdev); - u32 readvalue; - - readvalue = ((u32)mii_id << 21) | ((u32)reg << 16); - spider_net_write_reg(card, SPIDER_NET_GPCROPCMD, readvalue); - - /* we don't use semaphores to wait for an SPIDER_NET_GPROPCMPINT - * interrupt, as we poll for the completion of the read operation - * in spider_net_read_phy. 
Should take about 50 us - */ - do { - readvalue = spider_net_read_reg(card, SPIDER_NET_GPCROPCMD); - } while (readvalue & SPIDER_NET_GPREXEC); - - readvalue &= SPIDER_NET_GPRDAT_MASK; - - return readvalue; -} - -/** - * spider_net_setup_aneg - initial auto-negotiation setup - * @card: device structure - **/ -static void -spider_net_setup_aneg(struct spider_net_card *card) -{ - struct mii_phy *phy = &card->phy; - u32 advertise = 0; - u16 bmsr, estat; - - bmsr = spider_net_read_phy(card->netdev, phy->mii_id, MII_BMSR); - estat = spider_net_read_phy(card->netdev, phy->mii_id, MII_ESTATUS); - - if (bmsr & BMSR_10HALF) - advertise |= ADVERTISED_10baseT_Half; - if (bmsr & BMSR_10FULL) - advertise |= ADVERTISED_10baseT_Full; - if (bmsr & BMSR_100HALF) - advertise |= ADVERTISED_100baseT_Half; - if (bmsr & BMSR_100FULL) - advertise |= ADVERTISED_100baseT_Full; - - if ((bmsr & BMSR_ESTATEN) && (estat & ESTATUS_1000_TFULL)) - advertise |= SUPPORTED_1000baseT_Full; - if ((bmsr & BMSR_ESTATEN) && (estat & ESTATUS_1000_THALF)) - advertise |= SUPPORTED_1000baseT_Half; - - sungem_phy_probe(phy, phy->mii_id); - phy->def->ops->setup_aneg(phy, advertise); - -} - -/** - * spider_net_rx_irq_off - switch off rx irq on this spider card - * @card: device structure - * - * switches off rx irq by masking them out in the GHIINTnMSK register - */ -static void -spider_net_rx_irq_off(struct spider_net_card *card) -{ - u32 regvalue; - - regvalue = SPIDER_NET_INT0_MASK_VALUE & (~SPIDER_NET_RXINT); - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, regvalue); -} - -/** - * spider_net_rx_irq_on - switch on rx irq on this spider card - * @card: device structure - * - * switches on rx irq by enabling them in the GHIINTnMSK register - */ -static void -spider_net_rx_irq_on(struct spider_net_card *card) -{ - u32 regvalue; - - regvalue = SPIDER_NET_INT0_MASK_VALUE | SPIDER_NET_RXINT; - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, regvalue); -} - -/** - * spider_net_set_promisc - sets the unicast address or the promiscuous mode - * @card: card structure - * - * spider_net_set_promisc sets the unicast destination address filter and - * thus either allows for non-promisc mode or promisc mode - */ -static void -spider_net_set_promisc(struct spider_net_card *card) -{ - u32 macu, macl; - struct net_device *netdev = card->netdev; - - if (netdev->flags & IFF_PROMISC) { - /* clear destination entry 0 */ - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR, 0); - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR + 0x04, 0); - spider_net_write_reg(card, SPIDER_NET_GMRUA0FIL15R, - SPIDER_NET_PROMISC_VALUE); - } else { - macu = netdev->dev_addr[0]; - macu <<= 8; - macu |= netdev->dev_addr[1]; - memcpy(&macl, &netdev->dev_addr[2], sizeof(macl)); - - macu |= SPIDER_NET_UA_DESCR_VALUE; - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR, macu); - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR + 0x04, macl); - spider_net_write_reg(card, SPIDER_NET_GMRUA0FIL15R, - SPIDER_NET_NONPROMISC_VALUE); - } -} - -/** - * spider_net_get_descr_status -- returns the status of a descriptor - * @hwdescr: descriptor to look at - * - * returns the status as in the dmac_cmd_status field of the descriptor - */ -static inline int -spider_net_get_descr_status(struct spider_net_hw_descr *hwdescr) -{ - return hwdescr->dmac_cmd_status & SPIDER_NET_DESCR_IND_PROC_MASK; -} - -/** - * spider_net_free_chain - free descriptor chain - * @card: card structure - * @chain: address of chain - * - */ -static void -spider_net_free_chain(struct spider_net_card *card, - struct 
spider_net_descr_chain *chain) -{ - struct spider_net_descr *descr; - - descr = chain->ring; - do { - descr->bus_addr = 0; - descr->hwdescr->next_descr_addr = 0; - descr = descr->next; - } while (descr != chain->ring); - - dma_free_coherent(&card->pdev->dev, chain->num_desc * sizeof(struct spider_net_hw_descr), - chain->hwring, chain->dma_addr); -} - -/** - * spider_net_init_chain - alloc and link descriptor chain - * @card: card structure - * @chain: address of chain - * - * We manage a circular list that mirrors the hardware structure, - * except that the hardware uses bus addresses. - * - * Returns 0 on success, <0 on failure - */ -static int -spider_net_init_chain(struct spider_net_card *card, - struct spider_net_descr_chain *chain) -{ - int i; - struct spider_net_descr *descr; - struct spider_net_hw_descr *hwdescr; - dma_addr_t buf; - size_t alloc_size; - - alloc_size = chain->num_desc * sizeof(struct spider_net_hw_descr); - - chain->hwring = dma_alloc_coherent(&card->pdev->dev, alloc_size, - &chain->dma_addr, GFP_KERNEL); - if (!chain->hwring) - return -ENOMEM; - - /* Set up the hardware pointers in each descriptor */ - descr = chain->ring; - hwdescr = chain->hwring; - buf = chain->dma_addr; - for (i=0; i < chain->num_desc; i++, descr++, hwdescr++) { - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; - hwdescr->next_descr_addr = 0; - - descr->hwdescr = hwdescr; - descr->bus_addr = buf; - descr->next = descr + 1; - descr->prev = descr - 1; - - buf += sizeof(struct spider_net_hw_descr); - } - /* do actual circular list */ - (descr-1)->next = chain->ring; - chain->ring->prev = descr-1; - - spin_lock_init(&chain->lock); - chain->head = chain->ring; - chain->tail = chain->ring; - return 0; -} - -/** - * spider_net_free_rx_chain_contents - frees descr contents in rx chain - * @card: card structure - * - * returns 0 on success, <0 on failure - */ -static void -spider_net_free_rx_chain_contents(struct spider_net_card *card) -{ - struct spider_net_descr *descr; - - descr = card->rx_chain.head; - do { - if (descr->skb) { - dma_unmap_single(&card->pdev->dev, - descr->hwdescr->buf_addr, - SPIDER_NET_MAX_FRAME, - DMA_BIDIRECTIONAL); - dev_kfree_skb(descr->skb); - descr->skb = NULL; - } - descr = descr->next; - } while (descr != card->rx_chain.head); -} - -/** - * spider_net_prepare_rx_descr - Reinitialize RX descriptor - * @card: card structure - * @descr: descriptor to re-init - * - * Return 0 on success, <0 on failure. - * - * Allocates a new rx skb, iommu-maps it and attaches it to the - * descriptor. Mark the descriptor as activated, ready-to-use. 
- */ -static int -spider_net_prepare_rx_descr(struct spider_net_card *card, - struct spider_net_descr *descr) -{ - struct spider_net_hw_descr *hwdescr = descr->hwdescr; - dma_addr_t buf; - int offset; - int bufsize; - - /* we need to round up the buffer size to a multiple of 128 */ - bufsize = (SPIDER_NET_MAX_FRAME + SPIDER_NET_RXBUF_ALIGN - 1) & - (~(SPIDER_NET_RXBUF_ALIGN - 1)); - - /* and we need to have it 128 byte aligned, therefore we allocate a - * bit more - */ - /* allocate an skb */ - descr->skb = netdev_alloc_skb(card->netdev, - bufsize + SPIDER_NET_RXBUF_ALIGN - 1); - if (!descr->skb) { - if (netif_msg_rx_err(card) && net_ratelimit()) - dev_err(&card->netdev->dev, - "Not enough memory to allocate rx buffer\n"); - card->spider_stats.alloc_rx_skb_error++; - return -ENOMEM; - } - hwdescr->buf_size = bufsize; - hwdescr->result_size = 0; - hwdescr->valid_size = 0; - hwdescr->data_status = 0; - hwdescr->data_error = 0; - - offset = ((unsigned long)descr->skb->data) & - (SPIDER_NET_RXBUF_ALIGN - 1); - if (offset) - skb_reserve(descr->skb, SPIDER_NET_RXBUF_ALIGN - offset); - /* iommu-map the skb */ - buf = dma_map_single(&card->pdev->dev, descr->skb->data, - SPIDER_NET_MAX_FRAME, DMA_FROM_DEVICE); - if (dma_mapping_error(&card->pdev->dev, buf)) { - dev_kfree_skb_any(descr->skb); - descr->skb = NULL; - if (netif_msg_rx_err(card) && net_ratelimit()) - dev_err(&card->netdev->dev, "Could not iommu-map rx buffer\n"); - card->spider_stats.rx_iommu_map_error++; - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; - } else { - hwdescr->buf_addr = buf; - wmb(); - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_CARDOWNED | - SPIDER_NET_DMAC_NOINTR_COMPLETE; - } - - return 0; -} - -/** - * spider_net_enable_rxchtails - sets RX dmac chain tail addresses - * @card: card structure - * - * spider_net_enable_rxchtails sets the RX DMAC chain tail addresses in the - * chip by writing to the appropriate register. DMA is enabled in - * spider_net_enable_rxdmac. - */ -static inline void -spider_net_enable_rxchtails(struct spider_net_card *card) -{ - /* assume chain is aligned correctly */ - spider_net_write_reg(card, SPIDER_NET_GDADCHA , - card->rx_chain.tail->bus_addr); -} - -/** - * spider_net_enable_rxdmac - enables a receive DMA controller - * @card: card structure - * - * spider_net_enable_rxdmac enables the DMA controller by setting RX_DMA_EN - * in the GDADMACCNTR register - */ -static inline void -spider_net_enable_rxdmac(struct spider_net_card *card) -{ - wmb(); - spider_net_write_reg(card, SPIDER_NET_GDADMACCNTR, - SPIDER_NET_DMA_RX_VALUE); -} - -/** - * spider_net_disable_rxdmac - disables the receive DMA controller - * @card: card structure - * - * spider_net_disable_rxdmac terminates processing on the DMA controller - * by turing off the DMA controller, with the force-end flag set. - */ -static inline void -spider_net_disable_rxdmac(struct spider_net_card *card) -{ - spider_net_write_reg(card, SPIDER_NET_GDADMACCNTR, - SPIDER_NET_DMA_RX_FEND_VALUE); -} - -/** - * spider_net_refill_rx_chain - refills descriptors/skbs in the rx chains - * @card: card structure - * - * refills descriptors in the rx chain: allocates skbs and iommu-maps them. - */ -static void -spider_net_refill_rx_chain(struct spider_net_card *card) -{ - struct spider_net_descr_chain *chain = &card->rx_chain; - unsigned long flags; - - /* one context doing the refill (and a second context seeing that - * and omitting it) is ok. If called by NAPI, we'll be called again - * as spider_net_decode_one_descr is called several times. 
If some - * interrupt calls us, the NAPI is about to clean up anyway. - */ - if (!spin_trylock_irqsave(&chain->lock, flags)) - return; - - while (spider_net_get_descr_status(chain->head->hwdescr) == - SPIDER_NET_DESCR_NOT_IN_USE) { - if (spider_net_prepare_rx_descr(card, chain->head)) - break; - chain->head = chain->head->next; - } - - spin_unlock_irqrestore(&chain->lock, flags); -} - -/** - * spider_net_alloc_rx_skbs - Allocates rx skbs in rx descriptor chains - * @card: card structure - * - * Returns 0 on success, <0 on failure. - */ -static int -spider_net_alloc_rx_skbs(struct spider_net_card *card) -{ - struct spider_net_descr_chain *chain = &card->rx_chain; - struct spider_net_descr *start = chain->tail; - struct spider_net_descr *descr = start; - - /* Link up the hardware chain pointers */ - do { - descr->prev->hwdescr->next_descr_addr = descr->bus_addr; - descr = descr->next; - } while (descr != start); - - /* Put at least one buffer into the chain. if this fails, - * we've got a problem. If not, spider_net_refill_rx_chain - * will do the rest at the end of this function. - */ - if (spider_net_prepare_rx_descr(card, chain->head)) - goto error; - else - chain->head = chain->head->next; - - /* This will allocate the rest of the rx buffers; - * if not, it's business as usual later on. - */ - spider_net_refill_rx_chain(card); - spider_net_enable_rxdmac(card); - return 0; - -error: - spider_net_free_rx_chain_contents(card); - return -ENOMEM; -} - -/** - * spider_net_get_multicast_hash - generates hash for multicast filter table - * @netdev: interface device structure - * @addr: multicast address - * - * returns the hash value. - * - * spider_net_get_multicast_hash calculates a hash value for a given multicast - * address, that is used to set the multicast filter tables - */ -static u8 -spider_net_get_multicast_hash(struct net_device *netdev, __u8 *addr) -{ - u32 crc; - u8 hash; - char addr_for_crc[ETH_ALEN] = { 0, }; - int i, bit; - - for (i = 0; i < ETH_ALEN * 8; i++) { - bit = (addr[i / 8] >> (i % 8)) & 1; - addr_for_crc[ETH_ALEN - 1 - i / 8] += bit << (7 - (i % 8)); - } - - crc = crc32_be(~0, addr_for_crc, netdev->addr_len); - - hash = (crc >> 27); - hash <<= 3; - hash |= crc & 7; - hash &= 0xff; - - return hash; -} - -/** - * spider_net_set_multi - sets multicast addresses and promisc flags - * @netdev: interface device structure - * - * spider_net_set_multi configures multicast addresses as needed for the - * netdev interface. 
It also sets up multicast, allmulti and promisc - * flags appropriately - */ -static void -spider_net_set_multi(struct net_device *netdev) -{ - struct netdev_hw_addr *ha; - u8 hash; - int i; - u32 reg; - struct spider_net_card *card = netdev_priv(netdev); - DECLARE_BITMAP(bitmask, SPIDER_NET_MULTICAST_HASHES); - - spider_net_set_promisc(card); - - if (netdev->flags & IFF_ALLMULTI) { - bitmap_fill(bitmask, SPIDER_NET_MULTICAST_HASHES); - goto write_hash; - } - - bitmap_zero(bitmask, SPIDER_NET_MULTICAST_HASHES); - - /* well, we know, what the broadcast hash value is: it's xfd - hash = spider_net_get_multicast_hash(netdev, netdev->broadcast); */ - __set_bit(0xfd, bitmask); - - netdev_for_each_mc_addr(ha, netdev) { - hash = spider_net_get_multicast_hash(netdev, ha->addr); - __set_bit(hash, bitmask); - } - -write_hash: - for (i = 0; i < SPIDER_NET_MULTICAST_HASHES / 4; i++) { - reg = 0; - if (test_bit(i * 4, bitmask)) - reg += 0x08; - reg <<= 8; - if (test_bit(i * 4 + 1, bitmask)) - reg += 0x08; - reg <<= 8; - if (test_bit(i * 4 + 2, bitmask)) - reg += 0x08; - reg <<= 8; - if (test_bit(i * 4 + 3, bitmask)) - reg += 0x08; - - spider_net_write_reg(card, SPIDER_NET_GMRMHFILnR + i * 4, reg); - } -} - -/** - * spider_net_prepare_tx_descr - fill tx descriptor with skb data - * @card: card structure - * @skb: packet to use - * - * returns 0 on success, <0 on failure. - * - * fills out the descriptor structure with skb data and len. Copies data, - * if needed (32bit DMA!) - */ -static int -spider_net_prepare_tx_descr(struct spider_net_card *card, - struct sk_buff *skb) -{ - struct spider_net_descr_chain *chain = &card->tx_chain; - struct spider_net_descr *descr; - struct spider_net_hw_descr *hwdescr; - dma_addr_t buf; - unsigned long flags; - - buf = dma_map_single(&card->pdev->dev, skb->data, skb->len, - DMA_TO_DEVICE); - if (dma_mapping_error(&card->pdev->dev, buf)) { - if (netif_msg_tx_err(card) && net_ratelimit()) - dev_err(&card->netdev->dev, "could not iommu-map packet (%p, %i). " - "Dropping packet\n", skb->data, skb->len); - card->spider_stats.tx_iommu_map_error++; - return -ENOMEM; - } - - spin_lock_irqsave(&chain->lock, flags); - descr = card->tx_chain.head; - if (descr->next == chain->tail->prev) { - spin_unlock_irqrestore(&chain->lock, flags); - dma_unmap_single(&card->pdev->dev, buf, skb->len, - DMA_TO_DEVICE); - return -ENOMEM; - } - hwdescr = descr->hwdescr; - chain->head = descr->next; - - descr->skb = skb; - hwdescr->buf_addr = buf; - hwdescr->buf_size = skb->len; - hwdescr->next_descr_addr = 0; - hwdescr->data_status = 0; - - hwdescr->dmac_cmd_status = - SPIDER_NET_DESCR_CARDOWNED | SPIDER_NET_DMAC_TXFRMTL; - spin_unlock_irqrestore(&chain->lock, flags); - - if (skb->ip_summed == CHECKSUM_PARTIAL) - switch (ip_hdr(skb)->protocol) { - case IPPROTO_TCP: - hwdescr->dmac_cmd_status |= SPIDER_NET_DMAC_TCP; - break; - case IPPROTO_UDP: - hwdescr->dmac_cmd_status |= SPIDER_NET_DMAC_UDP; - break; - } - - /* Chain the bus address, so that the DMA engine finds this descr. */ - wmb(); - descr->prev->hwdescr->next_descr_addr = descr->bus_addr; - - netif_trans_update(card->netdev); /* set netdev watchdog timer */ - return 0; -} - -static int -spider_net_set_low_watermark(struct spider_net_card *card) -{ - struct spider_net_descr *descr = card->tx_chain.tail; - struct spider_net_hw_descr *hwdescr; - unsigned long flags; - int status; - int cnt=0; - int i; - - /* Measure the length of the queue. Measurement does not - * need to be precise -- does not need a lock. 
- */ - while (descr != card->tx_chain.head) { - status = descr->hwdescr->dmac_cmd_status & SPIDER_NET_DESCR_NOT_IN_USE; - if (status == SPIDER_NET_DESCR_NOT_IN_USE) - break; - descr = descr->next; - cnt++; - } - - /* If TX queue is short, don't even bother with interrupts */ - if (cnt < card->tx_chain.num_desc/4) - return cnt; - - /* Set low-watermark 3/4th's of the way into the queue. */ - descr = card->tx_chain.tail; - cnt = (cnt*3)/4; - for (i=0;i<cnt; i++) - descr = descr->next; - - /* Set the new watermark, clear the old watermark */ - spin_lock_irqsave(&card->tx_chain.lock, flags); - descr->hwdescr->dmac_cmd_status |= SPIDER_NET_DESCR_TXDESFLG; - if (card->low_watermark && card->low_watermark != descr) { - hwdescr = card->low_watermark->hwdescr; - hwdescr->dmac_cmd_status = - hwdescr->dmac_cmd_status & ~SPIDER_NET_DESCR_TXDESFLG; - } - card->low_watermark = descr; - spin_unlock_irqrestore(&card->tx_chain.lock, flags); - return cnt; -} - -/** - * spider_net_release_tx_chain - processes sent tx descriptors - * @card: adapter structure - * @brutal: if set, don't care about whether descriptor seems to be in use - * - * returns 0 if the tx ring is empty, otherwise 1. - * - * spider_net_release_tx_chain releases the tx descriptors that spider has - * finished with (if non-brutal) or simply release tx descriptors (if brutal). - * If some other context is calling this function, we return 1 so that we're - * scheduled again (if we were scheduled) and will not lose initiative. - */ -static int -spider_net_release_tx_chain(struct spider_net_card *card, int brutal) -{ - struct net_device *dev = card->netdev; - struct spider_net_descr_chain *chain = &card->tx_chain; - struct spider_net_descr *descr; - struct spider_net_hw_descr *hwdescr; - struct sk_buff *skb; - u32 buf_addr; - unsigned long flags; - int status; - - while (1) { - spin_lock_irqsave(&chain->lock, flags); - if (chain->tail == chain->head) { - spin_unlock_irqrestore(&chain->lock, flags); - return 0; - } - descr = chain->tail; - hwdescr = descr->hwdescr; - - status = spider_net_get_descr_status(hwdescr); - switch (status) { - case SPIDER_NET_DESCR_COMPLETE: - dev->stats.tx_packets++; - dev->stats.tx_bytes += descr->skb->len; - break; - - case SPIDER_NET_DESCR_CARDOWNED: - if (!brutal) { - spin_unlock_irqrestore(&chain->lock, flags); - return 1; - } - - /* fallthrough, if we release the descriptors - * brutally (then we don't care about - * SPIDER_NET_DESCR_CARDOWNED) - */ - fallthrough; - - case SPIDER_NET_DESCR_RESPONSE_ERROR: - case SPIDER_NET_DESCR_PROTECTION_ERROR: - case SPIDER_NET_DESCR_FORCE_END: - if (netif_msg_tx_err(card)) - dev_err(&card->netdev->dev, "forcing end of tx descriptor " - "with status x%02x\n", status); - dev->stats.tx_errors++; - break; - - default: - dev->stats.tx_dropped++; - if (!brutal) { - spin_unlock_irqrestore(&chain->lock, flags); - return 1; - } - } - - chain->tail = descr->next; - hwdescr->dmac_cmd_status |= SPIDER_NET_DESCR_NOT_IN_USE; - skb = descr->skb; - descr->skb = NULL; - buf_addr = hwdescr->buf_addr; - spin_unlock_irqrestore(&chain->lock, flags); - - /* unmap the skb */ - if (skb) { - dma_unmap_single(&card->pdev->dev, buf_addr, skb->len, - DMA_TO_DEVICE); - dev_consume_skb_any(skb); - } - } - return 0; -} - -/** - * spider_net_kick_tx_dma - enables TX DMA processing - * @card: card structure - * - * This routine will start the transmit DMA running if - * it is not already running. This routine ned only be - * called when queueing a new packet to an empty tx queue. 
- * Writes the current tx chain head as start address - * of the tx descriptor chain and enables the transmission - * DMA engine. - */ -static inline void -spider_net_kick_tx_dma(struct spider_net_card *card) -{ - struct spider_net_descr *descr; - - if (spider_net_read_reg(card, SPIDER_NET_GDTDMACCNTR) & - SPIDER_NET_TX_DMA_EN) - goto out; - - descr = card->tx_chain.tail; - for (;;) { - if (spider_net_get_descr_status(descr->hwdescr) == - SPIDER_NET_DESCR_CARDOWNED) { - spider_net_write_reg(card, SPIDER_NET_GDTDCHA, - descr->bus_addr); - spider_net_write_reg(card, SPIDER_NET_GDTDMACCNTR, - SPIDER_NET_DMA_TX_VALUE); - break; - } - if (descr == card->tx_chain.head) - break; - descr = descr->next; - } - -out: - mod_timer(&card->tx_timer, jiffies + SPIDER_NET_TX_TIMER); -} - -/** - * spider_net_xmit - transmits a frame over the device - * @skb: packet to send out - * @netdev: interface device structure - * - * returns NETDEV_TX_OK on success, NETDEV_TX_BUSY on failure - */ -static netdev_tx_t -spider_net_xmit(struct sk_buff *skb, struct net_device *netdev) -{ - int cnt; - struct spider_net_card *card = netdev_priv(netdev); - - spider_net_release_tx_chain(card, 0); - - if (spider_net_prepare_tx_descr(card, skb) != 0) { - netdev->stats.tx_dropped++; - netif_stop_queue(netdev); - return NETDEV_TX_BUSY; - } - - cnt = spider_net_set_low_watermark(card); - if (cnt < 5) - spider_net_kick_tx_dma(card); - return NETDEV_TX_OK; -} - -/** - * spider_net_cleanup_tx_ring - cleans up the TX ring - * @t: timer context used to obtain the pointer to net card data structure - * - * spider_net_cleanup_tx_ring is called by either the tx_timer - * or from the NAPI polling routine. - * This routine releases resources associted with transmitted - * packets, including updating the queue tail pointer. - */ -static void -spider_net_cleanup_tx_ring(struct timer_list *t) -{ - struct spider_net_card *card = from_timer(card, t, tx_timer); - if ((spider_net_release_tx_chain(card, 0) != 0) && - (card->netdev->flags & IFF_UP)) { - spider_net_kick_tx_dma(card); - netif_wake_queue(card->netdev); - } -} - -/** - * spider_net_do_ioctl - called for device ioctls - * @netdev: interface device structure - * @ifr: request parameter structure for ioctl - * @cmd: command code for ioctl - * - * returns 0 on success, <0 on failure. Currently, we have no special ioctls. - * -EOPNOTSUPP is returned, if an unknown ioctl was requested - */ -static int -spider_net_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) -{ - switch (cmd) { - default: - return -EOPNOTSUPP; - } -} - -/** - * spider_net_pass_skb_up - takes an skb from a descriptor and passes it on - * @descr: descriptor to process - * @card: card structure - * - * Fills out skb structure and passes the data to the stack. - * The descriptor state is not changed. 
- */ -static void -spider_net_pass_skb_up(struct spider_net_descr *descr, - struct spider_net_card *card) -{ - struct spider_net_hw_descr *hwdescr = descr->hwdescr; - struct sk_buff *skb = descr->skb; - struct net_device *netdev = card->netdev; - u32 data_status = hwdescr->data_status; - u32 data_error = hwdescr->data_error; - - skb_put(skb, hwdescr->valid_size); - - /* the card seems to add 2 bytes of junk in front - * of the ethernet frame - */ -#define SPIDER_MISALIGN 2 - skb_pull(skb, SPIDER_MISALIGN); - skb->protocol = eth_type_trans(skb, netdev); - - /* checksum offload */ - skb_checksum_none_assert(skb); - if (netdev->features & NETIF_F_RXCSUM) { - if ( ( (data_status & SPIDER_NET_DATA_STATUS_CKSUM_MASK) == - SPIDER_NET_DATA_STATUS_CKSUM_MASK) && - !(data_error & SPIDER_NET_DATA_ERR_CKSUM_MASK)) - skb->ip_summed = CHECKSUM_UNNECESSARY; - } - - if (data_status & SPIDER_NET_VLAN_PACKET) { - /* further enhancements: HW-accel VLAN */ - } - - /* update netdevice statistics */ - netdev->stats.rx_packets++; - netdev->stats.rx_bytes += skb->len; - - /* pass skb up to stack */ - netif_receive_skb(skb); -} - -static void show_rx_chain(struct spider_net_card *card) -{ - struct spider_net_descr_chain *chain = &card->rx_chain; - struct spider_net_descr *start= chain->tail; - struct spider_net_descr *descr= start; - struct spider_net_hw_descr *hwd = start->hwdescr; - struct device *dev = &card->netdev->dev; - u32 curr_desc, next_desc; - int status; - - int tot = 0; - int cnt = 0; - int off = start - chain->ring; - int cstat = hwd->dmac_cmd_status; - - dev_info(dev, "Total number of descrs=%d\n", - chain->num_desc); - dev_info(dev, "Chain tail located at descr=%d, status=0x%x\n", - off, cstat); - - curr_desc = spider_net_read_reg(card, SPIDER_NET_GDACTDPA); - next_desc = spider_net_read_reg(card, SPIDER_NET_GDACNEXTDA); - - status = cstat; - do - { - hwd = descr->hwdescr; - off = descr - chain->ring; - status = hwd->dmac_cmd_status; - - if (descr == chain->head) - dev_info(dev, "Chain head is at %d, head status=0x%x\n", - off, status); - - if (curr_desc == descr->bus_addr) - dev_info(dev, "HW curr desc (GDACTDPA) is at %d, status=0x%x\n", - off, status); - - if (next_desc == descr->bus_addr) - dev_info(dev, "HW next desc (GDACNEXTDA) is at %d, status=0x%x\n", - off, status); - - if (hwd->next_descr_addr == 0) - dev_info(dev, "chain is cut at %d\n", off); - - if (cstat != status) { - int from = (chain->num_desc + off - cnt) % chain->num_desc; - int to = (chain->num_desc + off - 1) % chain->num_desc; - dev_info(dev, "Have %d (from %d to %d) descrs " - "with stat=0x%08x\n", cnt, from, to, cstat); - cstat = status; - cnt = 0; - } - - cnt ++; - tot ++; - descr = descr->next; - } while (descr != start); - - dev_info(dev, "Last %d descrs with stat=0x%08x " - "for a total of %d descrs\n", cnt, cstat, tot); - -#ifdef DEBUG - /* Now dump the whole ring */ - descr = start; - do - { - struct spider_net_hw_descr *hwd = descr->hwdescr; - status = spider_net_get_descr_status(hwd); - cnt = descr - chain->ring; - dev_info(dev, "Descr %d stat=0x%08x skb=%p\n", - cnt, status, descr->skb); - dev_info(dev, "bus addr=%08x buf addr=%08x sz=%d\n", - descr->bus_addr, hwd->buf_addr, hwd->buf_size); - dev_info(dev, "next=%08x result sz=%d valid sz=%d\n", - hwd->next_descr_addr, hwd->result_size, - hwd->valid_size); - dev_info(dev, "dmac=%08x data stat=%08x data err=%08x\n", - hwd->dmac_cmd_status, hwd->data_status, - hwd->data_error); - dev_info(dev, "\n"); - - descr = descr->next; - } while (descr != start); -#endif - -} - 
-/** - * spider_net_resync_head_ptr - Advance head ptr past empty descrs - * @card: card structure - * - * If the driver fails to keep up and empty the queue, then the - * hardware wil run out of room to put incoming packets. This - * will cause the hardware to skip descrs that are full (instead - * of halting/retrying). Thus, once the driver runs, it wil need - * to "catch up" to where the hardware chain pointer is at. - */ -static void spider_net_resync_head_ptr(struct spider_net_card *card) -{ - unsigned long flags; - struct spider_net_descr_chain *chain = &card->rx_chain; - struct spider_net_descr *descr; - int i, status; - - /* Advance head pointer past any empty descrs */ - descr = chain->head; - status = spider_net_get_descr_status(descr->hwdescr); - - if (status == SPIDER_NET_DESCR_NOT_IN_USE) - return; - - spin_lock_irqsave(&chain->lock, flags); - - descr = chain->head; - status = spider_net_get_descr_status(descr->hwdescr); - for (i=0; i<chain->num_desc; i++) { - if (status != SPIDER_NET_DESCR_CARDOWNED) break; - descr = descr->next; - status = spider_net_get_descr_status(descr->hwdescr); - } - chain->head = descr; - - spin_unlock_irqrestore(&chain->lock, flags); -} - -static int spider_net_resync_tail_ptr(struct spider_net_card *card) -{ - struct spider_net_descr_chain *chain = &card->rx_chain; - struct spider_net_descr *descr; - int i, status; - - /* Advance tail pointer past any empty and reaped descrs */ - descr = chain->tail; - status = spider_net_get_descr_status(descr->hwdescr); - - for (i=0; i<chain->num_desc; i++) { - if ((status != SPIDER_NET_DESCR_CARDOWNED) && - (status != SPIDER_NET_DESCR_NOT_IN_USE)) break; - descr = descr->next; - status = spider_net_get_descr_status(descr->hwdescr); - } - chain->tail = descr; - - if ((i == chain->num_desc) || (i == 0)) - return 1; - return 0; -} - -/** - * spider_net_decode_one_descr - processes an RX descriptor - * @card: card structure - * - * Returns 1 if a packet has been sent to the stack, otherwise 0. - * - * Processes an RX descriptor by iommu-unmapping the data buffer - * and passing the packet up to the stack. This function is called - * in softirq context, e.g. either bottom half from interrupt or - * NAPI polling context. 
- */ -static int -spider_net_decode_one_descr(struct spider_net_card *card) -{ - struct net_device *dev = card->netdev; - struct spider_net_descr_chain *chain = &card->rx_chain; - struct spider_net_descr *descr = chain->tail; - struct spider_net_hw_descr *hwdescr = descr->hwdescr; - u32 hw_buf_addr; - int status; - - status = spider_net_get_descr_status(hwdescr); - - /* Nothing in the descriptor, or ring must be empty */ - if ((status == SPIDER_NET_DESCR_CARDOWNED) || - (status == SPIDER_NET_DESCR_NOT_IN_USE)) - return 0; - - /* descriptor definitively used -- move on tail */ - chain->tail = descr->next; - - /* unmap descriptor */ - hw_buf_addr = hwdescr->buf_addr; - hwdescr->buf_addr = 0xffffffff; - dma_unmap_single(&card->pdev->dev, hw_buf_addr, SPIDER_NET_MAX_FRAME, - DMA_FROM_DEVICE); - - if ( (status == SPIDER_NET_DESCR_RESPONSE_ERROR) || - (status == SPIDER_NET_DESCR_PROTECTION_ERROR) || - (status == SPIDER_NET_DESCR_FORCE_END) ) { - if (netif_msg_rx_err(card)) - dev_err(&dev->dev, - "dropping RX descriptor with state %d\n", status); - dev->stats.rx_dropped++; - goto bad_desc; - } - - if ( (status != SPIDER_NET_DESCR_COMPLETE) && - (status != SPIDER_NET_DESCR_FRAME_END) ) { - if (netif_msg_rx_err(card)) - dev_err(&card->netdev->dev, - "RX descriptor with unknown state %d\n", status); - card->spider_stats.rx_desc_unk_state++; - goto bad_desc; - } - - /* The cases we'll throw away the packet immediately */ - if (hwdescr->data_error & SPIDER_NET_DESTROY_RX_FLAGS) { - if (netif_msg_rx_err(card)) - dev_err(&card->netdev->dev, - "error in received descriptor found, " - "data_status=x%08x, data_error=x%08x\n", - hwdescr->data_status, hwdescr->data_error); - goto bad_desc; - } - - if (hwdescr->dmac_cmd_status & SPIDER_NET_DESCR_BAD_STATUS) { - dev_err(&card->netdev->dev, "bad status, cmd_status=x%08x\n", - hwdescr->dmac_cmd_status); - pr_err("buf_addr=x%08x\n", hw_buf_addr); - pr_err("buf_size=x%08x\n", hwdescr->buf_size); - pr_err("next_descr_addr=x%08x\n", hwdescr->next_descr_addr); - pr_err("result_size=x%08x\n", hwdescr->result_size); - pr_err("valid_size=x%08x\n", hwdescr->valid_size); - pr_err("data_status=x%08x\n", hwdescr->data_status); - pr_err("data_error=x%08x\n", hwdescr->data_error); - pr_err("which=%ld\n", descr - card->rx_chain.ring); - - card->spider_stats.rx_desc_error++; - goto bad_desc; - } - - /* Ok, we've got a packet in descr */ - spider_net_pass_skb_up(descr, card); - descr->skb = NULL; - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; - return 1; - -bad_desc: - if (netif_msg_rx_err(card)) - show_rx_chain(card); - dev_kfree_skb_irq(descr->skb); - descr->skb = NULL; - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; - return 0; -} - -/** - * spider_net_poll - NAPI poll function called by the stack to return packets - * @napi: napi device structure - * @budget: number of packets we can pass to the stack at most - * - * returns 0 if no more packets available to the driver/stack. Returns 1, - * if the quota is exceeded, but the driver has still packets. - * - * spider_net_poll returns all packets from the rx descriptors to the stack - * (using netif_receive_skb). If all/enough packets are up, the driver - * reenables interrupts and returns 0. If not, 1 is returned. 
- */ -static int spider_net_poll(struct napi_struct *napi, int budget) -{ - struct spider_net_card *card = container_of(napi, struct spider_net_card, napi); - int packets_done = 0; - - while (packets_done < budget) { - if (!spider_net_decode_one_descr(card)) - break; - - packets_done++; - } - - if ((packets_done == 0) && (card->num_rx_ints != 0)) { - if (!spider_net_resync_tail_ptr(card)) - packets_done = budget; - spider_net_resync_head_ptr(card); - } - card->num_rx_ints = 0; - - spider_net_refill_rx_chain(card); - spider_net_enable_rxdmac(card); - - spider_net_cleanup_tx_ring(&card->tx_timer); - - /* if all packets are in the stack, enable interrupts and return 0 */ - /* if not, return 1 */ - if (packets_done < budget) { - napi_complete_done(napi, packets_done); - spider_net_rx_irq_on(card); - card->ignore_rx_ramfull = 0; - } - - return packets_done; -} - -/** - * spider_net_set_mac - sets the MAC of an interface - * @netdev: interface device structure - * @p: pointer to new MAC address - * - * Returns 0 on success, <0 on failure. Currently, we don't support this - * and will always return EOPNOTSUPP. - */ -static int -spider_net_set_mac(struct net_device *netdev, void *p) -{ - struct spider_net_card *card = netdev_priv(netdev); - u32 macl, macu, regvalue; - struct sockaddr *addr = p; - - if (!is_valid_ether_addr(addr->sa_data)) - return -EADDRNOTAVAIL; - - eth_hw_addr_set(netdev, addr->sa_data); - - /* switch off GMACTPE and GMACRPE */ - regvalue = spider_net_read_reg(card, SPIDER_NET_GMACOPEMD); - regvalue &= ~((1 << 5) | (1 << 6)); - spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, regvalue); - - /* write mac */ - macu = (netdev->dev_addr[0]<<24) + (netdev->dev_addr[1]<<16) + - (netdev->dev_addr[2]<<8) + (netdev->dev_addr[3]); - macl = (netdev->dev_addr[4]<<8) + (netdev->dev_addr[5]); - spider_net_write_reg(card, SPIDER_NET_GMACUNIMACU, macu); - spider_net_write_reg(card, SPIDER_NET_GMACUNIMACL, macl); - - /* switch GMACTPE and GMACRPE back on */ - regvalue = spider_net_read_reg(card, SPIDER_NET_GMACOPEMD); - regvalue |= ((1 << 5) | (1 << 6)); - spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, regvalue); - - spider_net_set_promisc(card); - - return 0; -} - -/** - * spider_net_link_reset - * @netdev: net device structure - * - * This is called when the PHY_LINK signal is asserted. For the blade this is - * not connected so we should never get here. 
- * - */ -static void -spider_net_link_reset(struct net_device *netdev) -{ - - struct spider_net_card *card = netdev_priv(netdev); - - del_timer_sync(&card->aneg_timer); - - /* clear interrupt, block further interrupts */ - spider_net_write_reg(card, SPIDER_NET_GMACST, - spider_net_read_reg(card, SPIDER_NET_GMACST)); - spider_net_write_reg(card, SPIDER_NET_GMACINTEN, 0); - - /* reset phy and setup aneg */ - card->aneg_count = 0; - card->medium = BCM54XX_COPPER; - spider_net_setup_aneg(card); - mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER); - -} - -/** - * spider_net_handle_error_irq - handles errors raised by an interrupt - * @card: card structure - * @status_reg: interrupt status register 0 (GHIINT0STS) - * @error_reg1: interrupt status register 1 (GHIINT1STS) - * @error_reg2: interrupt status register 2 (GHIINT2STS) - * - * spider_net_handle_error_irq treats or ignores all error conditions - * found when an interrupt is presented - */ -static void -spider_net_handle_error_irq(struct spider_net_card *card, u32 status_reg, - u32 error_reg1, u32 error_reg2) -{ - u32 i; - int show_error = 1; - - /* check GHIINT0STS ************************************/ - if (status_reg) - for (i = 0; i < 32; i++) - if (status_reg & (1<<i)) - switch (i) - { - /* let error_reg1 and error_reg2 evaluation decide, what to do - case SPIDER_NET_PHYINT: - case SPIDER_NET_GMAC2INT: - case SPIDER_NET_GMAC1INT: - case SPIDER_NET_GFIFOINT: - case SPIDER_NET_DMACINT: - case SPIDER_NET_GSYSINT: - break; */ - - case SPIDER_NET_GIPSINT: - show_error = 0; - break; - - case SPIDER_NET_GPWOPCMPINT: - /* PHY write operation completed */ - show_error = 0; - break; - case SPIDER_NET_GPROPCMPINT: - /* PHY read operation completed */ - /* we don't use semaphores, as we poll for the completion - * of the read operation in spider_net_read_phy. Should take - * about 50 us - */ - show_error = 0; - break; - case SPIDER_NET_GPWFFINT: - /* PHY command queue full */ - if (netif_msg_intr(card)) - dev_err(&card->netdev->dev, "PHY write queue full\n"); - show_error = 0; - break; - - /* case SPIDER_NET_GRMDADRINT: not used. print a message */ - /* case SPIDER_NET_GRMARPINT: not used. print a message */ - /* case SPIDER_NET_GRMMPINT: not used. print a message */ - - case SPIDER_NET_GDTDEN0INT: - /* someone has set TX_DMA_EN to 0 */ - show_error = 0; - break; - - case SPIDER_NET_GDDDEN0INT: - case SPIDER_NET_GDCDEN0INT: - case SPIDER_NET_GDBDEN0INT: - case SPIDER_NET_GDADEN0INT: - /* someone has set RX_DMA_EN to 0 */ - show_error = 0; - break; - - /* RX interrupts */ - case SPIDER_NET_GDDFDCINT: - case SPIDER_NET_GDCFDCINT: - case SPIDER_NET_GDBFDCINT: - case SPIDER_NET_GDAFDCINT: - /* case SPIDER_NET_GDNMINT: not used. print a message */ - /* case SPIDER_NET_GCNMINT: not used. print a message */ - /* case SPIDER_NET_GBNMINT: not used. print a message */ - /* case SPIDER_NET_GANMINT: not used. print a message */ - /* case SPIDER_NET_GRFNMINT: not used. print a message */ - show_error = 0; - break; - - /* TX interrupts */ - case SPIDER_NET_GDTFDCINT: - show_error = 0; - break; - case SPIDER_NET_GTTEDINT: - show_error = 0; - break; - case SPIDER_NET_GDTDCEINT: - /* chain end. If a descriptor should be sent, kick off - * tx dma - if (card->tx_chain.tail != card->tx_chain.head) - spider_net_kick_tx_dma(card); - */ - show_error = 0; - break; - - /* case SPIDER_NET_G1TMCNTINT: not used. print a message */ - /* case SPIDER_NET_GFREECNTINT: not used. 
print a message */ - } - - /* check GHIINT1STS ************************************/ - if (error_reg1) - for (i = 0; i < 32; i++) - if (error_reg1 & (1<<i)) - switch (i) - { - case SPIDER_NET_GTMFLLINT: - /* TX RAM full may happen on a usual case. - * Logging is not needed. - */ - show_error = 0; - break; - case SPIDER_NET_GRFDFLLINT: - case SPIDER_NET_GRFCFLLINT: - case SPIDER_NET_GRFBFLLINT: - case SPIDER_NET_GRFAFLLINT: - case SPIDER_NET_GRMFLLINT: - /* Could happen when rx chain is full */ - if (card->ignore_rx_ramfull == 0) { - card->ignore_rx_ramfull = 1; - spider_net_resync_head_ptr(card); - spider_net_refill_rx_chain(card); - spider_net_enable_rxdmac(card); - card->num_rx_ints ++; - napi_schedule(&card->napi); - } - show_error = 0; - break; - - /* case SPIDER_NET_GTMSHTINT: problem, print a message */ - case SPIDER_NET_GDTINVDINT: - /* allrighty. tx from previous descr ok */ - show_error = 0; - break; - - /* chain end */ - case SPIDER_NET_GDDDCEINT: - case SPIDER_NET_GDCDCEINT: - case SPIDER_NET_GDBDCEINT: - case SPIDER_NET_GDADCEINT: - spider_net_resync_head_ptr(card); - spider_net_refill_rx_chain(card); - spider_net_enable_rxdmac(card); - card->num_rx_ints ++; - napi_schedule(&card->napi); - show_error = 0; - break; - - /* invalid descriptor */ - case SPIDER_NET_GDDINVDINT: - case SPIDER_NET_GDCINVDINT: - case SPIDER_NET_GDBINVDINT: - case SPIDER_NET_GDAINVDINT: - /* Could happen when rx chain is full */ - spider_net_resync_head_ptr(card); - spider_net_refill_rx_chain(card); - spider_net_enable_rxdmac(card); - card->num_rx_ints ++; - napi_schedule(&card->napi); - show_error = 0; - break; - - /* case SPIDER_NET_GDTRSERINT: problem, print a message */ - /* case SPIDER_NET_GDDRSERINT: problem, print a message */ - /* case SPIDER_NET_GDCRSERINT: problem, print a message */ - /* case SPIDER_NET_GDBRSERINT: problem, print a message */ - /* case SPIDER_NET_GDARSERINT: problem, print a message */ - /* case SPIDER_NET_GDSERINT: problem, print a message */ - /* case SPIDER_NET_GDTPTERINT: problem, print a message */ - /* case SPIDER_NET_GDDPTERINT: problem, print a message */ - /* case SPIDER_NET_GDCPTERINT: problem, print a message */ - /* case SPIDER_NET_GDBPTERINT: problem, print a message */ - /* case SPIDER_NET_GDAPTERINT: problem, print a message */ - default: - show_error = 1; - break; - } - - /* check GHIINT2STS ************************************/ - if (error_reg2) - for (i = 0; i < 32; i++) - if (error_reg2 & (1<<i)) - switch (i) - { - /* there is nothing we can (want to) do at this time. 
Log a - * message, we can switch on and off the specific values later on - case SPIDER_NET_GPROPERINT: - case SPIDER_NET_GMCTCRSNGINT: - case SPIDER_NET_GMCTLCOLINT: - case SPIDER_NET_GMCTTMOTINT: - case SPIDER_NET_GMCRCAERINT: - case SPIDER_NET_GMCRCALERINT: - case SPIDER_NET_GMCRALNERINT: - case SPIDER_NET_GMCROVRINT: - case SPIDER_NET_GMCRRNTINT: - case SPIDER_NET_GMCRRXERINT: - case SPIDER_NET_GTITCSERINT: - case SPIDER_NET_GTIFMTERINT: - case SPIDER_NET_GTIPKTRVKINT: - case SPIDER_NET_GTISPINGINT: - case SPIDER_NET_GTISADNGINT: - case SPIDER_NET_GTISPDNGINT: - case SPIDER_NET_GRIFMTERINT: - case SPIDER_NET_GRIPKTRVKINT: - case SPIDER_NET_GRISPINGINT: - case SPIDER_NET_GRISADNGINT: - case SPIDER_NET_GRISPDNGINT: - break; - */ - default: - break; - } - - if ((show_error) && (netif_msg_intr(card)) && net_ratelimit()) - dev_err(&card->netdev->dev, "Error interrupt, GHIINT0STS = 0x%08x, " - "GHIINT1STS = 0x%08x, GHIINT2STS = 0x%08x\n", - status_reg, error_reg1, error_reg2); - - /* clear interrupt sources */ - spider_net_write_reg(card, SPIDER_NET_GHIINT1STS, error_reg1); - spider_net_write_reg(card, SPIDER_NET_GHIINT2STS, error_reg2); -} - -/** - * spider_net_interrupt - interrupt handler for spider_net - * @irq: interrupt number - * @ptr: pointer to net_device - * - * returns IRQ_HANDLED, if interrupt was for driver, or IRQ_NONE, if no - * interrupt found raised by card. - * - * This is the interrupt handler, that turns off - * interrupts for this device and makes the stack poll the driver - */ -static irqreturn_t -spider_net_interrupt(int irq, void *ptr) -{ - struct net_device *netdev = ptr; - struct spider_net_card *card = netdev_priv(netdev); - u32 status_reg, error_reg1, error_reg2; - - status_reg = spider_net_read_reg(card, SPIDER_NET_GHIINT0STS); - error_reg1 = spider_net_read_reg(card, SPIDER_NET_GHIINT1STS); - error_reg2 = spider_net_read_reg(card, SPIDER_NET_GHIINT2STS); - - if (!(status_reg & SPIDER_NET_INT0_MASK_VALUE) && - !(error_reg1 & SPIDER_NET_INT1_MASK_VALUE) && - !(error_reg2 & SPIDER_NET_INT2_MASK_VALUE)) - return IRQ_NONE; - - if (status_reg & SPIDER_NET_RXINT ) { - spider_net_rx_irq_off(card); - napi_schedule(&card->napi); - card->num_rx_ints ++; - } - if (status_reg & SPIDER_NET_TXINT) - napi_schedule(&card->napi); - - if (status_reg & SPIDER_NET_LINKINT) - spider_net_link_reset(netdev); - - if (status_reg & SPIDER_NET_ERRINT ) - spider_net_handle_error_irq(card, status_reg, - error_reg1, error_reg2); - - /* clear interrupt sources */ - spider_net_write_reg(card, SPIDER_NET_GHIINT0STS, status_reg); - - return IRQ_HANDLED; -} - -#ifdef CONFIG_NET_POLL_CONTROLLER -/** - * spider_net_poll_controller - artificial interrupt for netconsole etc. 
- * @netdev: interface device structure - * - * see Documentation/networking/netconsole.rst - */ -static void -spider_net_poll_controller(struct net_device *netdev) -{ - disable_irq(netdev->irq); - spider_net_interrupt(netdev->irq, netdev); - enable_irq(netdev->irq); -} -#endif /* CONFIG_NET_POLL_CONTROLLER */ - -/** - * spider_net_enable_interrupts - enable interrupts - * @card: card structure - * - * spider_net_enable_interrupt enables several interrupts - */ -static void -spider_net_enable_interrupts(struct spider_net_card *card) -{ - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, - SPIDER_NET_INT0_MASK_VALUE); - spider_net_write_reg(card, SPIDER_NET_GHIINT1MSK, - SPIDER_NET_INT1_MASK_VALUE); - spider_net_write_reg(card, SPIDER_NET_GHIINT2MSK, - SPIDER_NET_INT2_MASK_VALUE); -} - -/** - * spider_net_disable_interrupts - disable interrupts - * @card: card structure - * - * spider_net_disable_interrupts disables all the interrupts - */ -static void -spider_net_disable_interrupts(struct spider_net_card *card) -{ - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, 0); - spider_net_write_reg(card, SPIDER_NET_GHIINT1MSK, 0); - spider_net_write_reg(card, SPIDER_NET_GHIINT2MSK, 0); - spider_net_write_reg(card, SPIDER_NET_GMACINTEN, 0); -} - -/** - * spider_net_init_card - initializes the card - * @card: card structure - * - * spider_net_init_card initializes the card so that other registers can - * be used - */ -static void -spider_net_init_card(struct spider_net_card *card) -{ - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, - SPIDER_NET_CKRCTRL_STOP_VALUE); - - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, - SPIDER_NET_CKRCTRL_RUN_VALUE); - - /* trigger ETOMOD signal */ - spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, - spider_net_read_reg(card, SPIDER_NET_GMACOPEMD) | 0x4); - - spider_net_disable_interrupts(card); -} - -/** - * spider_net_enable_card - enables the card by setting all kinds of regs - * @card: card structure - * - * spider_net_enable_card sets a lot of SMMIO registers to enable the device - */ -static void -spider_net_enable_card(struct spider_net_card *card) -{ - int i; - /* the following array consists of (register),(value) pairs - * that are set in this function. 
A register of 0 ends the list - */ - u32 regs[][2] = { - { SPIDER_NET_GRESUMINTNUM, 0 }, - { SPIDER_NET_GREINTNUM, 0 }, - - /* set interrupt frame number registers */ - /* clear the single DMA engine registers first */ - { SPIDER_NET_GFAFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, - { SPIDER_NET_GFBFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, - { SPIDER_NET_GFCFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, - { SPIDER_NET_GFDFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, - /* then set, what we really need */ - { SPIDER_NET_GFFRMNUM, SPIDER_NET_FRAMENUM_VALUE }, - - /* timer counter registers and stuff */ - { SPIDER_NET_GFREECNNUM, 0 }, - { SPIDER_NET_GONETIMENUM, 0 }, - { SPIDER_NET_GTOUTFRMNUM, 0 }, - - /* RX mode setting */ - { SPIDER_NET_GRXMDSET, SPIDER_NET_RXMODE_VALUE }, - /* TX mode setting */ - { SPIDER_NET_GTXMDSET, SPIDER_NET_TXMODE_VALUE }, - /* IPSEC mode setting */ - { SPIDER_NET_GIPSECINIT, SPIDER_NET_IPSECINIT_VALUE }, - - { SPIDER_NET_GFTRESTRT, SPIDER_NET_RESTART_VALUE }, - - { SPIDER_NET_GMRWOLCTRL, 0 }, - { SPIDER_NET_GTESTMD, 0x10000000 }, - { SPIDER_NET_GTTQMSK, 0x00400040 }, - - { SPIDER_NET_GMACINTEN, 0 }, - - /* flow control stuff */ - { SPIDER_NET_GMACAPAUSE, SPIDER_NET_MACAPAUSE_VALUE }, - { SPIDER_NET_GMACTXPAUSE, SPIDER_NET_TXPAUSE_VALUE }, - - { SPIDER_NET_GMACBSTLMT, SPIDER_NET_BURSTLMT_VALUE }, - { 0, 0} - }; - - i = 0; - while (regs[i][0]) { - spider_net_write_reg(card, regs[i][0], regs[i][1]); - i++; - } - - /* clear unicast filter table entries 1 to 14 */ - for (i = 1; i <= 14; i++) { - spider_net_write_reg(card, - SPIDER_NET_GMRUAFILnR + i * 8, - 0x00080000); - spider_net_write_reg(card, - SPIDER_NET_GMRUAFILnR + i * 8 + 4, - 0x00000000); - } - - spider_net_write_reg(card, SPIDER_NET_GMRUA0FIL15R, 0x08080000); - - spider_net_write_reg(card, SPIDER_NET_ECMODE, SPIDER_NET_ECMODE_VALUE); - - /* set chain tail address for RX chains and - * enable DMA - */ - spider_net_enable_rxchtails(card); - spider_net_enable_rxdmac(card); - - spider_net_write_reg(card, SPIDER_NET_GRXDMAEN, SPIDER_NET_WOL_VALUE); - - spider_net_write_reg(card, SPIDER_NET_GMACLENLMT, - SPIDER_NET_LENLMT_VALUE); - spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, - SPIDER_NET_OPMODE_VALUE); - - spider_net_write_reg(card, SPIDER_NET_GDTDMACCNTR, - SPIDER_NET_GDTBSTA); -} - -/** - * spider_net_download_firmware - loads firmware into the adapter - * @card: card structure - * @firmware_ptr: pointer to firmware data - * - * spider_net_download_firmware loads the firmware data into the - * adapter. It assumes the length etc. to be allright. - */ -static int -spider_net_download_firmware(struct spider_net_card *card, - const void *firmware_ptr) -{ - int sequencer, i; - const u32 *fw_ptr = firmware_ptr; - - /* stop sequencers */ - spider_net_write_reg(card, SPIDER_NET_GSINIT, - SPIDER_NET_STOP_SEQ_VALUE); - - for (sequencer = 0; sequencer < SPIDER_NET_FIRMWARE_SEQS; - sequencer++) { - spider_net_write_reg(card, - SPIDER_NET_GSnPRGADR + sequencer * 8, 0); - for (i = 0; i < SPIDER_NET_FIRMWARE_SEQWORDS; i++) { - spider_net_write_reg(card, SPIDER_NET_GSnPRGDAT + - sequencer * 8, *fw_ptr); - fw_ptr++; - } - } - - if (spider_net_read_reg(card, SPIDER_NET_GSINIT)) - return -EIO; - - spider_net_write_reg(card, SPIDER_NET_GSINIT, - SPIDER_NET_RUN_SEQ_VALUE); - - return 0; -} - -/** - * spider_net_init_firmware - reads in firmware parts - * @card: card structure - * - * Returns 0 on success, <0 on failure - * - * spider_net_init_firmware opens the sequencer firmware and does some basic - * checks. 
This function opens and releases the firmware structure. A call - * to download the firmware is performed before the release. - * - * Firmware format - * =============== - * spider_fw.bin is expected to be a file containing 6*1024*4 bytes, 4k being - * the program for each sequencer. Use the command - * tail -q -n +2 Seq_code1_0x088.txt Seq_code2_0x090.txt \ - * Seq_code3_0x098.txt Seq_code4_0x0A0.txt Seq_code5_0x0A8.txt \ - * Seq_code6_0x0B0.txt | xxd -r -p -c4 > spider_fw.bin - * - * to generate spider_fw.bin, if you have sequencer programs with something - * like the following contents for each sequencer: - * <ONE LINE COMMENT> - * <FIRST 4-BYTES-WORD FOR SEQUENCER> - * <SECOND 4-BYTES-WORD FOR SEQUENCER> - * ... - * <1024th 4-BYTES-WORD FOR SEQUENCER> - */ -static int -spider_net_init_firmware(struct spider_net_card *card) -{ - struct firmware *firmware = NULL; - struct device_node *dn; - const u8 *fw_prop = NULL; - int err = -ENOENT; - int fw_size; - - if (request_firmware((const struct firmware **)&firmware, - SPIDER_NET_FIRMWARE_NAME, &card->pdev->dev) == 0) { - if ( (firmware->size != SPIDER_NET_FIRMWARE_LEN) && - netif_msg_probe(card) ) { - dev_err(&card->netdev->dev, - "Incorrect size of spidernet firmware in " \ - "filesystem. Looking in host firmware...\n"); - goto try_host_fw; - } - err = spider_net_download_firmware(card, firmware->data); - - release_firmware(firmware); - if (err) - goto try_host_fw; - - goto done; - } - -try_host_fw: - dn = pci_device_to_OF_node(card->pdev); - if (!dn) - goto out_err; - - fw_prop = of_get_property(dn, "firmware", &fw_size); - if (!fw_prop) - goto out_err; - - if ( (fw_size != SPIDER_NET_FIRMWARE_LEN) && - netif_msg_probe(card) ) { - dev_err(&card->netdev->dev, - "Incorrect size of spidernet firmware in host firmware\n"); - goto done; - } - - err = spider_net_download_firmware(card, fw_prop); - -done: - return err; -out_err: - if (netif_msg_probe(card)) - dev_err(&card->netdev->dev, - "Couldn't find spidernet firmware in filesystem " \ - "or host firmware\n"); - return err; -} - -/** - * spider_net_open - called upon ifonfig up - * @netdev: interface device structure - * - * returns 0 on success, <0 on failure - * - * spider_net_open allocates all the descriptors and memory needed for - * operation, sets up multicast list and enables interrupts - */ -int -spider_net_open(struct net_device *netdev) -{ - struct spider_net_card *card = netdev_priv(netdev); - int result; - - result = spider_net_init_firmware(card); - if (result) - goto init_firmware_failed; - - /* start probing with copper */ - card->aneg_count = 0; - card->medium = BCM54XX_COPPER; - spider_net_setup_aneg(card); - if (card->phy.def->phy_id) - mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER); - - result = spider_net_init_chain(card, &card->tx_chain); - if (result) - goto alloc_tx_failed; - card->low_watermark = NULL; - - result = spider_net_init_chain(card, &card->rx_chain); - if (result) - goto alloc_rx_failed; - - /* Allocate rx skbs */ - result = spider_net_alloc_rx_skbs(card); - if (result) - goto alloc_skbs_failed; - - spider_net_set_multi(netdev); - - /* further enhancement: setup hw vlan, if needed */ - - result = -EBUSY; - if (request_irq(netdev->irq, spider_net_interrupt, - IRQF_SHARED, netdev->name, netdev)) - goto register_int_failed; - - spider_net_enable_card(card); - - netif_start_queue(netdev); - netif_carrier_on(netdev); - napi_enable(&card->napi); - - spider_net_enable_interrupts(card); - - return 0; - -register_int_failed: - 
spider_net_free_rx_chain_contents(card); -alloc_skbs_failed: - spider_net_free_chain(card, &card->rx_chain); -alloc_rx_failed: - spider_net_free_chain(card, &card->tx_chain); -alloc_tx_failed: - del_timer_sync(&card->aneg_timer); -init_firmware_failed: - return result; -} - -/** - * spider_net_link_phy - * @t: timer context used to obtain the pointer to net card data structure - */ -static void spider_net_link_phy(struct timer_list *t) -{ - struct spider_net_card *card = from_timer(card, t, aneg_timer); - struct mii_phy *phy = &card->phy; - - /* if link didn't come up after SPIDER_NET_ANEG_TIMEOUT tries, setup phy again */ - if (card->aneg_count > SPIDER_NET_ANEG_TIMEOUT) { - - pr_debug("%s: link is down trying to bring it up\n", - card->netdev->name); - - switch (card->medium) { - case BCM54XX_COPPER: - /* enable fiber with autonegotiation first */ - if (phy->def->ops->enable_fiber) - phy->def->ops->enable_fiber(phy, 1); - card->medium = BCM54XX_FIBER; - break; - - case BCM54XX_FIBER: - /* fiber didn't come up, try to disable fiber autoneg */ - if (phy->def->ops->enable_fiber) - phy->def->ops->enable_fiber(phy, 0); - card->medium = BCM54XX_UNKNOWN; - break; - - case BCM54XX_UNKNOWN: - /* copper, fiber with and without failed, - * retry from beginning - */ - spider_net_setup_aneg(card); - card->medium = BCM54XX_COPPER; - break; - } - - card->aneg_count = 0; - mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER); - return; - } - - /* link still not up, try again later */ - if (!(phy->def->ops->poll_link(phy))) { - card->aneg_count++; - mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER); - return; - } - - /* link came up, get abilities */ - phy->def->ops->read_link(phy); - - spider_net_write_reg(card, SPIDER_NET_GMACST, - spider_net_read_reg(card, SPIDER_NET_GMACST)); - spider_net_write_reg(card, SPIDER_NET_GMACINTEN, 0x4); - - if (phy->speed == 1000) - spider_net_write_reg(card, SPIDER_NET_GMACMODE, 0x00000001); - else - spider_net_write_reg(card, SPIDER_NET_GMACMODE, 0); - - card->aneg_count = 0; - - pr_info("%s: link up, %i Mbps, %s-duplex %sautoneg.\n", - card->netdev->name, phy->speed, - phy->duplex == 1 ? "Full" : "Half", - phy->autoneg == 1 ? "" : "no "); -} - -/** - * spider_net_setup_phy - setup PHY - * @card: card structure - * - * returns 0 on success, <0 on failure - * - * spider_net_setup_phy is used as part of spider_net_probe. 
- **/ -static int -spider_net_setup_phy(struct spider_net_card *card) -{ - struct mii_phy *phy = &card->phy; - - spider_net_write_reg(card, SPIDER_NET_GDTDMASEL, - SPIDER_NET_DMASEL_VALUE); - spider_net_write_reg(card, SPIDER_NET_GPCCTRL, - SPIDER_NET_PHY_CTRL_VALUE); - - phy->dev = card->netdev; - phy->mdio_read = spider_net_read_phy; - phy->mdio_write = spider_net_write_phy; - - for (phy->mii_id = 1; phy->mii_id <= 31; phy->mii_id++) { - unsigned short id; - id = spider_net_read_phy(card->netdev, phy->mii_id, MII_BMSR); - if (id != 0x0000 && id != 0xffff) { - if (!sungem_phy_probe(phy, phy->mii_id)) { - pr_info("Found %s.\n", phy->def->name); - break; - } - } - } - - return 0; -} - -/** - * spider_net_workaround_rxramfull - work around firmware bug - * @card: card structure - * - * no return value - **/ -static void -spider_net_workaround_rxramfull(struct spider_net_card *card) -{ - int i, sequencer = 0; - - /* cancel reset */ - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, - SPIDER_NET_CKRCTRL_RUN_VALUE); - - /* empty sequencer data */ - for (sequencer = 0; sequencer < SPIDER_NET_FIRMWARE_SEQS; - sequencer++) { - spider_net_write_reg(card, SPIDER_NET_GSnPRGADR + - sequencer * 8, 0x0); - for (i = 0; i < SPIDER_NET_FIRMWARE_SEQWORDS; i++) { - spider_net_write_reg(card, SPIDER_NET_GSnPRGDAT + - sequencer * 8, 0x0); - } - } - - /* set sequencer operation */ - spider_net_write_reg(card, SPIDER_NET_GSINIT, 0x000000fe); - - /* reset */ - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, - SPIDER_NET_CKRCTRL_STOP_VALUE); -} - -/** - * spider_net_stop - called upon ifconfig down - * @netdev: interface device structure - * - * always returns 0 - */ -int -spider_net_stop(struct net_device *netdev) -{ - struct spider_net_card *card = netdev_priv(netdev); - - napi_disable(&card->napi); - netif_carrier_off(netdev); - netif_stop_queue(netdev); - del_timer_sync(&card->tx_timer); - del_timer_sync(&card->aneg_timer); - - spider_net_disable_interrupts(card); - - free_irq(netdev->irq, netdev); - - spider_net_write_reg(card, SPIDER_NET_GDTDMACCNTR, - SPIDER_NET_DMA_TX_FEND_VALUE); - - /* turn off DMA, force end */ - spider_net_disable_rxdmac(card); - - /* release chains */ - spider_net_release_tx_chain(card, 1); - spider_net_free_rx_chain_contents(card); - - spider_net_free_chain(card, &card->tx_chain); - spider_net_free_chain(card, &card->rx_chain); - - return 0; -} - -/** - * spider_net_tx_timeout_task - task scheduled by the watchdog timeout - * function (to be called not under interrupt status) - * @work: work context used to obtain the pointer to net card data structure - * - * called as task when tx hangs, resets interface (if interface is up) - */ -static void -spider_net_tx_timeout_task(struct work_struct *work) -{ - struct spider_net_card *card = - container_of(work, struct spider_net_card, tx_timeout_task); - struct net_device *netdev = card->netdev; - - if (!(netdev->flags & IFF_UP)) - goto out; - - netif_device_detach(netdev); - spider_net_stop(netdev); - - spider_net_workaround_rxramfull(card); - spider_net_init_card(card); - - if (spider_net_setup_phy(card)) - goto out; - - spider_net_open(netdev); - spider_net_kick_tx_dma(card); - netif_device_attach(netdev); - -out: - atomic_dec(&card->tx_timeout_task_counter); -} - -/** - * spider_net_tx_timeout - called when the tx timeout watchdog kicks in. - * @netdev: interface device structure - * @txqueue: unused - * - * called, if tx hangs. 
Schedules a task that resets the interface - */ -static void -spider_net_tx_timeout(struct net_device *netdev, unsigned int txqueue) -{ - struct spider_net_card *card; - - card = netdev_priv(netdev); - atomic_inc(&card->tx_timeout_task_counter); - if (netdev->flags & IFF_UP) - schedule_work(&card->tx_timeout_task); - else - atomic_dec(&card->tx_timeout_task_counter); - card->spider_stats.tx_timeouts++; -} - -static const struct net_device_ops spider_net_ops = { - .ndo_open = spider_net_open, - .ndo_stop = spider_net_stop, - .ndo_start_xmit = spider_net_xmit, - .ndo_set_rx_mode = spider_net_set_multi, - .ndo_set_mac_address = spider_net_set_mac, - .ndo_eth_ioctl = spider_net_do_ioctl, - .ndo_tx_timeout = spider_net_tx_timeout, - .ndo_validate_addr = eth_validate_addr, - /* HW VLAN */ -#ifdef CONFIG_NET_POLL_CONTROLLER - /* poll controller */ - .ndo_poll_controller = spider_net_poll_controller, -#endif /* CONFIG_NET_POLL_CONTROLLER */ -}; - -/** - * spider_net_setup_netdev_ops - initialization of net_device operations - * @netdev: net_device structure - * - * fills out function pointers in the net_device structure - */ -static void -spider_net_setup_netdev_ops(struct net_device *netdev) -{ - netdev->netdev_ops = &spider_net_ops; - netdev->watchdog_timeo = SPIDER_NET_WATCHDOG_TIMEOUT; - /* ethtool ops */ - netdev->ethtool_ops = &spider_net_ethtool_ops; -} - -/** - * spider_net_setup_netdev - initialization of net_device - * @card: card structure - * - * Returns 0 on success or <0 on failure - * - * spider_net_setup_netdev initializes the net_device structure - **/ -static int -spider_net_setup_netdev(struct spider_net_card *card) -{ - int result; - struct net_device *netdev = card->netdev; - struct device_node *dn; - struct sockaddr addr; - const u8 *mac; - - SET_NETDEV_DEV(netdev, &card->pdev->dev); - - pci_set_drvdata(card->pdev, netdev); - - timer_setup(&card->tx_timer, spider_net_cleanup_tx_ring, 0); - netdev->irq = card->pdev->irq; - - card->aneg_count = 0; - timer_setup(&card->aneg_timer, spider_net_link_phy, 0); - - netif_napi_add(netdev, &card->napi, spider_net_poll); - - spider_net_setup_netdev_ops(netdev); - - netdev->hw_features = NETIF_F_RXCSUM | NETIF_F_IP_CSUM; - if (SPIDER_NET_RX_CSUM_DEFAULT) - netdev->features |= NETIF_F_RXCSUM; - netdev->features |= NETIF_F_IP_CSUM; - /* some time: NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | - * NETIF_F_HW_VLAN_CTAG_FILTER - */ - netdev->lltx = true; - - /* MTU range: 64 - 2294 */ - netdev->min_mtu = SPIDER_NET_MIN_MTU; - netdev->max_mtu = SPIDER_NET_MAX_MTU; - - netdev->irq = card->pdev->irq; - card->num_rx_ints = 0; - card->ignore_rx_ramfull = 0; - - dn = pci_device_to_OF_node(card->pdev); - if (!dn) - return -EIO; - - mac = of_get_property(dn, "local-mac-address", NULL); - if (!mac) - return -EIO; - memcpy(addr.sa_data, mac, ETH_ALEN); - - result = spider_net_set_mac(netdev, &addr); - if ((result) && (netif_msg_probe(card))) - dev_err(&card->netdev->dev, - "Failed to set MAC address: %i\n", result); - - result = register_netdev(netdev); - if (result) { - if (netif_msg_probe(card)) - dev_err(&card->netdev->dev, - "Couldn't register net_device: %i\n", result); - return result; - } - - if (netif_msg_probe(card)) - pr_info("Initialized device %s.\n", netdev->name); - - return 0; -} - -/** - * spider_net_alloc_card - allocates net_device and card structure - * - * returns the card structure or NULL in case of errors - * - * the card and net_device structures are linked to each other - */ -static struct spider_net_card * 
-spider_net_alloc_card(void) -{ - struct net_device *netdev; - struct spider_net_card *card; - - netdev = alloc_etherdev(struct_size(card, darray, - size_add(tx_descriptors, rx_descriptors))); - if (!netdev) - return NULL; - - card = netdev_priv(netdev); - card->netdev = netdev; - card->msg_enable = SPIDER_NET_DEFAULT_MSG; - INIT_WORK(&card->tx_timeout_task, spider_net_tx_timeout_task); - init_waitqueue_head(&card->waitq); - atomic_set(&card->tx_timeout_task_counter, 0); - - card->rx_chain.num_desc = rx_descriptors; - card->rx_chain.ring = card->darray; - card->tx_chain.num_desc = tx_descriptors; - card->tx_chain.ring = card->darray + rx_descriptors; - - return card; -} - -/** - * spider_net_undo_pci_setup - releases PCI ressources - * @card: card structure - * - * spider_net_undo_pci_setup releases the mapped regions - */ -static void -spider_net_undo_pci_setup(struct spider_net_card *card) -{ - iounmap(card->regs); - pci_release_regions(card->pdev); -} - -/** - * spider_net_setup_pci_dev - sets up the device in terms of PCI operations - * @pdev: PCI device - * - * Returns the card structure or NULL if any errors occur - * - * spider_net_setup_pci_dev initializes pdev and together with the - * functions called in spider_net_open configures the device so that - * data can be transferred over it - * The net_device structure is attached to the card structure, if the - * function returns without error. - **/ -static struct spider_net_card * -spider_net_setup_pci_dev(struct pci_dev *pdev) -{ - struct spider_net_card *card; - unsigned long mmio_start, mmio_len; - - if (pci_enable_device(pdev)) { - dev_err(&pdev->dev, "Couldn't enable PCI device\n"); - return NULL; - } - - if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) { - dev_err(&pdev->dev, - "Couldn't find proper PCI device base address.\n"); - goto out_disable_dev; - } - - if (pci_request_regions(pdev, spider_net_driver_name)) { - dev_err(&pdev->dev, - "Couldn't obtain PCI resources, aborting.\n"); - goto out_disable_dev; - } - - pci_set_master(pdev); - - card = spider_net_alloc_card(); - if (!card) { - dev_err(&pdev->dev, - "Couldn't allocate net_device structure, aborting.\n"); - goto out_release_regions; - } - card->pdev = pdev; - - /* fetch base address and length of first resource */ - mmio_start = pci_resource_start(pdev, 0); - mmio_len = pci_resource_len(pdev, 0); - - card->netdev->mem_start = mmio_start; - card->netdev->mem_end = mmio_start + mmio_len; - card->regs = ioremap(mmio_start, mmio_len); - - if (!card->regs) { - dev_err(&pdev->dev, - "Couldn't obtain PCI resources, aborting.\n"); - goto out_release_regions; - } - - return card; - -out_release_regions: - pci_release_regions(pdev); -out_disable_dev: - pci_disable_device(pdev); - return NULL; -} - -/** - * spider_net_probe - initialization of a device - * @pdev: PCI device - * @ent: entry in the device id list - * - * Returns 0 on success, <0 on failure - * - * spider_net_probe initializes pdev and registers a net_device - * structure for it. 
After that, the device can be ifconfig'ed up - **/ -static int -spider_net_probe(struct pci_dev *pdev, const struct pci_device_id *ent) -{ - int err = -EIO; - struct spider_net_card *card; - - card = spider_net_setup_pci_dev(pdev); - if (!card) - goto out; - - spider_net_workaround_rxramfull(card); - spider_net_init_card(card); - - err = spider_net_setup_phy(card); - if (err) - goto out_undo_pci; - - err = spider_net_setup_netdev(card); - if (err) - goto out_undo_pci; - - return 0; - -out_undo_pci: - spider_net_undo_pci_setup(card); - free_netdev(card->netdev); -out: - return err; -} - -/** - * spider_net_remove - removal of a device - * @pdev: PCI device - * - * Returns 0 on success, <0 on failure - * - * spider_net_remove is called to remove the device and unregisters the - * net_device - **/ -static void -spider_net_remove(struct pci_dev *pdev) -{ - struct net_device *netdev; - struct spider_net_card *card; - - netdev = pci_get_drvdata(pdev); - card = netdev_priv(netdev); - - wait_event(card->waitq, - atomic_read(&card->tx_timeout_task_counter) == 0); - - unregister_netdev(netdev); - - /* switch off card */ - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, - SPIDER_NET_CKRCTRL_STOP_VALUE); - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, - SPIDER_NET_CKRCTRL_RUN_VALUE); - - spider_net_undo_pci_setup(card); - free_netdev(netdev); -} - -static struct pci_driver spider_net_driver = { - .name = spider_net_driver_name, - .id_table = spider_net_pci_tbl, - .probe = spider_net_probe, - .remove = spider_net_remove -}; - -/** - * spider_net_init - init function when the driver is loaded - * - * spider_net_init registers the device driver - */ -static int __init spider_net_init(void) -{ - printk(KERN_INFO "Spidernet version %s.\n", VERSION); - - if (rx_descriptors < SPIDER_NET_RX_DESCRIPTORS_MIN) { - rx_descriptors = SPIDER_NET_RX_DESCRIPTORS_MIN; - pr_info("adjusting rx descriptors to %i.\n", rx_descriptors); - } - if (rx_descriptors > SPIDER_NET_RX_DESCRIPTORS_MAX) { - rx_descriptors = SPIDER_NET_RX_DESCRIPTORS_MAX; - pr_info("adjusting rx descriptors to %i.\n", rx_descriptors); - } - if (tx_descriptors < SPIDER_NET_TX_DESCRIPTORS_MIN) { - tx_descriptors = SPIDER_NET_TX_DESCRIPTORS_MIN; - pr_info("adjusting tx descriptors to %i.\n", tx_descriptors); - } - if (tx_descriptors > SPIDER_NET_TX_DESCRIPTORS_MAX) { - tx_descriptors = SPIDER_NET_TX_DESCRIPTORS_MAX; - pr_info("adjusting tx descriptors to %i.\n", tx_descriptors); - } - - return pci_register_driver(&spider_net_driver); -} - -/** - * spider_net_cleanup - exit function when driver is unloaded - * - * spider_net_cleanup unregisters the device driver - */ -static void __exit spider_net_cleanup(void) -{ - pci_unregister_driver(&spider_net_driver); -} - -module_init(spider_net_init); -module_exit(spider_net_cleanup); diff --git a/drivers/net/ethernet/toshiba/spider_net.h b/drivers/net/ethernet/toshiba/spider_net.h deleted file mode 100644 index 51948e2b3a34..000000000000 --- a/drivers/net/ethernet/toshiba/spider_net.h +++ /dev/null @@ -1,475 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * Network device driver for Cell Processor-Based Blade and Celleb platform - * - * (C) Copyright IBM Corp. 
2005 - * (C) Copyright 2006 TOSHIBA CORPORATION - * - * Authors : Utz Bacher <utz.bacher@de.ibm.com> - * Jens Osterkamp <Jens.Osterkamp@de.ibm.com> - */ - -#ifndef _SPIDER_NET_H -#define _SPIDER_NET_H - -#define VERSION "2.0 B" - -#include <linux/sungem_phy.h> - -int spider_net_stop(struct net_device *netdev); -int spider_net_open(struct net_device *netdev); - -extern const struct ethtool_ops spider_net_ethtool_ops; - -extern char spider_net_driver_name[]; - -#define SPIDER_NET_MAX_FRAME 2312 -#define SPIDER_NET_MAX_MTU 2294 -#define SPIDER_NET_MIN_MTU 64 - -#define SPIDER_NET_RXBUF_ALIGN 128 - -#define SPIDER_NET_RX_DESCRIPTORS_DEFAULT 256 -#define SPIDER_NET_RX_DESCRIPTORS_MIN 16 -#define SPIDER_NET_RX_DESCRIPTORS_MAX 512 - -#define SPIDER_NET_TX_DESCRIPTORS_DEFAULT 256 -#define SPIDER_NET_TX_DESCRIPTORS_MIN 16 -#define SPIDER_NET_TX_DESCRIPTORS_MAX 512 - -#define SPIDER_NET_TX_TIMER (HZ/5) -#define SPIDER_NET_ANEG_TIMER (HZ) -#define SPIDER_NET_ANEG_TIMEOUT 5 - -#define SPIDER_NET_RX_CSUM_DEFAULT 1 - -#define SPIDER_NET_WATCHDOG_TIMEOUT 50*HZ - -#define SPIDER_NET_FIRMWARE_SEQS 6 -#define SPIDER_NET_FIRMWARE_SEQWORDS 1024 -#define SPIDER_NET_FIRMWARE_LEN (SPIDER_NET_FIRMWARE_SEQS * \ - SPIDER_NET_FIRMWARE_SEQWORDS * \ - sizeof(u32)) -#define SPIDER_NET_FIRMWARE_NAME "spider_fw.bin" - -/** spider_net SMMIO registers */ -#define SPIDER_NET_GHIINT0STS 0x00000000 -#define SPIDER_NET_GHIINT1STS 0x00000004 -#define SPIDER_NET_GHIINT2STS 0x00000008 -#define SPIDER_NET_GHIINT0MSK 0x00000010 -#define SPIDER_NET_GHIINT1MSK 0x00000014 -#define SPIDER_NET_GHIINT2MSK 0x00000018 - -#define SPIDER_NET_GRESUMINTNUM 0x00000020 -#define SPIDER_NET_GREINTNUM 0x00000024 - -#define SPIDER_NET_GFFRMNUM 0x00000028 -#define SPIDER_NET_GFAFRMNUM 0x0000002c -#define SPIDER_NET_GFBFRMNUM 0x00000030 -#define SPIDER_NET_GFCFRMNUM 0x00000034 -#define SPIDER_NET_GFDFRMNUM 0x00000038 - -/* clear them (don't use it) */ -#define SPIDER_NET_GFREECNNUM 0x0000003c -#define SPIDER_NET_GONETIMENUM 0x00000040 - -#define SPIDER_NET_GTOUTFRMNUM 0x00000044 - -#define SPIDER_NET_GTXMDSET 0x00000050 -#define SPIDER_NET_GPCCTRL 0x00000054 -#define SPIDER_NET_GRXMDSET 0x00000058 -#define SPIDER_NET_GIPSECINIT 0x0000005c -#define SPIDER_NET_GFTRESTRT 0x00000060 -#define SPIDER_NET_GRXDMAEN 0x00000064 -#define SPIDER_NET_GMRWOLCTRL 0x00000068 -#define SPIDER_NET_GPCWOPCMD 0x0000006c -#define SPIDER_NET_GPCROPCMD 0x00000070 -#define SPIDER_NET_GTTFRMCNT 0x00000078 -#define SPIDER_NET_GTESTMD 0x0000007c - -#define SPIDER_NET_GSINIT 0x00000080 -#define SPIDER_NET_GSnPRGADR 0x00000084 -#define SPIDER_NET_GSnPRGDAT 0x00000088 - -#define SPIDER_NET_GMACOPEMD 0x00000100 -#define SPIDER_NET_GMACLENLMT 0x00000108 -#define SPIDER_NET_GMACST 0x00000110 -#define SPIDER_NET_GMACINTEN 0x00000118 -#define SPIDER_NET_GMACPHYCTRL 0x00000120 - -#define SPIDER_NET_GMACAPAUSE 0x00000154 -#define SPIDER_NET_GMACTXPAUSE 0x00000164 - -#define SPIDER_NET_GMACMODE 0x000001b0 -#define SPIDER_NET_GMACBSTLMT 0x000001b4 - -#define SPIDER_NET_GMACUNIMACU 0x000001c0 -#define SPIDER_NET_GMACUNIMACL 0x000001c8 - -#define SPIDER_NET_GMRMHFILnR 0x00000400 -#define SPIDER_NET_MULTICAST_HASHES 256 - -#define SPIDER_NET_GMRUAFILnR 0x00000500 -#define SPIDER_NET_GMRUA0FIL15R 0x00000578 - -#define SPIDER_NET_GTTQMSK 0x00000934 - -/* RX DMA controller registers, all 0x00000a.. are for DMA controller A, - * 0x00000b.. for DMA controller B, etc. 
*/ -#define SPIDER_NET_GDADCHA 0x00000a00 -#define SPIDER_NET_GDADMACCNTR 0x00000a04 -#define SPIDER_NET_GDACTDPA 0x00000a08 -#define SPIDER_NET_GDACTDCNT 0x00000a0c -#define SPIDER_NET_GDACDBADDR 0x00000a20 -#define SPIDER_NET_GDACDBSIZE 0x00000a24 -#define SPIDER_NET_GDACNEXTDA 0x00000a28 -#define SPIDER_NET_GDACCOMST 0x00000a2c -#define SPIDER_NET_GDAWBCOMST 0x00000a30 -#define SPIDER_NET_GDAWBRSIZE 0x00000a34 -#define SPIDER_NET_GDAWBVSIZE 0x00000a38 -#define SPIDER_NET_GDAWBTRST 0x00000a3c -#define SPIDER_NET_GDAWBTRERR 0x00000a40 - -/* TX DMA controller registers */ -#define SPIDER_NET_GDTDCHA 0x00000e00 -#define SPIDER_NET_GDTDMACCNTR 0x00000e04 -#define SPIDER_NET_GDTCDPA 0x00000e08 -#define SPIDER_NET_GDTDMASEL 0x00000e14 - -#define SPIDER_NET_ECMODE 0x00000f00 -/* clock and reset control register */ -#define SPIDER_NET_CKRCTRL 0x00000ff0 - -/** SCONFIG registers */ -#define SPIDER_NET_SCONFIG_IOACTE 0x00002810 - -/** interrupt mask registers */ -#define SPIDER_NET_INT0_MASK_VALUE 0x3f7fe2c7 -#define SPIDER_NET_INT1_MASK_VALUE 0x0000fff2 -#define SPIDER_NET_INT2_MASK_VALUE 0x000003f1 - -/* we rely on flagged descriptor interrupts */ -#define SPIDER_NET_FRAMENUM_VALUE 0x00000000 -/* set this first, then the FRAMENUM_VALUE */ -#define SPIDER_NET_GFXFRAMES_VALUE 0x00000000 - -#define SPIDER_NET_STOP_SEQ_VALUE 0x00000000 -#define SPIDER_NET_RUN_SEQ_VALUE 0x0000007e - -#define SPIDER_NET_PHY_CTRL_VALUE 0x00040040 -/* #define SPIDER_NET_PHY_CTRL_VALUE 0x01070080*/ -#define SPIDER_NET_RXMODE_VALUE 0x00000011 -/* auto retransmission in case of MAC aborts */ -#define SPIDER_NET_TXMODE_VALUE 0x00010000 -#define SPIDER_NET_RESTART_VALUE 0x00000000 -#define SPIDER_NET_WOL_VALUE 0x00001111 -#if 0 -#define SPIDER_NET_WOL_VALUE 0x00000000 -#endif -#define SPIDER_NET_IPSECINIT_VALUE 0x6f716f71 - -/* pause frames: automatic, no upper retransmission count */ -/* outside loopback mode: ETOMOD signal dont matter, not connected */ -/* ETOMOD signal is brought to PHY reset. 
bit 2 must be 1 in Celleb */ -#define SPIDER_NET_OPMODE_VALUE 0x00000067 -/*#define SPIDER_NET_OPMODE_VALUE 0x001b0062*/ -#define SPIDER_NET_LENLMT_VALUE 0x00000908 - -#define SPIDER_NET_MACAPAUSE_VALUE 0x00000800 /* about 1 ms */ -#define SPIDER_NET_TXPAUSE_VALUE 0x00000000 - -#define SPIDER_NET_MACMODE_VALUE 0x00000001 -#define SPIDER_NET_BURSTLMT_VALUE 0x00000200 /* about 16 us */ - -/* DMAC control register GDMACCNTR - * - * 1(0) enable r/tx dma - * 0000000 fixed to 0 - * - * 000000 fixed to 0 - * 0(1) en/disable descr writeback on force end - * 0(1) force end - * - * 000000 fixed to 0 - * 00 burst alignment: 128 bytes - * 11 burst alignment: 1024 bytes - * - * 00000 fixed to 0 - * 0 descr writeback size 32 bytes - * 0(1) descr chain end interrupt enable - * 0(1) descr status writeback enable */ - -/* to set RX_DMA_EN */ -#define SPIDER_NET_DMA_RX_VALUE 0x80000000 -#define SPIDER_NET_DMA_RX_FEND_VALUE 0x00030003 -/* to set TX_DMA_EN */ -#define SPIDER_NET_TX_DMA_EN 0x80000000 -#define SPIDER_NET_GDTBSTA 0x00000300 -#define SPIDER_NET_GDTDCEIDIS 0x00000002 -#define SPIDER_NET_DMA_TX_VALUE SPIDER_NET_TX_DMA_EN | \ - SPIDER_NET_GDTDCEIDIS | \ - SPIDER_NET_GDTBSTA - -#define SPIDER_NET_DMA_TX_FEND_VALUE 0x00030003 - -/* SPIDER_NET_UA_DESCR_VALUE is OR'ed with the unicast address */ -#define SPIDER_NET_UA_DESCR_VALUE 0x00080000 -#define SPIDER_NET_PROMISC_VALUE 0x00080000 -#define SPIDER_NET_NONPROMISC_VALUE 0x00000000 - -#define SPIDER_NET_DMASEL_VALUE 0x00000001 - -#define SPIDER_NET_ECMODE_VALUE 0x00000000 - -#define SPIDER_NET_CKRCTRL_RUN_VALUE 0x1fff010f -#define SPIDER_NET_CKRCTRL_STOP_VALUE 0x0000010f - -#define SPIDER_NET_SBIMSTATE_VALUE 0x00000000 -#define SPIDER_NET_SBTMSTATE_VALUE 0x00000000 - -/* SPIDER_NET_GHIINT0STS bits, in reverse order so that they can be used - * with 1 << SPIDER_NET_... 
*/ -enum spider_net_int0_status { - SPIDER_NET_GPHYINT = 0, - SPIDER_NET_GMAC2INT, - SPIDER_NET_GMAC1INT, - SPIDER_NET_GIPSINT, - SPIDER_NET_GFIFOINT, - SPIDER_NET_GDMACINT, - SPIDER_NET_GSYSINT, - SPIDER_NET_GPWOPCMPINT, - SPIDER_NET_GPROPCMPINT, - SPIDER_NET_GPWFFINT, - SPIDER_NET_GRMDADRINT, - SPIDER_NET_GRMARPINT, - SPIDER_NET_GRMMPINT, - SPIDER_NET_GDTDEN0INT, - SPIDER_NET_GDDDEN0INT, - SPIDER_NET_GDCDEN0INT, - SPIDER_NET_GDBDEN0INT, - SPIDER_NET_GDADEN0INT, - SPIDER_NET_GDTFDCINT, - SPIDER_NET_GDDFDCINT, - SPIDER_NET_GDCFDCINT, - SPIDER_NET_GDBFDCINT, - SPIDER_NET_GDAFDCINT, - SPIDER_NET_GTTEDINT, - SPIDER_NET_GDTDCEINT, - SPIDER_NET_GRFDNMINT, - SPIDER_NET_GRFCNMINT, - SPIDER_NET_GRFBNMINT, - SPIDER_NET_GRFANMINT, - SPIDER_NET_GRFNMINT, - SPIDER_NET_G1TMCNTINT, - SPIDER_NET_GFREECNTINT -}; -/* GHIINT1STS bits */ -enum spider_net_int1_status { - SPIDER_NET_GTMFLLINT = 0, - SPIDER_NET_GRMFLLINT, - SPIDER_NET_GTMSHTINT, - SPIDER_NET_GDTINVDINT, - SPIDER_NET_GRFDFLLINT, - SPIDER_NET_GDDDCEINT, - SPIDER_NET_GDDINVDINT, - SPIDER_NET_GRFCFLLINT, - SPIDER_NET_GDCDCEINT, - SPIDER_NET_GDCINVDINT, - SPIDER_NET_GRFBFLLINT, - SPIDER_NET_GDBDCEINT, - SPIDER_NET_GDBINVDINT, - SPIDER_NET_GRFAFLLINT, - SPIDER_NET_GDADCEINT, - SPIDER_NET_GDAINVDINT, - SPIDER_NET_GDTRSERINT, - SPIDER_NET_GDDRSERINT, - SPIDER_NET_GDCRSERINT, - SPIDER_NET_GDBRSERINT, - SPIDER_NET_GDARSERINT, - SPIDER_NET_GDSERINT, - SPIDER_NET_GDTPTERINT, - SPIDER_NET_GDDPTERINT, - SPIDER_NET_GDCPTERINT, - SPIDER_NET_GDBPTERINT, - SPIDER_NET_GDAPTERINT -}; -/* GHIINT2STS bits */ -enum spider_net_int2_status { - SPIDER_NET_GPROPERINT = 0, - SPIDER_NET_GMCTCRSNGINT, - SPIDER_NET_GMCTLCOLINT, - SPIDER_NET_GMCTTMOTINT, - SPIDER_NET_GMCRCAERINT, - SPIDER_NET_GMCRCALERINT, - SPIDER_NET_GMCRALNERINT, - SPIDER_NET_GMCROVRINT, - SPIDER_NET_GMCRRNTINT, - SPIDER_NET_GMCRRXERINT, - SPIDER_NET_GTITCSERINT, - SPIDER_NET_GTIFMTERINT, - SPIDER_NET_GTIPKTRVKINT, - SPIDER_NET_GTISPINGINT, - SPIDER_NET_GTISADNGINT, - SPIDER_NET_GTISPDNGINT, - SPIDER_NET_GRIFMTERINT, - SPIDER_NET_GRIPKTRVKINT, - SPIDER_NET_GRISPINGINT, - SPIDER_NET_GRISADNGINT, - SPIDER_NET_GRISPDNGINT -}; - -#define SPIDER_NET_TXINT (1 << SPIDER_NET_GDTFDCINT) - -/* We rely on flagged descriptor interrupts */ -#define SPIDER_NET_RXINT ( (1 << SPIDER_NET_GDAFDCINT) ) - -#define SPIDER_NET_LINKINT ( 1 << SPIDER_NET_GMAC2INT ) - -#define SPIDER_NET_ERRINT ( 0xffffffff & \ - (~SPIDER_NET_TXINT) & \ - (~SPIDER_NET_RXINT) & \ - (~SPIDER_NET_LINKINT) ) - -#define SPIDER_NET_GPREXEC 0x80000000 -#define SPIDER_NET_GPRDAT_MASK 0x0000ffff - -#define SPIDER_NET_DMAC_NOINTR_COMPLETE 0x00800000 -#define SPIDER_NET_DMAC_TXFRMTL 0x00040000 -#define SPIDER_NET_DMAC_TCP 0x00020000 -#define SPIDER_NET_DMAC_UDP 0x00030000 -#define SPIDER_NET_TXDCEST 0x08000000 - -#define SPIDER_NET_DESCR_RXFDIS 0x00000001 -#define SPIDER_NET_DESCR_RXDCEIS 0x00000002 -#define SPIDER_NET_DESCR_RXDEN0IS 0x00000004 -#define SPIDER_NET_DESCR_RXINVDIS 0x00000008 -#define SPIDER_NET_DESCR_RXRERRIS 0x00000010 -#define SPIDER_NET_DESCR_RXFDCIMS 0x00000100 -#define SPIDER_NET_DESCR_RXDCEIMS 0x00000200 -#define SPIDER_NET_DESCR_RXDEN0IMS 0x00000400 -#define SPIDER_NET_DESCR_RXINVDIMS 0x00000800 -#define SPIDER_NET_DESCR_RXRERRMIS 0x00001000 -#define SPIDER_NET_DESCR_UNUSED 0x077fe0e0 - -#define SPIDER_NET_DESCR_IND_PROC_MASK 0xF0000000 -#define SPIDER_NET_DESCR_COMPLETE 0x00000000 /* used in rx and tx */ -#define SPIDER_NET_DESCR_RESPONSE_ERROR 0x10000000 /* used in rx and tx */ -#define SPIDER_NET_DESCR_PROTECTION_ERROR 0x20000000 /* 
used in rx and tx */ -#define SPIDER_NET_DESCR_FRAME_END 0x40000000 /* used in rx */ -#define SPIDER_NET_DESCR_FORCE_END 0x50000000 /* used in rx and tx */ -#define SPIDER_NET_DESCR_CARDOWNED 0xA0000000 /* used in rx and tx */ -#define SPIDER_NET_DESCR_NOT_IN_USE 0xF0000000 -#define SPIDER_NET_DESCR_TXDESFLG 0x00800000 - -#define SPIDER_NET_DESCR_BAD_STATUS (SPIDER_NET_DESCR_RXDEN0IS | \ - SPIDER_NET_DESCR_RXRERRIS | \ - SPIDER_NET_DESCR_RXDEN0IMS | \ - SPIDER_NET_DESCR_RXINVDIMS | \ - SPIDER_NET_DESCR_RXRERRMIS | \ - SPIDER_NET_DESCR_UNUSED) - -/* Descriptor, as defined by the hardware */ -struct spider_net_hw_descr { - u32 buf_addr; - u32 buf_size; - u32 next_descr_addr; - u32 dmac_cmd_status; - u32 result_size; - u32 valid_size; /* all zeroes for tx */ - u32 data_status; - u32 data_error; /* all zeroes for tx */ -} __attribute__((aligned(32))); - -struct spider_net_descr { - struct spider_net_hw_descr *hwdescr; - struct sk_buff *skb; - u32 bus_addr; - struct spider_net_descr *next; - struct spider_net_descr *prev; -}; - -struct spider_net_descr_chain { - spinlock_t lock; - struct spider_net_descr *head; - struct spider_net_descr *tail; - struct spider_net_descr *ring; - int num_desc; - struct spider_net_hw_descr *hwring; - dma_addr_t dma_addr; -}; - -/* descriptor data_status bits */ -#define SPIDER_NET_RX_IPCHK 29 -#define SPIDER_NET_RX_TCPCHK 28 -#define SPIDER_NET_VLAN_PACKET 21 -#define SPIDER_NET_DATA_STATUS_CKSUM_MASK ( (1 << SPIDER_NET_RX_IPCHK) | \ - (1 << SPIDER_NET_RX_TCPCHK) ) - -/* descriptor data_error bits */ -#define SPIDER_NET_RX_IPCHKERR 27 -#define SPIDER_NET_RX_RXTCPCHKERR 28 - -#define SPIDER_NET_DATA_ERR_CKSUM_MASK (1 << SPIDER_NET_RX_IPCHKERR) - -/* the cases we don't pass the packet to the stack. - * 701b8000 would be correct, but every packets gets that flag */ -#define SPIDER_NET_DESTROY_RX_FLAGS 0x700b8000 - -#define SPIDER_NET_DEFAULT_MSG ( NETIF_MSG_DRV | \ - NETIF_MSG_PROBE | \ - NETIF_MSG_LINK | \ - NETIF_MSG_TIMER | \ - NETIF_MSG_IFDOWN | \ - NETIF_MSG_IFUP | \ - NETIF_MSG_RX_ERR | \ - NETIF_MSG_TX_ERR | \ - NETIF_MSG_TX_QUEUED | \ - NETIF_MSG_INTR | \ - NETIF_MSG_TX_DONE | \ - NETIF_MSG_RX_STATUS | \ - NETIF_MSG_PKTDATA | \ - NETIF_MSG_HW | \ - NETIF_MSG_WOL ) - -struct spider_net_extra_stats { - unsigned long rx_desc_error; - unsigned long tx_timeouts; - unsigned long alloc_rx_skb_error; - unsigned long rx_iommu_map_error; - unsigned long tx_iommu_map_error; - unsigned long rx_desc_unk_state; -}; - -struct spider_net_card { - struct net_device *netdev; - struct pci_dev *pdev; - struct mii_phy phy; - - struct napi_struct napi; - - int medium; - - void __iomem *regs; - - struct spider_net_descr_chain tx_chain; - struct spider_net_descr_chain rx_chain; - struct spider_net_descr *low_watermark; - - int aneg_count; - struct timer_list aneg_timer; - struct timer_list tx_timer; - struct work_struct tx_timeout_task; - atomic_t tx_timeout_task_counter; - wait_queue_head_t waitq; - int num_rx_ints; - int ignore_rx_ramfull; - - /* for ethtool */ - int msg_enable; - struct spider_net_extra_stats spider_stats; - - /* Must be last item in struct */ - struct spider_net_descr darray[]; -}; - -#endif diff --git a/drivers/net/ethernet/toshiba/spider_net_ethtool.c b/drivers/net/ethernet/toshiba/spider_net_ethtool.c deleted file mode 100644 index fef9fd127b5e..000000000000 --- a/drivers/net/ethernet/toshiba/spider_net_ethtool.c +++ /dev/null @@ -1,174 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Network device driver for Cell Processor-Based Blade - * - * 
(C) Copyright IBM Corp. 2005 - * - * Authors : Utz Bacher <utz.bacher@de.ibm.com> - * Jens Osterkamp <Jens.Osterkamp@de.ibm.com> - */ - -#include <linux/netdevice.h> -#include <linux/ethtool.h> -#include <linux/pci.h> - -#include "spider_net.h" - - -static struct { - const char str[ETH_GSTRING_LEN]; -} ethtool_stats_keys[] = { - { "tx_packets" }, - { "tx_bytes" }, - { "rx_packets" }, - { "rx_bytes" }, - { "tx_errors" }, - { "tx_dropped" }, - { "rx_dropped" }, - { "rx_descriptor_error" }, - { "tx_timeouts" }, - { "alloc_rx_skb_error" }, - { "rx_iommu_map_error" }, - { "tx_iommu_map_error" }, - { "rx_desc_unk_state" }, -}; - -static int -spider_net_ethtool_get_link_ksettings(struct net_device *netdev, - struct ethtool_link_ksettings *cmd) -{ - struct spider_net_card *card; - card = netdev_priv(netdev); - - ethtool_link_ksettings_zero_link_mode(cmd, supported); - ethtool_link_ksettings_add_link_mode(cmd, supported, 1000baseT_Full); - ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE); - - ethtool_link_ksettings_zero_link_mode(cmd, advertising); - ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full); - ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE); - - cmd->base.port = PORT_FIBRE; - cmd->base.speed = card->phy.speed; - cmd->base.duplex = DUPLEX_FULL; - - return 0; -} - -static void -spider_net_ethtool_get_drvinfo(struct net_device *netdev, - struct ethtool_drvinfo *drvinfo) -{ - struct spider_net_card *card; - card = netdev_priv(netdev); - - /* clear and fill out info */ - strscpy(drvinfo->driver, spider_net_driver_name, - sizeof(drvinfo->driver)); - strscpy(drvinfo->version, VERSION, sizeof(drvinfo->version)); - strscpy(drvinfo->fw_version, "no information", - sizeof(drvinfo->fw_version)); - strscpy(drvinfo->bus_info, pci_name(card->pdev), - sizeof(drvinfo->bus_info)); -} - -static void -spider_net_ethtool_get_wol(struct net_device *netdev, - struct ethtool_wolinfo *wolinfo) -{ - /* no support for wol */ - wolinfo->supported = 0; - wolinfo->wolopts = 0; -} - -static u32 -spider_net_ethtool_get_msglevel(struct net_device *netdev) -{ - struct spider_net_card *card; - card = netdev_priv(netdev); - return card->msg_enable; -} - -static void -spider_net_ethtool_set_msglevel(struct net_device *netdev, - u32 level) -{ - struct spider_net_card *card; - card = netdev_priv(netdev); - card->msg_enable = level; -} - -static int -spider_net_ethtool_nway_reset(struct net_device *netdev) -{ - if (netif_running(netdev)) { - spider_net_stop(netdev); - spider_net_open(netdev); - } - return 0; -} - -static void -spider_net_ethtool_get_ringparam(struct net_device *netdev, - struct ethtool_ringparam *ering, - struct kernel_ethtool_ringparam *kernel_ering, - struct netlink_ext_ack *extack) -{ - struct spider_net_card *card = netdev_priv(netdev); - - ering->tx_max_pending = SPIDER_NET_TX_DESCRIPTORS_MAX; - ering->tx_pending = card->tx_chain.num_desc; - ering->rx_max_pending = SPIDER_NET_RX_DESCRIPTORS_MAX; - ering->rx_pending = card->rx_chain.num_desc; -} - -static int spider_net_get_sset_count(struct net_device *netdev, int sset) -{ - switch (sset) { - case ETH_SS_STATS: - return ARRAY_SIZE(ethtool_stats_keys); - default: - return -EOPNOTSUPP; - } -} - -static void spider_net_get_ethtool_stats(struct net_device *netdev, - struct ethtool_stats *stats, u64 *data) -{ - struct spider_net_card *card = netdev_priv(netdev); - - data[0] = netdev->stats.tx_packets; - data[1] = netdev->stats.tx_bytes; - data[2] = netdev->stats.rx_packets; - data[3] = netdev->stats.rx_bytes; - 
data[4] = netdev->stats.tx_errors; - data[5] = netdev->stats.tx_dropped; - data[6] = netdev->stats.rx_dropped; - data[7] = card->spider_stats.rx_desc_error; - data[8] = card->spider_stats.tx_timeouts; - data[9] = card->spider_stats.alloc_rx_skb_error; - data[10] = card->spider_stats.rx_iommu_map_error; - data[11] = card->spider_stats.tx_iommu_map_error; - data[12] = card->spider_stats.rx_desc_unk_state; -} - -static void spider_net_get_strings(struct net_device *netdev, u32 stringset, - u8 *data) -{ - memcpy(data, ethtool_stats_keys, sizeof(ethtool_stats_keys)); -} - -const struct ethtool_ops spider_net_ethtool_ops = { - .get_drvinfo = spider_net_ethtool_get_drvinfo, - .get_wol = spider_net_ethtool_get_wol, - .get_msglevel = spider_net_ethtool_get_msglevel, - .set_msglevel = spider_net_ethtool_set_msglevel, - .get_link = ethtool_op_get_link, - .nway_reset = spider_net_ethtool_nway_reset, - .get_ringparam = spider_net_ethtool_get_ringparam, - .get_strings = spider_net_get_strings, - .get_sset_count = spider_net_get_sset_count, - .get_ethtool_stats = spider_net_get_ethtool_stats, - .get_link_ksettings = spider_net_ethtool_get_link_ksettings, -}; - diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig index e126e6ce510e..3f02a0e45254 100644 --- a/kernel/irq/Kconfig +++ b/kernel/irq/Kconfig @@ -47,10 +47,6 @@ config GENERIC_IRQ_INJECTION config HARDIRQS_SW_RESEND bool -# Edge style eoi based handler (cell) -config IRQ_EDGE_EOI_HANDLER - bool - # Generic configurable interrupt chip implementation config GENERIC_IRQ_CHIP bool diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c index 0ff987d3a799..36cf1b09cc84 100644 --- a/kernel/irq/chip.c +++ b/kernel/irq/chip.c @@ -838,53 +838,6 @@ out_unlock: } EXPORT_SYMBOL(handle_edge_irq); -#ifdef CONFIG_IRQ_EDGE_EOI_HANDLER -/** - * handle_edge_eoi_irq - edge eoi type IRQ handler - * @desc: the interrupt description structure for this irq - * - * Similar as the above handle_edge_irq, but using eoi and w/o the - * mask/unmask logic. - */ -void handle_edge_eoi_irq(struct irq_desc *desc) -{ - struct irq_chip *chip = irq_desc_get_chip(desc); - - raw_spin_lock(&desc->lock); - - desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING); - - if (!irq_may_run(desc)) { - desc->istate |= IRQS_PENDING; - goto out_eoi; - } - - /* - * If its disabled or no action available then mask it and get - * out of here. 
- */ - if (irqd_irq_disabled(&desc->irq_data) || !desc->action) { - desc->istate |= IRQS_PENDING; - goto out_eoi; - } - - kstat_incr_irqs_this_cpu(desc); - - do { - if (unlikely(!desc->action)) - goto out_eoi; - - handle_irq_event(desc); - - } while ((desc->istate & IRQS_PENDING) && - !irqd_irq_disabled(&desc->irq_data)); - -out_eoi: - chip->irq_eoi(&desc->irq_data); - raw_spin_unlock(&desc->lock); -} -#endif - /** * handle_percpu_irq - Per CPU local irq handler * @desc: the interrupt description structure for this irq diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c index bb7d066a7c39..a297790b7333 100644 --- a/kernel/static_call_inline.c +++ b/kernel/static_call_inline.c @@ -206,7 +206,7 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func) continue; } - arch_static_call_transform(site_addr, NULL, func, + arch_static_call_transform(site_addr, tramp, func, static_call_is_tail(site)); } } diff --git a/tools/objtool/arch/powerpc/decode.c b/tools/objtool/arch/powerpc/decode.c index 7c0bf2429067..c851c51d4bd3 100644 --- a/tools/objtool/arch/powerpc/decode.c +++ b/tools/objtool/arch/powerpc/decode.c @@ -55,12 +55,17 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec switch (opcode) { case 18: /* b[l][a] */ - if ((ins & 3) == 1) /* bl */ + if (ins == 0x48000005) /* bl .+4 */ + typ = INSN_OTHER; + else if (ins & 1) /* bl[a] */ typ = INSN_CALL; + else /* b[a] */ + typ = INSN_JUMP_UNCONDITIONAL; imm = ins & 0x3fffffc; if (imm & 0x2000000) imm -= 0x4000000; + imm |= ins & 2; /* AA flag */ break; } @@ -77,6 +82,9 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec unsigned long arch_jump_destination(struct instruction *insn) { + if (insn->immediate & 2) + return insn->immediate & ~2; + return insn->offset + insn->immediate; } diff --git a/tools/testing/selftests/powerpc/pmu/event_code_tests/event_alternatives_tests_p10.c b/tools/testing/selftests/powerpc/pmu/event_code_tests/event_alternatives_tests_p10.c index 8be7aada6523..355f8bbe06c3 100644 --- a/tools/testing/selftests/powerpc/pmu/event_code_tests/event_alternatives_tests_p10.c +++ b/tools/testing/selftests/powerpc/pmu/event_code_tests/event_alternatives_tests_p10.c @@ -26,6 +26,7 @@ static int event_alternatives_tests_p10(void) { struct event *e, events[5]; int i; + int pvr = PVR_VER(mfspr(SPRN_PVR)); /* Check for platform support for the test */ SKIP_IF(platform_check_for_tests()); @@ -36,7 +37,7 @@ static int event_alternatives_tests_p10(void) * code and using PVR will work correctly for all cases * including generic compat mode. 
- SKIP_IF(PVR_VER(mfspr(SPRN_PVR)) != POWER10); + SKIP_IF((pvr != POWER10) && (pvr != POWER11)); SKIP_IF(check_for_generic_compat_pmu()); diff --git a/tools/testing/selftests/powerpc/pmu/event_code_tests/generic_events_valid_test.c b/tools/testing/selftests/powerpc/pmu/event_code_tests/generic_events_valid_test.c index 0d237c15d3f2..a378fa9a5a7b 100644 --- a/tools/testing/selftests/powerpc/pmu/event_code_tests/generic_events_valid_test.c +++ b/tools/testing/selftests/powerpc/pmu/event_code_tests/generic_events_valid_test.c @@ -17,6 +17,7 @@ static int generic_events_valid_test(void) { struct event event; + int pvr = PVR_VER(mfspr(SPRN_PVR)); /* Check for platform support for the test */ SKIP_IF(platform_check_for_tests()); @@ -31,7 +32,7 @@ static int generic_events_valid_test(void) * - PERF_COUNT_HW_STALLED_CYCLES_BACKEND * - PERF_COUNT_HW_REF_CPU_CYCLES */ - if (PVR_VER(mfspr(SPRN_PVR)) == POWER10) { + if ((pvr == POWER10) || (pvr == POWER11)) { event_init_opts(&event, PERF_COUNT_HW_CPU_CYCLES, PERF_TYPE_HARDWARE, "event"); FAIL_IF(event_open(&event)); event_close(&event); diff --git a/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_l2l3_sel_test.c b/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_l2l3_sel_test.c index 85a636886069..e3c7a0c071e2 100644 --- a/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_l2l3_sel_test.c +++ b/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_l2l3_sel_test.c @@ -30,7 +30,7 @@ static int group_constraint_l2l3_sel(void) /* * Check for platform support for the test. - * This test is only aplicable on power10 + * This test is only aplicable on ISA v3.1 */ SKIP_IF(platform_check_for_tests()); SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1)); diff --git a/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_radix_scope_qual_test.c b/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_radix_scope_qual_test.c index 9225618b846a..9233175787cc 100644 --- a/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_radix_scope_qual_test.c +++ b/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_radix_scope_qual_test.c @@ -26,7 +26,7 @@ static int group_constraint_radix_scope_qual(void) /* * Check for platform support for the test. - * This test is aplicable on power10 only. + * This test is aplicable on ISA v3.1 only. */ SKIP_IF(platform_check_for_tests()); SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1)); diff --git a/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_thresh_cmp_test.c b/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_thresh_cmp_test.c index 9f1197104e8c..4b69e7214c0b 100644 --- a/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_thresh_cmp_test.c +++ b/tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_thresh_cmp_test.c @@ -25,7 +25,7 @@ /* * Testcase for group constraint check of thresh_cmp bits which is * used to program thresh compare field in Monitor Mode Control Register A - * (MMCRA: 9-18 bits for power9 and MMCRA: 8-18 bits for power10). + * (MMCRA: 9-18 bits for power9 and MMCRA: 8-18 bits for power10/power11). * All events in the group should match thresh compare bits otherwise * event_open for the group will fail.
*/ diff --git a/tools/testing/selftests/powerpc/pmu/event_code_tests/invalid_event_code_test.c b/tools/testing/selftests/powerpc/pmu/event_code_tests/invalid_event_code_test.c index f51fcab837fc..88aa7fd64ec1 100644 --- a/tools/testing/selftests/powerpc/pmu/event_code_tests/invalid_event_code_test.c +++ b/tools/testing/selftests/powerpc/pmu/event_code_tests/invalid_event_code_test.c @@ -20,7 +20,7 @@ * Some of the bits in the event code is * reserved for specific platforms. * Event code bits 52-59 are reserved in power9, - * whereas in power10, these are used for programming + * whereas in ISA v3.1, these are used for programming * Monitor Mode Control Register 3 (MMCR3). * Bit 9 in event code is reserved in power9, * whereas it is used for programming "radix_scope_qual" @@ -39,7 +39,7 @@ static int invalid_event_code(void) /* * Events using MMCR3 bits and radix scope qual bits - * should fail in power9 and should succeed in power10. + * should fail in power9 and should succeed in power10 ( ISA v3.1 ) * Init the events and check for pass/fail in event open. */ if (have_hwcap2(PPC_FEATURE2_ARCH_3_1)) { diff --git a/tools/testing/selftests/powerpc/pmu/event_code_tests/reserved_bits_mmcra_sample_elig_mode_test.c b/tools/testing/selftests/powerpc/pmu/event_code_tests/reserved_bits_mmcra_sample_elig_mode_test.c index 4c119c821b99..757f454c0dc0 100644 --- a/tools/testing/selftests/powerpc/pmu/event_code_tests/reserved_bits_mmcra_sample_elig_mode_test.c +++ b/tools/testing/selftests/powerpc/pmu/event_code_tests/reserved_bits_mmcra_sample_elig_mode_test.c @@ -21,6 +21,7 @@ static int reserved_bits_mmcra_sample_elig_mode(void) { struct event event; + int pvr = PVR_VER(mfspr(SPRN_PVR)); /* Check for platform support for the test */ SKIP_IF(platform_check_for_tests()); @@ -56,10 +57,10 @@ static int reserved_bits_mmcra_sample_elig_mode(void) /* * MMCRA Random Sampling Mode (SM) value 0x10 - * is reserved in power10 and 0xC is reserved in + * is reserved in power10/power11 and 0xC is reserved in * power9. */ - if (PVR_VER(mfspr(SPRN_PVR)) == POWER10) { + if ((pvr == POWER10) || (pvr == POWER11)) { event_init(&event, 0x100401e0); FAIL_IF(!event_open(&event)); } else if (PVR_VER(mfspr(SPRN_PVR)) == POWER9) { diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/Makefile b/tools/testing/selftests/powerpc/pmu/sampling_tests/Makefile index 9f79bec5fce7..0c4ed299c3b8 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/Makefile +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/Makefile @@ -5,7 +5,8 @@ TEST_GEN_PROGS := mmcr0_exceptionbits_test mmcr0_cc56run_test mmcr0_pmccext_test mmcr3_src_test mmcra_thresh_marked_sample_test mmcra_thresh_cmp_test \ mmcra_bhrb_ind_call_test mmcra_bhrb_any_test mmcra_bhrb_cond_test \ mmcra_bhrb_disable_test bhrb_no_crash_wo_pmu_test intr_regs_no_crash_wo_pmu_test \ - bhrb_filter_map_test mmcr1_sel_unit_cache_test mmcra_bhrb_disable_no_branch_test + bhrb_filter_map_test mmcr1_sel_unit_cache_test mmcra_bhrb_disable_no_branch_test \ + check_extended_reg_test top_srcdir = ../../../../../.. include ../../../lib.mk diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/bhrb_filter_map_test.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/bhrb_filter_map_test.c index 3f43c315c666..3ad71d49ae65 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/bhrb_filter_map_test.c +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/bhrb_filter_map_test.c @@ -14,7 +14,7 @@ * A perf sampling test to check bhrb filter * map. 
All the branch filters are not supported * in powerpc. Supported filters in: - * power10: any, any_call, ind_call, cond + * power10/power11: any, any_call, ind_call, cond * power9: any, any_call * * Testcase checks event open for invalid bhrb filter @@ -24,13 +24,13 @@ */ /* Invalid types for powerpc */ -/* Valid bhrb filters in power9/power10 */ +/* Valid bhrb filters in power9/power10/power11 */ int bhrb_filter_map_valid_common[] = { PERF_SAMPLE_BRANCH_ANY, PERF_SAMPLE_BRANCH_ANY_CALL, }; -/* Valid bhrb filters in power10 */ +/* Valid bhrb filters in power10/power11 */ int bhrb_filter_map_valid_p10[] = { PERF_SAMPLE_BRANCH_IND_CALL, PERF_SAMPLE_BRANCH_COND, @@ -69,7 +69,7 @@ static int bhrb_filter_map_test(void) FAIL_IF(!event_open(&event)); } - /* valid filter maps for power9/power10 which are expected to pass in event_open */ + /* valid filter maps for power9/power10/power11 which are expected to pass in event_open */ for (i = 0; i < ARRAY_SIZE(bhrb_filter_map_valid_common); i++) { event.attr.branch_sample_type = bhrb_filter_map_valid_common[i]; FAIL_IF(event_open(&event)); @@ -77,19 +77,22 @@ static int bhrb_filter_map_test(void) } /* - * filter maps which are valid in power10 and invalid in power9. + * filter maps which are valid in power10/power11 and invalid in power9. * PVR check is used here since PMU specific data like bhrb filter * alternative tests is handled by respective PMU driver code and * using PVR will work correctly for all cases including generic * compat mode. */ - if (PVR_VER(mfspr(SPRN_PVR)) == POWER10) { + switch (PVR_VER(mfspr(SPRN_PVR))) { + case POWER11: + case POWER10: for (i = 0; i < ARRAY_SIZE(bhrb_filter_map_valid_p10); i++) { event.attr.branch_sample_type = bhrb_filter_map_valid_p10[i]; FAIL_IF(event_open(&event)); event_close(&event); } - } else { + break; + default: for (i = 0; i < ARRAY_SIZE(bhrb_filter_map_valid_p10); i++) { event.attr.branch_sample_type = bhrb_filter_map_valid_p10[i]; FAIL_IF(!event_open(&event)); diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/check_extended_reg_test.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/check_extended_reg_test.c new file mode 100644 index 000000000000..865bc69f920c --- /dev/null +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/check_extended_reg_test.c @@ -0,0 +1,35 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright 2024, Kajol Jain, IBM Corp. + */ + +#include <stdio.h> +#include <stdlib.h> + +#include "../event.h" +#include "misc.h" +#include "utils.h" + +/* + * A perf sampling test to check extended + * reg support. 
+ */ +static int check_extended_reg_test(void) +{ + /* Check for platform support for the test */ + SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_00)); + + /* Skip for Generic compat PMU */ + SKIP_IF(check_for_generic_compat_pmu()); + + /* Check if platform supports extended regs */ + platform_extended_mask = perf_get_platform_reg_mask(); + FAIL_IF(check_extended_regs_support()); + + return 0; +} + +int main(void) +{ + return test_harness(check_extended_reg_test, "check_extended_reg_test"); +} diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.c index eac6420abdf1..8a538b6182a1 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.c +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.c @@ -59,6 +59,7 @@ static void init_ev_encodes(void) ev_shift_thd_stop = 32; switch (pvr) { + case POWER11: case POWER10: ev_mask_thd_cmp = 0x3ffff; ev_shift_thd_cmp = 0; @@ -91,7 +92,7 @@ static void init_ev_encodes(void) } /* Return the extended regs mask value */ -static u64 perf_get_platform_reg_mask(void) +u64 perf_get_platform_reg_mask(void) { if (have_hwcap2(PPC_FEATURE2_ARCH_3_1)) return PERF_POWER10_MASK; @@ -129,8 +130,14 @@ int platform_check_for_tests(void) * Check for supported platforms * for sampling test */ - if ((pvr != POWER10) && (pvr != POWER9)) + switch (pvr) { + case POWER11: + case POWER10: + case POWER9: + break; + default: goto out; + } /* * Check PMU driver registered by looking for @@ -490,6 +497,13 @@ int get_thresh_cmp_val(struct event event) * Utility function to check for generic compat PMU * by comparing base_platform value from auxv and real * PVR value. + * auxv_base_platform() func gives information of "base platform" + * corresponding to PVR value. Incase, if the distro doesn't + * support platform PVR (missing cputable support), base platform + * in auxv will have a default value other than the real PVR's. + * In this case, ISAv3 PMU (generic compat PMU) will be registered + * in the system. auxv_generic_compat_pmu() makes use of the base + * platform value from auxv to do this check. */ static bool auxv_generic_compat_pmu(void) { @@ -499,6 +513,8 @@ static bool auxv_generic_compat_pmu(void) base_pvr = POWER9; else if (!strcmp(auxv_base_platform(), "power10")) base_pvr = POWER10; + else if (!strcmp(auxv_base_platform(), "power11")) + base_pvr = POWER11; return (!base_pvr); } diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.h b/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.h index 64e25cce1435..357e9f0fc0f7 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.h +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/misc.h @@ -8,10 +8,12 @@ #include <sys/stat.h> #include "../event.h" +#define POWER11 0x82 #define POWER10 0x80 #define POWER9 0x4e #define PERF_POWER9_MASK 0x7f8ffffffffffff #define PERF_POWER10_MASK 0x7ffffffffffffff +#define PERF_POWER11_MASK PERF_POWER10_MASK #define MMCR0_FC56 0x00000010UL /* freeze counters 5 and 6 */ #define MMCR0_PMCCEXT 0x00000200UL /* PMCCEXT control */ @@ -37,6 +39,8 @@ extern int pvr; extern u64 platform_extended_mask; extern int check_pvr_for_sampling_tests(void); extern int platform_check_for_tests(void); +extern int check_extended_regs_support(void); +extern u64 perf_get_platform_reg_mask(void); /* * Event code field extraction macro. 
@@ -165,21 +169,21 @@ static inline int get_mmcr2_fcta(u64 mmcr2, int pmc) static inline int get_mmcr2_l2l3(u64 mmcr2, int pmc) { - if (pvr == POWER10) + if (have_hwcap2(PPC_FEATURE2_ARCH_3_1)) return ((mmcr2 & 0xf8) >> 3); return 0; } static inline int get_mmcr3_src(u64 mmcr3, int pmc) { - if (pvr != POWER10) + if (!have_hwcap2(PPC_FEATURE2_ARCH_3_1)) return 0; return ((mmcr3 >> ((49 - (15 * ((pmc) - 1))))) & 0x7fff); } static inline int get_mmcra_thd_cmp(u64 mmcra, int pmc) { - if (pvr == POWER10) + if (have_hwcap2(PPC_FEATURE2_ARCH_3_1)) return ((mmcra >> 45) & 0x7ff); return ((mmcra >> 45) & 0x3ff); } @@ -191,7 +195,7 @@ static inline int get_mmcra_sm(u64 mmcra, int pmc) static inline u64 get_mmcra_bhrb_disable(u64 mmcra, int pmc) { - if (pvr == POWER10) + if (have_hwcap2(PPC_FEATURE2_ARCH_3_1)) return mmcra & BHRB_DISABLE; return 0; } diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_cond_test.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_cond_test.c index 3e08176eb7f8..809de8d58b3b 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_cond_test.c +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_cond_test.c @@ -29,7 +29,7 @@ static int mmcra_bhrb_cond_test(void) /* * Check for platform support for the test. - * This test is only aplicable on power10 + * This test is only aplicable on ISA v3.1 */ SKIP_IF(check_pvr_for_sampling_tests()); SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1)); diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_no_branch_test.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_no_branch_test.c index 488c865387e4..fa0dc15f9123 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_no_branch_test.c +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_no_branch_test.c @@ -26,7 +26,7 @@ static int mmcra_bhrb_disable_no_branch_test(void) /* * Check for platform support for the test. - * This test is only aplicable on power10 + * This test is only aplicable on ISA v3.1 */ SKIP_IF(check_pvr_for_sampling_tests()); SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1)); diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_test.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_test.c index 186a853c0f62..bc3161ab003d 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_test.c +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_test.c @@ -26,7 +26,7 @@ static int mmcra_bhrb_disable_test(void) /* * Check for platform support for the test. - * This test is only aplicable on power10 + * This test is only aplicable on ISA v3.1 */ SKIP_IF(check_pvr_for_sampling_tests()); SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1)); diff --git a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_ind_call_test.c b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_ind_call_test.c index f0706730c099..fd6c9f12212c 100644 --- a/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_ind_call_test.c +++ b/tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_ind_call_test.c @@ -29,7 +29,7 @@ static int mmcra_bhrb_ind_call_test(void) /* * Check for platform support for the test. - * This test is only aplicable on power10 + * This test is only aplicable on ISA v3.1 */ SKIP_IF(check_pvr_for_sampling_tests()); SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1)); |