path: root/drivers
2025-03-25  net: phylink: add functions to block/unblock rx clock stop  [Russell King (Oracle)]
Some MACs require the PHY receive clock to be running to complete setup actions. This may fail if the PHY has negotiated EEE, the MAC supports receive clock stop, and the link has entered LPI state. Provide a pair of APIs that MAC drivers can use to temporarily block the PHY from disabling the receive clock. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1tvO6k-008Vjt-MZ@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
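A rough usage sketch of how a MAC driver might bracket a clock-sensitive operation with such a pair of calls; the phylink_rx_clk_stop_block()/phylink_rx_clk_stop_unblock() names are inferred from the commit subject and the foo_* names are hypothetical, so treat this as an illustration rather than the final API:

#include <linux/phylink.h>

/* Illustrative only: foo_priv and foo_mac_configure() are made up. */
static int foo_mac_setup(struct foo_priv *priv)
{
        int err;

        /* Keep the PHY receive clock running even if EEE/LPI is active */
        phylink_rx_clk_stop_block(priv->phylink);

        err = foo_mac_configure(priv);  /* setup step that needs RX_CLK */

        /* Allow the PHY to stop its receive clock again */
        phylink_rx_clk_stop_unblock(priv->phylink);

        return err;
}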
2025-03-25  net: stmmac: socfpga: remove phy_resume() call  [Russell King (Oracle)]
As the previous commit addressed DWGMAC resuming with a PHY in suspended state, there is now no need for socfpga to work around this. Remove this code. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1tvO6f-008Vjn-J1@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net: stmmac: address non-LPI resume failures properly  [Russell King (Oracle)]
The Synopsys Designware GMAC core databook requires all clocks to be active in order to complete software reset, which we perform during resume. However, IEEE 802.3 allows a PHY to stop its clocks when placed in low-power mode, which happens when the system is suspended and WoL is not enabled. As an attempt to work around this, commit 36d18b5664ef ("net: stmmac: start phylink instance before stmmac_hw_setup()") started phylink early, but this has the side effect that the mac_link_up() method may be called before or during the initialisation of GMAC hardware. We also have the socfpga glue driver directly calling phy_resume() also as an attempt to work around this. In a previous commit, phylink_prepare_resume() has been introduced to give MAC drivers a way to ensure that the PHY is resumed prior to their initialisation of their MAC hardware. This commit adds the call, and moves the phylink_resume() call back to where it should be before the aforementioned commit. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1tvO6a-008Vjh-FG@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
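A minimal sketch of the resume ordering described above, assuming an stmmac-like driver structure; only the two phylink calls come from the text, the foo_* helpers are placeholders:

/* Wake the PHY first so its clocks run before the GMAC soft reset,
 * then resume phylink at the end of the sequence, as it was before
 * commit 36d18b5664ef. */
static int foo_gmac_resume(struct foo_gmac_priv *priv)
{
        int err;

        phylink_prepare_resume(priv->phylink);  /* clear PHY power-down early */

        err = foo_gmac_hw_setup(priv);          /* soft reset needs PHY clocks */
        if (err)
                return err;

        phylink_resume(priv->phylink);
        return 0;
}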
2025-03-25  net: phylink: add phylink_prepare_resume()  [Russell King (Oracle)]
When the system is suspended, the PHY may be placed in low-power mode by setting the BMCR 0.11 Power down bit. IEEE 802.3 states that the behaviour of the PHY in this state is implementation specific, and the PHY is not required to meet the RX_CLK and TX_CLK requirements. Essentially, this means that a PHY may stop the clocks that it is generating while in power down state. However, MACs exist which require the clocks from the PHY to be running in order to properly resume. phylink_prepare_resume() provides them with a way to clear the Power down bit early. Note, however, that IEEE 802.3 gives PHYs up to 500ms grace before the transmit and receive clocks meet the requirements after clearing the power down bit. Add a resume preparation function, which will ensure that the receive clock from the PHY is appropriately configured while resuming. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1tvO6V-008Vjb-AP@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  sfc: support X4 devlink flash  [Edward Cree]
Unlike X2 and EF100, we do not attempt to parse the firmware file to find an image within it; we simply hand the entire file to the MC, which is responsible for understanding any container formats we might use and validating that the firmware file is applicable to this NIC. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://patch.msgid.link/9a72a74002a7819c780b0a18ce9294c9d4e1db12.1742493017.git.ecree.xilinx@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  sfc: update MCDI protocol headers  [Edward Cree]
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://patch.msgid.link/bcb7597460a5a99d1dca4ef282f4aa2dd46ae545.1742493017.git.ecree.xilinx@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  sfc: rip out MDIO support  [Edward Cree]
Unlike Siena, no EF10 board ever had an external PHY, and consequently MDIO handling isn't even built into the firmware. Since Siena has been split out into its own driver, the MDIO code can be deleted from the sfc driver. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://patch.msgid.link/aa689d192ddaef7abe82709316c2be648a7bd66e.1742493017.git.ecree.xilinx@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  vmxnet3: unregister xdp rxq info in the reset path  [Sankararaman Jayaraman]
vmxnet3 does not unregister xdp rxq info in the vmxnet3_reset_work() code path as vmxnet3_rq_destroy() is not invoked in this code path. So, we get the below message with a backtrace: "Missing unregister, handled but fix driver" WARNING: CPU:48 PID: 500 at net/core/xdp.c:182 __xdp_rxq_info_reg+0x93/0xf0 This patch fixes the problem by moving the unregister code of XDP from vmxnet3_rq_destroy() to vmxnet3_rq_cleanup(). Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.") Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@broadcom.com> Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com> Link: https://patch.msgid.link/20250320045522.57892-1-sankararaman.jayaraman@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
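xdp_rxq_info_is_reg() and xdp_rxq_info_unreg() are the core helpers involved; a simplified sketch of performing the unregister in the per-queue cleanup helper (the foo_* structure and function are illustrative, not the exact vmxnet3 code):

#include <net/xdp.h>

/* Per-RX-queue cleanup, also reached from the reset path, so the XDP
 * rxq registration is dropped here rather than only in destroy. */
static void foo_rq_cleanup(struct foo_rx_queue *rq)
{
        if (xdp_rxq_info_is_reg(&rq->xdp_rxq))
                xdp_rxq_info_unreg(&rq->xdp_rxq);

        /* ... reset ring indices, free pending buffers, etc. ... */
}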
2025-03-25  net/mlx5e: TC, Don't offload CT commit if it's the last action  [Jianbo Liu]
For a CT action with the commit argument, it's usually followed by a forward action, either to the output netdev or to the next chain. The default behavior for software, if it's the last action, is to drop by setting the action attribute to TC_ACT_SHOT instead of TC_ACT_PIPE. But the driver can't handle that, so block the offload for such a case. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/1742392983-153050-6-git-send-email-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net/mlx5e: CT: Filter legacy rules that are unrelated to nic  [Paul Blakey]
In nic mode CT setup where we do hairpin between the two nics, both nics register to the same flow table (per zone), and try to offload all rules on it. Instead, filter the rules that originated from the relevant nic (so only one side is offloaded for each nic). Signed-off-by: Paul Blakey <paulb@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/1742392983-153050-5-git-send-email-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net/mlx5: Update pfnum retrieval for devlink port attributes  [Shay Drory]
Align mlx5 driver usage of 'pfnum' with the documentation clarification introduced in commit bb70b0d48d8e ("devlink: Improve the port attributes description"). Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/1742392983-153050-4-git-send-email-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net/mlx5: fw reset, check bridge accessibility at earlier stage  [Amir Tzin]
Currently, whether the pci bridge is accessible is checked only as part of the bridge hot plug capability check. If the pci bridge is not accessible, reset now will fail regardless of bridge hotplug capability. Move this check into mlx5_is_reset_now_capable(), which, in such a case, aborts the reset and does so in the request phase instead of the reset now phase. Signed-off-by: Aya Levin <ayal@nvidia.com> Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Amir Tzin <amirtz@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/1742392983-153050-3-git-send-email-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net/mlx5: Lag, use port selection tables when available  [Mark Bloch]
As queue affinity is being deprecated and will no longer be supported in the future, always check for the presence of the port selection namespace. When available, leverage it to distribute traffic across the physical ports via steering, ensuring compatibility with future NICs. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/1742392983-153050-2-git-send-email-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net/mlx5e: TX, Utilize WQ fragments edge for multi-packet WQEs  [Tariq Toukan]
For simplicity reasons, the driver avoids crossing work queue fragment boundaries within the same TX WQE (Work-Queue Element). Until today, as the number of packets in a TX MPWQE (Multi-Packet WQE) descriptor is not known in advance, the driver pre-prepared contiguous memory for the largest possible WQE. For this, when getting too close to the fragment edge, having no room for the largest WQE possible, the driver was filling the fragment remainder with NOP descriptors, aligning the next descriptor to the beginning of the next fragment. Generating and handling these NOPs wastes resources, like: CPU cycles, work-queue entries fetched to the device, and PCI bandwidth. In this patch, we replace this NOPs filling mechanism in the TX MPWQE flow. Instead, we utilize the remaining entries of the fragment with a TX MPWQE. If this room turns out to be too small, we simply open an additional descriptor starting at the beginning of the next fragment. Performance benchmark: uperf test, single server against 3 clients. TCP multi-stream, bidir, traffic profile "2x350B read, 1400B write". Bottleneck is in inbound PCI bandwidth (device POV).

+------------+------------+------------+--------+
|            | Before     | After      |        |
+------------+------------+------------+--------+
| BW         | 117.4 Gbps | 121.1 Gbps | +3.1%  |
+------------+------------+------------+--------+
| tx_packets | 15 M/sec   | 15.5 M/sec | +3.3%  |
+------------+------------+------------+--------+
| tx_nops    | 3 M/sec    | 0          | -100%  |
+------------+------------+------------+--------+

Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/1742391746-118647-1-git-send-email-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
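In pseudo-code, the change amounts to something like the following; every name here is illustrative rather than the actual mlx5e implementation:

/* Old: if the largest possible WQE no longer fits before the fragment
 * edge, pad up to the edge with NOPs and open the MPWQE there.
 * New: open the MPWQE in whatever room remains; once that room is
 * used up, close it and open a fresh one in the next fragment. */
static void foo_txq_add_packet(struct foo_txq *sq, struct sk_buff *skb)
{
        if (!sq->mpwqe_open)
                foo_mpwqe_open(sq, foo_room_left_in_frag(sq));

        foo_mpwqe_add_dseg(sq, skb);

        if (foo_mpwqe_full(sq))
                foo_mpwqe_close(sq);    /* next open starts in the next fragment */
}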
2025-03-25  firmware: cs_dsp: Ensure cs_dsp_load[_coeff]() returns 0 on success  [Richard Fitzgerald]
Set ret = 0 on successful completion of the processing loop in cs_dsp_load() and cs_dsp_load_coeff() to ensure that the function returns 0 on success. All normal firmware files will have at least one data block, and processing this block will set ret == 0, from the result of either regmap_raw_write() or cs_dsp_parse_coeff(). The kunit tests create a dummy firmware file that contains only the header, without any data blocks. This gives cs_dsp a file to "load" that will not cause any side-effects. As there aren't any data blocks, the processing loop will not set ret == 0. Originally there was a line after the processing loop: ret = regmap_async_complete(regmap); which would set ret == 0 before the function returned. Commit fe08b7d5085a ("firmware: cs_dsp: Remove async regmap writes") changed the regmap write to a normal sync write, so the call to regmap_async_complete() wasn't necessary and was removed. It was overlooked that the ret here wasn't only to check the result of regmap_async_complete(), it also set the final return value of the function. Fixes: fe08b7d5085a ("firmware: cs_dsp: Remove async regmap writes") Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com> Link: https://patch.msgid.link/20250323170529.197205-1-rf@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
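The shape of the fix, as described above, is simply to set ret on the success path again; a minimal sketch (not the real cs_dsp code) of why a header-only firmware needs it:

/* A firmware with only a header never enters the block loop, so the
 * success value must be set explicitly on the way out; commit
 * fe08b7d5085a had removed the call that previously did this as a
 * side effect. foo_* names are placeholders. */
static int foo_dsp_load(struct foo_dsp *dsp, const struct firmware *fw)
{
        size_t pos = sizeof(struct foo_wmfw_header);
        int ret;

        while (pos < fw->size) {
                ret = foo_dsp_write_block(dsp, fw, &pos);  /* regmap write or coeff parse */
                if (ret)
                        return ret;
        }

        return 0;       /* success even when there were no data blocks */
}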
2025-03-25  Merge remote-tracking branches 'ras/edac-cxl', 'ras/edac-drivers' and 'ras/edac-misc' into edac-updates  [Borislav Petkov (AMD)]
* ras/edac-cxl:
  EDAC/device: Fix dev_set_name() format string
  EDAC: Update memory repair control interface for memory sparing feature
  EDAC: Add a memory repair control feature
  EDAC: Add a Error Check Scrub control feature
  EDAC: Add scrub control feature
  EDAC: Add support for EDAC device features control
* ras/edac-drivers:
  EDAC/ie31200: Switch Raptor Lake-S to interrupt mode
  EDAC/ie31200: Add Intel Raptor Lake-S SoCs support
  EDAC/ie31200: Break up ie31200_probe1()
  EDAC/ie31200: Fold the two channel loops into one loop
  EDAC/ie31200: Make struct dimm_data contain decoded information
  EDAC/ie31200: Make the memory controller resources configurable
  EDAC/ie31200: Simplify the pci_device_id table
  EDAC/ie31200: Fix the 3rd parameter name of *populate_dimm_info()
  EDAC/ie31200: Fix the error path order of ie31200_init()
  EDAC/ie31200: Fix the DIMM size mask for several SoCs
  EDAC/ie31200: Fix the size of EDAC_MC_LAYER_CHIP_SELECT layer
  EDAC/{skx_common,i10nm}: Fix some missing error reports on Emerald Rapids
  EDAC/igen6: Fix the flood of invalid error reports
  EDAC/ie31200: work around false positive build warning
* ras/edac-misc:
  MAINTAINERS: Add a secondary maintainer for bluefield_edac
  EDAC/pnd2: Make read-only const array intlv static
  EDAC/igen6: Constify struct res_config
  EDAC/amd64: Simplify return statement in dct_ecc_enabled()
  EDAC: Use string choice helper functions
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
2025-03-25  vfio: VFIO_DEVICE_[AT|DE]TACH_IOMMUFD_PT support pasid  [Yi Liu]
This extends the VFIO_DEVICE_[AT|DE]TACH_IOMMUFD_PT ioctls to attach/detach a given pasid of a vfio device to/from an IOAS/HWPT. Link: https://patch.msgid.link/r/20250321180143.8468-4-yi.l.liu@intel.com Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  vfio-iommufd: Support pasid [at|de]tach for physical VFIO devices  [Yi Liu]
This adds pasid_[at|de]tach_ioas ops for attaching a hwpt to a pasid of a device, and the helpers for it. For now, only vfio-pci supports pasid attach/detach. Link: https://patch.msgid.link/r/20250321180143.8468-3-yi.l.liu@intel.com Signed-off-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd/selftest: Add test ops to test pasid attach/detach  [Yi Liu]
This adds 4 test ops for pasid attach/replace/detach testing. There are ops to attach/detach a pasid, and also an op to check the attached hwpt of a pasid. Link: https://patch.msgid.link/r/20250321171940.7213-18-yi.l.liu@intel.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd/selftest: Add a helper to get test device  [Yi Liu]
There is a need to get the selftest device (sobj->type == TYPE_IDEV) in multiple places, so add a helper for it. Link: https://patch.msgid.link/r/20250321171940.7213-17-yi.l.liu@intel.com Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd/selftest: Add set_dev_pasid in mock iommu  [Yi Liu]
The callback is needed to make the pasid_attach/detach path complete for the mock device. A nop is enough for set_dev_pasid. A MOCK_FLAGS_DEVICE_PASID is added to indicate a pasid-capable mock device for the pasid test cases. Other test cases will still create a non-pasid mock device, while the mock iommu always pretends to be pasid-capable. Link: https://patch.msgid.link/r/20250321171940.7213-16-yi.l.liu@intel.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd: Allow allocating PASID-compatible domain  [Yi Liu]
The underlying infrastructure has supported the PASID attach and related enforcement per the requirement of the IOMMU_HWPT_ALLOC_PASID flag. This extends iommufd to support PASID compatible domain requested by userspace. Link: https://patch.msgid.link/r/20250321171940.7213-15-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommu/vt-d: Add IOMMU_HWPT_ALLOC_PASID support  [Yi Liu]
The Intel iommu driver just treats it as a nop since Intel VT-d does not have a special requirement on domains attached to either the PASID or the RID of a PASID-capable device. Link: https://patch.msgid.link/r/20250321171940.7213-14-yi.l.liu@intel.com Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd: Enforce PASID-compatible domain for RID  [Yi Liu]
Per the definition of IOMMU_HWPT_ALLOC_PASID, iommufd needs to enforce the RID to use a PASID-compatible domain if a PASID has been attached, and vice versa. The PASID path has already enforced it. This adds the enforcement in the RID path. This enforcement requires a lock across the RID and PASID attach paths; the idev->igroup->lock is used, as both the RID and the PASID paths hold it. Link: https://patch.msgid.link/r/20250321171940.7213-13-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd: Support pasid attach/replace  [Yi Liu]
This extends the below APIs to support PASID, so device drivers can manage pasid attach/replace/detach:

int iommufd_device_attach(struct iommufd_device *idev, ioasid_t pasid, u32 *pt_id);
int iommufd_device_replace(struct iommufd_device *idev, ioasid_t pasid, u32 *pt_id);
void iommufd_device_detach(struct iommufd_device *idev, ioasid_t pasid);

The pasid operations share the underlying attach/replace/detach infrastructure with the device operations, but still have some different implications:
- no reserved region per pasid, otherwise the SVA architecture is already broken (the CPU address space doesn't count device reserved regions);
- accordingly, no sw_msi trick.
Cache coherency enforcement is still applied to pasid operations since it is about memory accesses post page table walking (no matter whether the walk is per RID or per PASID). Link: https://patch.msgid.link/r/20250321171940.7213-12-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
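A hedged usage sketch with the signatures quoted above; IOMMU_NO_PASID is the existing kernel constant for the RID path, while the surrounding function and the pt_id handling are illustrative:

#include <linux/iommu.h>
#include <linux/iommufd.h>

/* Attach the RID and then one PASID of the same device; the pt_id
 * values are placeholders supplied by the caller. */
static int foo_attach_rid_and_pasid(struct iommufd_device *idev,
                                    ioasid_t pasid, u32 *rid_pt_id,
                                    u32 *pasid_pt_id)
{
        int rc;

        rc = iommufd_device_attach(idev, IOMMU_NO_PASID, rid_pt_id);
        if (rc)
                return rc;

        rc = iommufd_device_attach(idev, pasid, pasid_pt_id);
        if (rc)
                iommufd_device_detach(idev, IOMMU_NO_PASID);

        return rc;
}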
2025-03-25  iommufd: Enforce PASID-compatible domain in PASID path  [Yi Liu]
AMD IOMMU requires attaching PASID-compatible domains to PASID-capable devices. This includes the domains attached to the RID and to PASIDs. Related discussions are in links [1] and [2]. ARM also has such a requirement; Intel does not need it, but can live with it. Hence, iommufd is going to enforce this requirement, as it is not harmful to vendors that do not need it. Mark the PASID-compatible domains and enforce it in the PASID path. [1] https://lore.kernel.org/linux-iommu/20240709182303.GK14050@ziepe.ca/ [2] https://lore.kernel.org/linux-iommu/20240822124433.GD3468552@ziepe.ca/ Link: https://patch.msgid.link/r/20250321171940.7213-11-yi.l.liu@intel.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd/device: Add pasid_attach array to track per-PASID attach  [Yi Liu]
PASIDs of a PASID-capable device can be attached to hwpts separately, hence a pasid array to track per-PASID attachment is necessary. The index IOMMU_NO_PASID is used by the RID path. Hence drop the igroup->attach. Link: https://patch.msgid.link/r/20250321171940.7213-10-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd/device: Replace device_list with device_array  [Yi Liu]
igroup->attach->device_list is used to track the attached devices of a group in the RID path. Such tracking is also needed in the PASID path in order to share the path with the RID path. However, there is only one list_head in the iommufd_device, so it cannot work if the device has been attached in both the RID path and the PASID path. To solve this, replace the device_list with an xarray. The attached iommufd_device is stored in the entry indexed by the idev->obj.id. Link: https://patch.msgid.link/r/20250321171940.7213-9-yi.l.liu@intel.com Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
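Schematically, the xarray pattern described here looks like the following; the struct and function names are illustrative, with only the idea of indexing by the device object id taken from the text:

#include <linux/xarray.h>

struct foo_idev {
        u32 obj_id;                     /* stand-in for idev->obj.id */
};

struct foo_group {
        struct xarray device_array;     /* attached devices, keyed by obj_id */
};

/* Works for both the RID and PASID paths because membership no longer
 * relies on a single list_head embedded in the device. */
static int foo_group_track(struct foo_group *igroup, struct foo_idev *idev)
{
        return xa_err(xa_store(&igroup->device_array, idev->obj_id, idev,
                               GFP_KERNEL));
}

static void foo_group_untrack(struct foo_group *igroup, struct foo_idev *idev)
{
        xa_erase(&igroup->device_array, idev->obj_id);
}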
2025-03-25  iommufd/device: Wrap igroup->hwpt and igroup->device_list into attach struct  [Yi Liu]
The igroup->hwpt and igroup->device_list are used to track the hwpt attach of a group in the RID path, while the coming PASID path also needs such tracking. To be prepared, wrap igroup->hwpt and igroup->device_list into an attach struct which is allocated when the first device of the group is attached and freed when the last device of the group is detached. Link: https://patch.msgid.link/r/20250321171940.7213-8-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd/device: Add helper to detect the first attach of a group  [Yi Liu]
The existing code detects the first attach by checking the igroup->device_list. However, the igroup->hwpt can also be used to detect the first attach. In future modifications, it is better to check the igroup->hwpt instead of the device_list. To improve readability and also prepare for further modifications on this part, this adds a helper for it. Link: https://patch.msgid.link/r/20250321171940.7213-7-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
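Such a helper can be as small as checking whether a hwpt has been recorded yet; a sketch under assumed names:

#include <linux/mutex.h>

struct foo_hwpt;

struct foo_igroup {
        struct mutex lock;
        struct foo_hwpt *hwpt;          /* NULL until the first attach */
};

/* True when the caller is performing the first attach of the group. */
static bool foo_igroup_first_attach(struct foo_igroup *igroup)
{
        lockdep_assert_held(&igroup->lock);
        return !igroup->hwpt;
}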
2025-03-25  iommufd/device: Replace idev->igroup with local variable  [Yi Liu]
With more use of the fields of igroup, use a local variable instead of using idev->igroup heavily. No functional change expected. Link: https://patch.msgid.link/r/20250321171940.7213-6-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd/device: Only add reserved_iova in non-pasid path  [Yi Liu]
As the pasid is passed through the attach/replace/detach helpers, it is necessary to ensure only the non-pasid path adds reserved_iova. Link: https://patch.msgid.link/r/20250321171940.7213-5-yi.l.liu@intel.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd: Pass @pasid through the device attach/replace path  [Yi Liu]
Most of the core logic before conducting the actual device attach/replace operation can be shared with pasid attach/replace. So pass @pasid through the device attach/replace helpers to prepare for adding pasid attach/replace. So far the @pasid should only be IOMMU_NO_PASID. No functional change. Link: https://patch.msgid.link/r/20250321171940.7213-4-yi.l.liu@intel.com Signed-off-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommu: Introduce a replace API for device pasid  [Yi Liu]
Provide a high-level API to allow replacements of one domain with another for a specific pasid of a device. This is similar to iommu_replace_group_handle() and it is expected to be used only by IOMMUFD. Link: https://patch.msgid.link/r/20250321171940.7213-3-yi.l.liu@intel.com Co-developed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommu: Require passing new handles to APIs supporting handle  [Yi Liu]
Add kdoc to highlight that the caller of iommu_[attach|replace]_group_handle() and iommu_attach_device_pasid() should always provide a new handle. This can avoid races with lockless references to the handle, e.g. find_fault_handler() and iommu_report_device_fault() in the PRI path. Link: https://patch.msgid.link/r/20250321171940.7213-2-yi.l.liu@intel.com Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommu: Drop sw_msi from iommu_domain  [Nicolin Chen]
There are only two sw_msi implementations in the entire system, thus it's not very necessary to have an sw_msi pointer. Instead, check domain->cookie_type to call the two sw_msi implementations directly from the core code. Link: https://patch.msgid.link/r/7ded87c871afcbaac665b71354de0a335087bf0f.1742871535.git.nicolinc@nvidia.com Suggested-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommufd: Move iommufd_sw_msi and related functions to driver.c  [Nicolin Chen]
To provide iommufd_sw_msi() to the iommu core, which is under a different Kconfig, move it and its related functions to driver.c. Then, stub it into the iommu-priv header. The iommufd_sw_msi_install() continues to be used internally by iommufd, so put it in the private header. Note that iommufd_sw_msi() will be called in the iommu core, replacing the sw_msi function pointer. Given that IOMMU_API is "bool" in Kconfig, change IOMMUFD_DRIVER_CORE to "bool" as well. Since this affects the module size, here is a before-and-after size comparison:

[Before]
   text    data     bss     dec     hex filename
  18797     848      56   19701    4cf5 drivers/iommu/iommufd/device.o
    722      44       0     766     2fe drivers/iommu/iommufd/driver.o
[After]
   text    data     bss     dec     hex filename
  17735     808      56   18599    48a7 drivers/iommu/iommufd/device.o
   3020     180       0    3200     c80 drivers/iommu/iommufd/driver.o

Link: https://patch.msgid.link/r/374c159592dba7852bee20968f3f66fa0ee8ca93.1742871535.git.nicolinc@nvidia.com Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  iommu: Sort out domain user data  [Robin Murphy]
When DMA/MSI cookies were made first-class citizens back in commit 46983fcd67ac ("iommu: Pull IOVA cookie management into the core"), there was no real need to further expose the two different cookie types. However, now that IOMMUFD wants to add a third type of MSI-mapping cookie, we do have a nicely compelling reason to properly disambiguate things at the domain level beyond just vaguely guessing from the domain type. Meanwhile, we also effectively have another "cookie" in the form of the anonymous union for other user data, which isn't much better in terms of being vague and unenforced. The fact is that all these cookie types are mutually exclusive, in the sense that combining them makes zero sense and/or would be catastrophic (iommu_set_fault_handler() on an SVA domain, anyone?) - the only combination which *might* be reasonable is perhaps a fault handler and an MSI cookie, but nobody's doing that at the moment, so let's rule it out as well for the sake of being clear and robust. To that end, we pull DMA and MSI cookies apart a little more, mostly to clear up the ambiguity at domain teardown, then for clarity (and to save a little space), move them into the union, whose ownership we can then properly describe and enforce entirely unambiguously. [nicolinc: rebase on latest tree; use prefix IOMMU_COOKIE_; merge unions in iommu_domain; add IOMMU_COOKIE_IOMMUFD for iommufd_hwpt] Link: https://patch.msgid.link/r/1ace9076c95204bbe193ee77499d395f15f44b23.1742871535.git.nicolinc@nvidia.com Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2025-03-25  watchdog: sunxi_wdt: Add support for Allwinner A523  [Andre Przywara]
The Allwinner A523 SoC comes with a watchdog very similar to the ones in the previous Allwinner SoCs, but oddly enough moves the first half of its registers up by one word. Since we have different offsets for these registers across the other SoCs as well, this can simply be modelled by just stating the new offsets in our per-SoC struct. The rest of the IP is the same as in the D1, although the A523 moves its watchdog to a separate MMIO frame, so it's not embedded in the timer anymore. The driver can be ignorant of this, because the DT will take care of this. Add a new struct for the A523, specifying the SoC-specific details, and tie the new DT compatible string to it. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Jernej Skrabec <jernej.skrabec@gmail.com> Link: https://lore.kernel.org/r/20250307005712.16828-5-andre.przywara@arm.com Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
2025-03-25  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  [Jakub Kicinski]
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2025-03-18 (ice, idpf)

For ice: Przemek modifies string declarations to resolve compile issues on gcc 7.5. Karol adds padding to initial programming of GLTSYN_TIME* registers to ensure it will occur in the future to prevent hardware issues. Jesse Brandeburg turns off driver RDMA capability when the corresponding kernel config is not enabled to aid in preventing resource exhaustion. Jan adjusts type declaration to properly catch error conditions and prevent truncation of values. He also adds bounds checking to prevent overflow in ice_vc_cfg_q_quanta(). Lukasz adds checking and error reporting for invalid values in ice_vc_cfg_q_bw(). Mateusz adds check for valid size for ice_vc_fdir_parse_raw().

For idpf: Emil adds check, and handling, on failure to register netdev.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
  idpf: check error for register_netdev() on init
  ice: fix using untrusted value of pkt_len in ice_vc_fdir_parse_raw()
  ice: fix input validation for virtchnl BW
  ice: validate queue quanta parameters to prevent OOB access
  ice: stop truncating queue ids when checking
  virtchnl: make proto and filter action count unsigned
  ice: fix reservation of resources for RDMA when disabled
  ice: ensure periodic output start time is in the future
  ice: health.c: fix compilation on gcc 7.5
====================
Link: https://patch.msgid.link/20250318200511.2958251-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  Merge tag 'i2c-host-6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/andi.shyti/linux into i2c/for-mergewindow  [Wolfram Sang]
i2c-host updates for v6.15

Refactoring and cleanups
 - octeon, cadence, i801, pasemi, mlxbf, bcm-iproc: general refactorings
 - octeon: remove 10-bit address support

Improvements
 - amd-asf: improved error handling
 - designware: use guard(mutex)
 - amd-asf, designware: update naming to follow latest specs
 - cadence: fix cleanup path in probe
 - i801: use MMIO and I/O mapping helpers to access registers
 - pxa: handle error after clk_prepare_enable

New features
 - added i2c_10bit_addr_*_from_msg() and updated multiple drivers
 - omap: added multiplexer state handling
 - qcom-geni: update frequency configuration
 - qup: introduce DMA usage policy

New hardware support
 - exynos: add support for Samsung exynos7870
 - k1: add support for spacemit k1 (new driver)
 - imx: add support for i.mx94 lpi2c
 - rk3x: add support for rk3562

Multiplexers
 - ltc4306, reg: fix assignment in platform_driver structure
2025-03-25  net: ti: cpsw: Add metadata support for xdp mode  [Lorenzo Bianconi]
Set metadata size building the skb from xdp_buff in cpsw/cpsw_new drivers. ti cpsw and cpsw_new drivers set xdp headroom at least to CPSW_HEADROOM_NA, where CPSW_HEADROOM_NA = max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN, so the headroom is large enough to contain xdp_frame and xdp metadata. Please note this patch is just compile tested. Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250318-mvneta-xdp-meta-v2-7-b6075778f61f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
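The pattern shared by this and the following metadata patches in the series (mana, mediatek, octeontx2, netsec, mvpp2, mvneta) is roughly the one below; skb_metadata_set() and the xdp_buff fields are the real core API, while the receive helper itself is a hypothetical sketch:

#include <linux/skbuff.h>
#include <net/xdp.h>

/* After XDP_PASS, build the skb and carry over any metadata the BPF
 * program placed in front of the packet data. */
static struct sk_buff *foo_build_skb_from_xdp(struct xdp_buff *xdp)
{
        u32 metasize = xdp->data - xdp->data_meta;
        struct sk_buff *skb;

        skb = build_skb(xdp->data_hard_start, xdp->frame_sz);
        if (!skb)
                return NULL;

        skb_reserve(skb, xdp->data - xdp->data_hard_start);
        skb_put(skb, xdp->data_end - xdp->data);
        if (metasize)
                skb_metadata_set(skb, metasize);

        return skb;
}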
2025-03-25  net: mana: Add metadata support for xdp mode  [Lorenzo Bianconi]
Set metadata size building the skb from xdp_buff in mana driver. mana driver sets xdp headroom to XDP_PACKET_HEADROOM so the headroom is large enough to contain xdp_frame and xdp metadata. Please note this patch is just compile tested. Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250318-mvneta-xdp-meta-v2-6-b6075778f61f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net: ethernet: mediatek: Add metadata support for xdp mode  [Lorenzo Bianconi]
Set metadata size building the skb from xdp_buff in mediatek driver. mtk_eth_soc driver sets xdp headroom to XDP_PACKET_HEADROOM so the headroom is large enough to contain xdp_frame and xdp metadata. Please note this patch is just compile tested. Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250318-mvneta-xdp-meta-v2-5-b6075778f61f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net: octeontx2: Add metadata support for xdp mode  [Lorenzo Bianconi]
Set metadata size building the skb from xdp_buff in octeontx2 driver. octeontx2 driver sets xdp headroom to OTX2_HEAD_ROOM, where OTX2_HEAD_ROOM = OTX2_ALIGN and OTX2_ALIGN = 128, so the headroom is large enough to contain xdp_frame and xdp metadata. Please note this patch is just compile tested. Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250318-mvneta-xdp-meta-v2-4-b6075778f61f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net: netsec: Add metadata support for xdp mode  [Lorenzo Bianconi]
Set metadata size building the skb from xdp_buff in netsec driver. netsec driver sets xdp headroom to NETSEC_RXBUF_HEADROOM, where NETSEC_RXBUF_HEADROOM = max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN, so the headroom is large enough to contain xdp_frame and xdp metadata. Please note this patch is just compile tested. Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250318-mvneta-xdp-meta-v2-3-b6075778f61f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net: mvpp2: Add metadata support for xdp mode  [Lorenzo Bianconi]
Set metadata size building the skb from xdp_buff in mvpp2 driver. mvpp2 driver sets xdp headroom to MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM, where MVPP2_MH_SIZE = 2 and MVPP2_SKB_HEADROOM = min(max(XDP_PACKET_HEADROOM, NET_SKB_PAD), 224), so the headroom is large enough to contain xdp_frame and xdp metadata. Please note this patch is just compile tested. Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250318-mvneta-xdp-meta-v2-2-b6075778f61f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  net: mvneta: Add metadata support for xdp mode  [Lorenzo Bianconi]
Set metadata size building the skb from xdp_buff in mvneta driver. mvneta sets xdp headroom to MVNETA_MH_SIZE + MVNETA_SKB_HEADROOM, where MVNETA_MH_SIZE = 2 and MVNETA_SKB_HEADROOM = max(NET_SKB_PAD, XDP_PACKET_HEADROOM), so the headroom is large enough to contain xdp_frame and xdp metadata. Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/20250318-mvneta-xdp-meta-v2-1-b6075778f61f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-25  irqdomain: i2c: Switch to irq_find_mapping()  [Jiri Slaby (SUSE)]
irq_linear_revmap() is deprecated, so remove all its uses and supersede them by an identical call to irq_find_mapping(). Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org> Cc: Peter Rosin <peda@axentia.se> Cc: linux-i2c@vger.kernel.org Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
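The conversion is mechanical, e.g. (a generic sketch, not a specific i2c mux driver):

#include <linux/irqdomain.h>

/* Before: return irq_linear_revmap(domain, hwirq);
 * After:  the deprecated helper is replaced by the identical lookup. */
static unsigned int foo_hwirq_to_virq(struct irq_domain *domain,
                                      irq_hw_number_t hwirq)
{
        return irq_find_mapping(domain, hwirq);
}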
2025-03-25  net: tulip: avoid unused variable warning  [Simon Horman]
There is an effort to achieve W=1 kernel builds without warnings. As part of that effort Helge Deller highlighted the following warnings in the tulip driver when compiling with W=1 and CONFIG_TULIP_MWI=n:

.../tulip_core.c: In function ‘tulip_init_one’:
.../tulip_core.c:1309:22: warning: variable ‘force_csr0’ set but not used

This patch addresses that problem using IS_ENABLED(). This approach has the added benefit of reducing conditionally compiled code. And thus increasing compile coverage. E.g. for allmodconfig builds which enable CONFIG_TULIP_MWI. Compile tested only. No run-time effect intended. Acked-by: Helge Deller <deller@gmx.de> Signed-off-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250318-tulip-w1-v3-1-a813fadd164d@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
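The IS_ENABLED() idiom referred to above keeps the variable used and both branches visible to the compiler; roughly (a sketch under assumed names, not the exact tulip change):

/* force_csr0 is now always read, so -Wunused-but-set-variable goes
 * away, and the MWI code is compile-checked even when disabled. */
static void foo_init_csr0(struct foo_priv *tp, bool force_csr0, u32 csr0)
{
        if (IS_ENABLED(CONFIG_TULIP_MWI) && !force_csr0)
                csr0 = foo_mwi_setup(tp, csr0);

        tp->csr0 = csr0;
}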