path: root/drivers/net/ethernet
2025-04-04  net: stmmac: thead: switch to use set_clk_tx_rate() hook [net-merged]  Russell King (Oracle)
Switch from using the fix_mac_speed() hook to set_clk_tx_rate() to manage the transmit clock. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: meson: switch to use set_clk_tx_rate() hook  Russell King (Oracle)
Switch from using the fix_mac_speed() hook to set_clk_tx_rate() to manage the transmit clock. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: ipq806x: switch to use set_clk_tx_rate() hook  Russell King (Oracle)
Switch from using the fix_mac_speed() hook to set_clk_tx_rate() to manage the transmit clock. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: rk: switch to use set_clk_tx_rate() hook  Russell King (Oracle)
Switch from using the fix_mac_speed() hook to set_clk_tx_rate() to manage the transmit clock. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: imx: use generic stmmac_set_clk_tx_rate()  Russell King (Oracle)
Convert non-i.MX93 users to use the generic stmmac_set_clk_tx_rate() to configure the MAC transmit clock rate. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: intel: use generic stmmac_set_clk_tx_rate()  Russell King (Oracle)
Use the generic stmmac_set_clk_tx_rate() to configure the MAC transmit clock. Note that given the current unpatched driver structure, plat_dat->fix_mac_speed will always be populated with kmb_eth_fix_mac_speed(), even when no clock is present. We preserve this behaviour in this patch by always initialising plat_dat->clk_tx_i and plat_dat->set_clk_tx_rate. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
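Based on the plat_dat fields named above, each per-driver conversion essentially boils down to wiring two fields at probe time. A sketch, where the clock source name ("tx_clk") is illustrative and driver-specific:

  /* Hand the platform's TX clock to the core and point it at the
   * generic rate-setting helper instead of a private fix_mac_speed().
   */
  plat_dat->clk_tx_i = dwmac->tx_clk;        /* "tx_clk" is a stand-in name */
  plat_dat->set_clk_tx_rate = stmmac_set_clk_tx_rate;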
2025-04-04  net: stmmac: starfive: use generic stmmac_set_clk_tx_rate()  Russell King (Oracle)
Use the generic stmmac_set_clk_tx_rate() to configure the MAC transmit clock. Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: dwc-qos: use generic stmmac_set_clk_tx_rate()  Russell King (Oracle)
Use the generic stmmac_set_clk_tx_rate() to configure the MAC transmit clock. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: provide generic implementation for set_clk_tx_rate method  Russell King (Oracle)
Provide a generic implementation for the set_clk_tx_rate method introduced by the previous patch, which is capable of configuring the MAC transmit clock for 10M, 100M and 1000M speeds for at least MII, GMII, RGMII and RMII interface modes. Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
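The helper itself is not shown in this log; a minimal sketch of what such a generic implementation amounts to, assuming the rgmii_clock() helper (which maps 10/100/1000 Mbps to 2.5/25/125 MHz) and an approximated signature:

  /* Sketch of a generic set_clk_tx_rate(): derive the standard
   * MII/GMII/RGMII/RMII transmit clock rate from the link speed and
   * program the platform-provided clk_tx_i.
   */
  int stmmac_set_clk_tx_rate(void *bsp_priv, struct clk *clk_tx_i,
                             phy_interface_t interface, int speed)
  {
          long rate = rgmii_clock(speed);

          if (rate < 0)
                  return (int)rate;

          return clk_set_rate(clk_tx_i, rate);
  }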
2025-04-04  net: stmmac: provide set_clk_tx_rate() hook  Russell King (Oracle)
Several stmmac sub-drivers which support RGMII follow the same pattern: they calculate the transmit clock rate, and then call clk_set_rate(). Analysis of several implementation documents suggests that the platform is responsible for providing the transmit clock to the DWMAC core's clk_tx_i. The expected rates are:

             10Mbps    100Mbps   1Gbps
    MII      2.5MHz    25MHz
    RMII     2.5MHz    25MHz
    GMII                         125MHz
    RGMII    2.5MHz    25MHz     125MHz

It seems some platforms require this clock to be manually configured, but there are outputs from the MAC core that indicate the speed, so a platform may use these to automatically configure the clock. Thus, we can't just provide one solution to configure this clock rate. Moreover, the clock may need to be derived from one of several sources depending on the interface mode. Provide a platform hook that is passed the transmit clock, interface mode and speed. Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
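For reference, the hook described above amounts to a new per-platform callback along these lines; this is a sketch inferred from the description, and the exact declaration in struct plat_stmmacenet_data may differ:

  /* Sketch: per-platform callback that receives the TX clock, the
   * interface mode and the negotiated speed, and programs clk_tx_i.
   */
  int (*set_clk_tx_rate)(void *bsp_priv, struct clk *clk_tx_i,
                         phy_interface_t interface, int speed);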
2025-04-04  net: stmmac: dwc-qos: name struct plat_stmmacenet_data consistently  Russell King (Oracle)
Most of the stmmac driver uses "plat_dat" to name the platform data, and "data" is used for the glue driver data. Make dwc-qos follow this pattern to avoid silly mistakes. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Tested-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: thead: ensure divisor gives proper rate  Russell King (Oracle)
thead was checking that the stmmac_clk rate was a multiple of the RGMII rates for 1G and 100M, but didn't check for 10M. Rather than use this with hard-coded speeds, check that the calculated divisor gives the required rate by multiplying the transmit clock rate back up to the stmmac clock rate and checking that it agrees. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
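A minimal sketch of that divisor calculation and round-trip check, assuming the rgmii_clock() helper; the function and parameter names are illustrative, not taken from the thead glue driver:

  /* Derive the TXC divisor from the stmmac clock and verify it is exact:
   * the divisor multiplied back by the target rate must equal the parent
   * rate, otherwise the divider cannot produce the required TX clock.
   */
  static int thead_dwmac_calc_div(unsigned long parent_rate, int speed,
                                  unsigned int *div)
  {
          long rate = rgmii_clock(speed);   /* 2.5/25/125 MHz for 10/100/1000 */

          if (rate < 0)
                  return -EINVAL;

          *div = parent_rate / rate;
          if (!*div || *div * rate != parent_rate)
                  return -EINVAL;

          return 0;
  }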
2025-04-04  net: stmmac: thead: use rgmii_clock() for RGMII clock rate  Russell King (Oracle)
Switch to using rgmii_clock() to get the RGMII TXC rate, and calculate the divisor from the parent clock rate and the rate indicated by rgmii_clock(). Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: qcom-ethqos: use rgmii_clock() to set the link clock  Russell King (Oracle)
The link clock operates at twice the RGMII clock rate. Therefore, we can use the rgmii_clock() helper to set this clock rate. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
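A sketch of the resulting rate setting; only the "twice the RGMII rate" relationship is taken from the text above, and the names are illustrative:

  /* The link clock runs at twice the RGMII TXC rate, so derive it from
   * rgmii_clock() and double it.
   */
  static int ethqos_set_link_clock(struct clk *link_clk, int speed)
  {
          long rate = rgmii_clock(speed);

          if (rate < 0)
                  return -EINVAL;

          return clk_set_rate(link_clk, rate * 2);
  }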
2025-04-04  net: stmmac: print stmmac_init_dma_engine() errors using netdev_err()  Russell King (Oracle)
stmmac_init_dma_engine() uses dev_err(), which leads to errors being reported as, e.g.:

    dwc-eth-dwmac 2490000.ethernet: Failed to reset the dma
    dwc-eth-dwmac 2490000.ethernet eth0: stmmac_hw_setup: DMA engine initialization failed

stmmac_init_dma_engine() is only called from stmmac_hw_setup(), which itself uses netdev_err(), and we will have a net_device set up. So, change the dev_err() to netdev_err() to give consistent error messages. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1tl5y1-004UgG-8X@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-04  net: stmmac: "speed" passed to fix_mac_speed is an int  Russell King (Oracle)
priv->plat->fix_mac_speed() is called from stmmac_mac_link_up(), which is passed the speed as an "int". However, fix_mac_speed() implicitly casts this to an unsigned int. Some platform glue code prints this value using %u, others with %d. Some implicitly cast it back to an int, and others to u32. Good practice is to use one type and only one type to represent a value being passed around a driver. Switch all of these over to consistently use "int" when dealing with a speed passed from stmmac_mac_link_up(), even though the speed will always be positive. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Acked-by: Chen-Yu Tsai <wens@csie.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> Link: https://patch.msgid.link/E1tkKmN-004ObM-Ge@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-04  net: stmmac: remove useless priv->flow_ctrl  Russell King (Oracle)
priv->flow_ctrl is only accessed by stmmac_main.c, and the only place that it is read is in stmmac_mac_flow_ctrl(). This function is only called from stmmac_mac_link_up(), which always sets priv->flow_ctrl immediately before calling this function. Therefore, initialising this at probe time is ineffectual because it will always be overwritten before it's read. As such, the "flow_ctrl" module parameter has been useless for some time. Rather than remove the module parameter, which would risk module load failure, change the description to indicate that it is obsolete, and warn if it is set by userspace. Moreover, storing the value in the stmmac_priv has no benefit as it's set and then immediately read by stmmac_mac_flow_ctrl(). Instead, pass it as a parameter to this function. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> Link: https://patch.msgid.link/E1tkKmI-004ObG-DL@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-04  net: stmmac: clarify priv->pause and pause module parameter  Russell King (Oracle)
priv->pause corresponds with "pause_time" in the 802.3 specification, and is also called "pause_time" in the various MAC backends. For consistency, use the same name in the core stmmac code. Clarify the units of the "pause" module parameter which sets up this struct member to indicate that it's in units of the pause_quanta defined by 802.3, which is 512 bit times. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Link: https://patch.msgid.link/E1tkKmD-004ObA-9K@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-04  net: stmmac: remove calls to xpcs_config_eee()  Russell King (Oracle)
Remove the explicit calls to xpcs_config_eee() from the stmmac driver, preferring instead for phylink to manage the EEE configuration at the PCS. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: call xpcs_config_eee_mult_fact()  Russell King (Oracle)
Arrange for stmmac to call the new xpcs_config_eee_mult_fact() function to configure the EEE clock multiplying factor. This will allow the removal of the xpcs_config_eee() calls in the next patch. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: remove old EEE methods  Russell King (Oracle)
As we no longer call the set_eee_mode(), reset_eee_mode() and set_eee_lpi_entry_timer() methods, remove these and their glue in common.h. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: use stmmac_set_lpi_mode()  Russell King (Oracle)
Use the new stmmac_set_lpi_mode() API to configure the parameters of the desired LPI mode rather than the older methods. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: dwmac4: clear LPI_CTRL_STATUS_LPITCSE too  Russell King (Oracle)
Ensure that LPI_CTRL_STATUS_LPITCSE is also appropriately cleared when disabling LPI or enabling LPI without TX clock gating. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: add new MAC method set_lpi_mode()  Russell King (Oracle)
Add a new method to control LPI mode configuration. This is architected to have three configuration states: LPI disabled, LPI forced (active), or LPI under hardware timer control. This reflects the three modes which the main body of the driver wishes to deal with. We pass in whether transmit clock gating should be used, and the hardware timer value in microseconds to be set when using hardware timer control. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
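The three states and the method's shape can be sketched as below; the identifiers are an approximation based on the description above, not quoted from the patch:

  /* Sketch: three-state LPI control plus the new MAC operation.  The
   * timer value is only meaningful in the hardware-timer state.
   */
  enum stmmac_lpi_mode {
          STMMAC_LPI_DISABLE,     /* LPI signalling off */
          STMMAC_LPI_FORCED,      /* LPI asserted now, under software control */
          STMMAC_LPI_TIMER,       /* LPI entry under hardware timer control */
  };

  int (*set_lpi_mode)(struct mac_device_info *hw, enum stmmac_lpi_mode mode,
                      bool en_tx_lpi_clk_gating, u32 et_us);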
2025-04-04  net: stmmac: use common LPI_CTRL_STATUS bit definitions  Russell King (Oracle)
The bit definitions for the LPI control/status register are identical across all MAC versions, with the exception that some bits may not be implemented. Provide definitions for bits in this register in common.h, convert to use them, and remove the core-specific definitions. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: remove unnecessary LPI disable when enabling LPI  Russell King (Oracle)
Remove the unnecessary LPI disable when enabling LPI - as noted in previous commits, there will never be two consecutive calls to stmmac_mac_enable_tx_lpi() without an intervening stmmac_mac_disable_tx_lpi(). Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: clear priv->tx_path_in_lpi_mode when disabling LPI  Russell King (Oracle)
As other code paths do, clear priv->tx_path_in_lpi_mode when disabling LPI. This is done after the software timer has been deleted and hardware LPI has been disabled. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: remove unnecessary priv->eee_enabled tests  Russell King (Oracle)
Phylink will not call the mac_disable_tx_lpi() and mac_enable_tx_lpi() methods randomly - the first method to be called will be the enable method, and thereafter the disable method will be called exactly once between subsequent enable calls. Thus there is a guaranteed ordering. Therefore, we know the previous state of priv->eee_enabled, and can remove the tests of it from both methods. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: remove unnecessary priv->eee_active tests  Russell King (Oracle)
Since priv->eee_active is assigned with a constant value in each of these methods, there is no need to test its value later. Remove these unnecessary tests. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: remove priv->dma_cap.eee test in tx_lpi methods  Russell King (Oracle)
The tests for priv->dma_cap.eee in stmmac_mac_{en,dis}able_tx_lpi() are useless, as these methods will only be called when using phylink managed EEE, and that will only be enabled if the LPI capabilities in phylink_config have been populated during initialisation. This only occurs when priv->dma_cap.eee was true. As priv->dma_cap.eee remains constant during the lifetime of the driver instance, there is no need to re-check it in these methods. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: split stmmac_init_eee() and move to phylink methods  Russell King (Oracle)
Move the appropriate parts of stmmac_init_eee() into the phylink mac_enable_tx_lpi() and mac_disable_tx_lpi() methods. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: dwmac4: ensure LPIATE is cleared  Russell King (Oracle)
LPIATE enables the hardware timer for entering LPI mode. To ensure that the correct mode is used, clear LPIATE when using manual/software-timed mode to prevent the hardware using the timer. stmmac_main.c avoids this being a problem at the moment by calling stmmac_set_eee_lpi_timer(..., 0) before switching to software mode. We no longer need to call stmmac_set_eee_lpi_timer(..., 0) when disabling EEE as stmmac_reset_eee_mode() will now clear all LPI settings. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: ensure LPI is disabled when disabling EEE  Russell King (Oracle)
When EEE is disabled, we call stmmac_set_eee_lpi_timer(..., 0). For dwmac4, this will result in LPIATE being cleared, but LPIEN and LPITXA being set, causing LPI mode to be signalled (if it wasn't before). For other MACs, stmmac_set_eee_lpi_timer() does nothing, which means that LPI mode will continue to be signalled despite the expectation for it to be disabled. In both cases, LPI mode will be terminated when the transmitter has a packet to send, and LPIEN will be cleared by hardware. Call stmmac_reset_eee_mode() to ensure that LPI mode is disabled when EEE mode is requested to be disabled. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-04  net: stmmac: delete software timer before disabling LPI  Russell King (Oracle)
Delete the software timer to ensure that the timer doesn't fire while we are modifying the LPI register state, potentially re-enabling LPI. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-03-20  net: ti: icssg-prueth: Add lock to stats  MD Danish Anwar
Currently the API emac_update_hardware_stats() reads different ICSSG stats without any lock protection. This API gets called by .ndo_get_stats64(), which is only under RCU protection and nothing else. Add a lock to this API so that the statistics are read under the lock. Fixes: c1e10d5dc7a1 ("net: ti: icssg-prueth: Add ICSSG Stats") Signed-off-by: MD Danish Anwar <danishanwar@ti.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250314102721.1394366-1-danishanwar@ti.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
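A sketch of the pattern only, with made-up field and helper names (stats_lock, num_stats, emac_read_stat_counter); it shows the serialisation, not the driver's actual register layout:

  /* Read all hardware counters under a lock so that concurrent
   * .ndo_get_stats64() callers cannot interleave the reads.
   */
  static void emac_update_hardware_stats(struct prueth_emac *emac)
  {
          int i;

          spin_lock(&emac->stats_lock);
          for (i = 0; i < emac->num_stats; i++)
                  emac->stats[i] += emac_read_stat_counter(emac, i);
          spin_unlock(&emac->stats_lock);
  }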
2025-03-19  net: stmmac: dwc-qos-eth: use devm_kzalloc() for AXI data  Russell King (Oracle)
Everywhere else, the driver uses devm_kzalloc() when allocating the AXI data, so there is no kfree() of this structure. However, dwc-qos-eth uses kzalloc(), which leads to this memory being leaked. Switch to using devm_kzalloc(). Fixes: d8256121a91a ("stmmac: adding new glue driver dwmac-dwc-qos-eth") Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1tsRyv-0064nU-O9@rmk-PC.armlinux.org.uk Signed-off-by: Paolo Abeni <pabeni@redhat.com>
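A sketch of the change, with illustrative variable names; devm_kzalloc() ties the allocation's lifetime to the device, so no explicit kfree() is needed:

  /* Device-managed allocation: freed automatically on driver detach,
   * matching how the rest of the driver allocates the AXI data.
   */
  axi = devm_kzalloc(&pdev->dev, sizeof(*axi), GFP_KERNEL);
  if (!axi)
          return ERR_PTR(-ENOMEM);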
2025-03-18  net: mana: Support holes in device list reply msg  Haiyang Zhang
According to GDMA protocol, holes (zeros) are allowed at the beginning or middle of the gdma_list_devices_resp message. The existing code cannot properly handle this, and may miss some devices in the list. To fix, scan the entire list until the num_of_devs are found, or until the end of the list. Cc: stable@vger.kernel.org Fixes: ca9c54d2d6a5 ("net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)") Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Long Li <longli@microsoft.com> Reviewed-by: Shradha Gupta <shradhagupta@microsoft.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/1741723974-1534-1-git-send-email-haiyangz@microsoft.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
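A sketch of the tolerant scan described above; the structure and field names (resp.devs, dev_id.type, mana_handle_device) are approximate stand-ins, not the driver's actual identifiers:

  /* Walk the whole reply, skip zeroed "hole" slots, and stop once
   * num_of_devs real device entries have been seen or the array ends.
   */
  for (i = 0, found = 0;
       i < ARRAY_SIZE(resp.devs) && found < resp.num_of_devs; i++) {
          struct gdma_dev_id dev_id = resp.devs[i];

          if (dev_id.type == 0)           /* hole: skip it */
                  continue;

          mana_handle_device(gc, dev_id); /* hypothetical helper */
          found++;
  }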
2025-03-18  net: ethernet: ti: am65-cpsw: Fix NAPI registration sequence  Vignesh Raghavendra
Registering the interrupts for TX or RX DMA Channels prior to registering their respective NAPI callbacks can result in a NULL pointer dereference. This is seen in practice as a random occurrence since it depends on the randomness associated with the generation of traffic by Linux and the reception of traffic from the wire. Fixes: 681eb2beb3ef ("net: ethernet: ti: am65-cpsw: ensure proper channel cleanup in error path") Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com> Co-developed-by: Siddharth Vadapalli <s-vadapalli@ti.com> Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Reviewed-by: Alexander Sverdlin <alexander.sverdlin@siemens.com> Reviewed-by: Roger Quadros <rogerq@kernel.org> Link: https://patch.msgid.link/20250311154259.102865-1-s-vadapalli@ti.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
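Illustrative ordering only (the identifiers follow the driver's naming convention but are quoted from memory): the point is simply that the NAPI context must exist before the IRQ that schedules it can fire.

  /* Register the NAPI instance before requesting the DMA channel IRQ,
   * so an early interrupt cannot schedule an unregistered NAPI context.
   */
  netif_napi_add_tx(ndev, &tx_chn->napi_tx, am65_cpsw_nuss_tx_poll);
  ret = devm_request_irq(dev, tx_chn->irq, am65_cpsw_nuss_tx_irq,
                         IRQF_TRIGGER_HIGH, tx_chn->tx_chn_name, tx_chn);
  if (ret)
          goto err;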
2025-03-13  net: mana: cleanup mana struct after debugfs_remove()  Shradha Gupta
When on a MANA VM hibernation is triggered, as part of hibernate_snapshot(), mana_gd_suspend() and mana_gd_resume() are called. If during this mana_gd_resume(), a failure occurs with HWC creation, mana_port_debugfs pointer does not get reinitialized and ends up pointing to older, cleaned-up dentry. Further in the hibernation path, as part of power_down(), mana_gd_shutdown() is triggered. This call, unaware of the failures in resume, tries to cleanup the already cleaned up mana_port_debugfs value and hits the following bug: [ 191.359296] mana 7870:00:00.0: Shutdown was called [ 191.359918] BUG: kernel NULL pointer dereference, address: 0000000000000098 [ 191.360584] #PF: supervisor write access in kernel mode [ 191.361125] #PF: error_code(0x0002) - not-present page [ 191.361727] PGD 1080ea067 P4D 0 [ 191.362172] Oops: Oops: 0002 [#1] SMP NOPTI [ 191.362606] CPU: 11 UID: 0 PID: 1674 Comm: bash Not tainted 6.14.0-rc5+ #2 [ 191.363292] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024 [ 191.364124] RIP: 0010:down_write+0x19/0x50 [ 191.364537] Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 55 48 89 e5 53 48 89 fb e8 de cd ff ff 31 c0 ba 01 00 00 00 <f0> 48 0f b1 13 75 16 65 48 8b 05 88 24 4c 6a 48 89 43 08 48 8b 5d [ 191.365867] RSP: 0000:ff45fbe0c1c037b8 EFLAGS: 00010246 [ 191.366350] RAX: 0000000000000000 RBX: 0000000000000098 RCX: ffffff8100000000 [ 191.366951] RDX: 0000000000000001 RSI: 0000000000000064 RDI: 0000000000000098 [ 191.367600] RBP: ff45fbe0c1c037c0 R08: 0000000000000000 R09: 0000000000000001 [ 191.368225] R10: ff45fbe0d2b01000 R11: 0000000000000008 R12: 0000000000000000 [ 191.368874] R13: 000000000000000b R14: ff43dc27509d67c0 R15: 0000000000000020 [ 191.369549] FS: 00007dbc5001e740(0000) GS:ff43dc663f380000(0000) knlGS:0000000000000000 [ 191.370213] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 191.370830] CR2: 0000000000000098 CR3: 0000000168e8e002 CR4: 0000000000b73ef0 [ 191.371557] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 191.372192] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400 [ 191.372906] Call Trace: [ 191.373262] <TASK> [ 191.373621] ? show_regs+0x64/0x70 [ 191.374040] ? __die+0x24/0x70 [ 191.374468] ? page_fault_oops+0x290/0x5b0 [ 191.374875] ? do_user_addr_fault+0x448/0x800 [ 191.375357] ? exc_page_fault+0x7a/0x160 [ 191.375971] ? asm_exc_page_fault+0x27/0x30 [ 191.376416] ? down_write+0x19/0x50 [ 191.376832] ? down_write+0x12/0x50 [ 191.377232] simple_recursive_removal+0x4a/0x2a0 [ 191.377679] ? __pfx_remove_one+0x10/0x10 [ 191.378088] debugfs_remove+0x44/0x70 [ 191.378530] mana_detach+0x17c/0x4f0 [ 191.378950] ? __flush_work+0x1e2/0x3b0 [ 191.379362] ? __cond_resched+0x1a/0x50 [ 191.379787] mana_remove+0xf2/0x1a0 [ 191.380193] mana_gd_shutdown+0x3b/0x70 [ 191.380642] pci_device_shutdown+0x3a/0x80 [ 191.381063] device_shutdown+0x13e/0x230 [ 191.381480] kernel_power_off+0x35/0x80 [ 191.381890] hibernate+0x3c6/0x470 [ 191.382312] state_store+0xcb/0xd0 [ 191.382734] kobj_attr_store+0x12/0x30 [ 191.383211] sysfs_kf_write+0x3e/0x50 [ 191.383640] kernfs_fop_write_iter+0x140/0x1d0 [ 191.384106] vfs_write+0x271/0x440 [ 191.384521] ksys_write+0x72/0xf0 [ 191.384924] __x64_sys_write+0x19/0x20 [ 191.385313] x64_sys_call+0x2b0/0x20b0 [ 191.385736] do_syscall_64+0x79/0x150 [ 191.386146] ? __mod_memcg_lruvec_state+0xe7/0x240 [ 191.386676] ? __lruvec_stat_mod_folio+0x79/0xb0 [ 191.387124] ? __pfx_lru_add+0x10/0x10 [ 191.387515] ? 
queued_spin_unlock+0x9/0x10 [ 191.387937] ? do_anonymous_page+0x33c/0xa00 [ 191.388374] ? __handle_mm_fault+0xcf3/0x1210 [ 191.388805] ? __count_memcg_events+0xbe/0x180 [ 191.389235] ? handle_mm_fault+0xae/0x300 [ 191.389588] ? do_user_addr_fault+0x559/0x800 [ 191.390027] ? irqentry_exit_to_user_mode+0x43/0x230 [ 191.390525] ? irqentry_exit+0x1d/0x30 [ 191.390879] ? exc_page_fault+0x86/0x160 [ 191.391235] entry_SYSCALL_64_after_hwframe+0x76/0x7e [ 191.391745] RIP: 0033:0x7dbc4ff1c574 [ 191.392111] Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d d5 ea 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89 [ 191.393412] RSP: 002b:00007ffd95a23ab8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001 [ 191.393990] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007dbc4ff1c574 [ 191.394594] RDX: 0000000000000005 RSI: 00005a6eeadb0ce0 RDI: 0000000000000001 [ 191.395215] RBP: 00007ffd95a23ae0 R08: 00007dbc50003b20 R09: 0000000000000000 [ 191.395805] R10: 0000000000000001 R11: 0000000000000202 R12: 0000000000000005 [ 191.396404] R13: 00005a6eeadb0ce0 R14: 00007dbc500045c0 R15: 00007dbc50001ee0 [ 191.396987] </TASK> To fix this, we explicitly set such mana debugfs variables to NULL after debugfs_remove() is called. Fixes: 6607c17c6c5e ("net: mana: Enable debugfs files for MANA device") Cc: stable@vger.kernel.org Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Link: https://patch.msgid.link/1741688260-28922-1-git-send-email-shradhagupta@linux.microsoft.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
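The fix described above is essentially a two-line pattern; mana_port_debugfs is named in the text, while "apc" and the surrounding context are illustrative:

  /* Clear the dentry pointer after removal so a later shutdown or
   * cleanup path cannot act on a stale, already-freed dentry.
   */
  debugfs_remove(apc->mana_port_debugfs);
  apc->mana_port_debugfs = NULL;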
2025-03-13  net/mlx5e: Prevent bridge link show failure for non-eswitch-allowed devices  Carolina Jubran
mlx5_eswitch_get_vepa returns -EPERM if the device lacks eswitch_manager capability, blocking mlx5e_bridge_getlink from retrieving VEPA mode. Since mlx5e_bridge_getlink implements ndo_bridge_getlink, returning -EPERM causes bridge link show to fail instead of skipping devices without this capability. To avoid this, return -EOPNOTSUPP from mlx5e_bridge_getlink when mlx5_eswitch_get_vepa fails, ensuring the command continues processing other devices while ignoring those without the necessary capability. Fixes: 4b89251de024 ("net/mlx5: Support ndo bridge_setlink and getlink") Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/1741644104-97767-7-git-send-email-tariqt@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
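An approximation of the resulting logic (structure member paths and the surrounding code are sketched from the description, not quoted from the patch):

  /* Translate the capability failure into "not supported" so that
   * 'bridge link show' skips this device instead of aborting.
   */
  static int mlx5e_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
                                  struct net_device *dev, u32 filter_mask,
                                  int nlflags)
  {
          struct mlx5e_priv *priv = netdev_priv(dev);
          u8 mode, setting;

          if (mlx5_eswitch_get_vepa(priv->mdev->priv.eswitch, &setting))
                  return -EOPNOTSUPP;

          mode = setting ? BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB;
          return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, 0, 0,
                                         nlflags, filter_mask, NULL);
  }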
2025-03-13  net/mlx5: Bridge, fix the crash caused by LAG state check  Jianbo Liu
When removing a LAG device from a bridge, a NETDEV_CHANGEUPPER event is triggered. The driver finds the lower devices (PFs) to flush all the offloaded entries, and mlx5_lag_is_shared_fdb is checked; it returns false if one of the PFs is unloaded. In that case, mlx5_esw_bridge_lag_rep_get() and its caller return NULL, instead of the alive PF, and the flush is skipped. Besides, the bridge fdb entry's lastuse is updated in the mlx5 bridge event handler. But this SWITCHDEV_FDB_ADD_TO_BRIDGE event can be ignored in this case because the upper interface for the bond is deleted, and the entry will never be aged because lastuse is never updated. To make things worse, as the entry is alive, the mlx5 bridge workqueue keeps sending that event, which is then handled by the kernel bridge notifier. It causes the following crash when accessing the passed bond netdev which is already destroyed. To fix this issue, remove such checks. LAG state is already checked in commit 15f8f168952f ("net/mlx5: Bridge, verify LAG state when adding bond to bridge"); the driver still needs to skip the offload if the LAG enters an invalid state after initialization. Oops: stack segment: 0000 [#1] SMP CPU: 3 UID: 0 PID: 23695 Comm: kworker/u40:3 Tainted: G OE 6.11.0_mlnx #1 Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 Workqueue: mlx5_bridge_wq mlx5_esw_bridge_update_work [mlx5_core] RIP: 0010:br_switchdev_event+0x2c/0x110 [bridge] Code: 44 00 00 48 8b 02 48 f7 00 00 02 00 00 74 69 41 54 55 53 48 83 ec 08 48 8b a8 08 01 00 00 48 85 ed 74 4a 48 83 fe 02 48 89 d3 <4c> 8b 65 00 74 23 76 49 48 83 fe 05 74 7e 48 83 fe 06 75 2f 0f b7 RSP: 0018:ffffc900092cfda0 EFLAGS: 00010297 RAX: ffff888123bfe000 RBX: ffffc900092cfe08 RCX: 00000000ffffffff RDX: ffffc900092cfe08 RSI: 0000000000000001 RDI: ffffffffa0c585f0 RBP: 6669746f6e690a30 R08: 0000000000000000 R09: ffff888123ae92c8 R10: 0000000000000000 R11: fefefefefefefeff R12: ffff888123ae9c60 R13: 0000000000000001 R14: ffffc900092cfe08 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffff88852c980000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f15914c8734 CR3: 0000000002830005 CR4: 0000000000770ef0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: <TASK> ? __die_body+0x1a/0x60 ? die+0x38/0x60 ? do_trap+0x10b/0x120 ? do_error_trap+0x64/0xa0 ? exc_stack_segment+0x33/0x50 ? asm_exc_stack_segment+0x22/0x30 ? br_switchdev_event+0x2c/0x110 [bridge] ? sched_balance_newidle.isra.149+0x248/0x390 notifier_call_chain+0x4b/0xa0 atomic_notifier_call_chain+0x16/0x20 mlx5_esw_bridge_update+0xec/0x170 [mlx5_core] mlx5_esw_bridge_update_work+0x19/0x40 [mlx5_core] process_scheduled_works+0x81/0x390 worker_thread+0x106/0x250 ? bh_worker+0x110/0x110 kthread+0xb7/0xe0 ? kthread_park+0x80/0x80 ret_from_fork+0x2d/0x50 ? kthread_park+0x80/0x80 ret_from_fork_asm+0x11/0x20 </TASK> Fixes: ff9b7521468b ("net/mlx5: Bridge, support LAG") Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/1741644104-97767-6-git-send-email-tariqt@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-03-13  net/mlx5: Lag, Check shared fdb before creating MultiPort E-Switch  Shay Drory
Currently, MultiPort E-Switch is requesting to create a LAG with shared FDB without checking that the LAG supports shared FDB. Add the check. Fixes: a32327a3a02c ("net/mlx5: Lag, Control MultiPort E-Switch single FDB mode") Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/1741644104-97767-5-git-send-email-tariqt@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-03-13  net/mlx5: Fix incorrect IRQ pool usage when releasing IRQs  Shay Drory
mlx5_irq_pool_get() is a getter for completion IRQ pool only. However, after the cited commit, mlx5_irq_pool_get() is called during ctrl IRQ release flow to retrieve the pool, resulting in the use of an incorrect IRQ pool. Hence, use the newly introduced mlx5_irq_get_pool() getter to retrieve the correct IRQ pool based on the IRQ itself. While at it, rename mlx5_irq_pool_get() to mlx5_irq_table_get_comp_irq_pool() which accurately reflects its purpose and improves code readability. Fixes: 0477d5168bbb ("net/mlx5: Expose SFs IRQs") Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Maher Sanalla <msanalla@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/1741644104-97767-4-git-send-email-tariqt@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-03-13  net/mlx5: HWS, Rightsize bwc matcher priority  Vlad Dogaru
The bwc layer was clamping the matcher priority from 32 bits to 16 bits. This didn't show up until a matcher was resized, since the initial native matcher was created using the correct 32 bit value. The fix also reorders fields to avoid some padding. Fixes: 2111bb970c78 ("net/mlx5: HWS, added backward-compatible API handling") Signed-off-by: Vlad Dogaru <vdogaru@nvidia.com> Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/1741644104-97767-3-git-send-email-tariqt@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-03-13  net/mlx5: DR, use the right action structs for STEv3  Yevgeny Kliteynik
Some actions in ConnectX-8 (STEv3) have different structure, and they are handled separately in ste_ctx_v3. This separate handling was missing two actions: INSERT_HDR and REMOVE_HDR, which broke SWS for Linux Bridge. This patch resolves the issue by introducing dedicated callbacks for the insert and remove header functions, with version-specific implementations for each STE variant. Fixes: 4d617b57574f ("net/mlx5: DR, add support for ConnectX-8 steering") Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Itamar Gozlan <igozlan@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/1741644104-97767-2-git-send-email-tariqt@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-03-11  qlcnic: fix memory leak issues in qlcnic_sriov_common.c  Haoxiang Li
Add qlcnic_sriov_free_vlans() in qlcnic_sriov_alloc_vlans() if any sriov_vlans fails to be allocated. Add qlcnic_sriov_free_vlans() to free the memory allocated by qlcnic_sriov_alloc_vlans() if "sriov->allowed_vlans" fails to be allocated. Fixes: 91b7282b613d ("qlcnic: Support VLAN id config.") Cc: stable@vger.kernel.org Signed-off-by: Haoxiang Li <haoxiang_li2024@163.com> Link: https://patch.msgid.link/20250307094952.14874-1-haoxiang_li2024@163.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
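A sketch of the error handling in the allocation path; the structure member names are recalled from memory and the exact function shape may differ from the patch:

  /* Undo the per-VF vlan allocations if a later allocation fails, so
   * nothing allocated earlier in the loop is leaked.
   */
  static int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter)
  {
          struct qlcnic_sriov *sriov = adapter->ahw->sriov;
          struct qlcnic_vf_info *vf;
          int i;

          for (i = 0; i < sriov->num_vfs; i++) {
                  vf = &sriov->vf_info[i];
                  vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans,
                                            sizeof(*vf->sriov_vlans), GFP_KERNEL);
                  if (!vf->sriov_vlans) {
                          qlcnic_sriov_free_vlans(adapter);
                          return -ENOMEM;
                  }
          }

          return 0;
  }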
2025-03-11  rtase: Fix improper release of ring list entries in rtase_sw_reset  Justin Lai
Since rtase_init_ring, which is called within rtase_sw_reset, adds ring entries already present in the ring list back into the list, it causes the ring list to form a cycle. This results in list_for_each_entry_safe failing to find an endpoint during traversal, leading to an error. Therefore, it is necessary to remove the previously added ring_list nodes before calling rtase_init_ring. Fixes: 079600489960 ("rtase: Implement net_device_ops") Signed-off-by: Justin Lai <justinlai0215@realtek.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250306070510.18129-1-justinlai0215@realtek.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-03-10  eth: bnxt: fix memory leak in queue reset  Taehee Yoo
When the queue is reset, the bnxt_alloc_one_tpa_info() is called to allocate tpa_info for the new queue. And then the old queue's tpa_info should be removed by the bnxt_free_one_tpa_info(), but it is not called. So a memory leak occurs. Add the bnxt_free_one_tpa_info() call in the bnxt_queue_mem_free(). unreferenced object 0xffff888293cc0000 (size 16384): comm "ncdevmem", pid 2076, jiffies 4296604081 hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 40 75 78 93 82 88 ff ff ........@ux..... 40 75 78 93 02 00 00 00 00 00 00 00 00 00 00 00 @ux............. backtrace (crc 5d7d4798): ___kmalloc_large_node+0x10d/0x1b0 __kmalloc_large_node_noprof+0x17/0x60 __kmalloc_noprof+0x3f6/0x520 bnxt_alloc_one_tpa_info+0x5f/0x300 [bnxt_en] bnxt_queue_mem_alloc+0x8e8/0x14f0 [bnxt_en] netdev_rx_queue_restart+0x233/0x620 net_devmem_bind_dmabuf_to_queue+0x2a3/0x600 netdev_nl_bind_rx_doit+0xc00/0x10a0 genl_family_rcv_msg_doit+0x1d4/0x2b0 genl_rcv_msg+0x3fb/0x6c0 netlink_rcv_skb+0x12c/0x360 genl_rcv+0x24/0x40 netlink_unicast+0x447/0x710 netlink_sendmsg+0x712/0xbc0 __sys_sendto+0x3fd/0x4d0 __x64_sys_sendto+0xdc/0x1b0 Fixes: 2d694c27d32e ("bnxt_en: implement netdev_queue_mgmt_ops") Signed-off-by: Taehee Yoo <ap420073@gmail.com> Link: https://patch.msgid.link/20250309134219.91670-7-ap420073@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
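A sketch of where the missing free belongs; the callback signature follows the queue management ops convention, and the helper's exact signature is approximated:

  /* Free the old queue's TPA info when the queue's memory is released,
   * mirroring the allocation done in bnxt_queue_mem_alloc().
   */
  static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
  {
          struct bnxt_rx_ring_info *rxr = qmem;
          struct bnxt *bp = netdev_priv(dev);

          bnxt_free_one_tpa_info(bp, rxr);        /* previously missing */
          /* ... existing ring/page-pool teardown continues here ... */
  }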
2025-03-10  eth: bnxt: fix kernel panic in the bnxt_get_queue_stats{rx | tx}  Taehee Yoo
When qstats-get operation is executed, callbacks of netdev_stats_ops are called. The bnxt_get_queue_stats{rx | tx} collect per-queue stats from sw_stats in the rings. But {rx | tx | cp}_ring are allocated when the interface is up. So, these rings are not allocated when the interface is down. The qstats-get is allowed even if the interface is down. However, the bnxt_get_queue_stats{rx | tx}() accesses cp_ring and tx_ring without null check. So, it needs to avoid accessing rings if the interface is down. Reproducer: ip link set $interface down ./cli.py --spec netdev.yaml --dump qstats-get OR ip link set $interface down python ./stats.py Splat looks like: BUG: kernel NULL pointer dereference, address: 0000000000000000 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 1680fa067 P4D 1680fa067 PUD 16be3b067 PMD 0 Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI CPU: 0 UID: 0 PID: 1495 Comm: python3 Not tainted 6.14.0-rc4+ #32 5cd0f999d5a15c574ac72b3e4b907341 Hardware name: ASUS System Product Name/PRIME Z690-P D4, BIOS 0603 11/01/2021 RIP: 0010:bnxt_get_queue_stats_rx+0xf/0x70 [bnxt_en] Code: c6 87 b5 18 00 00 02 eb a2 66 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 01 RSP: 0018:ffffabef43cdb7e0 EFLAGS: 00010282 RAX: 0000000000000000 RBX: ffffffffc04c8710 RCX: 0000000000000000 RDX: ffffabef43cdb858 RSI: 0000000000000000 RDI: ffff8d504e850000 RBP: ffff8d506c9f9c00 R08: 0000000000000004 R09: ffff8d506bcd901c R10: 0000000000000015 R11: ffff8d506bcd9000 R12: 0000000000000000 R13: ffffabef43cdb8c0 R14: ffff8d504e850000 R15: 0000000000000000 FS: 00007f2c5462b080(0000) GS:ffff8d575f600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 0000000167fd0000 CR4: 00000000007506f0 PKRU: 55555554 Call Trace: <TASK> ? __die+0x20/0x70 ? page_fault_oops+0x15a/0x460 ? sched_balance_find_src_group+0x58d/0xd10 ? exc_page_fault+0x6e/0x180 ? asm_exc_page_fault+0x22/0x30 ? bnxt_get_queue_stats_rx+0xf/0x70 [bnxt_en cdd546fd48563c280cfd30e9647efa420db07bf1] netdev_nl_stats_by_netdev+0x2b1/0x4e0 ? xas_load+0x9/0xb0 ? xas_find+0x183/0x1d0 ? xa_find+0x8b/0xe0 netdev_nl_qstats_get_dumpit+0xbf/0x1e0 genl_dumpit+0x31/0x90 netlink_dump+0x1a8/0x360 Fixes: af7b3b4adda5 ("eth: bnxt: support per-queue statistics") Signed-off-by: Taehee Yoo <ap420073@gmail.com> Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com> Link: https://patch.msgid.link/20250309134219.91670-6-ap420073@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
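One way to guard the callbacks, sketched under the assumption that an unallocated bp->bnapi indicates the rings do not exist; the actual patch may test a different condition:

  /* Bail out of the per-queue stats callback when the rings are not
   * allocated (interface down), instead of dereferencing NULL.
   */
  static void bnxt_get_queue_stats_rx(struct net_device *dev, int i,
                                      struct netdev_queue_stats_rx *stats)
  {
          struct bnxt *bp = netdev_priv(dev);

          if (!bp->bnapi)         /* rings only exist while the device is up */
                  return;

          /* ... existing per-ring counter collection ... */
  }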
2025-03-10  eth: bnxt: do not update checksum in bnxt_xdp_build_skb()  Taehee Yoo
The bnxt_rx_pkt() updates ip_summed value at the end if checksum offload is enabled. When the XDP-MB program is attached and it returns XDP_PASS, the bnxt_xdp_build_skb() is called to update skb_shared_info. The main purpose of bnxt_xdp_build_skb() is to update skb_shared_info, but it updates ip_summed value too if checksum offload is enabled. This is actually duplicate work. When the bnxt_rx_pkt() updates ip_summed value, it checks if ip_summed is CHECKSUM_NONE or not. It means that ip_summed should be CHECKSUM_NONE at this moment. But ip_summed may already be updated to CHECKSUM_UNNECESSARY in the XDP-MB-PASS path. So skb_checksum_none_assert() WARNs about it. Updating ip_summed in the bnxt_xdp_build_skb() is duplicate work and is not needed. Splat looks like: WARNING: CPU: 3 PID: 5782 at ./include/linux/skbuff.h:5155 bnxt_rx_pkt+0x479b/0x7610 [bnxt_en] Modules linked in: bnxt_re bnxt_en rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs veth xt_nat xt_tcpudp xt_conntrack nft_chain_nat xt_MASQUERADE nf_] CPU: 3 UID: 0 PID: 5782 Comm: socat Tainted: G W 6.14.0-rc4+ #27 Tainted: [W]=WARN Hardware name: ASUS System Product Name/PRIME Z690-P D4, BIOS 0603 11/01/2021 RIP: 0010:bnxt_rx_pkt+0x479b/0x7610 [bnxt_en] Code: 54 24 0c 4c 89 f1 4c 89 ff c1 ea 1f ff d3 0f 1f 00 49 89 c6 48 85 c0 0f 84 4c e5 ff ff 48 89 c7 e8 ca 3d a0 c8 e9 8f f4 ff ff <0f> 0b f RSP: 0018:ffff88881ba09928 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 00000000c7590303 RCX: 0000000000000000 RDX: 1ffff1104e7d1610 RSI: 0000000000000001 RDI: ffff8881c91300b8 RBP: ffff88881ba09b28 R08: ffff888273e8b0d0 R09: ffff888273e8b070 R10: ffff888273e8b010 R11: ffff888278b0f000 R12: ffff888273e8b080 R13: ffff8881c9130e00 R14: ffff8881505d3800 R15: ffff888273e8b000 FS: 00007f5a2e7be080(0000) GS:ffff88881ba00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fff2e708ff8 CR3: 000000013e3b0000 CR4: 00000000007506f0 PKRU: 55555554 Call Trace: <IRQ> ? __warn+0xcd/0x2f0 ? bnxt_rx_pkt+0x479b/0x7610 ? report_bug+0x326/0x3c0 ? handle_bug+0x53/0xa0 ? exc_invalid_op+0x14/0x50 ? asm_exc_invalid_op+0x16/0x20 ? bnxt_rx_pkt+0x479b/0x7610 ? bnxt_rx_pkt+0x3e41/0x7610 ? __pfx_bnxt_rx_pkt+0x10/0x10 ? napi_complete_done+0x2cf/0x7d0 __bnxt_poll_work+0x4e8/0x1220 ? __pfx___bnxt_poll_work+0x10/0x10 ? __pfx_mark_lock.part.0+0x10/0x10 bnxt_poll_p5+0x36a/0xfa0 ? __pfx_bnxt_poll_p5+0x10/0x10 __napi_poll.constprop.0+0xa0/0x440 net_rx_action+0x899/0xd00 ... A following ping.py patch adds an xdp-mb-pass case, so ping.py will be able to reproduce this issue. Fixes: 1dc4c557bfed ("bnxt: adding bnxt_xdp_build_skb to build skb from multibuffer xdp_buff") Signed-off-by: Taehee Yoo <ap420073@gmail.com> Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com> Link: https://patch.msgid.link/20250309134219.91670-5-ap420073@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>