2025-04-29  igb: Get rid of spurious interrupts  (Kurt Kanzenbach)
When running the igb with XDP/ZC in busy polling mode with deferral of hard interrupts, interrupts still happen from time to time. That is caused by the igb task watchdog, which triggers Rx interrupts periodically. That mechanism has been introduced to overcome skb/memory allocation failures [1]: the Rx clean functions stop processing the Rx ring in case of such a failure, and the task watchdog triggers Rx interrupts periodically in the hope that memory became available in the meantime.

The current behavior is undesirable for real-time applications, because the driver-induced Rx interrupts also trigger softirq processing. However, all real-time packets should be processed by the application, which uses the busy polling method.

Therefore, only trigger the Rx interrupts in case of real allocation failures. Introduce a new flag for signaling that condition. Follow the same logic as in commit 8dcf2c212078 ("igc: Get rid of spurious interrupts").

[1] - https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git/commit/?id=3be507547e6177e5c808544bd6a2efa2c7f1d436

Reviewed-by: Joe Damato <jdamato@fastly.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Tested-by: Sweta Kumari <sweta.kumari@intel.com>
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
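To illustrate, the flag-based signaling amounts to something like the following sketch; the my_* names and MY_FLAG_RX_ALLOC_FAILED are placeholders, not the actual igb identifiers:

    #include <linux/types.h>
    #include <linux/bitops.h>
    #include <linux/io.h>

    struct my_ring {
        unsigned long flags;
        u32 eims_value;             /* interrupt cause bit for this ring */
    };
    #define MY_FLAG_RX_ALLOC_FAILED 0

    /* Rx clean path: remember that buffer allocation really failed. */
    static void my_note_alloc_failure(struct my_ring *rx_ring)
    {
        set_bit(MY_FLAG_RX_ALLOC_FAILED, &rx_ring->flags);
    }

    /* Task watchdog: only kick the Rx interrupt if an allocation failed,
     * instead of firing it unconditionally on every watchdog run. */
    static void my_watchdog_check(struct my_ring *rx_ring, void __iomem *eics)
    {
        if (test_and_clear_bit(MY_FLAG_RX_ALLOC_FAILED, &rx_ring->flags))
            writel(rx_ring->eims_value, eics);
    }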
2025-04-29  igb: Add support for persistent NAPI config  (Kurt Kanzenbach)
Use netif_napi_add_config() to assign persistent per-NAPI config. This is useful for preserving NAPI settings when changing queue counts or for user space programs using SO_INCOMING_NAPI_ID. Reviewed-by: Joe Damato <jdamato@fastly.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel) Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
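For reference, the conversion essentially boils down to passing a stable index to netif_napi_add_config() so the core can key the persistent per-NAPI config; a minimal sketch with assumed driver names (adapter, q_vector, my_poll, v_idx):

    /* Before (per-NAPI settings lost when queues are torn down/re-created):
     *     netif_napi_add(adapter->netdev, &q_vector->napi, my_poll);
     *
     * After: the index keys a persistent config slot kept by the core, so
     * settings such as defer-hard-irqs/gro-flush-timeout and the NAPI ID
     * survive queue count changes. */
    netif_napi_add_config(adapter->netdev, &q_vector->napi, my_poll,
                          q_vector->v_idx);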
2025-04-29  igb: Link queues to NAPI instances  (Kurt Kanzenbach)
Link queues to NAPI instances via the netdev-genl API. This is required to use XDP/ZC busy polling. See commit 5ef44b3cb43b ("xsk: Bring back busy polling support") for details. This also allows users to query the info with netlink:

|$ ./tools/net/ynl/pyynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
|        --dump queue-get --json='{"ifindex": 2}'
|[{'id': 0, 'ifindex': 2, 'napi-id': 8201, 'type': 'rx'},
| {'id': 1, 'ifindex': 2, 'napi-id': 8202, 'type': 'rx'},
| {'id': 2, 'ifindex': 2, 'napi-id': 8203, 'type': 'rx'},
| {'id': 3, 'ifindex': 2, 'napi-id': 8204, 'type': 'rx'},
| {'id': 0, 'ifindex': 2, 'napi-id': 8201, 'type': 'tx'},
| {'id': 1, 'ifindex': 2, 'napi-id': 8202, 'type': 'tx'},
| {'id': 2, 'ifindex': 2, 'napi-id': 8203, 'type': 'tx'},
| {'id': 3, 'ifindex': 2, 'napi-id': 8204, 'type': 'tx'}]

Add rtnl locking to the PCI error handlers, because netif_queue_set_napi() requires the lock to be held. While at it, use RCT (reverse Christmas tree) coding style in __igb_open().

Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Reviewed-by: Joe Damato <jdamato@fastly.com>
Tested-by: Sweta Kumari <sweta.kumari@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
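A minimal sketch of what the queue-to-NAPI linkage looks like from a driver; the ring/q_vector layout is an assumption, not the igb code, and the calls must run under rtnl:

    static void my_set_queue_napi(struct my_adapter *adapter)
    {
        int i;

        for (i = 0; i < adapter->num_rx_queues; i++)
            netif_queue_set_napi(adapter->netdev, i,
                                 NETDEV_QUEUE_TYPE_RX,
                                 &adapter->rx_ring[i]->q_vector->napi);

        for (i = 0; i < adapter->num_tx_queues; i++)
            netif_queue_set_napi(adapter->netdev, i,
                                 NETDEV_QUEUE_TYPE_TX,
                                 &adapter->tx_ring[i]->q_vector->napi);
    }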
2025-04-29  igb: Link IRQs to NAPI instances  (Kurt Kanzenbach)
Link IRQs to NAPI instances via the netdev-genl API. This allows users to query that information via netlink:

|$ ./tools/net/ynl/pyynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
|        --dump napi-get --json='{"ifindex": 2}'
|[{'defer-hard-irqs': 0,
|  'gro-flush-timeout': 0,
|  'id': 8204,
|  'ifindex': 2,
|  'irq': 127,
|  'irq-suspend-timeout': 0},
| {'defer-hard-irqs': 0,
|  'gro-flush-timeout': 0,
|  'id': 8203,
|  'ifindex': 2,
|  'irq': 126,
|  'irq-suspend-timeout': 0},
| {'defer-hard-irqs': 0,
|  'gro-flush-timeout': 0,
|  'id': 8202,
|  'ifindex': 2,
|  'irq': 125,
|  'irq-suspend-timeout': 0},
| {'defer-hard-irqs': 0,
|  'gro-flush-timeout': 0,
|  'id': 8201,
|  'ifindex': 2,
|  'irq': 124,
|  'irq-suspend-timeout': 0}]

|$ cat /proc/interrupts | grep enp2s0
|123:  0  1  IR-PCI-MSIX-0000:02:00.0  0-edge  enp2s0
|124:  0  7  IR-PCI-MSIX-0000:02:00.0  1-edge  enp2s0-TxRx-0
|125:  0  0  IR-PCI-MSIX-0000:02:00.0  2-edge  enp2s0-TxRx-1
|126:  0  5  IR-PCI-MSIX-0000:02:00.0  3-edge  enp2s0-TxRx-2
|127:  0  0  IR-PCI-MSIX-0000:02:00.0  4-edge  enp2s0-TxRx-3

Reviewed-by: Joe Damato <jdamato@fastly.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
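A minimal sketch of the IRQ-to-NAPI association, with q_vector/msix_entries naming assumed rather than taken from the igb code:

    /* For each MSI-X vector, tell the core which NAPI instance it serves. */
    netif_napi_set_irq(&adapter->q_vector[v_idx]->napi,
                       adapter->msix_entries[v_idx].vector);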
2025-04-29  Merge branch 'ip-improve-tcp-sock-multipath-routing'  (Paolo Abeni)
Willem de Bruijn says:

====================
ip: improve tcp sock multipath routing

From: Willem de Bruijn <willemb@google.com>

Improve layer 4 multipath hash policy for local tcp connections:

patch 1: Select a source address that matches the nexthop device. Due to
         tcp_v4_connect making separate route lookups for saddr and route,
         the two can currently be inconsistent.

patch 2: Use all paths when opening multiple local tcp connections to the
         same ip address and port.

patch 3: Test the behavior. Extend the fib_tests.sh testsuite with one
         opening many connections, and count SYNs on both egress devices,
         for packets matching the source address of the dev.

Changelog in the individual patches.
====================

Link: https://patch.msgid.link/20250424143549.669426-1-willemdebruijn.kernel@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-04-29  selftests/net: test tcp connection load balancing  (Willem de Bruijn)
Verify that TCP connections use both routes when connecting multiple times to a remote service over a two nexthop multipath route. Use socat to create the connections. Use tc prio + tc filter to count routes taken, counting SYN packets across the two egress devices. Also verify that the saddr matches that of the device. To avoid flaky tests when testing inherently randomized behavior, set a low bar and pass if even a single SYN is observed on each device. Signed-off-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Tested-by: Ido Schimmel <idosch@nvidia.com> Link: https://patch.msgid.link/20250424143549.669426-4-willemdebruijn.kernel@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-04-29  ip: load balance tcp connections to single dst addr and port  (Willem de Bruijn)
Load balance new TCP connections across nexthops also when they connect to the same service at a single remote address and port.

This affects only port-based multipath hashing: fib_multipath_hash_policy 1 or 3.

Local connections must choose both a source address and port when connecting to a remote service, in ip_route_connect. This "chicken-and-egg problem" (commit 2d7192d6cbab ("ipv4: Sanitize and simplify ip_route_{connect,newports}()")) is resolved by first selecting a source address, by looking up a route using the zero wildcard source port and address.

As a result, multiple connections to the same destination address and port have no entropy in fib_multipath_hash. This is not a problem when forwarding, as skb-based hashing has a 4-tuple. Nor when establishing UDP connections, as autobind there selects a port before reaching ip_route_connect.

Load balance also TCP, by using a random port in fib_multipath_hash. Port assignment in inet_hash_connect is not atomic with ip_route_connect. Thus ports are unpredictable, effectively random.

Implementation details:

Do not actually pass a random fl4_sport, as that affects not only hashing, but routing more broadly, and can match a source port based policy route, which the existing wildcard port 0 will not.

Instead, define a new wildcard flowi flag that is used only for hashing. Selecting a random source is equivalent to just selecting a random hash entirely. But for code clarity, follow the normal 4-tuple hash process and only update this field.

fib_multipath_hash can be reached with zero sport from other code paths, so explicitly pass this flowi flag, rather than trying to infer this case in the function itself.

Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://patch.msgid.link/20250424143549.669426-3-willemdebruijn.kernel@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
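A condensed sketch of the hashing tweak described above; the flag name here is an assumption, and only the multipath hash sees the random port while the flow used for the actual route lookup keeps the wildcard 0:

    /* Inside the multipath hash computation: */
    if (fl4->flowi4_flags & FLOWI_FLAG_ANY_SPORT)
        keys.ports.src = (__force __be16)get_random_u16();
    else
        keys.ports.src = fl4->fl4_sport;
    keys.ports.dst = fl4->fl4_dport;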
2025-04-29  ipv4: prefer multipath nexthop that matches source address  (Willem de Bruijn)
With multipath routes, try to ensure that packets leave on the device that is associated with the source address.

Avoid the following tcpdump example:

    veth0 Out IP 10.1.0.2.38640 > 10.2.0.3.8000: Flags [S]
    veth1 Out IP 10.1.0.2.38648 > 10.2.0.3.8000: Flags [S]

Which can happen easily with the most straightforward setup:

    ip addr add 10.0.0.1/24 dev veth0
    ip addr add 10.1.0.1/24 dev veth1
    ip route add 10.2.0.3 nexthop via 10.0.0.2 dev veth0 \
                          nexthop via 10.1.0.2 dev veth1

This is apparently considered WAI, based on the comment in ip_route_output_key_hash_rcu:

    * 2. Moreover, we are allowed to send packets with saddr
    *    of another iface. --ANK

It may be ok for some uses of multipath, but not all. For instance, when using two ISPs, a router may drop packets with unknown source.

The behavior occurs because tcp_v4_connect makes three route lookups when establishing a connection:

1. ip_route_connect calls to select a source address, with saddr zero.
2. ip_route_connect calls again now that saddr and daddr are known.
3. ip_route_newports calls again after a source port is also chosen.

With a route with multiple nexthops, each lookup may make a different choice depending on available entropy to fib_select_multipath. So it is possible for 1 to select the saddr from the first entry, but 3 to select the second entry. Leading to the above situation.

Address this by preferring a match that matches the flowi4 saddr. This will make 2 and 3 make the same choice as 1. Continue to update the backup choice until a choice that matches saddr is found.

Do this in fib_select_multipath itself, rather than passing an fl4_oif constraint, to avoid changing non-multipath route selection. Commit e6b45241c57a ("ipv4: reset flowi parameters on route connect") shows how that may cause regressions.

Also read ipv4.sysctl_fib_multipath_use_neigh only once. No need to refresh in the loop.

This does not happen in IPv6, which performs only one lookup.

Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://patch.msgid.link/20250424143549.669426-2-willemdebruijn.kernel@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
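A simplified sketch of the selection rule, using hypothetical types rather than the kernel's fib structures:

    #include <linux/types.h>

    /* Hypothetical, simplified view of an eligible nexthop. */
    struct my_nexthop {
        int upper_bound;        /* hash threshold for this path */
        __be32 saddr;           /* preferred source address of the path */
    };

    static int my_select_path(const struct my_nexthop *nh, int num, int hash,
                              __be32 flow_saddr)
    {
        int backup = -1, i;

        for (i = 0; i < num; i++) {
            if (hash > nh[i].upper_bound)
                continue;
            if (backup < 0)
                backup = i;     /* first hash-eligible path as fallback */
            if (!flow_saddr || nh[i].saddr == flow_saddr)
                return i;       /* path matches the flowi4 saddr */
        }
        return backup;
    }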
2025-04-28  net: ti: icssg-prueth: Add ICSSG FW Stats  (MD Danish Anwar)
The ICSSG firmware maintains a set of stats called PA_STATS. Currently the driver only dumps 4 stats; add support for dumping more of them. The offsets for the different stats are defined as macros in the icssg_switch_map.h file. All the offsets are for Slice0; the Slice1 offsets are the Slice0 offsets + 4. The offset calculation is taken care of while reading the stats in emac_update_hardware_stats(). The statistics are documented in Documentation/networking/device_drivers/icssg_prueth.rst

Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: MD Danish Anwar <danishanwar@ti.com>
Link: https://patch.msgid.link/20250424095316.2643573-1-danishanwar@ti.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  tools/Makefile: Add ynl target  (Joe Damato)
Add targets to build, clean, and install ynl headers, libynl.a, and python tooling. Signed-off-by: Joe Damato <jdamato@fastly.com> Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Link: https://patch.msgid.link/20250423204647.190784-1-jdamato@fastly.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  rtase: Modify the format specifier in snprintf to %u  (Justin Lai)
Modify the format specifier in snprintf to %u. Signed-off-by: Justin Lai <justinlai0215@realtek.com> Reviewed-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250425064057.30035-1-justinlai0215@realtek.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  Merge branch 'phase-out-hybrid-pci-devres-api'  (Jakub Kicinski)
Philipp Stanner says:

====================
Phase out hybrid PCI devres API

Fixes a number of minor issues with the usage of the PCI API in net. Notably, it replaces calls to the sometimes-managed pci_request_regions() with the always-managed pcim_request_all_regions(), enabling us to remove that hybrid functionality from PCI.
====================

Link: https://patch.msgid.link/20250425085740.65304-2-phasta@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: thunder_bgx: Don't disable PCI device manually  (Philipp Stanner)
thunder_bgx's PCI device is enabled with pcim_enable_device(), a managed devres function which ensures that the device gets enabled on driver detach automatically. Remove the calls to pci_disable_device(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250425085740.65304-10-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: thunder_bgx: Use pure PCI devres API  (Philipp Stanner)
The currently used function pci_request_regions() is one of the problematic "hybrid devres" PCI functions, which are sometimes managed through devres, and sometimes not (depending on whether pci_enable_device() or pcim_enable_device() has been called before). The PCI subsystem wants to remove this behavior and, therefore, needs to port all users to functions that don't have this problem. Furthermore, the PCI function being managed implies that it's not necessary to call pci_release_regions() manually. Remove the calls to pci_release_regions(). Replace pci_request_regions() with pcim_request_all_regions(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250425085740.65304-9-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
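The conversion pattern used across this series looks roughly like the following sketch (error handling abbreviated; DRV_NAME is a placeholder):

    err = pcim_enable_device(pdev);
    if (err)
        return err;

    /* was: err = pci_request_regions(pdev, DRV_NAME);
     *      ...
     *      pci_release_regions(pdev);      (on error/remove paths) */
    err = pcim_request_all_regions(pdev, DRV_NAME);
    if (err)
        return err;
    /* No pci_release_regions() needed: devres releases the regions when
     * the driver detaches. */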
2025-04-28  net: mdio: thunder: Use pure PCI devres API  (Philipp Stanner)
The currently used function pci_request_regions() is one of the problematic "hybrid devres" PCI functions, which are sometimes managed through devres, and sometimes not (depending on whether pci_enable_device() or pcim_enable_device() has been called before). The PCI subsystem wants to remove this behavior and, therefore, needs to port all users to functions that don't have this problem. Furthermore, the PCI function being managed implies that it's not necessary to call pci_release_regions() manually. Remove the calls to pci_release_regions(). Replace pci_request_regions() with pcim_request_all_regions(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250425085740.65304-8-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: ethernet: sis900: Use pure PCI devres API  (Philipp Stanner)
The currently used function pci_request_regions() is one of the problematic "hybrid devres" PCI functions, which are sometimes managed through devres, and sometimes not (depending on whether pci_enable_device() or pcim_enable_device() has been called before). The PCI subsystem wants to remove this behavior and, therefore, needs to port all users to functions that don't have this problem. Replace pci_request_regions() with pcim_request_all_regions(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Acked-by: Daniele Venzano <venza@brownhat.org> Link: https://patch.msgid.link/20250425085740.65304-7-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: ethernet: natsemi: Use pure PCI devres API  (Philipp Stanner)
The currently used function pci_request_regions() is one of the problematic "hybrid devres" PCI functions, which are sometimes managed through devres, and sometimes not (depending on whether pci_enable_device() or pcim_enable_device() has been called before). The PCI subsystem wants to remove this behavior and, therefore, needs to port all users to functions that don't have this problem. Replace pci_request_regions() with pcim_request_all_regions(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250425085740.65304-6-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: tulip: Use pure PCI devres API  (Philipp Stanner)
The currently used function pci_request_regions() is one of the problematic "hybrid devres" PCI functions, which are sometimes managed through devres, and sometimes not (depending on whether pci_enable_device() or pcim_enable_device() has been called before). The PCI subsystem wants to remove this behavior and, therefore, needs to port all users to functions that don't have this problem. Replace pci_request_regions() with pcim_request_all_regions(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250425085740.65304-5-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: octeontx2: Use pure PCI devres API  (Philipp Stanner)
The currently used function pci_request_regions() is one of the problematic "hybrid devres" PCI functions, which are sometimes managed through devres, and sometimes not (depending on whether pci_enable_device() or pcim_enable_device() has been called before). The PCI subsystem wants to remove this behavior and, therefore, needs to port all users to functions that don't have this problem. Furthermore, the PCI function being managed implies that it's not necessary to call pci_release_regions() manually. Remove the calls to pci_release_regions(). Replace pci_request_regions() with pcim_request_all_regions(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Link: https://patch.msgid.link/20250425085740.65304-4-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: prestera: Use pure PCI devres API  (Philipp Stanner)
The currently used function pci_request_regions() is one of the problematic "hybrid devres" PCI functions, which are sometimes managed through devres, and sometimes not (depending on whether pci_enable_device() or pcim_enable_device() has been called before). The PCI subsystem wants to remove this behavior and, therefore, needs to port all users to functions that don't have this problem. Furthermore, the PCI function being managed implies that it's not necessary to call pci_release_regions() manually. Remove the calls to pci_release_regions(). Replace pci_request_regions() with pcim_request_all_regions(). Signed-off-by: Philipp Stanner <phasta@kernel.org> Acked-by: Elad Nachman <enachman@marvell.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20250425085740.65304-3-phasta@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  Merge branch 'virtio-net-disable-delayed-refill-when-pausing-rx'  (Jakub Kicinski)
Bui Quang Minh says:

====================
virtio-net: disable delayed refill when pausing rx

Hi everyone,

This only includes the selftest for the virtio-net deadlock bug. The fix commit has been applied already.

Link: https://lore.kernel.org/virtualization/174537302875.2111809.8543884098526067319.git-patchwork-notify@kernel.org/T/
====================

Link: https://patch.msgid.link/20250425071018.36078-1-minhquangbui99@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  selftests: net: add a virtio_net deadlock selftest  (Bui Quang Minh)
The selftest reproduces the deadlock scenario when binding/unbinding XDP program, XDP socket, rx ring resize on virtio_net interface. Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Link: https://patch.msgid.link/20250425071018.36078-5-minhquangbui99@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  selftests: net: retry when bind returns EBUSY in xdp_helper  (Bui Quang Minh)
When binding the XDP socket, we may get EBUSY because the deferred destructor of XDP socket in previous test has not been executed yet. If that is the case, just sleep and retry some times. Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Link: https://patch.msgid.link/20250425071018.36078-4-minhquangbui99@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
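A sketch of the retry loop in the test helper's terms; the limits and helper name are illustrative, not the values used by xdp_helper:

    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Retry bind() while the previous XDP socket's deferred destruction
     * may still hold the queue. */
    static int bind_with_retry(int fd, const struct sockaddr *addr, socklen_t len)
    {
        int i, ret;

        for (i = 0; i < 10; i++) {
            ret = bind(fd, addr, len);
            if (ret == 0 || errno != EBUSY)
                break;
            sleep(1);
        }
        return ret;
    }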
2025-04-28  selftests: net: add flag to force zerocopy mode in xdp_helper  (Bui Quang Minh)
This commit adds an optional -z flag to xdp_helper. When this flag is provided, the XDP socket binding is forced to be in zerocopy mode. Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Link: https://patch.msgid.link/20250425071018.36078-3-minhquangbui99@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  selftests: net: move xdp_helper to net/lib  (Bui Quang Minh)
Move xdp_helper to net/lib to make it easier for other selftests to use the helper. Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Link: https://patch.msgid.link/20250425071018.36078-2-minhquangbui99@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  Merge branch 'veth-qdisc-backpressure-and-qdisc-check-refactor'  (Jakub Kicinski)
Jesper Dangaard Brouer says:

====================
veth: qdisc backpressure and qdisc check refactor

This patch series addresses TX drops seen on veth devices under load, particularly when using threaded NAPI, which is our setup in production.

The root cause is that the NAPI consumer often runs on a different CPU than the producer. Combined with scheduling delays or simply slower consumption, this increases the chance that the ptr_ring fills up before packets are drained, resulting in drops from veth_xmit() (ndo_start_xmit()).

To make this easier to reproduce, we’ve created a script that sets up a test scenario using network namespaces. The script inserts 1000 iptables rules in the consumer namespace to slow down packet processing and amplify the issue.

Reproducer script:
https://github.com/xdp-project/xdp-project/blob/main/areas/core/veth_setup01_NAPI_TX_drops.sh

This series first introduces a helper to detect no-queue qdiscs and then uses it in the veth driver to conditionally apply qdisc-level backpressure when a real qdisc is attached. The behavior is off by default and opt-in, ensuring minimal impact and easy activation.

v6: https://lore.kernel.org/174549933665.608169.392044991754158047.stgit@firesoul
v5: https://lore.kernel.org/174489803410.355490.13216831426556849084.stgit@firesoul
v4: https://lore.kernel.org/174472463778.274639.12670590457453196991.stgit@firesoul
v3: https://lore.kernel.org/174464549885.20396.6987653753122223942.stgit@firesoul
v2: https://lore.kernel.org/174412623473.3702169.4235683143719614624.stgit@firesoul
RFC-v1: https://lore.kernel.org/174377814192.3376479.16481605648460889310.stgit@firesoul
====================

Link: https://patch.msgid.link/174559288731.827981.8748257839971869213.stgit@firesoul
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  veth: apply qdisc backpressure on full ptr_ring to reduce TX drops  (Jesper Dangaard Brouer)
In production, we're seeing TX drops on veth devices when the ptr_ring fills up. This can occur when NAPI mode is enabled, though it's relatively rare. However, with threaded NAPI - which we use in production - the drops become significantly more frequent.

The underlying issue is that with threaded NAPI, the consumer often runs on a different CPU than the producer. This increases the likelihood of the ring filling up before the consumer gets scheduled, especially under load, leading to drops in veth_xmit() (ndo_start_xmit()).

This patch introduces backpressure by returning NETDEV_TX_BUSY when the ring is full, signaling the qdisc layer to requeue the packet. The txq (netdev queue) is stopped in this condition and restarted once veth_poll() drains entries from the ring, ensuring coordination between NAPI and qdisc.

Backpressure is only enabled when a qdisc is attached. Without a qdisc, the driver retains its original behavior - dropping packets immediately when the ring is full. This avoids unexpected behavior changes in setups without a configured qdisc. With a qdisc in place (e.g. fq, sfq) this allows Active Queue Management (AQM) to fairly schedule packets across flows and reduce collateral damage from elephant flows.

A known limitation of this approach is that the full ring sits in front of the qdisc layer, effectively forming a FIFO buffer that introduces base latency. While AQM still improves fairness and mitigates flow dominance, the latency impact is measurable.

In hardware drivers, this issue is typically addressed using BQL (Byte Queue Limits), which tracks in-flight bytes needed based on physical link rate. However, for virtual drivers like veth, there is no fixed bandwidth constraint - the bottleneck is CPU availability and the scheduler's ability to run the NAPI thread. It is unclear how effective BQL would be in this context.

This patch serves as a first step toward addressing TX drops. Future work may explore adapting a BQL-like mechanism to better suit virtual devices like veth.

Reported-by: Yan Zhai <yan@cloudflare.com>
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
Reviewed-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Link: https://patch.msgid.link/174559294022.827981.1282809941662942189.stgit@firesoul
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
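A condensed sketch of the backpressure idea, not the actual veth code; my_ring_produce() and my_qdisc_attached() are placeholders for the ring insert and the qdisc check:

    static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        struct netdev_queue *txq;

        txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

        if (my_ring_produce(dev, skb)) {            /* ptr_ring is full */
            if (my_qdisc_attached(txq)) {
                /* Stop the txq and let the qdisc requeue; the NAPI
                 * poll side wakes the queue after draining entries. */
                netif_tx_stop_queue(txq);
                return NETDEV_TX_BUSY;
            }
            /* No qdisc: keep the original behavior and drop. */
            dev_kfree_skb_any(skb);
        }
        return NETDEV_TX_OK;
    }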
2025-04-28  net: sched: generalize check for no-queue qdisc on TX queue  (Jesper Dangaard Brouer)
The "noqueue" qdisc can either be directly attached, or get attached by default if the net_device priv_flags has IFF_NO_QUEUE. In both cases, the allocated Qdisc structure gets its enqueue function pointer reset to NULL by noqueue_init() via noqueue_qdisc_ops. This is a common case for software virtual net_devices.

For these devices with no queue, the transmission path in __dev_queue_xmit() will bypass the qdisc layer and directly invoke the device driver's ndo_start_xmit (via dev_hard_start_xmit). In this mode the device driver is not allowed to ask for packets to be queued (either by returning NETDEV_TX_BUSY or by stopping the TXQ).

The simplest and most reliable way to identify this no-queue case is by checking if enqueue == NULL. The vrf driver currently open-codes this check (!qdisc->enqueue). While functionally correct, this low-level detail is better encapsulated in a dedicated helper for clarity and long-term maintainability.

To make this behavior more explicit and reusable, this patch introduces a new helper: qdisc_txq_has_no_queue(). The helper will also be used by the veth driver in the next patch, which introduces optional qdisc-based backpressure.

This is a non-functional change.

Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
Link: https://patch.msgid.link/174559293172.827981.7583862632045264175.stgit@firesoul
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
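The helper essentially encapsulates that enqueue == NULL check; a sketch of what such a helper looks like (the exact RCU accessor used by the real helper may differ):

    static inline bool my_txq_has_no_queue(const struct netdev_queue *txq)
    {
        const struct Qdisc *qdisc = rcu_access_pointer(txq->qdisc);

        return qdisc->enqueue == NULL;
    }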
2025-04-28  mdio: fix CONFIG_MDIO_DEVRES selects  (Arnd Bergmann)
The newly added rtl9300 driver needs MDIO_DEVRES:

x86_64-linux-ld: drivers/net/mdio/mdio-realtek-rtl9300.o: in function `rtl9300_mdiobus_probe':
mdio-realtek-rtl9300.c:(.text+0x941): undefined reference to `devm_mdiobus_alloc_size'
x86_64-linux-ld: mdio-realtek-rtl9300.c:(.text+0x9e2): undefined reference to `__devm_mdiobus_register'

Since this is a hidden symbol, it needs to be selected by each user, rather than the usual 'depends on'. I see that there are a few other drivers that accidentally use 'depends on', so fix these as well for consistency and to avoid dependency loops.

Fixes: 37f9b2a6c086 ("net: ethernet: Add missing depends on MDIO_DEVRES")
Fixes: 24e31e474769 ("net: mdio: Add RTL9300 MDIO driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Link: https://patch.msgid.link/20250425112819.1645342-1-arnd@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  Merge branch 'net-stmmac-dwmac-loongson-add-loongson-2k3000-support'  (Jakub Kicinski)
Huacai Chen says:

====================
net: stmmac: dwmac-loongson: Add Loongson-2K3000 support

This series adds stmmac driver support for Loongson-2K3000/Loongson-3B6000M, which introduces a new CORE ID (0x12) and a new PCI device ID (0x7a23). The new core reduces the channel count from 8 to 4, but checksum is supported for all channels.
====================

Note that the first patch of the series has been merged separately as commit f438eee2c8c9 ("net: stmmac: dwmac-loongson: Move queue number init to common function")

Link: https://patch.msgid.link/20250424072209.3134762-1-chenhuacai@loongson.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: stmmac: dwmac-loongson: Add new GMAC's PCI device ID support  (Huacai Chen)
Add support for a new GMAC PCI device ID (0x7a23), which is used in Loongson-2K3000/Loongson-3B6000M. The new GMAC device uses an external PHY, so it reuses loongson_gmac_data() like the old GMAC device (0x7a03); the new GMAC device still doesn't support flow control for now.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Tested-by: Henry Chen <chenx97@aosc.io>
Tested-by: Biao Dong <dongbiao@loongson.cn>
Signed-off-by: Baoqi Zhang <zhangbaoqi@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Link: https://patch.msgid.link/20250424072209.3134762-4-chenhuacai@loongson.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: stmmac: dwmac-loongson: Add new multi-chan IP core support  (Huacai Chen)
Add support for a new multi-channel IP core (0x12), which is used in Loongson-2K3000/Loongson-3B6000M. Compared with the 0x10 core, the new 0x12 core reduces the channel count from 8 to 4, but checksum is supported for all channels.

Add a "multichan" flag to loongson_data so that we can simply use an "if (ld->multichan)" condition rather than the complicated condition "if (ld->loongson_id == DWMAC_CORE_MULTICHAN_V1 || ld->loongson_id == DWMAC_CORE_MULTICHAN_V2)".

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Henry Chen <chenx97@aosc.io>
Tested-by: Biao Dong <dongbiao@loongson.cn>
Signed-off-by: Baoqi Zhang <zhangbaoqi@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Link: https://patch.msgid.link/20250424072209.3134762-3-chenhuacai@loongson.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  Merge branch 'net-stmmac-socfpga-1000basex-support-and-cleanups'  (Jakub Kicinski)
Maxime Chevallier says:

====================
net: stmmac: socfpga: 1000BaseX support and cleanups

This small series sorts out 1000BaseX support and does a bit of cleanup for the Lynx conversion.

Patch 1 makes sure that we set the right phy_mode when working in 1000BaseX mode, so that the internal GMII is configured correctly.

Patch 2 removes a check for phy_device upon calling fix_mac_speed(). As the SGMII adapter may be chained to a Lynx PCS, checking for a phy_device to be attached to the netdev before enabling the SGMII adapter doesn't make sense, as we won't have a downstream PHY when using 1000BaseX.

Patch 3 cleans an unused field from the PCS conversion.

v1: https://lore.kernel.org/20250422094701.49798-1-maxime.chevallier@bootlin.com
v2: https://lore.kernel.org/20250423104646.189648-1-maxime.chevallier@bootlin.com
====================

Link: https://patch.msgid.link/20250424071223.221239-1-maxime.chevallier@bootlin.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: stmmac: socfpga: Remove unused pcs-mdiodev field  (Maxime Chevallier)
When dwmac-socfpga was converted to using the Lynx PCS (previously referred to in the driver as the Altera TSE PCS), the lynx_pcs_create_mdiodev() was used to create the pcs instance. As this function didn't exist in the early versions of the series, a local mdiodev object was stored for PCS creation. It was never used, but still made it into the driver, so remove it. Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Link: https://patch.msgid.link/20250424071223.221239-4-maxime.chevallier@bootlin.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: stmmac: socfpga: Don't check for phy to enable the SGMII adapter  (Maxime Chevallier)
The SGMII adapter needs to be enabled for both Cisco SGMII and 1000BaseX operations. It doesn't make sense to check for an attached phydev here, as we simply might not have any, in particular if we're using the 1000BaseX interface mode. Make it so that we only re-enable the SGMII adapter when it's present, and when we use a phy_mode that is handled by said adapter.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Link: https://patch.msgid.link/20250424071223.221239-3-maxime.chevallier@bootlin.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: stmmac: socfpga: Enable internal GMII when using 1000BaseX  (Maxime Chevallier)
Dwmac Socfpga may be used with an instance of a Lynx / Altera TSE PCS, in which case it gains support for 1000BaseX. It appears that the PCS is wired to the MAC through an internal GMII bus. Make sure that we enable the GMII_MII mode for the internal MAC when using 1000BaseX. Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Link: https://patch.msgid.link/20250424071223.221239-2-maxime.chevallier@bootlin.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-28  net: stmmac: dwmac-loongson: Move queue number init to common function  (Huacai Chen)
Currently, the tx and rx queue number initialization is duplicated in loongson_gmac_data() and loongson_gnet_data(), so move it to the common function loongson_default_data(). This is a preparation for later patches. Reviewed-by: Yanteng Si <si.yanteng@linux.dev> Tested-by: Henry Chen <chenx97@aosc.io> Tested-by: Biao Dong <dongbiao@loongson.cn> Signed-off-by: Baoqi Zhang <zhangbaoqi@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
2025-04-28  bonding: assign random address if device address is same as bond  (Hangbin Liu)
This change addresses a MAC address conflict issue in failover scenarios, similar to the problem described in commit a951bc1e6ba5 ("bonding: correct the MAC address for 'follow' fail_over_mac policy").

In fail_over_mac=follow mode, the bonding driver expects the formerly active slave to swap MAC addresses with the newly active slave during failover. However, under certain conditions, two slaves may end up with the same MAC address, which breaks this policy:

1) ip link set eth0 master bond0
   -> bond0 adopts eth0's MAC address (MAC0).

2) ip link set eth1 master bond0
   -> eth1 is added as a backup with its own MAC (MAC1).

3) ip link set eth0 nomaster
   -> eth0 is released and restores its MAC (MAC0).
   -> eth1 becomes the active slave, and bond0 assigns MAC0 to eth1.

4) ip link set eth0 master bond0
   -> eth0 is re-added to bond0, now both eth0 and eth1 have MAC0.

This results in a MAC address conflict and violates the expected behavior of the failover policy.

To fix this, we assign a random MAC address to any newly added slave if its current MAC address matches that of the bond. The original (permanent) MAC address is saved and will be restored when the device is released from the bond.

This ensures that each slave has a unique MAC address during failover transitions, preserving the integrity of the fail_over_mac=follow policy.

Fixes: 3915c1e8634a ("bonding: Add "follow" option to fail_over_mac")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Acked-by: Jay Vosburgh <jv@jvosburgh.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
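A simplified sketch of the enslave-time rule described above (hypothetical helper, not the exact bonding code):

    #include <linux/etherdevice.h>

    /* On enslave, avoid a MAC clash with the bond by moving the new slave
     * to a random address; the permanent address is saved so it can be
     * restored when the slave leaves the bond. */
    static void my_resolve_mac_clash(struct net_device *bond_dev,
                                     struct net_device *slave_dev,
                                     u8 *saved_perm_addr)
    {
        u8 rnd[ETH_ALEN];

        if (!ether_addr_equal(slave_dev->dev_addr, bond_dev->dev_addr))
            return;

        ether_addr_copy(saved_perm_addr, slave_dev->dev_addr);
        eth_random_addr(rnd);   /* locally administered, unicast */
        /* ... apply rnd to slave_dev via the usual MAC change path ... */
    }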
2025-04-25  Merge branch 'io_uring-zcrx-fix-selftests-and-add-new-test-for-rss-ctx'  (Jakub Kicinski)
David Wei says:

====================
io_uring/zcrx: fix selftests and add new test for rss ctx

Update io_uring zero copy receive selftest.

Patch 1 does a requested cleanup to use defer() for undoing ethtool actions during the test and restoring the NIC under test back to its original state.

Patch 2 adds a required call to set hds_thresh to 0. This is needed for the queue API.

Patch 3 adds a new test case for steering into RSS contexts. A real application using io_uring zero copy receive relies on this working to shard work across multiple queues. There seem to be some differences/bugs between steering into RSS contexts and individual queues.
====================

Link: https://patch.msgid.link/20250425022049.3474590-1-dw@davidwei.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-25  io_uring/zcrx: selftests: add test case for rss ctx  (David Wei)
RSS contexts are used to shard work across multiple queues for an application using io_uring zero copy receive. Add a test case checking that steering flows into an RSS context works. Until I add multi-thread support to the selftest binary, this test case only has 1 queue in the RSS context. Signed-off-by: David Wei <dw@davidwei.uk> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250425022049.3474590-4-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-25  io_uring/zcrx: selftests: set hds_thresh to 0  (David Wei)
Setting hds_thresh to 0 is required for queue reset. Signed-off-by: David Wei <dw@davidwei.uk> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250425022049.3474590-3-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-25  io_uring/zcrx: selftests: switch to using defer() for cleanup  (David Wei)
Switch to using defer() for putting the NIC back to the original state prior to running the selftest. Signed-off-by: David Wei <dw@davidwei.uk> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250425022049.3474590-2-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-24  Merge branch 'fix-netdevim-to-correctly-mark-napi-ids'  (Jakub Kicinski)
Joe Damato says:

====================
Fix netdevsim to correctly mark NAPI IDs

This series fixes netdevsim to correctly set the NAPI ID on the skb. This is helpful for writing tests around features that use SO_INCOMING_NAPI_ID.

In addition to the netdevsim fix in patch 1, patches 2 & 3 do some self test refactoring and add a test for NAPI IDs. The test itself (patch 3) introduces a C helper because apparently python doesn't have socket.SO_INCOMING_NAPI_ID.

v3: https://lore.kernel.org/20250418013719.12094-1-jdamato@fastly.com
v2: https://lore.kernel.org/20250417013301.39228-1-jdamato@fastly.com
rfcv1: https://lore.kernel.org/20250329000030.39543-1-jdamato@fastly.com
====================

Link: https://patch.msgid.link/20250424002746.16891-1-jdamato@fastly.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-24  selftests: drv-net: Test that NAPI ID is non-zero  (Joe Damato)
Test that the SO_INCOMING_NAPI_ID of a network file descriptor is non-zero. This ensures that either the core networking stack or, in some cases like netdevsim, the driver correctly sets the NAPI ID. Signed-off-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250424002746.16891-4-jdamato@fastly.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
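A sketch of the check such a C helper performs from userspace; the helper name and output format here are illustrative, not the selftest's actual code:

    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef SO_INCOMING_NAPI_ID
    #define SO_INCOMING_NAPI_ID 56
    #endif

    /* Return 0 if the socket reports a non-zero NAPI ID, -1 otherwise. */
    static int check_napi_id(int fd)
    {
        unsigned int napi_id = 0;
        socklen_t len = sizeof(napi_id);

        if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
            return -1;

        fprintf(stderr, "napi id: %u\n", napi_id);
        return napi_id != 0 ? 0 : -1;   /* 0 means nothing marked the skb */
    }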
2025-04-24  selftests: drv-net: Factor out ksft C helpers  (Joe Damato)
Factor ksft C helpers to a header so they can be used by other C-based tests. Signed-off-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250424002746.16891-3-jdamato@fastly.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-24  netdevsim: Mark NAPI ID on skb in nsim_rcv  (Joe Damato)
Previously, nsim_rcv was not marking the NAPI ID on the skb, leading to applications seeing a napi ID of 0 when using SO_INCOMING_NAPI_ID. To add to the userland confusion, netlink appears to correctly report the NAPI IDs for netdevsim queues but the resulting file descriptor from a call to accept() was reporting a NAPI ID of 0. Signed-off-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250424002746.16891-2-jdamato@fastly.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
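A sketch of the kind of fix described, with the queue and NAPI member names assumed rather than taken from the netdevsim code:

    /* In the netdevsim receive path: mark the NAPI ID on every skb before
     * handing it to the stack, so SO_INCOMING_NAPI_ID sees a real value. */
    skb_mark_napi_id(skb, &rq->napi);   /* from <net/busy_poll.h> */
    netif_rx(skb);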
2025-04-24  tools: ynl: fix the header guard name for OVPN  (Jakub Kicinski)
Thorsten reports that after upgrading system headers from linux-next the YNL build breaks. I typo'ed the header guard, _H is missing. Reported-by: Thorsten Leemhuis <linux@leemhuis.info> Link: https://lore.kernel.org/59ba7a94-17b9-485f-aa6d-14e4f01a7a39@leemhuis.info Fixes: 12b196568a3a ("tools: ynl: add missing header deps") Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Thorsten Leemhuis <linux@leemhuis.info> Reviewed-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250423220231.1035931-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-24  net: ethernet: mtk_wed: annotate RCU release in attach()  (Johannes Berg)
There are some sparse warnings in wifi, and it seems that it's actually possible to annotate a function pointer with __releases(), making the sparse warnings go away. In a way that also serves as documentation that rcu_read_unlock() must be called in the attach method, so add that annotation. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Link: https://patch.msgid.link/20250423150811.456205-2-johannes@sipsolutions.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-24  Merge branch 'tcp-fastopen-observability'  (Jakub Kicinski)
Jeremy Harris says:

====================
tcp: fastopen: observability

Whether TCP Fast Open was used for a connection is not reliably observable by an accepting application when the SYN passed no data.

Fix this by noting during SYN receive processing that an acceptable Fast Open option was used, and provide this to userland via getsockopt TCP_INFO.
====================

Link: https://patch.msgid.link/20250423124334.4916-1-jgh@exim.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-24  tcp: fastopen: pass TFO child indication through getsockopt  (Jeremy Harris)
tcp: fastopen: pass TFO child indication through getsockopt

Note that this uses up the last bit of a field in struct tcp_info.

Signed-off-by: Jeremy Harris <jgh@exim.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Link: https://patch.msgid.link/20250423124334.4916-3-jgh@exim.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>