2025-02-07  net: dsa: rzn1_a5psw: Use of_get_available_child_by_name()  [Biju Das]
Simplify a5psw_probe() by using of_get_available_child_by_name(). While at it, move of_node_put(mdio) inside the if block to avoid code duplication. Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2025-02-07  of: base: Add of_get_available_child_by_name()  [Biju Das]
There are a lot of drivers using of_get_child_by_name() followed by of_device_is_available() to find the available child node by name for a given parent. Provide a helper for these users to simplify the code. Suggested-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
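A minimal before/after sketch of the pattern the new helper collapses (the "mdio" child name and the wrapper functions are illustrative):

	#include <linux/of.h>

	/* Old pattern: look up the child by name, then filter on availability. */
	static struct device_node *get_mdio_old(struct device_node *parent)
	{
		struct device_node *mdio = of_get_child_by_name(parent, "mdio");

		if (mdio && !of_device_is_available(mdio)) {
			of_node_put(mdio);	/* drop the ref on a disabled node */
			mdio = NULL;
		}
		return mdio;
	}

	/* New helper: returns the available child (with a reference held) or NULL. */
	static struct device_node *get_mdio_new(struct device_node *parent)
	{
		return of_get_available_child_by_name(parent, "mdio");
	}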
2025-02-06  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  [Jakub Kicinski]
Tony Nguyen says:

====================
ice: managing MSI-X in driver

Michal Swiatkowski says:

This is another attempt to let the user manage the amount of MSI-X used for each feature in ice. The first one, via the devlink resources API, wasn't accepted upstream, and static MSI-X allocation using devlink resources isn't really user friendly anyway.

This attempt uses a more dynamic approach: "dynamic" across the whole kernel when the platform supports it, and "dynamic" across the driver when not. To achieve that, reuse the global devlink parameters pf_msix_max and pf_msix_min. This fits how ice hardware counts MSI-X: the amount of MSI-X reported on PCI is the whole MSI-X for the card (including MSI-X for VFs). pf_msix_max lets the user statically set how many MSI-X they want on the PF and how many should be reserved for VFs. pf_msix_min sets the minimum number of MSI-X with which the ice driver should probe correctly. The meaning of this field for dynamic vs static allocation:
- on a system with dynamic MSI-X allocation support:
  * allocate pf_msix_min statically, the rest dynamically
- on a system without dynamic MSI-X allocation support:
  * try to allocate pf_msix_max statically; the minimum acceptable result is pf_msix_min

As Jesse and Piotr suggested, pf_msix_max and pf_msix_min can (and probably should) be stored in NVM. This patchset doesn't implement that.

The dynamic (kernel or driver) way means that splitting MSI-X between RDMA and eth when there is an MSI-X shortage isn't correct. It can work when the dynamic part is only on the driver side, but not when it is on the kernel side. Remove this code and move to allocating MSI-X feature by feature. If there is no more MSI-X for a feature, the feature works with fewer MSI-X or is turned off.

There is a regression here. With MSI-X splitting, the user could run RDMA and eth even on a system without enough MSI-X; now only eth will work. RDMA can be turned back on by lowering the number of PF queues and reprobing the RDMA driver.

Example: 72 CPUs; eth, RDMA and flow director (1 MSI-X); 1 MSI-X for OICR on the PF and 1 more for RDMA. The card uses 1 + 72 + 1 + 72 + 1 = 147. With pf_msix_min = 2 and pf_msix_max = 128:
  OICR: 1, eth: 72, flow director: 1, RDMA: 128 - 74 = 54
After changing the number of PF queues to 36 and a devlink reinit:
  OICR: 1, eth: 36, RDMA: 73, flow director: 1
RDMA can also be turned off entirely (implemented in "ice: enable_rdma devlink param"):
  OICR: 1, eth: 72, RDMA: 0 (turned off), flow director: 1

After these changes we have a static base vector for SRIOV (and probably SIOV in the future). The last patch of this series simplifies the VF MSI-X management code based on the static vector. Changing queues using ethtool now also changes MSI-X: when there is enough MSI-X it is always one to one; when there is not, there will be more queues than MSI-X.

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  ice: init flow director before RDMA
  ice: simplify VF MSI-X managing
  ice: enable_rdma devlink param
  ice: treat dyn_allowed only as suggestion
  ice, irdma: move interrupts code to irdma
  ice: get rid of num_lan_msix field
  ice: remove splitting MSI-X between features
  ice: devlink PF MSI-X max and min parameter
  ice: count combined queues using Rx/Tx count
====================

Link: https://patch.msgid.link/20250205185512.895887-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  net: pcs: rzn1-miic: Convert to for_each_available_child_of_node() helper  [Geert Uytterhoeven]
Simplify miic_parse_dt() by using the for_each_available_child_of_node() helper instead of manually skipping unavailable child nodes. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/3e394d4cf8204bcf17b184bfda474085aa8ed0dd.1738771631.git.geert+renesas@glider.be Signed-off-by: Jakub Kicinski <kuba@kernel.org>
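A short sketch of the conversion (the port-parsing body is illustrative):

	struct device_node *child;

	/* Before: iterate all children and skip unavailable ones by hand. */
	for_each_child_of_node(parent, child) {
		if (!of_device_is_available(child))
			continue;
		/* ... parse the port node ... */
	}

	/* After: the helper already skips disabled children. */
	for_each_available_child_of_node(parent, child) {
		/* ... parse the port node ... */
	}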
2025-02-06  net: pcs: rzn1-miic: fill in PCS supported_interfaces  [Russell King (Oracle)]
Populate the PCS supported_interfaces bitmap with the interfaces that this PCS supports. This makes the manual checking in miic_validate() redundant, so remove that. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1tfhYq-003aTm-Nx@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
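A hedged sketch of what filling in supported_interfaces looks like; the exact interface list and the miic_port field name are assumptions based on the modes this driver handles:

	/* Populate the bitmap once at PCS creation so phylink can validate
	 * interface modes generically instead of via a validate() callback.
	 */
	__set_bit(PHY_INTERFACE_MODE_MII, miic_port->pcs.supported_interfaces);
	__set_bit(PHY_INTERFACE_MODE_RMII, miic_port->pcs.supported_interfaces);
	phy_interface_set_rgmii(miic_port->pcs.supported_interfaces);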
2025-02-06  Merge branch 'enic-use-page-pool-api-for-receiving-packets'  [Jakub Kicinski]
John Daley says: ==================== enic: Use Page Pool API for receiving packets Use the Page Pool API for RX. The Page Pool API improves bandwidth and reduces CPU overhead by recycling pages instead of allocating new buffers in the driver. Page pool fragment allocation is also used for smaller MTUs, to allow multiple packets to share pages. RX code was moved to its own file and some refactoring was done beforehand to make the page pool changes more transparent and to simplify the resulting code. ==================== Link: https://patch.msgid.link/20250205235416.25410-1-johndale@cisco.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  enic: remove copybreak tunable  [John Daley]
With the move to the Page Pool API for RX, rx copybreak showed no improvement in host CPU overhead, latency or bandwidth, so the driver no longer makes use of the rx_copybreak setting. This patch removes the ethtool tunable hooks to set and get the rx copybreak since they are now no-ops. Rx copybreak was the only tunable supported, so remove the set and get tunable callbacks altogether. Co-developed-by: Nelson Escobar <neescoba@cisco.com> Signed-off-by: Nelson Escobar <neescoba@cisco.com> Co-developed-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: John Daley <johndale@cisco.com> Link: https://patch.msgid.link/20250205235416.25410-5-johndale@cisco.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  enic: Use the Page Pool API for RX  [John Daley]
The Page Pool API improves bandwidth and reduces CPU overhead by recycling pages instead of allocating new buffers in the driver. Make use of page pool fragment allocation for smaller MTUs so that multiple packets can share a page. For MTUs larger than PAGE_SIZE, adjust the 'order' page parameter so that contiguous pages can be used to receive the larger packets. The RQ descriptor field 'os_buf' is repurposed to hold page pointers allocated from the page pool instead of SKBs. When packets arrive, SKBs are allocated and the page pointers are attached instead of preallocating SKBs. The 'alloc_fail' netdev statistic is incremented when page_pool_dev_alloc() fails. Co-developed-by: Nelson Escobar <neescoba@cisco.com> Signed-off-by: Nelson Escobar <neescoba@cisco.com> Co-developed-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: John Daley <johndale@cisco.com> Link: https://patch.msgid.link/20250205235416.25410-4-johndale@cisco.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
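A sketch of the page pool usage described above, with placeholder sizes and a hypothetical stats counter; not the driver's exact code:

	#include <net/page_pool/helpers.h>

	struct page_pool_params pp_params = {
		.order		= 0,		/* raised when MTU > PAGE_SIZE */
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size	= ring_size,
		.nid		= dev_to_node(dev),
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	/* Refill path: fragment allocation lets small packets share a page. */
	unsigned int offset, len = buf_len;
	struct page *page = page_pool_dev_alloc(pool, &offset, &len);

	if (!page)
		rq->stats.alloc_fail++;	/* hypothetical counter */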
2025-02-06  enic: Simplify RX handler function  [John Daley]
Split up RX handler functions in preparation for moving to a page pool based implementation. No functional changes. Co-developed-by: Nelson Escobar <neescoba@cisco.com> Signed-off-by: Nelson Escobar <neescoba@cisco.com> Co-developed-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: John Daley <johndale@cisco.com> Link: https://patch.msgid.link/20250205235416.25410-3-johndale@cisco.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  enic: Move RX functions to their own file  [John Daley]
Move RX handler code into its own file in preparation for further changes. Some formatting changes were necessary in order to satisfy checkpatch but there were no functional changes. Co-developed-by: Nelson Escobar <neescoba@cisco.com> Signed-off-by: Nelson Escobar <neescoba@cisco.com> Co-developed-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: Satish Kharat <satishkh@cisco.com> Signed-off-by: John Daley <johndale@cisco.com> Link: https://patch.msgid.link/20250205235416.25410-2-johndale@cisco.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  netdev-genl: Elide napi_id when not present  [Joe Damato]
There are at least two cases where a napi_id may not be present and should be elided:
1. Queues could be created, but napi_enable may not have been called yet. In this case, there may be a NAPI but it may not have an ID, and the napi_id output should be elided.
2. TX-only NAPIs currently do not have NAPI IDs. If a TX queue happens to be linked with a TX-only NAPI, elide the NAPI ID from the netlink output, as a NAPI ID of 0 is not useful for users.
Signed-off-by: Joe Damato <jdamato@fastly.com> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20250205193751.297211-1-jdamato@fastly.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
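A minimal sketch of the elision in the netlink fill path (attribute name from the netdev family; the surrounding code is illustrative):

	/* Only emit the attribute when the NAPI exists and has a valid ID;
	 * IDs below MIN_NAPI_ID (including 0) are meaningless to user space.
	 */
	if (napi && napi->napi_id >= MIN_NAPI_ID &&
	    nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID, napi->napi_id))
		return -EMSGSIZE;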
2025-02-06  Merge branch 'io_uring-zero-copy-rx'  [Jakub Kicinski]
David Wei says:

====================
io_uring zero copy rx

This patchset contains net/ patches needed by a new io_uring request implementing zero copy rx into userspace pages, eliminating a kernel to user copy.

We configure a page pool that a driver uses to fill a hw rx queue to hand out user pages instead of kernel pages. Any data that ends up hitting this hw rx queue will thus be dma'd into userspace memory directly, without needing to be bounced through kernel memory. 'Reading' data out of a socket instead becomes a _notification_ mechanism, where the kernel tells userspace where the data is. The overall approach is similar to the devmem TCP proposal.

This relies on hw header/data split, flow steering and RSS to ensure packet headers remain in kernel memory and only desired flows hit a hw rx queue configured for zero copy. Configuring this is outside of the scope of this patchset.

We share netdev core infra with devmem TCP. The main difference is that io_uring is used for the uAPI and the lifetime of all objects is bound to an io_uring instance. Data is 'read' using a new io_uring request type. When done, data is returned via a new shared refill queue. A zero copy page pool refills a hw rx queue from this refill queue directly. Of course, the lifetime of these data buffers is managed by io_uring rather than the networking stack, with different refcounting rules.

This patchset is the first step adding basic zero copy support. We will extend this iteratively with new features e.g. dynamically allocated zero copy areas, THP support, dmabuf support, improved copy fallback, general optimisations and more.

In terms of netdev support, we're first targeting Broadcom bnxt. Patches aren't included since Taehee Yoo has already sent a more comprehensive patchset adding support in [1]. Google gve should already support this, and Mellanox mlx5 support is WIP pending driver changes.

===========
Performance
===========

Note: Comparison with epoll + TCP_ZEROCOPY_RECEIVE isn't done yet.

Test setup:
* AMD EPYC 9454
* Broadcom BCM957508 200G
* Kernel v6.11 base [2]
* liburing fork [3]
* kperf fork [4]
* 4K MTU
* Single TCP flow

With application thread + net rx softirq pinned to _different_ cores:

+-----------+-------------------+
| epoll     | io_uring          |
|-----------|-------------------|
| 82.2 Gbps | 116.2 Gbps (+41%) |
+-----------+-------------------+

Pinned to the _same_ core:

+-----------+-------------------+
| epoll     | io_uring          |
|-----------|-------------------|
| 62.6 Gbps | 80.9 Gbps (+29%)  |
+-----------+-------------------+

=====
Links
=====

Broadcom bnxt support:
[1]: https://lore.kernel.org/20241003160620.1521626-8-ap420073@gmail.com

Linux kernel branch including io_uring bits:
[2]: https://github.com/isilence/linux.git zcrx/v13

liburing for testing:
[3]: https://github.com/isilence/liburing.git zcrx/next

kperf for testing:
[4]: https://git.kernel.dk/kperf.git
====================

Link: https://patch.msgid.link/20250204215622.695511-1-dw@davidwei.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  net: add helpers for setting a memory provider on an rx queue  [David Wei]
Add helpers that properly prepare or remove a memory provider for an rx queue and then restart the queue. Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-11-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  net: page_pool: add memory provider helpers  [Pavel Begunkov]
Add helpers for memory providers to interact with page pools. net_mp_niov_{set,clear}_page_pool() serve to [dis]associate a net_iov with a page pool. If used, the memory provider is responsible for matching "set" calls with "clear" once a net_iov is no longer going to be used by a page pool, when changing page pools, etc. Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-10-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
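A sketch of the expected pairing from a provider's point of view:

	/* Associate the net_iov with the pool for as long as it circulates. */
	net_mp_niov_set_page_pool(pool, niov);

	/* ... the net_iov flows through the rx path and the refill ring ... */

	/* Must be paired with the "set" above before the net_iov is retired
	 * or moved to a different page pool.
	 */
	net_mp_niov_clear_page_pool(niov);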
2025-02-06  net: prepare for non devmem TCP memory providers  [Pavel Begunkov]
There are a good number of places in generic paths that assume the only page pool memory provider is devmem TCP. As we want to reuse the net_iov and provider infrastructure, we need to patch it up and explicitly check the provider type where we branch into devmem TCP code. Reviewed-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-9-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  net: page_pool: add a mp hook to unregister_netdevice*  [Pavel Begunkov]
Devmem TCP needs a hook in unregister_netdevice_many_notify() to maintain the set of queues it's bound to, i.e. ->bound_rxqs. Instead of devmem sticking directly out of the generic path, add a mp function. Reviewed-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Mina Almasry <almasrymina@google.com> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-8-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  net: page_pool: add callback for mp info printing  [Pavel Begunkov]
Add a mandatory callback that prints information about the memory provider to netlink. Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-7-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  netdev: add io_uring memory provider info  [David Wei]
Add a nested attribute for io_uring memory provider info. For now it is empty and its presence indicates that a particular page pool or queue has an io_uring memory provider attached.

$ ./cli.py --spec netlink/specs/netdev.yaml --dump page-pool-get
[{'id': 80, 'ifindex': 2, 'inflight': 64, 'inflight-mem': 262144, 'napi-id': 525},
 {'id': 79, 'ifindex': 2, 'inflight': 320, 'inflight-mem': 1310720, 'io_uring': {}, 'napi-id': 525},
 ...

$ ./cli.py --spec netlink/specs/netdev.yaml --dump queue-get
[{'id': 0, 'ifindex': 1, 'type': 'rx'},
 {'id': 0, 'ifindex': 1, 'type': 'tx'},
 {'id': 0, 'ifindex': 2, 'napi-id': 513, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 514, 'type': 'rx'},
 ...
 {'id': 12, 'ifindex': 2, 'io_uring': {}, 'napi-id': 525, 'type': 'rx'},
 ...

Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-6-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  net: page_pool: create hooks for custom memory providers  [Pavel Begunkov]
A spin-off from the original page pool memory providers patch by Jakub, which allows extending page pools with custom allocators. One such provider is devmem TCP, and the other is the io_uring zerocopy added in the following patches. Link: https://lore.kernel.org/netdev/20230707183935.997267-7-kuba@kernel.org/ Co-developed-by: Jakub Kicinski <kuba@kernel.org> # initial mp proposal Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-5-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
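Roughly, the hook set that this and the surrounding patches describe takes the following shape; a sketch assembled from the commit messages, not the verbatim header:

	struct memory_provider_ops {
		netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
		bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
		int (*init)(struct page_pool *pool);
		void (*destroy)(struct page_pool *pool);
		/* added by the netlink-info and unregister patches above */
		int (*nl_fill)(void *mp_priv, struct sk_buff *rsp,
			       struct netdev_rx_queue *rxq);
		void (*uninstall)(void *mp_priv, struct netdev_rx_queue *rxq);
	};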
2025-02-06  net: generalise net_iov chunk owners  [Pavel Begunkov]
Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner, which serves as a useful abstraction to share data and provide a context. However, it's too devmem specific, and we want to reuse it for other memory providers; for that we need to decouple net_iov from devmem. Make net_iov point to a new base structure called net_iov_area, which dmabuf_genpool_chunk_owner extends. Reviewed-by: Mina Almasry <almasrymina@google.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-4-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
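A sketch of the decoupling, following the structure names in the message (field layout is approximate):

	struct net_iov_area {
		struct net_iov *niovs;		/* array backing this area */
		size_t num_niovs;
		unsigned long base_virtual;	/* offset into the backing store */
	};

	/* devmem TCP now extends the generic area instead of owning net_iov. */
	struct dmabuf_genpool_chunk_owner {
		struct net_iov_area area;
		struct net_devmem_dmabuf_binding *binding;
	};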
2025-02-06  net: prefix devmem specific helpers  [Pavel Begunkov]
Add prefixes to all helpers that are specific to devmem TCP, i.e. net_iov_binding[_id]. Reviewed-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Mina Almasry <almasrymina@google.com> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-3-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  net: page_pool: don't cast mp param to devmem  [Pavel Begunkov]
page_pool_check_memory_provider() is a generic path and shouldn't assume anything about the actual type of the memory provider argument. It's fine while devmem is the only provider, but cast away the devmem specific binding types to avoid confusion. Reviewed-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Mina Almasry <almasrymina@google.com> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: David Wei <dw@davidwei.uk> Link: https://patch.msgid.link/20250204215622.695511-2-dw@davidwei.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Jakub Kicinski]
Cross-merge networking fixes after downstream PR (net-6.14-rc2). No conflicts or adjacent changes. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  tools: ynl: add all headers to makefile deps  [Jakub Kicinski]
The Makefile.deps lists uAPI headers to make the build work when system headers are older than the in-tree headers. The problem doesn't occur for new headers, because system headers are not there at all. But the out-of-tree YNL clone on GitHub also uses this file to identify header dependencies, and one day the system headers will exist and will get out of date. So let's add the headers we missed. I don't think this is a fix, but FWIW the commits which added the missing headers are: commit 04e65df94b31 ("netlink: spec: add shaper YAML spec") and commit 49922401c219 ("ethtool: separate definitions that are gonna be generated"). Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Link: https://patch.msgid.link/20250205173352.446704-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-06  Merge tag 'net-6.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Linus Torvalds]
Pull networking fixes from Paolo Abeni:
"Interestingly the recent kmemleak improvements allowed our CI to catch a couple of percpu leaks addressed here. We (mostly Jakub, to be accurate) are working to increase review coverage over the net code-base by tweaking the MAINTAINERS entries.

Current release - regressions:
- core: harmonize tstats and dstats
- ipv6: fix dst refleaks in rpl, seg6 and ioam6 lwtunnels
- eth: tun: revert fix group permission check
- eth: stmmac: revert "specify hardware capability value when FIFO size isn't specified"

Previous releases - regressions:
- udp: gso: do not drop small packets when PMTU reduces
- rxrpc: fix race in call state changing vs recvmsg()
- eth: ice: fix Rx data path for heavy 9k MTU traffic
- eth: vmxnet3: fix tx queue race condition with XDP

Previous releases - always broken:
- sched: pfifo_tail_enqueue: drop new packet when sch->limit == 0
- ethtool: ntuple: fix rss + ring_cookie check
- rxrpc: fix the rxrpc_connection attend queue handling

Misc:
- recognize Kuniyuki Iwashima as a maintainer"

* tag 'net-6.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (34 commits)
  Revert "net: stmmac: Specify hardware capability value when FIFO size isn't specified"
  MAINTAINERS: add a sample ethtool section entry
  MAINTAINERS: add entry for ethtool
  rxrpc: Fix race in call state changing vs recvmsg()
  rxrpc: Fix call state set to not include the SERVER_SECURING state
  net: sched: Fix truncation of offloaded action statistics
  tun: revert fix group permission check
  selftests/tc-testing: Add a test case for qdisc_tree_reduce_backlog()
  netem: Update sch->q.qlen before qdisc_tree_reduce_backlog()
  selftests/tc-testing: Add a test case for pfifo_head_drop qdisc when limit==0
  pfifo_tail_enqueue: Drop new packet when sch->limit == 0
  selftests: mptcp: connect: -f: no reconnect
  net: rose: lock the socket in rose_bind()
  net: atlantic: fix warning during hot unplug
  rxrpc: Fix the rxrpc_connection attend queue handling
  net: harmonize tstats and dstats
  selftests: drv-net: rss_ctx: don't fail reconfigure test if queue offset not supported
  selftests: drv-net: rss_ctx: add missing cleanup in queue reconfigure
  ethtool: ntuple: fix rss + ring_cookie check
  ethtool: rss: fix hiding unsupported fields in dumps
  ...
2025-02-06Revert "net: stmmac: Specify hardware capability value when FIFO size isn't ↵Russell King (Oracle)
specified" This reverts commit 8865d22656b4, which caused breakage for platforms which are not using xgmac2 or gmac4. Only these two cores have the capability of providing the FIFO sizes from hardware capability fields (which are provided in priv->dma_cap.[tr]x_fifo_size.) All other cores can not, which results in these two fields containing zero. We also have platforms that do not provide a value in priv->plat->[tr]x_fifo_size, resulting in these also being zero. This causes the new tests introduced by the reverted commit to fail, and produce e.g.: stmmaceth f0804000.eth: Can't specify Rx FIFO size An example of such a platform which fails is QEMU's npcm750-evb. This uses dwmac1000 which, as noted above, does not have the capability to provide the FIFO sizes from hardware. Therefore, revert the commit to maintain compatibility with the way the driver used to work. Reported-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/4e98f967-f636-46fb-9eca-d383b9495b86@roeck-us.net Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Tested-by: Steven Price <steven.price@arm.com> Fixes: 8865d22656b4 ("net: stmmac: Specify hardware capability value when FIFO size isn't specified") Link: https://patch.msgid.link/E1tfeyR-003YGJ-Gb@rmk-PC.armlinux.org.uk Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  eth: fbnic: set IFF_UNICAST_FLT to avoid enabling promiscuous mode when adding unicast addrs  [Alexander Duyck]
I realized when we were adding unicast addresses we were enabling promiscuous mode. I did a bit of digging and realized we had overlooked setting the driver private flag to indicate we supported unicast filtering. The example below shows the table with 00deadbeef01 as the main NIC address, and 5 additional addresses in the 00deadbeefX0 format.

# cat $dbgfs/mac_addr
Idx S TCAM Bitmap       Addr/Mask
----------------------------------
00 0 00000000,00000000 000000000000 000000000000
01 0 00000000,00000000 000000000000 000000000000
02 0 00000000,00000000 000000000000 000000000000
...
24 0 00000000,00000000 000000000000 000000000000
25 1 00100000,00000000 00deadbeef50 000000000000
26 1 00100000,00000000 00deadbeef40 000000000000
27 1 00100000,00000000 00deadbeef30 000000000000
28 1 00100000,00000000 00deadbeef20 000000000000
29 1 00100000,00000000 00deadbeef10 000000000000
30 1 00100000,00000000 00deadbeef01 000000000000
31 0 00000000,00000000 000000000000 000000000000

Before this change, rule 31 would be active. With this change it correctly sticks to just the unicast filters. Signed-off-by: Alexander Duyck <alexanderduyck@meta.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250204010038.1404268-2-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
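The fix itself is essentially a one-liner at netdev setup time (sketch):

	/* Advertise unicast filtering so the core adds secondary addresses
	 * via the filter table instead of falling back to promiscuous mode.
	 */
	netdev->priv_flags |= IFF_UNICAST_FLT;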
2025-02-06  eth: fbnic: add MAC address TCAM to debugfs  [Alexander Duyck]
Add read only access to the 32-entry MAC address TCAM via debugfs. BMC filtering shares the same table so this is quite useful to access during debug. See next commit for an example output. Signed-off-by: Alexander Duyck <alexanderduyck@meta.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250204010038.1404268-1-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  tools: ynl-gen: support limits using definitions  [Jakub Kicinski]
Support using defines / constants in integer checks. Carolina will need this for rate API extensions. Reported-by: Carolina Jubran <cjubran@nvidia.com> Link: https://lore.kernel.org/1e886aaf-e1eb-4f1a-b7ef-f63b350a3320@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Link: https://patch.msgid.link/20250203215510.1288728-2-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  tools: ynl-gen: don't output external constants  [Jakub Kicinski]
A definition with a "header" property is an "external" definition for C code, as in it is defined already in another C header file. Other languages will need the exact value but C codegen should not recreate it. So don't output those definitions in the uAPI header. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Link: https://patch.msgid.link/20250203215510.1288728-1-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  MAINTAINERS: add a sample ethtool section entry  [Jakub Kicinski]
I feel like we don't do a good enough job of keeping authors of driver APIs around. The ethtool code base was very nicely compartmentalized by Michal. Establish a precedent of creating MAINTAINERS entries for "sections" of the ethtool API. Use Andrew and cable test as a sample entry. The entry should ideally cover 3 elements: a core file, test(s), and keywords. The last one is important because we intend the entries to cover core code *and* reviews of drivers implementing the given API! Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250204215750.169249-1-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  MAINTAINERS: add entry for ethtool  [Jakub Kicinski]
Michal did an amazing job converting ethtool to Netlink, but never added an entry to MAINTAINERS for himself. Create a formal entry so that we can delegate (portions of) this code to folks. Over the last 3 years the majority of the reviews have been done by Andrew and me. I suppose Michal didn't want to be on the receiving end of the flood of patches. Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Link: https://patch.msgid.link/20250204215729.168992-1-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  Merge branch 'support-one-ptp-device-per-hardware-clock'  [Paolo Abeni]
Tariq Toukan says:

====================
Support one PTP device per hardware clock

This series contains two features from Jianbo, followed by simple cleanups. Patches 1-9 by Jianbo add support for one PTP device per hardware clock, described below [1]. Patches 10-12 by Jianbo add support for 200Gbps per-lane link modes in the kernel and the mlx5 driver. Patches 13-15 are simple cleanups by Gal and Carolina.

[1] A PHC (PTP hardware clock) is normally shared by multiple functions (PF/VF/SF). The mlx5 driver currently creates a separate PTP device for each network interface that shares one PHC. A PHC can be configured to work in free running mode or real time mode. In this series, only one PTP device is created for the shared PHC when it is running in real time mode. To support this feature:
* Firmware needs to support clock identity. When functions share a PHC, the clock identities they query are the same.
* The driver dynamically allocates an mlx5_clock to represent a PHC.
* A new devcom component is added for the hardware clock. Functions are grouped by identity, and one mlx5_clock is allocated and shared by the functions with the same identity.
* When the PTP device accesses the PHC through its callbacks, the first function in the clock devcom list is selected to send commands to firmware.
* The PPS IN event is armed on one function. It should be re-armed on another one when the current one is unloaded.
====================

Link: https://patch.msgid.link/20250203213516.227902-1-tariqt@nvidia.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5e: Avoid WARN_ON when configuring MQPRIO with HTB offload enabled  [Carolina Jubran]
When attempting to enable MQPRIO while HTB offload is already configured, the driver currently returns `-EINVAL` and triggers a `WARN_ON`, leading to an unnecessary call trace. Update the code to handle this case more gracefully by returning `-EOPNOTSUPP` instead, while also providing a helpful user message. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Yael Chemla <ychemla@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5e: Remove unused mlx5e_tc_flow_action struct  [Gal Pressman]
Commit 67efaf45930d ("net/mlx5e: TC, Remove CT action reordering") removed the usage of mlx5e_tc_flow_action struct, remove the struct as well. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Remove stray semicolon in LAG port selection table creation  [Gal Pressman]
Remove the stray semicolon in the mlx5_ldev_for_each_reverse() loop. Signed-off-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5e: Support FEC settings for 200G per lane link modes  [Jianbo Liu]
Add support to show and configure FEC via ethtool for 200G-per-lane link modes. The RS encoding setting is mapped, and can be overridden to FEC_RS_544_514_INTERLEAVED_QUAD for these modes. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Shahar Shitrit <shshitrit@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Add support for 200Gbps per lane link modes  [Jianbo Liu]
This patch exposes new link modes using 200Gbps per lane, including 200G, 400G and 800G modes. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Shahar Shitrit <shshitrit@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  ethtool: Add support for 200Gbps per lane link modes  [Jianbo Liu]
Define 200G, 400G and 800G link modes using 200Gbps per lane. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Shahar Shitrit <shshitrit@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Generate PPS IN event on new function for shared clock  [Jianbo Liu]
As a specific function (mdev) is chosen to send the MTPPSE command to firmware, the event is generated only on that function. When that function is unloaded, the PPS event can't be forwarded to the PTP device, even when there are other functions in the group and the PTP device is not destroyed. To resolve this problem, we need to send MTPPSE again from a new function, and dis-arm the event on the old function after that. PPS events are handled by an EQ notifier. The async EQs and notifiers are destroyed in mlx5_eq_table_destroy(), which is called before mlx5_cleanup_clock(). During the period between mlx5_eq_table_destroy() and mlx5_cleanup_clock(), events can't be handled. To avoid event loss, add mlx5_clock_unload() in mlx5_unload() to arm the event on another available function, and mlx5_clock_load() in mlx5_load() for symmetry. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Support one PTP device per hardware clock  [Jianbo Liu]
Currently, the mlx5 driver exposes a PTP device for each network interface, resulting in multiple device nodes representing the same underlying PHC (PTP hardware clock). This causes problems if it ends up trying to synchronize to itself. For instance, when ptp4l operates on multiple interfaces following different masters, phc2sys attempts to synchronize them in automatic mode. A PHC can be configured to work in free running mode or real time mode. All functions can access it directly. In this patch, we create one PTP device for each PHC when it's running in real time mode. All the functions share the same PTP device if the clock identities they query are the same, and they are already grouped by devcom in the previous commit. The first mdev in the peer list is chosen when sending MTPPS/MTUTC/MTPPSE/MRTCQ to firmware. Since a function can be unloaded at any time, we need a mutex lock to protect the mdev pointer used in PTP and PPS callbacks. Besides, a new one should be picked from the peer list when the current one is not available. The clock info, which is used by IB, is shared by all the interfaces using the same hardware clock. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Move PPS notifier and out_work to clock_state  [Jianbo Liu]
The PPS notifier is currently in mlx5_clock, and mlx5_clock can be shared in a later patch, so the notifier should be registered for each device to avoid missing any event. Besides, the out_work is scheduled by the PPS out event, which is triggered only when the device is in free running mode. So, both are moved to mlx5_core_dev's clock_state. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Add devcom component for the clock shared by functions  [Jianbo Liu]
Add a new devcom component for the hardware clock. When it is running in real time mode, the functions are grouped by the identity they query. According to the firmware document, the clock identity size is 64 bits, so it's safe to memcpy it to the component key, as the key size is also 64 bits. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
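A hedged sketch of the grouping; the component name, the devcom call details, and the variable holding the handle are assumptions:

	u64 key;

	/* The clock identity is 64 bits, exactly the size of a devcom key. */
	memcpy(&key, clock_identity, sizeof(key));
	clock_dc = mlx5_devcom_register_component(mdev->priv.devc,
						  MLX5_DEVCOM_SHARED_CLOCK,
						  key, NULL, mdev);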
2025-02-06  net/mlx5: Change clock in mlx5_core_dev to mlx5_clock pointer  [Jianbo Liu]
Change the clock member in mlx5_core_dev to a pointer, so it can point to a clock shared by multiple functions in a later patch. For now, each function has its own clock, so mdev in mlx5_clock_priv is the back pointer to the function. Later it points to one (normally the first one) of the multiple functions sharing the same clock. Change mlx5_init_clock() to return an error if mlx5_clock is not allocated. Besides, a null clock is defined and used when the hardware clock is not supported, so the clock pointer always points to something valid. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
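A sketch of the shape of this change, with names following the commit message:

	struct mlx5_core_dev {
		/* ... */
		struct mlx5_clock *clock;	/* was: struct mlx5_clock clock; */
	};

	struct mlx5_clock_priv {
		struct mlx5_clock clock;
		struct mlx5_core_dev *mdev;	/* back pointer; later one of the
						 * functions sharing the clock */
	};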
2025-02-06  net/mlx5: Add API to get mlx5_core_dev from mlx5_clock  [Jianbo Liu]
The mdev is currently calculated directly from mlx5_clock, as the clock is one of the fields in mlx5_core_dev. Move this to a function so it can be easily changed in the next patch. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
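A sketch of the accessor at this point in the series, before the clock becomes a shared pointer (the function name is an assumption):

	static struct mlx5_core_dev *mlx5_clock_mdev_get(struct mlx5_clock *clock)
	{
		/* Today a simple container_of(); after the pointer conversion
		 * this consults the shared-clock state instead.
		 */
		return container_of(clock, struct mlx5_core_dev, clock);
	}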
2025-02-06  net/mlx5: Add init and destruction functions for a single HW clock  [Jianbo Liu]
Move hardware clock initialization and destruction into functions, which will be used for the dynamically allocated clock. Such a clock is shared by all the devices if the queried clock identities are the same. The out_work is for the PPS out event, which can't be triggered when the clock is shared, so INIT_WORK is not moved to the initialization function. Besides, we still need to register a notifier for each device. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Change parameters for PTP internal functions  [Jianbo Liu]
In a later patch, the mlx5_clock will be allocated dynamically; its address can be obtained from the mlx5_core_dev struct, but mdev can't be obtained from mlx5_clock because the clock can be shared by multiple interfaces. So change the parameters of such internal functions; only mdev is passed down from the callers. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-06  net/mlx5: Add helper functions for PTP callbacks  [Jianbo Liu]
The PTP callback functions should not be used directly by internal callers. Add helpers that can be used internally and externally. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Reviewed-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-02-05  Merge branch 'vxlan-age-fdb-entries-based-on-rx-traffic'  [Jakub Kicinski]
Ido Schimmel says:

====================
vxlan: Age FDB entries based on Rx traffic

tl;dr - This patchset prevents VXLAN FDB entries from lingering if traffic is only forwarded to a silent host.

The VXLAN driver maintains two timestamps for each FDB entry: 'used' and 'updated'. The first is refreshed by both the Rx and Tx paths and the second is refreshed upon migration. The driver ages out entries according to their 'used' time, which means that an entry can linger when traffic is only forwarded to a silent host that might have migrated to a different remote.

This patchset solves the problem by adjusting the above semantics and aligning them to those of the bridge driver. That is, the 'used' time is refreshed by the Tx path, the 'updated' time is refreshed by the Rx path or user space updates, and entries are aged out according to their 'updated' time.

Patches #1-#2 perform small changes in how the 'used' and 'updated' fields are accessed.
Patches #3-#5 refresh the 'updated' time where needed.
Patch #6 flips the driver to age out FDB entries according to their 'updated' time.
Patch #7 removes unnecessary updates to the 'used' time.
Patch #8 extends a test case to cover aging of FDB entries in the presence of Tx traffic.
====================

Link: https://patch.msgid.link/20250204145549.1216254-1-idosch@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
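A sketch of the aging check after patch #6, based on the semantics described above (the cleanup-timer loop context is abbreviated):

	unsigned long timeout;

	/* Expire on 'updated' (refreshed by Rx/user space), not 'used' (Tx). */
	timeout = READ_ONCE(f->updated) + vxlan->cfg.age_interval * HZ;
	if (time_before_eq(timeout, jiffies))
		vxlan_fdb_destroy(vxlan, f, true, true);
	else if (time_before(timeout, next_timer))
		next_timer = timeout;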
2025-02-05  selftests: forwarding: vxlan_bridge_1d: Check aging while forwarding  [Ido Schimmel]
Extend the VXLAN FDB aging test case to verify that FDB entries are aged out when they only forward traffic and are not refreshed by received traffic. The test fails before "vxlan: Age out FDB entries based on 'updated' time":

# ./vxlan_bridge_1d.sh
[...]
TEST: VXLAN: Ageing of learned FDB entry  [FAIL]
[...]
# echo $?
1

And passes after it:

# ./vxlan_bridge_1d.sh
[...]
TEST: VXLAN: Ageing of learned FDB entry  [ OK ]
[...]
# echo $?
0

Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Link: https://patch.msgid.link/20250204145549.1216254-9-idosch@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>