2024-03-11libbpf: Recognize __arena global variables.Andrii Nakryiko
LLVM automatically places __arena variables into ".arena.1" ELF section. In order to use such global variables bpf program must include definition of arena map in ".maps" section, like:

  struct {
        __uint(type, BPF_MAP_TYPE_ARENA);
        __uint(map_flags, BPF_F_MMAPABLE);
        __uint(max_entries, 1000);         /* number of pages */
        __ulong(map_extra, 2ull << 44);    /* start of mmap() region */
  } arena SEC(".maps");

libbpf recognizes both uses of arena and creates single `struct bpf_map *` instance in libbpf APIs. ".arena.1" ELF section data is used as initial data image, which is exposed through skeleton and bpf_map__initial_value() to the user, if they need to tune it before the load phase. During load phase, this initial image is copied over into mmap()'ed region corresponding to arena, and discarded.

Few small checks here and there had to be added to make sure this approach works with bpf_map__initial_value(), mostly due to hard-coded assumption that map->mmaped is set up with mmap() syscall and should be munmap()'ed. For arena, .arena.1 can be (much) smaller than maximum arena size, so this smaller data size has to be tracked separately. Given it is enforced that there is only one arena for entire bpf_object instance, we just keep it in a separate field. This can be generalized if necessary later.

All global variables from ".arena.1" section are accessible from user space via skel->arena->name_of_var.

For bss/data/rodata the skeleton/libbpf perform the following sequence:

  1. addr = mmap(MAP_ANONYMOUS)
  2. user space optionally modifies global vars
  3. map_fd = bpf_create_map()
  4. bpf_update_map_elem(map_fd, addr) // to store values into the kernel
  5. mmap(addr, MAP_FIXED, map_fd)

after step 5 user space sees the values it wrote at step 2 at the same addresses.

arena doesn't support update_map_elem. Hence skeleton/libbpf do:

  1. addr = malloc(sizeof SEC ".arena.1")
  2. user space optionally modifies global vars
  3. map_fd = bpf_create_map(MAP_TYPE_ARENA)
  4. real_addr = mmap(map->map_extra, MAP_SHARED | MAP_FIXED, map_fd)
  5. memcpy(real_addr, addr) // this will fault-in and allocate pages

At the end look and feel of global data vs __arena global data is the same from bpf prog pov.

Another complication is:

  struct {
        __uint(type, BPF_MAP_TYPE_ARENA);
  } arena SEC(".maps");

  int __arena foo;
  int bar;

  ptr1 = &foo;   // relocation against ".arena.1" section
  ptr2 = &arena; // relocation against ".maps" section
  ptr3 = &bar;   // relocation against ".bss" section

For the kernel, ptr1 and ptr2 point to the same arena's map_fd, while ptr3 points to a different global array's map_fd. For the verifier:

  ptr1->type == unknown_scalar
  ptr2->type == const_ptr_to_map
  ptr3->type == ptr_to_map_value

After verification, from JIT pov all 3 ptr-s are normal ld_imm64 insns.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20240308010812.89848-11-alexei.starovoitov@gmail.com
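As a rough illustration of the skel->arena->name_of_var access described above (the object name my_obj, the counter variable, and the __arena macro expanding to __attribute__((address_space(1))) are assumptions for this sketch, not part of the patch; the usual vmlinux.h/bpf_helpers.h includes are implied):

  /* BPF side (my_obj.bpf.c) */
  struct {
        __uint(type, BPF_MAP_TYPE_ARENA);
        __uint(map_flags, BPF_F_MMAPABLE);
        __uint(max_entries, 100);          /* pages */
  } arena SEC(".maps");

  int __arena counter;                     /* placed into ".arena.1" by LLVM */

  /* user space side, via the generated skeleton (usual <obj>_bpf__* naming) */
  struct my_obj_bpf *skel = my_obj_bpf__open();

  skel->arena->counter = 42;               /* tune the initial image before load */
  my_obj_bpf__load(skel);                  /* image is copied into the mmap()'ed arena */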
2024-03-11bpftool: Recognize arena map typeAlexei Starovoitov
Teach bpftool to recognize arena map type. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20240308010812.89848-10-alexei.starovoitov@gmail.com
2024-03-11libbpf: Add support for bpf_arena.Alexei Starovoitov
mmap() bpf_arena right after creation, since the kernel needs to remember the address returned from mmap. This is user_vm_start. LLVM will generate bpf_arena_cast_user() instructions where necessary and JIT will add upper 32-bit of user_vm_start to such pointers. Fix up bpf_map_mmap_sz() to compute mmap size as map->value_size * map->max_entries for arrays and PAGE_SIZE * map->max_entries for arena. Don't set BTF at arena creation time, since it doesn't support it. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240308010812.89848-9-alexei.starovoitov@gmail.com
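A simplified sketch of the size computation mentioned above (the real bpf_map_mmap_sz() takes the struct bpf_map and handles more cases; this only shows the arena vs. array distinction):

  #include <unistd.h>
  #include <linux/bpf.h>

  static size_t map_mmap_sz(__u32 map_type, __u32 value_size, __u32 max_entries)
  {
        const size_t page_sz = sysconf(_SC_PAGE_SIZE);
        size_t sz;

        if (map_type == BPF_MAP_TYPE_ARENA)
                sz = page_sz * max_entries;            /* max_entries counts pages */
        else
                sz = (size_t)value_size * max_entries; /* e.g. mmapable arrays */

        return (sz + page_sz - 1) / page_sz * page_sz; /* round up to whole pages */
  }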
2024-03-11libbpf: Add __arg_arena to bpf_helpers.hAlexei Starovoitov
Add __arg_arena to bpf_helpers.h Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240308010812.89848-8-alexei.starovoitov@gmail.com
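The macro itself is tiny; something along these lines (the exact spelling in bpf_helpers.h may differ slightly):

  /* mark a global-function argument as an arena pointer for the verifier */
  #define __arg_arena __attribute__((btf_decl_tag("arg:arena")))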
2024-03-11bpf: Recognize btf_decl_tag("arg: Arena") as PTR_TO_ARENA.Alexei Starovoitov
In global bpf functions recognize btf_decl_tag("arg:arena") as PTR_TO_ARENA. Note, when the verifier sees: __weak void foo(struct bar *p) it recognizes 'p' as PTR_TO_MEM and 'struct bar' has to be a struct with scalars. Hence the only way to use arena pointers in global functions is to tag them with "arg:arena". Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20240308010812.89848-7-alexei.starovoitov@gmail.com
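A hedged example of a global subprog using the tag (the struct, the function and the __arena address_space(1) macro are illustrative, not taken from the patch):

  struct elem { long value; };                      /* lives in the arena */

  __weak long elem_value(struct elem __arena *e __arg_arena)
  {
        /* 'e' is treated as PTR_TO_ARENA thanks to the "arg:arena" decl tag */
        return e ? e->value : 0;
  }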
2024-03-11bpf: Recognize addr_space_cast instruction in the verifier.Alexei Starovoitov
rY = addr_space_cast(rX, 0, 1) tells the verifier that rY->type = PTR_TO_ARENA. Any further operations on PTR_TO_ARENA register have to be in 32-bit domain. The verifier will mark load/store through PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as kern_vm_start + 32bit_addr memory accesses.

rY = addr_space_cast(rX, 1, 0) tells the verifier that rY->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set then convert cast_user to mov32 as well. Otherwise JIT will convert it to:

  rY = (u32)rX;
  if (rY)
     rY |= arena->user_vm_start & ~(u64)~0U;

Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240308010812.89848-6-alexei.starovoitov@gmail.com
2024-03-11bpf: Add x86-64 JIT support for bpf_addr_space_cast instruction.Alexei Starovoitov
LLVM generates bpf_addr_space_cast instruction while translating pointers between native (zero) address space and __attribute__((address_space(N))). The addr_space=1 is reserved as bpf_arena address space.

rY = addr_space_cast(rX, 0, 1) is processed by the verifier and converted to normal 32-bit move: wX = wY

rY = addr_space_cast(rX, 1, 0) has to be converted by JIT:

  aux_reg = upper_32_bits of arena->user_vm_start
  aux_reg <<= 32
  wX = wY // clear upper 32 bits of dst register
  if (wX) // if not zero add upper bits of user_vm_start
    wX |= aux_reg

JIT can do it more efficiently:

  mov dst_reg32, src_reg32  // 32-bit move
  shl dst_reg, 32
  or dst_reg, user_vm_start
  rol dst_reg, 32
  xor r11, r11
  test dst_reg32, dst_reg32 // check if lower 32-bit are zero
  cmove r11, dst_reg        // if so, set dst_reg to zero
                            // Intel swapped src/dst register encoding in CMOVcc

Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20240308010812.89848-5-alexei.starovoitov@gmail.com
2024-03-11bpf: Add x86-64 JIT support for PROBE_MEM32 pseudo instructions.Alexei Starovoitov
Add support for [LDX | STX | ST], PROBE_MEM32, [B | H | W | DW] instructions. They are similar to PROBE_MEM instructions with the following differences:

- PROBE_MEM has to check that the address is in the kernel range with src_reg + insn->off >= TASK_SIZE_MAX + PAGE_SIZE check
- PROBE_MEM doesn't support store
- PROBE_MEM32 relies on the verifier to clear upper 32-bit in the register
- PROBE_MEM32 adds 64-bit kern_vm_start address (which is stored in %r12 in the prologue). Due to bpf_arena constructions such %r12 + %reg + off16 access is guaranteed to be within arena virtual range, so no address check at run-time.
- PROBE_MEM32 allows STX and ST. If they fault the store is a nop. When LDX faults the destination register is zeroed.

Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20240308010812.89848-4-alexei.starovoitov@gmail.com
2024-03-11bpf: Disasm support for addr_space_cast instruction.Alexei Starovoitov
LLVM generates rX = addr_space_cast(rY, dst_addr_space, src_addr_space) instruction when pointers in non-zero address space are used by the bpf program. Recognize this insn in uapi and in bpf disassembler. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20240308010812.89848-3-alexei.starovoitov@gmail.com
2024-03-11bpf: Introduce bpf_arena.Alexei Starovoitov
Introduce bpf_arena, which is a sparse shared memory region between the bpf program and user space.

Use cases:

1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous region, like memcached or any key/value storage. The bpf program implements an in-kernel accelerator. XDP prog can search for a key in bpf_arena and return a value without going to user space.
2. The bpf program builds arbitrary data structures in bpf_arena (hash tables, rb-trees, sparse arrays), while user space consumes it.
3. bpf_arena is a "heap" of memory from the bpf program's point of view. The user space may mmap it, but bpf program will not convert pointers to user base at run-time to improve bpf program speed.

Initially, the kernel vm_area and user vma are not populated. User space can fault in pages within the range. While servicing a page fault, bpf_arena logic will insert a new page into the kernel and user vmas. The bpf program can allocate pages from that region via bpf_arena_alloc_pages(). This kernel function will insert pages into the kernel vm_area. The subsequent fault-in from user space will populate that page into the user vma. The BPF_F_SEGV_ON_FAULT flag at arena creation time can be used to prevent fault-in from user space. In such a case, if a page is not allocated by the bpf program and not present in the kernel vm_area, the user process will segfault. This is useful for use cases 2 and 3 above.

bpf_arena_alloc_pages() is similar to user space mmap(). It allocates pages either at a specific address within the arena or allocates a range with the maple tree. bpf_arena_free_pages() is analogous to munmap(), which frees pages and removes the range from the kernel vm_area and from user process vmas.

bpf_arena can be used as a bpf program "heap" of up to 4GB. The speed of bpf program is more important than ease of sharing with user space. This is use case 3. In such a case, the BPF_F_NO_USER_CONV flag is recommended. It will tell the verifier to treat the rX = bpf_arena_cast_user(rY) instruction as a 32-bit move wX = wY, which will improve bpf prog performance. Otherwise, bpf_arena_cast_user is translated by JIT to conditionally add the upper 32 bits of user vm_start (if the pointer is not NULL) to arena pointers before they are stored into memory. This way, user space sees them as valid 64-bit pointers.

Diff https://github.com/llvm/llvm-project/pull/84410 enables the LLVM BPF backend to generate the bpf_addr_space_cast() instruction to cast pointers between address_space(1), which is reserved for bpf_arena pointers, and the default address space zero. All arena pointers in a bpf program written in C language are tagged as __attribute__((address_space(1))). Hence, clang provides helpful diagnostics when pointers cross address space. Libbpf and the kernel support only address_space == 1. All other address space identifiers are reserved.

rX = bpf_addr_space_cast(rY, /* dst_as */ 1, /* src_as */ 0) tells the verifier that rX->type = PTR_TO_ARENA. Any further operations on PTR_TO_ARENA register have to be in the 32-bit domain. The verifier will mark load/store through PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as kern_vm_start + 32bit_addr memory accesses. The behavior is similar to copy_from_kernel_nofault() except that no address checks are necessary. The address is guaranteed to be in the 4GB range. If the page is not present, the destination register is zeroed on read, and the operation is ignored on write.

rX = bpf_addr_space_cast(rY, 0, 1) tells the verifier that rX->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set, then the verifier converts such cast instructions to mov32. Otherwise, JIT will emit native code equivalent to:

  rX = (u32)rY;
  if (rY)
     rX |= clear_lo32_bits(arena->user_vm_start); /* replace hi32 bits in rX */

After such conversion, the pointer becomes a valid user pointer within bpf_arena range. The user process can access data structures created in bpf_arena without any additional computations. For example, a linked list built by a bpf program can be walked natively by user space.

Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Barret Rhoden <brho@google.com> Link: https://lore.kernel.org/bpf/20240308010812.89848-2-alexei.starovoitov@gmail.com
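A hedged sketch of use case 2 from the BPF side, assuming the arena map definition from the libbpf patch later in the series, the usual vmlinux.h/bpf_helpers.h includes, and the bpf_arena_alloc_pages() kfunc declaration plus the __arena macro from the arena helper headers (program and variable names are made up):

  struct {
        __uint(type, BPF_MAP_TYPE_ARENA);
        __uint(map_flags, BPF_F_MMAPABLE);
        __uint(max_entries, 10);                   /* pages */
  } arena SEC(".maps");

  SEC("syscall")
  int build_data(void *ctx)
  {
        /* allocate one page anywhere in the arena, as done in the selftests */
        int __arena *page = bpf_arena_alloc_pages(&arena, NULL, 1, -1 /* NUMA_NO_NODE */, 0);

        if (!page)
                return 1;

        page[0] = 0xdead;  /* user space can read this after mmap()'ing the arena */
        return 0;
  }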
2024-03-11ravb: Correct buffer size to map for R-Car RxNiklas Söderlund
When creating a helper to allocate and align an skb, one location where the skb data size was updated was missed. This can lead to a warning being printed when the memory is being unmapped, as it now always unmaps the maximum frame size instead of the size after it has been aligned. This was correctly done for RZ/G2L but missed for R-Car. Fixes: cfbad64706c1 ("ravb: Create helper to allocate skb and align it") Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20240308224237.496924-1-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11net: amt: Remove generic .ndo_get_stats64Breno Leitao
Commit 3e2f544dd8a33 ("net: get stats64 if device if driver is configured") moved the dev_get_tstats64() fallback into the net core, so, unless the driver is doing some custom stats collection, it does not need to set .ndo_get_stats64. Since this driver now relies on NETDEV_PCPU_STAT_TSTATS, it doesn't need to set the generic dev_get_tstats64() as its .ndo_get_stats64 function pointer. Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: Taehee Yoo <ap420073@gmail.com> Link: https://lore.kernel.org/r/20240308162606.1597287-2-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11net: amt: Move stats allocation to coreBreno Leitao
With commit 34d21de99cea9 ("net: Move {l,t,d}stats allocation to core and convert veth & vrf"), stats allocation could be done on net core instead of this driver. With this new approach, the driver doesn't have to bother with error handling (allocation failure checking, making sure free happens in the right spot, etc). This is core responsibility now. Move amt driver to leverage the core allocation. Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: Taehee Yoo <ap420073@gmail.com> Link: https://lore.kernel.org/r/20240308162606.1597287-1-leitao@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
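A hedged sketch of the pattern such conversions follow (the pcpu_stat_type field and NETDEV_PCPU_STAT_TSTATS enum are the core mechanism from that commit; the setup function shown here is illustrative, not the driver's actual code):

  static void amt_setup_sketch(struct net_device *dev)
  {
        /* ask the core to allocate and free per-CPU tstats for this device */
        dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;

        /* the driver no longer calls netdev_alloc_pcpu_stats()/free_percpu() itself */
  }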
2024-03-11netlink: specs: support generating code for genl socket privJakub Kicinski
The family struct is auto-generated for new families, support use of the sock_priv_* mechanism added in commit a731132424ad ("genetlink: introduce per-sock family private storage"). For example if the family wants to use struct sk_buff as its private struct (unrealistic but just for illustration), it would add to its spec:

  kernel-family:
    headers: [ "linux/skbuff.h" ]
    sock-priv: struct sk_buff

ynl-gen-c will declare the appropriate priv size and hook in function prototypes to be implemented by the family.

Link: https://lore.kernel.org/r/20240308190319.2523704-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
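For context, the per-sock private storage from commit a731132424ad hangs off struct genl_family; roughly (an abbreviated sketch from memory, see the genetlink header for the authoritative definition):

  struct genl_family {
        /* ... */
        size_t  sock_priv_size;                   /* size of the per-socket priv */
        void    (*sock_priv_init)(void *priv);    /* called when the priv is created */
        void    (*sock_priv_destroy)(void *priv); /* called when the socket goes away */
        /* ... */
  };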
2024-03-11tools: ynl: remove trailing semicolonJakub Kicinski
Commit e8a6c515ff5f ("tools: ynl: allow user to pass enum string instead of scalar value") added a semicolon at the end of a line. Reviewed-by: Jiri Pirko <jiri@nvidia.com> Link: https://lore.kernel.org/r/20240308192555.2550253-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11net: ipv6: exthdrs: get rid of ipv6_skb_net()Justin Iurman
Get rid of ipv6_skb_net() which is only used in ipv6_hop_ioam(). Signed-off-by: Justin Iurman <justin.iurman@uliege.be> Link: https://lore.kernel.org/r/20240308185343.39272-1-justin.iurman@uliege.be Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11Merge branch 'selftests-mptcp-various-improvements'Jakub Kicinski
Matthieu Baerts says:

====================
selftests: mptcp: various improvements

In this series from Geliang, there are various improvements in MPTCP selftests: sharing code, doing actions the same way, colours, etc.

Patch 1 prints all error messages to stdout: what was done in almost all other MPTCP selftests. This can now be easily changed later if needed.

Patch 2 makes sure the test counter is continuous in mptcp_connect.sh.

Patch 3 aligns the messages that are printed in mptcp_connect.sh.

Patch 4 prints each test result in mptcp_sockopt.sh, similar to what we have in the TAP output.

Patch 5 moves the different test counters to a single one in mptcp_lib.sh, to unify how it is used.

Patch 6 moves how titles are printed from mptcp_join.sh to the lib, to be reused in patch 7 by all other MPTCP selftests.

Patch 8 uses the '+=' operator to append strings instead of repeating twice the variable name: that's shorter, easier to read.

Patch 9 adds colours for the [ OK ], [SKIP], [FAIL] and INFO keywords in all MPTCP selftests.

Patch 10 to 12 are some preparation patches for patch 13: patch 10 modifies how some 'test_fail' helpers are called, patch 11 moves a helper from userspace_pm.sh to the lib, and patch 12 changes where titles are printed in userspace_pm.sh.

Patch 13 moves some duplicated helpers from mptcp_join.sh and userspace_pm.sh to mptcp_lib.sh.

Patch 14 moves duplicated read-only variables from mptcp_join.sh and userspace_pm.sh to mptcp_lib.sh as well.

Patch 15 uses explicit variables instead of hard-coded numbers for the exit status.
====================

Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-0-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: use KSFT_SKIP/KSFT_PASS/KSFT_FAILGeliang Tang
This patch uses the public var KSFT_SKIP in mptcp_lib.sh instead of ksft_skip, and drops 'ksft_skip=4' in mptcp_join.sh. Use KSFT_PASS and KSFT_FAIL macros instead of 0 and 1 after 'exit ' and 'ret=' in all scripts:

  exit 0 -> exit ${KSFT_PASS}
  exit 1 -> exit ${KSFT_FAIL}
  ret=0  -> ret=${KSFT_PASS}
  ret=1  -> ret=${KSFT_FAIL}

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-15-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: declare event macros in mptcp_libGeliang Tang
MPTCP event macros (SUB_ESTABLISHED, LISTENER_CREATED, LISTENER_CLOSED), and the protocol family macros (AF_INET, AF_INET6) are defined in both mptcp_join.sh and userspace_pm.sh. In order not to duplicate code, this patch declares them all in mptcp_lib.sh with MPTCP_LIB_ prefixes. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-14-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: add mptcp_lib_verify_listener_eventsGeliang Tang
To avoid duplicated code in different MPTCP selftests, we can add and use helpers defined in mptcp_lib.sh. The helper verify_listener_events() is defined both in mptcp_join.sh and userspace_pm.sh, export it into mptcp_lib.sh and rename it with mptcp_lib_ prefix. Use this new helper in both scripts. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-13-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: print_test out of verify_listener_eventsGeliang Tang
verify_listener_events() helper will be exported into mptcp_lib.sh as a public function, but print_test() is invoked in it, which is a private function in userspace_pm.sh only. So this patch moves print_test() out of verify_listener_events(). Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-12-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: extract mptcp_lib_check_expectedGeliang Tang
Extract the main part of check_expected() in userspace_pm.sh to a new function mptcp_lib_check_expected() in mptcp_lib.sh. It will be used in both mptcp_join.sh and userspace_pm.sh. check_expected_one() is moved into mptcp_lib.sh too as mptcp_lib_check_expected_one(). Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-11-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: call test_fail without argumentGeliang Tang
In userspace_pm.sh, this patch modifies test_fail() to call mptcp_lib_pr_fail() only if there are arguments (if [ ${#} -gt 0 ]), and adds the argument "unexpected type: ${type}" when calling test_fail() from test_remove(). Then mptcp_lib_pr_fail() can be used in check_expected_one() instead of test_fail(). The same in mptcp_join.sh: call fail_test() without argument, and adapt this helper not to call print_fail() in this case. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-10-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: print test results with colorsGeliang Tang
To unify the output formats of all test scripts, this patch adds four more helpers:

  mptcp_lib_pr_ok()
  mptcp_lib_pr_skip()
  mptcp_lib_pr_fail()
  mptcp_lib_pr_info()

to print out [ OK ], [SKIP], [FAIL] and 'INFO: ' with colors. Use them in all scripts to print the 'ok/skip/fail/info' using the same format. Having colors helps to quickly identify issues when looking at a long list of output logs and results. Note that now all print the same keywords, which was not the case before, but it is good to unify that. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-9-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: use += operator to append stringsGeliang Tang
This patch uses addition assignment operator (+=) to append strings instead of duplicating the variable name in mptcp_connect.sh and mptcp_join.sh. This can make the statements shorter. Note: in mptcp_connect.sh, add a local variable extra in do_transfer to save the various extra warning logs, using += to append it. And add a new variable tc_info to save various tc info, also using += to append it. This can make the code more readable and prepare for the next commit. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-8-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: print test results with countersGeliang Tang
This patch adds a new helper mptcp_lib_print_title(), a wrapper of mptcp_lib_inc_test_counter() and mptcp_lib_pr_title_counter(), to print out test counter in each test result and increase the counter. Use this helper to print out test counters for every test in diag.sh, mptcp_connect.sh, mptcp_sockopt.sh, pm_netlink.sh, simult_flows.sh, and userspace_pm.sh.

diag.sh:

  01 no msk on netns creation [ ok ]
  02 listen match for dport 10000 [ ok ]
  03 listen match for sport 10000 [ ok ]
  04 listen match for saddr and sport [ ok ]
  05 all listen sockets [ ok ]

mptcp_connect.sh:

  01 New MPTCP socket can be blocked via sysctl [ OK ]
  02 Validating network environment with pings [ OK ]
  INFO: Using loss of 0.85% delay 31 ms reorder .. with delay 7ms on ns3eth4
  03 ns1 MPTCP -> ns1 (10.0.1.1:10000 ) MPTCP (duration 69ms) [ OK ]
  04 ns1 MPTCP -> ns1 (10.0.1.1:10001 ) TCP (duration 20ms) [ OK ]
  05 ns1 TCP -> ns1 (10.0.1.1:10002 ) MPTCP (duration 16ms) [ OK ]

mptcp_sockopt.sh:

  01 Transfer v4 [ OK ]
  02 Mark v4 [ OK ]
  03 Transfer v6 [ OK ]
  04 Mark v6 [ OK ]
  05 SOL_MPTCP sockopt v4 [ OK ]

pm_netlink.sh:

  01 defaults addr list [ OK ]
  02 simple add/get addr [ OK ]
  03 dump addrs [ OK ]
  04 simple del addr [ OK ]
  05 dump addrs after del [ OK ]

simult_flows.sh:

  01 balanced bwidth 7391 max 8456 [ OK ]
  02 balanced bwidth - reverse direction 7403 max 8456 [ OK ]
  03 balanced bwidth with unbalanced delay 7429 max 8456 [ OK ]
  04 balanced bwidth with unbalanced delay - reverse ... 7485 max 8456 [ OK ]
  05 unbalanced bwidth 7549 max 8456 [ OK ]

userspace_pm.sh:

  01 Created network namespaces ns1, ns2 [ OK ]
  INFO: Make connections
  02 Established IPv4 MPTCP Connection ns2 => ns1 [ OK ]
  03 Established IPv6 MPTCP Connection ns2 => ns1 [ OK ]
  INFO: Announce tests
  04 ADD_ADDR 10.0.2.2 (ns2) => ns1, invalid token [ OK ]
  05 ADD_ADDR id:67 10.0.2.2 (ns2) => ns1, reuse port [ OK ]

Having test counters helps to quickly identify issues when looking at a long list of output logs and results.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-7-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: add print_title in mptcp_libGeliang Tang
This patch adds a new variable MPTCP_LIB_TEST_FORMAT as the test title printing format. Also add a helper mptcp_lib_print_title() to use this format to print the test title with test counters. They are used in mptcp_join.sh first. Each MPTCP selftest has subtests, and it helps to give them a number to quickly identify them. This can be managed by mptcp_lib.sh, reusing what has been done here. The following commit will use these new helpers in the other tests. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-6-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: export TEST_COUNTER variableGeliang Tang
The variable TEST_COUNT is used in mptcp_connect.sh and mptcp_join.sh as a test counter and is initialized to 0, while the variable test_cnt is used in diag.sh and simult_flows.sh and is initialized to 1. To maintain consistency, this patch renames them all to MPTCP_LIB_TEST_COUNTER, initializes it to 1, and exports it into mptcp_lib.sh. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-5-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: sockopt: print every test resultGeliang Tang
Only total test results are printed out in mptcp_sockopt.sh:

  PASS: all packets had packet mark set
  PASS: SOL_MPTCP getsockopt has expected information
  PASS: TCP_INQ cmsg/ioctl -t tcp
  PASS: TCP_INQ cmsg/ioctl -6 -t tcp
  PASS: TCP_INQ cmsg/ioctl -r tcp
  PASS: TCP_INQ cmsg/ioctl -6 -r tcp
  PASS: TCP_INQ cmsg/ioctl -r tcp -t tcp

They mismatch with the test results:

  ok 1 - mptcp_sockopt: mark ipv4
  ok 2 - mptcp_sockopt: transfer ipv4
  ok 3 - mptcp_sockopt: mark ipv6
  ok 4 - mptcp_sockopt: transfer ipv6
  ok 5 - mptcp_sockopt: sockopt v4
  ok 6 - mptcp_sockopt: sockopt v6
  ok 7 - mptcp_sockopt: TCP_INQ: -t tcp
  ok 8 - mptcp_sockopt: TCP_INQ: -6 -t tcp
  ok 9 - mptcp_sockopt: TCP_INQ: -r tcp
  ok 10 - mptcp_sockopt: TCP_INQ: -6 -r tcp
  ok 11 - mptcp_sockopt: TCP_INQ: -r tcp -t tcp

'mptcp_sockopt.sh' now displays more detailed results and why (what you had in a former patch from v6, merged here). It no longer displays 'PASS:', because it is duplicated info now that the details are displayed:

  Transfer v4 [ OK ]
  Mark v4 [ OK ]
  Transfer v6 [ OK ]
  Mark v6 [ OK ]
  SOL_MPTCP sockopt v4 [ OK ]
  SOL_MPTCP sockopt v6 [ OK ]
  TCP_INQ cmsg/ioctl -t tcp [ OK ]
  TCP_INQ cmsg/ioctl -6 -t tcp [ OK ]
  TCP_INQ cmsg/ioctl -r tcp [ OK ]
  TCP_INQ cmsg/ioctl -6 -r tcp [ OK ]
  TCP_INQ cmsg/ioctl -r tcp -t tcp [ OK ]

Also fix the TAP output:

  ok 1 - mptcp_sockopt: transfer ipv4
  ok 2 - mptcp_sockopt: mark ipv4
  ok 3 - mptcp_sockopt: transfer ipv6
  ok 4 - mptcp_sockopt: mark ipv6
  ok 5 - mptcp_sockopt: sockopt v4
  ok 6 - mptcp_sockopt: sockopt v6
  ok 7 - mptcp_sockopt: TCP_INQ: -t tcp
  ok 8 - mptcp_sockopt: TCP_INQ: -6 -t tcp
  ok 9 - mptcp_sockopt: TCP_INQ: -r tcp
  ok 10 - mptcp_sockopt: TCP_INQ: -6 -r tcp
  ok 11 - mptcp_sockopt: TCP_INQ: -r tcp -t tcp

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-4-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: connect: fix misaligned outputGeliang Tang
The first [ OK ] in the output of mptcp_connect.sh misaligns with the others:

  New MPTCP socket can be blocked via sysctl [ OK ]
  INFO: validating network environment with pings
  INFO: Using loss of 0.85% delay 16 ms reorder 95% 70% with delay 4ms on
  ns1 MPTCP -> ns1 (10.0.1.1:10000 ) MPTCP (duration 184ms) [ OK ]
  ns1 MPTCP -> ns1 (10.0.1.1:10001 ) TCP (duration 50ms) [ OK ]
  ns1 TCP -> ns1 (10.0.1.1:10002 ) MPTCP (duration 55ms) [ OK ]

This patch aligns them by using 69 chars to display the first two lines, and 50 chars for the others, since 19 chars are used to display the duration time. Also print out a [ OK ] at the end of the 2nd line for consistency. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-3-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: connect: add dedicated port counterGeliang Tang
This patch adds a new dedicated counter 'port' instead of TEST_COUNT to increase port numbers in mptcp_connect.sh. This can avoid outputting discontinuous test counters. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-2-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: mptcp: print all error messages to stdoutGeliang Tang
Some error messages are printed to stderr while the others are printed to 'stdout'. As part of the unification, this patch drops "1>&2" so that all error messages are printed to 'stdout'. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-1-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11net: wan: framer/pef2256: Convert to platform remove callback returning voidUwe Kleine-König
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Acked-by: Herve Codina <herve.codina@bootlin.com> Link: https://lore.kernel.org/r/9684419fd714cc489a3ef36d838d3717bb6aec6d.1709886922.git.u.kleine-koenig@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
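A hedged sketch of this kind of conversion (the function name follows the driver's style but is not copied from the patch; the teardown body is elided):

  #include <linux/platform_device.h>

  /* previously: static int pef2256_remove(...) { ...; return 0; } */
  static void pef2256_remove(struct platform_device *pdev)
  {
        /* resource teardown; nothing meaningful to return */
  }

  static struct platform_driver pef2256_driver = {
        /* ... .probe and .driver setup ... */
        .remove_new = pef2256_remove,  /* was .remove, returning an always-zero int */
  };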
2024-03-11Merge tag 'timers-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull timer updates from Thomas Gleixner:
 "A large set of updates and features for timers and timekeeping:

  - The hierarchical timer pull model

    When timer wheel timers are armed they are placed into the timer wheel of a CPU which is likely to be busy at the time of expiry. This is done to avoid wakeups on potentially idle CPUs.

    This is wrong in several aspects:

     1) The heuristics to select the target CPU are wrong by definition as the chance to get the prediction right is close to zero.

     2) Due to #1 it is possible that timers are accumulated on a single target CPU

     3) The required computation in the enqueue path is just overhead for dubious value especially under the consideration that the vast majority of timer wheel timers are either canceled or rearmed before they expire.

    The timer pull model avoids the above by removing the target computation on enqueue and queueing timers always on the CPU on which they get armed.

    This is achieved by having separate wheels for CPU pinned timers and global timers which do not care about where they expire. As long as a CPU is busy it handles both the pinned and the global timers which are queued on the CPU local timer wheels.

    When a CPU goes idle it evaluates its own timer wheels:

      - If the first expiring timer is a pinned timer, then the global timers can be ignored as the CPU will wake up before they expire.

      - If the first expiring timer is a global timer, then the expiry time is propagated into the timer pull hierarchy and the CPU makes sure to wake up for the first pinned timer.

    The timer pull hierarchy organizes CPUs in groups of eight at the lowest level and at the next levels groups of eight groups up to the point where no further aggregation of groups is required, i.e. the number of levels is log8(NR_CPUS). The magic number of eight has been established by experimentation, but can be adjusted if needed.

    In each group one busy CPU acts as the migrator. It's only one CPU to avoid lock contention on remote timer wheels. The migrator CPU checks in its own timer wheel handling whether there are other CPUs in the group which have gone idle and have global timers to expire. If there are global timers to expire, the migrator locks the remote CPU timer wheel and handles the expiry. Depending on the group level in the hierarchy this handling can require to walk the hierarchy downwards to the CPU level.

    Special care is taken when the last CPU goes idle. At this point the CPU is the systemwide migrator at the top of the hierarchy and it therefore cannot delegate to the hierarchy. It needs to arm its own timer device to expire either at the first expiring timer in the hierarchy or at the first CPU local timer, whichever expires first.

    This completely removes the overhead from the enqueue path, which is e.g. for networking a true hotpath and trades it for a slightly more complex idle path.

    This has been in development for a couple of years and the final series has been extensively tested by various teams from silicon vendors and ran through extensive CI. There have been slight performance improvements observed on network centric workloads and an Intel team confirmed that this allows them to power down a die completely on a multi-die socket for the first time in a mostly idle scenario. There is only one outstanding ~1.5% regression on a specific overloaded netperf test which is currently being investigated, but the rest is either positive or neutral performance wise and positive on the power management side.

  - Fixes for the timekeeping interpolation code for cross-timestamps: cross-timestamps are used for PTP to get snapshots from hardware timers and interpolate them back to clock MONOTONIC. The changes address a few corner cases in the interpolation code which got the math and logic wrong.

  - Simplification of the clocksource watchdog retry logic to automatically adjust to handle larger systems correctly instead of having more incomprehensible command line parameters.

  - Treewide consolidation of the VDSO data structures.

  - The usual small improvements and cleanups all over the place"

* tag 'timers-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (62 commits)
  timer/migration: Fix quick check reporting late expiry
  tick/sched: Fix build failure for CONFIG_NO_HZ_COMMON=n
  vdso/datapage: Quick fix - use asm/page-def.h for ARM64
  timers: Assert no next dyntick timer look-up while CPU is offline
  tick: Assume timekeeping is correctly handed over upon last offline idle call
  tick: Shut down low-res tick from dying CPU
  tick: Split nohz and highres features from nohz_mode
  tick: Move individual bit features to debuggable mask accesses
  tick: Move got_idle_tick away from common flags
  tick: Assume the tick can't be stopped in NOHZ_MODE_INACTIVE mode
  tick: Move broadcast cancellation up to CPUHP_AP_TICK_DYING
  tick: Move tick cancellation up to CPUHP_AP_TICK_DYING
  tick: Start centralizing tick related CPU hotplug operations
  tick/sched: Don't clear ts::next_tick again in can_stop_idle_tick()
  tick/sched: Rename tick_nohz_stop_sched_tick() to tick_nohz_full_stop_tick()
  tick: Use IS_ENABLED() whenever possible
  tick/sched: Remove useless oneshot ifdeffery
  tick/nohz: Remove duplicate between lowres and highres handlers
  tick/nohz: Remove duplicate between tick_nohz_switch_to_nohz() and tick_setup_sched_timer()
  hrtimer: Select housekeeping CPU during migration
  ...
2024-03-11Merge tag 'timers-ptp-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull clocksource updates from Thomas Gleixner:
 "Updates for timekeeping and PTP core.

  The cross-timestamp mechanism which allows to correlate hardware clocks uses clocksource pointers for describing the correlation. That's suboptimal as drivers need to obtain the pointer, which requires needless exports and exposing internals. This can all be completely avoided by assigning clocksource IDs and using them for describing the correlated clock source.

  So this adds clocksource IDs to all clocksources in the tree which can be exposed to this mechanism and removes the pointer and now needless exports.

  A related improvement for the core and the correlation handling has not made it this time, but is expected to get ready for the next round"

* tag 'timers-ptp-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  kvmclock: Unexport kvmclock clocksource
  treewide: Remove system_counterval_t.cs, which is never read
  timekeeping: Evaluate system_counterval_t.cs_id instead of .cs
  ptp/kvm, arm_arch_timer: Set system_counterval_t.cs_id to constant
  x86/kvm, ptp/kvm: Add clocksource ID, set system_counterval_t.cs_id
  x86/tsc: Add clocksource ID, set system_counterval_t.cs_id
  timekeeping: Add clocksource ID to struct system_counterval_t
  x86/tsc: Correct kernel-doc notation
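After the series, the correlation point is identified by a clocksource ID instead of a pointer; roughly (a sketch of the post-series shape from memory, see include/linux/timekeeping.h for the authoritative definition):

  struct system_counterval_t {
        u64                     cycles; /* raw counter value read from the hardware */
        enum clocksource_ids    cs_id;  /* which clocksource those cycles belong to */
  };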
2024-03-11Merge branch 'mlxsw-support-for-nexthop-group-statistics'Jakub Kicinski
Petr Machata says:

====================
mlxsw: Support for nexthop group statistics

ECMP is a fundamental component in L3 designs. However, it's fragile. Many factors influence whether an ECMP group will operate as intended: hash policy (i.e. the set of fields that contribute to ECMP hash calculation), neighbor validity, hash seed (which might lead to polarization) or the type of ECMP group used (hash-threshold or resilient). At the same time, collection of statistics that would help an operator determine that the group performs as desired, is difficult.

Support for nexthop group statistics and their HW collection has been introduced recently. In this patch set, add HW stats collection support to mlxsw.

This patchset progresses as follows:

- Patches #1 and #2 add nexthop IDs to notifiers.
- Patches #3 and #4 are code-shaping.
- Patches #5, #6 and #7 adjust the flow counter code.
- Patches #8 and #9 add HW nexthop counters.
- Patch #10 adjusts the HW counter code to allow sharing the same counter for several resilient group buckets with the same NH ID.
- Patch #11 adds a selftest.
====================

Link: https://lore.kernel.org/r/cover.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11selftests: forwarding: Add a test for NH group statsPetr Machata
Add to lib.sh support for fetching NH stats, and a new library, router_mpath_nh_lib.sh, with the common code for testing NH stats. Use the latter from router_mpath_nh.sh and router_mpath_nh_res.sh. The test works by sending traffic through a NH group, and checking that the reported values correspond to what the link that ultimately receives the traffic reports having seen. Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/2a424c54062a5f1efd13b9ec5b2b0e29c6af2574.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11Merge tag 'smp-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull cpu core updates from Thomas Gleixner:
 "A small boring set of cleanups for the SMP and CPU hotplug code"

* tag 'smp-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  cpu: Remove stray semicolon
  smp: Make __smp_processor_id() 0-argument macro
  cpu: Mark cpu_possible_mask as __ro_after_init
  kernel/cpu: Convert snprintf() to sysfs_emit()
  cpu/hotplug: Delete an extraneous kernel-doc description
2024-03-11mlxsw: spectrum_router: Share nexthop counters in resilient groupsPetr Machata
For resilient groups, we can reuse the same counter for all the buckets that share the same nexthop. Keep a reference count per counter, and keep all these counters in a per-next hop group xarray, which serves as a NHID->counter cache. If a counter is already present for a given NHID, just take a reference and use the same counter. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/cdd00084533fc83ac5917562f54642f008205bf3.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
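A hedged sketch of the NHID -> counter cache idea described above (entry layout, function names and locking are illustrative, not the driver's actual code):

  #include <linux/xarray.h>
  #include <linux/slab.h>

  /* one HW counter shared by all buckets that resolve to the same nexthop ID */
  struct nh_counter_ref {
        unsigned int counter_index;   /* HW flow counter handle */
        unsigned int refcount;
  };

  /* look up or create the shared counter entry; 'xa' is per nexthop group */
  static struct nh_counter_ref *nh_counter_get(struct xarray *xa, u32 nh_id,
                                               unsigned int counter_index)
  {
        struct nh_counter_ref *ref = xa_load(xa, nh_id);

        if (ref) {
                ref->refcount++;      /* reuse the already-allocated counter */
                return ref;
        }

        ref = kzalloc(sizeof(*ref), GFP_KERNEL);
        if (!ref)
                return NULL;
        ref->counter_index = counter_index;
        ref->refcount = 1;
        if (xa_err(xa_store(xa, nh_id, ref, GFP_KERNEL))) {
                kfree(ref);
                return NULL;
        }
        return ref;
  }

  /* drop a reference; free the cache entry when the last bucket goes away */
  static void nh_counter_put(struct xarray *xa, u32 nh_id, struct nh_counter_ref *ref)
  {
        if (--ref->refcount)
                return;
        xa_erase(xa, nh_id);
        kfree(ref);
  }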
2024-03-11mlxsw: spectrum_router: Support nexthop group hardware statisticsPetr Machata
When hw_stats is set on a group, install nexthop counters on members of a group. Counter allocation request is moved from nexthop object initialization to the update code. The previous placement made sense: when the counters are enabled by dpipe, the counters are installed to all existing nexthops and all nexthops created from then on get them. For the finer-grained nexthop group statistics, this is unsuitable. The existing placement was kept for the IPv4 and IPv6 nexthops. Resilient group replacement emits a pre_replace notification, and then any bucket_replace notifications if there were any replacements at all. If the group is balanced and the nexthop composition of the replaced group didn't change, there will be no such notifiers. Therefore hook to the pre_replace notifier and mark all buckets for update, to un/install the counters. When reporting deltas for resilient groups, use the nexthop ID that we stored in a previous patch to look up to which nexthop a bucket contributes. Co-developed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/87495a72f187df2e5d491d02729c550d235fcc85.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11mlxsw: spectrum_router: Track NH ID's of group membersPetr Machata
The core interfaces for collecting per-NH statistics are built around nexthops even for resilient groups. Because mlxsw models each bucket as a nexthop, the core next hop that a given bucket contributes to needs to be looked up. In order to be able to match the two up, we need to track nexthop ID for members of group nexthop objects. For simplicity, do it for all nexthop objects, not just group members. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/184ceb6b154e08f5bcf116a705b0fcb01c31895c.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11mlxsw: spectrum_router: Add helpers for nexthop countersPetr Machata
The next patch will add the ability to share nexthop counters among mlxsw nexthops backed by the same core nexthop. To have a place to store reference count, the counter should be kept in a dedicated structure. In this patch, introduce the structure together with the related helpers, sans the refcount, which comes in the next patch. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/61f23fa4f8c5d7879f68dacd793d8ab7425f33c0.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11mlxsw: spectrum_router: Avoid allocating NH counters twicePetr Machata
mlxsw_sp_nexthop_counter_disable() decays to a nop when called on a disabled counter, but mlxsw_sp_nexthop_counter_enable() can't similarly be called on an enabled counter. This would be useful in the following patches. Add the missing condition. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/0cc9050e196366c1387ab5ee47f1cee8ecde9c86.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11mlxsw: spectrum: Allow fetch-and-clear of flow countersPetr Machata
For the report_delta-like interface that a previous patch added for collection of NH group statistics, it's easiest to read the counter and have the HW clear it right away. Thus, change mlxsw_sp_flow_counter_get() to take a bool indicating whether this should be done. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/6a096ede8ee92d5041e3832242c3bbc137198aba.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
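A hedged sketch of the resulting call shape (the parameter order and the counter_index variable are assumptions for illustration, not copied from the driver):

  /* read packet/byte counts; optionally have the HW clear the counter in the same query */
  int mlxsw_sp_flow_counter_get(struct mlxsw_sp *mlxsw_sp,
                                unsigned int counter_index, bool clear,
                                u64 *packets, u64 *bytes);

  /* delta-style collection: read-and-clear so the next read only reports new traffic */
  err = mlxsw_sp_flow_counter_get(mlxsw_sp, counter_index, true,
                                  &packets, &bytes);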
2024-03-11mlxsw: spectrum_router: Have mlxsw_sp_nexthop_counter_enable() return intPetr Machata
In order to be able to diagnose failures in counter allocation, have the function mlxsw_sp_nexthop_counter_enable() return an error code. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/e0bb5c0cc6234ade2ade1e92abac991359c3f446.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11mlxsw: spectrum_router: Rename two functionsPetr Machata
The function mlxsw_sp_nexthop_counter_alloc() doesn't directly allocate anything, and mlxsw_sp_nexthop_counter_free() doesn't directly free. For the following patches, we will need names for functions that actually do those things. Therefore rename to mlxsw_sp_nexthop_counter_enable() and mlxsw_sp_nexthop_counter_disable() to free up the namespace. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/f59272958697a718f090f59f892d32beabcd8972.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11net: nexthop: Have all NH notifiers carry NH IDPetr Machata
When sending the notifications to collect NH statistics for resilient groups, the driver will need to know the nexthop IDs in individual buckets to look up the right counter. To that end, move the nexthop ID from struct nh_notifier_grp_entry_info to nh_notifier_single_info. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/8f964cd50b1a56d3606ce7ab4c50354ae019c43b.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
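Roughly, the ID moves one level down in the notifier info structures (a sketch from memory, not the verbatim include/net/nexthop.h contents):

  /* before: the nexthop ID was only carried for group entries */
  struct nh_notifier_grp_entry_info {
        u8 weight;
        u32 id;                               /* nexthop ID of this entry */
        struct nh_notifier_single_info nh;
  };

  /* after: every single-nexthop info (including resilient buckets) carries the ID */
  struct nh_notifier_single_info {
        struct net_device *dev;
        u32 id;
        /* ... gateway address, flags ... */
  };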
2024-03-11net: nexthop: Initialize NH group ID in resilient NH group notifiersPetr Machata
The NEXTHOP_EVENT_RES_TABLE_PRE_REPLACE notifier currently keeps the group ID unset. That makes it impossible to look up the group for which the notifier is intended. This is not an issue at the moment, because the only client is netdevsim, and that just so that it veto replacements, which is a static property not tied to a particular group. But for any practical use, the ID is necessary. Set it. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/025fef095dcfb408042568bb5439da014d47239e.1709901020.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-11net: gro: move two declarations to include/net/gro.hEric Dumazet
Move gro_find_receive_by_type() and gro_find_complete_by_type() to include/net/gro.h where they belong. Also use _NET_GRO_H instead of _NET_IPV6_GRO_H to protect include/net/gro.h from multiple inclusions. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240308102230.296224-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
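A rough sketch of the resulting header shape (the two declarations are the existing GRO offload lookup helpers; everything else in the header is elided):

  /* include/net/gro.h */
  #ifndef _NET_GRO_H
  #define _NET_GRO_H

  struct packet_offload *gro_find_receive_by_type(__be16 type);
  struct packet_offload *gro_find_complete_by_type(__be16 type);

  #endif /* _NET_GRO_H */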
2024-03-11r8152: fix unknown device for choose_configurationHayes Wang
For the unknown device, rtl8152_cfgselector_choose_configuration() should return a negative value. Then, usb_choose_configuration() would set a configuration for CDC ECM or NCM mode. Otherwise, there is no usb interface driver for the device. Fixes: aa4f2b3e418e ("r8152: Choose our USB config with choose_configuration() rather than probe()") Signed-off-by: Hayes Wang <hayeswang@realtek.com> Link: https://lore.kernel.org/r/20240308075206.33553-436-nic_swsd@realtek.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>