path: root/tools
2024-01-03 perf x86 test: Add hybrid test for conflicting legacy/sysfs event (Ian Rogers)
The cpu-cycles event is both a legacy event and declared in /sys/devices/cpu_core/events/cpu-cycles. The cycles event is a legacy event with no sysfs version. Add a test that the sysfs version is preferred over the legacy encoding for cpu-cycles, while for cycles the legacy version is used. Suggested-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20240103170159.1435753-2-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2024-01-03 perf x86 test: Update hybrid expectations (Ian Rogers)
The legacy events cpu-cycles and instructions have sysfs event equivalents on x86 (see /sys/devices/cpu_core/events). As sysfs/JSON events are now higher in priority than legacy events, this causes the hybrid test expectations not to be met. To fix this, switch to legacy events that don't have sysfs versions: cpu-cycles becomes cycles and instructions becomes branches. Fixes: a24d9d9dc096fc0d ("perf parse-events: Make legacy events lower priority than sysfs/JSON") Reported-by: Arnaldo Carvalho de Melo <acme@kernel.org> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ian Rogers <irogers@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Closes: https://lore.kernel.org/lkml/ZYbm5L7tw7bdpDpE@kernel.org/ Link: https://lore.kernel.org/r/20240103170159.1435753-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2024-01-03 perf vendor events amd: Add Zen 4 memory controller events (Sandipan Das)
Make the jevents parser aware of the Unified Memory Controller (UMC) PMU and add events taken from Section 8.2.1 "UMC Performance Monitor Events" of the Processor Programming Reference (PPR) for AMD Family 19h Model 11h processors. The events capture UMC command activity such as CAS, ACTIVATE, PRECHARGE, etc., while the metrics derive data bus utilization and memory bandwidth from these events. Signed-off-by: Sandipan Das <sandipan.das@amd.com> Acked-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ananth Narayan <ananth.narayan@amd.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Cc: Stephane Eranian <eranian@google.com> Link: https://lore.kernel.org/r/e0d8a7e8ca8ee3e378d8029e80b456ac327d6419.1701238314.git.sandipan.das@amd.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2024-01-03 perf stat: Fix hard coded LL miss units (Ian Rogers)
Copy-paste error where LL cache misses are reported as l1i. Fixes: 0a57b910807ad163 ("perf stat: Use counts rather than saved_value") Suggested-by: Guillaume Endignoux <guillaumee@google.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Garry <john.g.garry@oracle.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20231211181242.1721059-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2024-01-03 perf record: Reduce memory for recording PERF_RECORD_LOST_SAMPLES event (Ian Rogers)
Reduce from PERF_SAMPLE_MAX_SIZE to "sizeof(*lost) + session->machines.host.id_hdr_size". Suggested-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20231207021627.1322884-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
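A minimal sketch of that change (illustrative; only the size expression above is quoted from the patch, the surrounding code is assumed):

  /* Was: lost = zalloc(PERF_SAMPLE_MAX_SIZE);
   * Allocate only what is actually written: the lost-samples record
   * plus the per-session id header. */
  lost = zalloc(sizeof(*lost) + session->machines.host.id_hdr_size);
  if (lost == NULL)
          return -ENOMEM;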
2024-01-03 perf env: Avoid recursively taking env->bpf_progs.lock (Ian Rogers)
Add variants of perf_env__insert_bpf_prog_info(), perf_env__insert_btf() and perf_env__find_btf() prefixed with __ to indicate the env->bpf_progs.lock is assumed held. Call these variants when the lock is held to avoid recursively taking it and potentially having a thread deadlock with itself. Fixes: f8dfeae009effc0b ("perf bpf: Show more BPF program info in print_bpf_prog_info()") Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: K Prateek Nayak <kprateek.nayak@amd.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ming Wang <wangming01@loongson.cn> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@amd.com> Link: https://lore.kernel.org/r/20231207014655.1252484-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
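This is the usual lock-split pattern: a double-underscore variant that documents "lock already held", plus a public wrapper that takes the lock. A sketch under that reading (not the exact perf source, which differs in detail):

  /* Caller must hold env->bpf_progs.lock. */
  void __perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
  {
          /* ... insert btf_node into env->bpf_progs.btfs ... */
  }

  void perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
  {
          down_write(&env->bpf_progs.lock);
          __perf_env__insert_btf(env, btf_node);
          up_write(&env->bpf_progs.lock);
  }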
2024-01-03 selftests/bpf: Convert profiler.c to bpf_cmp. (Alexei Starovoitov)
Convert profiler[123].c to "volatile compare" to compare the barrier_var() approach vs bpf_cmp_likely() vs bpf_cmp_unlikely().

bpf_cmp_unlikely() produces correct code, but takes much longer to verify:

  ./veristat -C -e prog,insns,states before after_with_unlikely
  Program                               Insns (A)  Insns (B)  Insns (DIFF)        States (A)  States (B)  States (DIFF)
  ------------------------------------  ---------  ---------  ------------------  ----------  ----------  -----------------
  kprobe__proc_sys_write                     1603      19606  +18003 (+1123.08%)         123        1678  +1555 (+1264.23%)
  kprobe__vfs_link                          11815      70305  +58490 (+495.05%)          971        4967  +3996 (+411.53%)
  kprobe__vfs_symlink                        5464      42896  +37432 (+685.07%)          434        3126  +2692 (+620.28%)
  kprobe_ret__do_filp_open                   5641      44578  +38937 (+690.25%)          446        3162  +2716 (+608.97%)
  raw_tracepoint__sched_process_exec         2770      35962  +33192 (+1198.27%)         226        3121  +2895 (+1280.97%)
  raw_tracepoint__sched_process_exit         1526       2135  +609 (+39.91%)             133         208  +75 (+56.39%)
  raw_tracepoint__sched_process_fork          265        337  +72 (+27.17%)               19          24  +5 (+26.32%)
  tracepoint__syscalls__sys_enter_kill      18782     140407  +121625 (+647.56%)        1286       12176  +10890 (+846.81%)

bpf_cmp_likely() is equivalent to barrier_var():

  ./veristat -C -e prog,insns,states before after_with_likely
  Program                               Insns (A)  Insns (B)  Insns (DIFF)    States (A)  States (B)  States (DIFF)
  ------------------------------------  ---------  ---------  --------------  ----------  ----------  -------------
  kprobe__proc_sys_write                     1603       1663  +60 (+3.74%)           123         127  +4 (+3.25%)
  kprobe__vfs_link                          11815      12090  +275 (+2.33%)          971         971  +0 (+0.00%)
  kprobe__vfs_symlink                        5464       5448  -16 (-0.29%)           434         426  -8 (-1.84%)
  kprobe_ret__do_filp_open                   5641       5739  +98 (+1.74%)           446         446  +0 (+0.00%)
  raw_tracepoint__sched_process_exec         2770       2608  -162 (-5.85%)          226         216  -10 (-4.42%)
  raw_tracepoint__sched_process_exit         1526       1526  +0 (+0.00%)            133         133  +0 (+0.00%)
  raw_tracepoint__sched_process_fork          265        265  +0 (+0.00%)             19          19  +0 (+0.00%)
  tracepoint__syscalls__sys_enter_kill      18782      18970  +188 (+1.00%)         1286        1286  +0 (+0.00%)
  kprobe__proc_sys_write                     2700       2809  +109 (+4.04%)          107         109  +2 (+1.87%)
  kprobe__vfs_link                          12238      12366  +128 (+1.05%)          267         269  +2 (+0.75%)
  kprobe__vfs_symlink                        7139       7365  +226 (+3.17%)          167         175  +8 (+4.79%)
  kprobe_ret__do_filp_open                   7264       7070  -194 (-2.67%)          180         182  +2 (+1.11%)
  raw_tracepoint__sched_process_exec         3768       3453  -315 (-8.36%)          211         199  -12 (-5.69%)
  raw_tracepoint__sched_process_exit         3138       3138  +0 (+0.00%)             83          83  +0 (+0.00%)
  raw_tracepoint__sched_process_fork          265        265  +0 (+0.00%)             19          19  +0 (+0.00%)
  tracepoint__syscalls__sys_enter_kill      26679      24327  -2352 (-8.82%)        1067        1037  -30 (-2.81%)
  kprobe__proc_sys_write                     1833       1833  +0 (+0.00%)            157         157  +0 (+0.00%)
  kprobe__vfs_link                           9995      10127  +132 (+1.32%)          803         803  +0 (+0.00%)
  kprobe__vfs_symlink                        5606       5672  +66 (+1.18%)           451         451  +0 (+0.00%)
  kprobe_ret__do_filp_open                   5716       5782  +66 (+1.15%)           462         462  +0 (+0.00%)
  raw_tracepoint__sched_process_exec         3042       3042  +0 (+0.00%)            278         278  +0 (+0.00%)
  raw_tracepoint__sched_process_exit         1680       1680  +0 (+0.00%)            146         146  +0 (+0.00%)
  raw_tracepoint__sched_process_fork          299        299  +0 (+0.00%)             25          25  +0 (+0.00%)
  tracepoint__syscalls__sys_enter_kill      18372      18372  +0 (+0.00%)           1558        1558  +0 (+0.00%)

default (mcpu=v3), no_alu32 and cpuv4 have similar differences.

Note one place where bpf_nop_mov() is used to work around the verifier's lack of a link between a scalar register and its spill to the stack.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20231226191148.48536-7-alexei.starovoitov@gmail.com
2024-01-03 bpf: Add bpf_nop_mov() asm macro. (Alexei Starovoitov)
The bpf_nop_mov(var) asm macro emits a no-op register move: rX = rX. If 'var' is a scalar and not a fixed constant, the verifier will assign an ID to it. If it's later spilled, the stack slot will carry that ID as well. Hence the range-refining comparison "if rX < const" will update all copies, including the spilled slot. This macro is a temporary workaround until the verifier gets smarter. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231226191148.48536-6-alexei.starovoitov@gmail.com
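A sketch of such a macro, consistent with the description above (the actual definition lives in tools/testing/selftests/bpf/bpf_experimental.h):

  /* Emit a no-op register move "rX = rX". If 'var' is a non-constant
   * scalar, the verifier assigns it an ID, so a later range-refining
   * compare also updates spilled copies. The (short) cast works around
   * LLVM's redundant shift pair on exactly-32-bit operands. */
  #define bpf_nop_mov(var) \
          asm volatile("%[reg]=%[reg]"::[reg]"r"((short)var))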
2024-01-03 selftests/bpf: Remove bpf_assert_eq-like macros. (Alexei Starovoitov)
Since the last user was converted to bpf_cmp, remove the bpf_assert_eq/ne/... macros. The __bpf_assert_op() macro is kept for experiments, since it's slightly more efficient than bpf_assert(bpf_cmp_unlikely()) until LLVM is fixed. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20231226191148.48536-5-alexei.starovoitov@gmail.com
2024-01-03 selftests/bpf: Convert exceptions_assert.c to bpf_cmp (Alexei Starovoitov)
Convert exceptions_assert.c to the bpf_cmp_unlikely() macro.

Since

  bpf_assert(bpf_cmp_unlikely(var, ==, 100));
  other code;

will generate assembly code:

  if r1 == 100 goto L2;
  r0 = 0
  call bpf_throw
  L1:
  other code;
  ...
  L2: goto L1;

LLVM generates a redundant basic block with an extra goto. LLVM will be fixed eventually. Right now it's less efficient than the __bpf_assert(var, ==, 100) macro that produces:

  if r1 == 100 goto L1;
  r0 = 0
  call bpf_throw
  L1:
  other code;

But the extra goto doesn't hurt the verification process.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/bpf/20231226191148.48536-4-alexei.starovoitov@gmail.com
2024-01-03bpf: Introduce "volatile compare" macrosAlexei Starovoitov
Compilers optimize conditional operators at will, but often BPF programmers want to force the compiler to keep the same operator in asm as it's written in C. Introduce bpf_cmp_likely/unlikely(var1, conditional_op, var2) macros that can be used as:

  -   if (seen >= 1000)
  +   if (bpf_cmp_unlikely(seen, >=, 1000))

The macros take advantage of BPF assembly that is C-like. The macros check the sign of the variable 'seen' and emit either a signed or an unsigned compare. For example:

  int a;
  bpf_cmp_unlikely(a, >, 0) will be translated to 'if rX s> 0 goto' in BPF assembly.

  unsigned int a;
  bpf_cmp_unlikely(a, >, 0) will be translated to 'if rX > 0 goto' in BPF assembly.

C type conversions coupled with comparison operators are tricky:

  int i = -1;
  unsigned int j = 1;
  if (i < j) // this is false.

  long i = -1;
  unsigned int j = 1;
  if (i < j) // this is true.

Make sure the BPF program is compiled with -Wsign-compare; then the macros will catch the mistake. The macros check the LHS (left hand side) only to figure out the sign of the compare. 'if 0 < rX goto' is not allowed in the assembly, so users have to use a variable on the LHS anyway.

The patch updates a few tests to demonstrate the use of the macros.

The macros also allow BPF_JSET to be used in C code, since LLVM doesn't generate it at present. For example:

  if (i & j) compiles into
  r0 &= r1;
  if r0 == 0 goto

while

  if (bpf_cmp_unlikely(i, &, j)) compiles into
  if r0 & r1 goto

Note that the macros have to be careful with the RHS assembly predicate. Since:

  u64 __rhs = 1ull << 42;
  asm goto("if r0 < %[rhs] goto +1" :: [rhs] "ri" (__rhs));

LLVM will silently truncate the 64-bit constant into a s32 imm.

Note that in [lhs] "r"((short)LHS) the type cast is a workaround for an LLVM issue. When LHS is exactly 32-bit, LLVM emits redundant <<=32, >>=32 to zero the upper 32 bits. When LHS is a 64-, 16- or 8-bit variable there are no shifts. When LHS is 32-bit the (u64) cast doesn't help; hence the (short) cast. It does _not_ truncate the variable before it's assigned to a register.

Traditional likely()/unlikely() macros that use __builtin_expect(!!(x), 1 or 0) have no effect on these macros, hence the macros implement the logic manually. The bpf_cmp_unlikely() macro preserves the compare operator as-is, while the bpf_cmp_likely() macro flips the compare.

Consider two cases:

  A.
  for() {
    if (foo >= 10) {
      bar += foo;
    }
    other code;
  }

  B.
  for() {
    if (foo >= 10)
      break;
    other code;
  }

It's ok to use either the bpf_cmp_likely or the bpf_cmp_unlikely macro in both cases, but consider that 'break' is effectively 'goto out_of_the_loop'. Hence it's better to use bpf_cmp_unlikely in case B, while 'bar += foo' is better kept as the fallthrough (== likely) code path in case A.

When it's written as:

  A.
  for() {
    if (bpf_cmp_likely(foo, >=, 10)) {
      bar += foo;
    }
    other code;
  }

  B.
  for() {
    if (bpf_cmp_unlikely(foo, >=, 10))
      break;
    other code;
  }

the assembly will look like:

  A.
  for() {
    if r1 < 10 goto L1;
    bar += foo;
  L1:
    other code;
  }

  B.
  for() {
    if r1 >= 10 goto L2;
    other code;
  }
  L2:

The choice of bpf_cmp_likely vs bpf_cmp_unlikely changes the basic block layout, hence it greatly influences the verification process: the number of processed instructions will differ, since the verifier walks the fallthrough first.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/bpf/20231226191148.48536-3-alexei.starovoitov@gmail.com
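Putting the A/B guidance above into compilable form, a small sketch (assuming the selftests' vmlinux.h and bpf_experimental.h environment):

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>
  #include "bpf_experimental.h"

  char _license[] SEC("license") = "GPL";

  int foo, bar;

  SEC("tp/syscalls/sys_enter_getpid")
  int cmp_example(void *ctx)
  {
          for (int i = 0; i < 100; i++) {
                  /* Case A: the body is the hot path, so keep it as the
                   * fallthrough; compiles to 'if r1 < 10 goto skip'. */
                  if (bpf_cmp_likely(foo, >=, 10))
                          bar += foo;

                  /* Case B: 'break' leaves the loop, so the taken branch
                   * is the cold path; compiles to 'if r1 >= 10 goto out'. */
                  if (bpf_cmp_unlikely(foo, >=, 10))
                          break;
          }
          return 0;
  }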
2024-01-03 selftests/bpf: Attempt to build BPF programs with -Wsign-compare (Alexei Starovoitov)
GCC's -Wall includes -Wsign-compare while clang's does not. Since BPF programs are built with clang, we need to add this flag explicitly to catch problematic comparisons like:

  int i = -1;
  unsigned int j = 1;
  if (i < j) // this is false.

  long i = -1;
  unsigned int j = 1;
  if (i < j) // this is true.

C standard for reference:

- If either operand is unsigned long, the other shall be converted to unsigned long.

- Otherwise, if one operand is a long int and the other unsigned int, then if a long int can represent all the values of an unsigned int, the unsigned int shall be converted to a long int; otherwise both operands shall be converted to unsigned long int.

- Otherwise, if either operand is long, the other shall be converted to long.

- Otherwise, if either operand is unsigned, the other shall be converted to unsigned.

Unfortunately clang's -Wsign-compare is very noisy. It complains about (s32)a == (u32)b, which is safe and doesn't have surprising behavior. This patch fixes some of the issues; a follow-up is needed to fix the rest.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/bpf/20231226191148.48536-2-alexei.starovoitov@gmail.com
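A standalone demonstration of the two conversions quoted above; building this with -Wsign-compare flags the first comparison:

  #include <stdio.h>

  int main(void)
  {
          int i = -1;
          unsigned int j = 1;
          long k = -1;

          /* i is converted to unsigned int: (unsigned)-1 == UINT_MAX,
           * so the comparison is false. */
          printf("int < unsigned int: %d\n", i < j);

          /* On LP64, long can represent every unsigned int value, so j
           * is converted to long and the comparison is true, as expected. */
          printf("long < unsigned int: %d\n", k < j);
          return 0;
  }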
2024-01-03 bpf: Add a possibly-zero-sized read test (Andrei Matei)
This patch adds a test for the condition that the previous patch mucked with: illegal zero-sized helper memory access. Unlike existing tests, this new one uses a size whose lower bound is zero, rather than a known-zero one. Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231221232225.568730-3-andreimatei1@gmail.com
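A hedged sketch of the shape of such a test (the helper and section name are illustrative assumptions, not the actual selftest); the point is that 'size' has the verifier-visible range [0, 7] rather than being a known zero:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  SEC("tp/syscalls/sys_enter_nanosleep")
  int possibly_zero_sized_read(void *ctx)
  {
          char buf[8] = {};
          /* Lower bound is zero without being a known-zero constant. */
          __u32 size = bpf_get_prandom_u32() & 7;

          bpf_probe_read_kernel(buf, size, ctx);
          return 0;
  }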
2024-01-03 bpf: Simplify checking size of helper accesses (Andrei Matei)
This patch simplifies the verification of size arguments associated with pointer arguments to helpers and kfuncs. Many helpers take a pointer argument followed by the size of the memory access to be performed through that pointer.

Before this patch, the handling of the size argument in check_mem_size_reg() was confusing and wasteful: if the size register's lower bound was 0, then the verification was done twice: once considering the size of the access to be the lower bound of the respective argument, and once considering the upper bound (even if the two are the same). The upper-bound checking is a super-set of the lower-bound checking(*), except that the only point of the lower-bound check is to handle the case where zero-sized accesses are explicitly not allowed and the lower bound is zero. This static condition is now checked explicitly, replacing a much more complex, expensive and confusing verification call to check_helper_mem_access().

Error messages change in this patch. Before, messages about illegal zero-size accesses depended on the type of the pointer and on other conditions, and sometimes the message was plain wrong: in some tests that changed you'll see that the old message was something like "R1 min value is outside of the allowed memory range", where R1 is the pointer register; the error wrongly claimed that the pointer was bad instead of the size being bad. Other times the information that the size came from a register with a possible range of values was wrong, and the error presented the size as a fixed zero. Now the errors refer to the right register. However, the old error messages did contain useful information about the pointer register which is now lost; recovering this information was deemed not important enough.

(*) Besides standing to reason that the checks for a bigger size access are a super-set of the checks for a smaller size access, I have also mechanically verified this by reading the code for all types of pointers. I could convince myself that it's true for all but PTR_TO_BTF_ID (check_ptr_to_btf_access). There, simply looking line-by-line does not immediately prove what we want. If anyone has any qualms, let me know.

Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20231221232225.568730-2-andreimatei1@gmail.com
2024-01-03 RISC-V: selftests: Add which-cpus hwprobe test (Andrew Jones)
Test the RISCV_HWPROBE_WHICH_CPUS flag of hwprobe. The test also has a command line interface in order to get the cpu list for arbitrary hwprobe pairs. Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231122164700.127954-10-ajones@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-01-03 RISC-V: hwprobe: Clarify cpus size parameter (Andrew Jones)
The "count" parameter associated with the 'cpus' parameter of the hwprobe syscall is the size in bytes of 'cpus'. Naming it 'cpu_count' may mislead users (it did me) to think it's the number of CPUs that are or can be represented by 'cpus' instead. This is particularly easy (IMO) to get wrong since 'cpus' is documented to be defined by CPU_SET(3) and CPU_SET(3) also documents a CPU_COUNT() (the number of CPUs in set) macro. CPU_SET(3) refers to the size of cpu sets with 'setsize'. Adopt 'cpusetsize' for the hwprobe parameter and specifically state it is in bytes in Documentation/riscv/hwprobe.rst to clarify. Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231122164700.127954-7-ajones@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-01-03 selftests/landlock: Rename "permitted" to "allowed" in ftruncate tests (Günther Noack)
Suggested-by: Mickaël Salaün <mic@digikod.net> Signed-off-by: Günther Noack <gnoack@google.com> Link: https://lore.kernel.org/r/20231208155121.1943775-3-gnoack@google.com Signed-off-by: Mickaël Salaün <mic@digikod.net>
2024-01-02 Merge tag 'kvm-riscv-6.8-1' of https://github.com/kvm-riscv/linux into HEAD (Paolo Bonzini)
KVM/riscv changes for 6.8 part #1

- KVM_GET_REG_LIST improvement for vector registers
- Generate ISA extension reg_list using macros in get-reg-list selftest
- Steal time accounting support along with selftest
2024-01-02 Merge tag 'loongarch-kvm-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson into HEAD (Paolo Bonzini)
LoongArch KVM changes for v6.8

1. Optimization for memslot hugepage checking.
2. Cleanup and fix some HW/SW timer issues.
3. Add LSX/LASX (128bit/256bit SIMD) support.
2024-01-02 net/sched: Remove uapi support for CBQ qdisc (Jamal Hadi Salim)
Commit 051d44209842 ("net/sched: Retire CBQ qdisc") retired the CBQ qdisc. Remove UAPI for it. Iproute2 will sync by equally removing it from user space. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 net/sched: Remove uapi support for ATM qdisc (Jamal Hadi Salim)
Commit fb38306ceb9e ("net/sched: Retire ATM qdisc") retired the ATM qdisc. Remove UAPI for it. Iproute2 will sync by equally removing it from user space. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 net/sched: Remove uapi support for dsmark qdisc (Jamal Hadi Salim)
Commit bbe77c14ee61 ("net/sched: Retire dsmark qdisc") retired the dsmark classifier. Remove UAPI support for it. Iproute2 will sync by equally removing it from user space. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 net/sched: Remove uapi support for tcindex classifier (Jamal Hadi Salim)
Commit 8c710f75256b ("net/sched: Retire tcindex classifier") retired the TC tcindex classifier. Remove UAPI for it. Iproute2 will sync by equally removing it from user space. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 net/sched: Remove uapi support for rsvp classifier (Jamal Hadi Salim)
Commit 265b4da82dbf ("net/sched: Retire rsvp classifier") retired the TC RSVP classifier. Remove UAPI for it. Iproute2 will sync by equally removing it from user space. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 selftests: bonding: do not set port down when adding to bond (Hangbin Liu)
Similar to commit be809424659c ("selftests: bonding: do not set port down before adding to bond"). The bond-arp-interval-causes-panic test failed after commit a4abfa627c38 ("net: rtnetlink: Enslave device before bringing it up"), as the kernel will now set the port down _after_ adding it to the bond if the port was set down explicitly. Fix it by removing the link-down operation when adding to the bond. Fixes: 2ffd57327ff1 ("selftests: bonding: cause oops in bond_rr_gen_slave_id") Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Tested-by: Benjamin Poirier <benjamin.poirier@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 selftests: mptcp: diag: check CURRESTAB counters (Geliang Tang)
This patch adds a new helper chk_msk_cestab() to check the current established connections counter MIB_CURRESTAB in diag.sh. Invoke it to check the counter during the connection after every chk_msk_inuse(). Signed-off-by: Geliang Tang <geliang.tang@linux.dev> Reviewed-by: Matthieu Baerts <matttbe@kernel.org> Signed-off-by: Matthieu Baerts <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 selftests: mptcp: join: check CURRESTAB counters (Geliang Tang)
This patch adds a new helper chk_cestab_nr() to check the current established connections counter MIB_CURRESTAB. Set the newly added variables cestab_ns1 and cestab_ns2 to indicate how many connections are expected in ns1 or ns2. Invoke check_cestab() to check the counter during the connection in do_transfer() and invoke chk_cestab_nr() to re-check it when the connection closed. These checks are embedded in add_tests(). Signed-off-by: Geliang Tang <geliang.tang@linux.dev> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Matthieu Baerts <matttbe@kernel.org> Signed-off-by: Matthieu Baerts <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 selftest/tcp-ao: Work on namespace-ified sysctl_optmem_max (Dmitry Safonov)
Since commit f5769faeec36 ("net: Namespace-ify sysctl_optmem_max"), optmem_max is per-netns, so there is no need to switch to the root namespace. It seems trivial to keep the old logic working, so keep it for a while (at least until a kernel with the netns optmem_max is released). Currently there is a test that checks that the optmem_max limit applies to TCP-AO keys, and a little benchmark that measures linked-list TCP-AO key scaling; both are fixed by this. Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Dmitry Safonov <dima@arista.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 selftest/tcp-ao: Set routes in a proper VRF table id (Dmitry Safonov)
In the unsigned-md5 selftests, ip_route_add() is not needed in client_add_ip(): the route was pre-set up in __test_init() => link_init() for the subnet, rather than for a specific IP address. Currently, __ip_route_add() mistakenly always sets the VRF table to RT_TABLE_MAIN; this seems to have sneaked in during unsigned-md5 test debugging. That also explains why ip_route_add_vrf() ignored EEXIST returned by fib6. Yet, keep ignoring EEXIST in the bench-lookups selftests, as those selftests are expected to add the same (duplicate) routes. Reported-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Dmitry Safonov <dima@arista.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 net/sched: Retire ipt action (Jamal Hadi Salim)
The tc ipt action was intended to run all netfilter/iptables targets. Unfortunately it has not benefitted over the years from proper updates when netfilter changed, and for that reason it has remained rudimentary. Pinging a bunch of people that I was aware were using it indicates that removing it won't affect them. Retire it to reduce maintenance efforts. Buh-bye. Reviewed-by: Victor Noguiera <victor@mojatatu.com> Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-02 tools/thermal/tmon: Fix compilation warning for wrong format (Florian Eckert)
The following warnings are shown during compilation:

  tui.c: In function 'show_cooling_device':
  tui.c:216:40: warning: format '%d' expects argument of type 'int', but argument 7 has type 'long unsigned int' [-Wformat=]
    216 |          "%02d %12.12s%6d %6d",
        |                       ~~^
        |                         |
        |                         int
        |                       %6ld
  ......
    219 |          ptdata.cdi[j].cur_state,
        |          ~~~~~~~~~~~~~~~~~~~~~~~
        |                        |
        |                        long unsigned int
  tui.c:216:44: warning: format '%d' expects argument of type 'int', but argument 8 has type 'long unsigned int' [-Wformat=]
    216 |          "%02d %12.12s%6d %6d",
        |                           ~~^
        |                             |
        |                             int
        |                           %6ld
  ......
    220 |          ptdata.cdi[j].max_state);
        |          ~~~~~~~~~~~~~~~~~~~~~~~
        |                        |
        |                        long unsigned int

To fix this, the correct format specifier must be used when printing.

Signed-off-by: Florian Eckert <fe@dev.tdt.de>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20231204141335.2798194-1-fe@dev.tdt.de
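The fix the warning calls for is matching the conversion specifier to the 'long unsigned int' operands. A minimal sketch (surrounding arguments abbreviated; only the field names come from the warning, the rest is assumed):

  /* was: "%02d %12.12s%6d %6d" */
  snprintf(buf, sizeof(buf), "%02d %12.12s%6lu %6lu",
           instance, type,              /* int and string, unchanged */
           ptdata.cdi[j].cur_state,     /* long unsigned int */
           ptdata.cdi[j].max_state);    /* long unsigned int */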
2024-01-01 Merge tag 'nf-next-23-12-22' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next (David S. Miller)
Pablo Neira Ayuso says:
====================
netfilter pull request 23-12-22

The following patchset contains Netfilter updates for net-next:

1) Add locking for NFT_MSG_GETSETELEM_RESET requests, to address a race scenario with two concurrent processes running a dump-and-reset which exposes negative counters to userspace, from Phil Sutter.

2) Use GFP_KERNEL in pipapo GC, from Florian Westphal.

3) Reorder nf_flowtable struct members, place the read-mostly parts accessed by the datapath first. From Florian Westphal.

4) Set on dead flag for NFT_MSG_NEWSET in abort path, from Florian Westphal.

5) Support filtering zone in ctnetlink, from Felix Huettner.

6) Bail out if user tries to redefine an existing chain with different type in nf_tables.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2024-01-01 Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next (David S. Miller)
Daniel Borkmann says:
====================
bpf-next-for-netdev

The following pull-request contains BPF updates for your *net-next* tree.

We've added 22 non-merge commits during the last 3 day(s) which contain a total of 23 files changed, 652 insertions(+), 431 deletions(-).

The main changes are:

1) Add verifier support for annotating user's global BPF subprogram arguments with few commonly requested annotations for a better developer experience, from Andrii Nakryiko. These tags are:
   - Ability to annotate a special PTR_TO_CTX argument
   - Ability to annotate a generic PTR_TO_MEM as non-NULL

2) Support BPF verifier tracking of BPF_JNE which helps cases when the compiler transforms (unsigned) "a > 0" into "if a == 0 goto xxx" and the like, from Menglong Dong.

3) Fix a warning in bpf_mem_cache's check_obj_size() as reported by LKP, from Hou Tao.

4) Re-support uid/gid options when mounting bpffs which had to be reverted with the prior token series revert to avoid conflicts, from Daniel Borkmann.

5) Fix a libbpf NULL pointer dereference in bpf_object__collect_prog_relos() found from fuzzing the library with malformed ELF files, from Mingyi Zhang.

6) Skip DWARF sections in libbpf's linker sanity check given compiler options to generate compressed debug sections can trigger a rejection due to misalignment, from Alyssa Ross.

7) Fix an unnecessary use of the comma operator in BPF verifier, from Simon Horman.

8) Fix format specifier for unsigned long values in cpustat sample, from Colin Ian King.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2023-12-30 RISC-V: KVM: selftests: Add get-reg-list test for STA registers (Andrew Jones)
Add SBI STA and its two registers to the get-reg-list test. Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-30 RISC-V: KVM: selftests: Add steal_time test support (Andrew Jones)
With the introduction of steal-time accounting support for RISC-V KVM we can add RISC-V support to the steal_time test. Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-30 RISC-V: KVM: selftests: Add guest_sbi_probe_extension (Andrew Jones)
Add guest_sbi_probe_extension(), allowing guest code to probe for SBI extensions. As guest_sbi_probe_extension() needs SBI_ERR_NOT_SUPPORTED, take the opportunity to bring in all SBI error codes. We don't bring in all current extension IDs or base extension function IDs though, even though we need one of each, because we'd prefer to bring those in as necessary. Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-30 RISC-V: KVM: selftests: Move sbi_ecall to processor.c (Andrew Jones)
sbi_ecall() isn't ucall specific and its prototype is already in processor.h. Move its implementation to processor.c. Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-29 selftests/mm: log run_vmtests.sh results in TAP format (Ryan Roberts)
When running tests on a CI system (e.g. LAVA) it is useful to output test results in TAP (Test Anything Protocol) format so that the CI can parse the fine-grained results to show regressions. Many of the mm selftest binaries already output using the TAP format, and the kselftests runner (run_kselftest.sh) also uses the format.

CI systems such as LAVA can already handle nested TAP reports. However, with the mm selftests we have 3 levels of nesting (run_kselftest.sh -> run_vmtests.sh -> individual test binaries) and the middle level did not previously support TAP, which breaks the parser. Let's fix that by teaching run_vmtests.sh to output using the TAP format.

Ideally this would be opt-in via a command line argument to avoid the possibility of breaking anyone's existing scripts that might scrape the output. However, it is not possible to pass arguments to tests invoked via run_kselftest.sh. So I've implemented an opt-out option (-n), which will revert to the existing output format.

Future changes to this file should be aware of 2 new conventions:
- output that is part of the TAP reporting is piped through tap_output
- general output is piped through tap_prefix

Link: https://lkml.kernel.org/r/20231214162434.3580009-1-ryan.roberts@arm.com
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: John Hubbard <jhubbard@nvidia.com>
Cc: Aishwarya TCV <aishwarya.tcv@arm.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
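For reference, nested TAP output has the shape below (illustrative lines only, not actual run_vmtests.sh output):

  TAP version 13
  1..2
  # running ./test_one
  ok 1 test_one
  # running ./test_two
  ok 2 test_two # SKIP insufficient hugepages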
2023-12-29 selftests/mm: add UFFDIO_MOVE ioctl test (Suren Baghdasaryan)
Add tests for new UFFDIO_MOVE ioctl which uses uffd to move source into destination buffer while checking the contents of both after the move. After the operation the content of the destination buffer should match the original source buffer's content while the source buffer should be zeroed. Separate tests are designed for PMD aligned and unaligned cases because they utilize different code paths in the kernel. Link: https://lkml.kernel.org/r/20231206103702.3873743-6-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: Brian Geffon <bgeffon@google.com> Cc: Christian Brauner <brauner@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jann Horn <jannh@google.com> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Lokesh Gidra <lokeshgidra@google.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Nicolas Geoffray <ngeoffray@google.com> Cc: Peter Xu <peterx@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: ZhangPeng <zhangpeng362@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
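A hedged sketch of the ioctl the test exercises, based on the uffdio_move UAPI merged alongside it (error handling trimmed; treat the field usage as an assumption drawn from the commit description):

  #include <sys/ioctl.h>
  #include <linux/userfaultfd.h>

  /* Move 'len' bytes from 'src' to 'dst' through a userfaultfd. On
   * success the destination holds the old source contents and the
   * source range reads back as zeroes. */
  static int move_range(int uffd, unsigned long dst, unsigned long src,
                        unsigned long len)
  {
          struct uffdio_move move = {
                  .dst = dst,
                  .src = src,
                  .len = len,
                  .mode = 0,
          };

          return ioctl(uffd, UFFDIO_MOVE, &move);
  }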
2023-12-29 selftests/mm: add uffd_test_case_ops to allow test case-specific operations (Suren Baghdasaryan)
Currently each test can specify unique operations using uffd_test_ops, however these operations are per-memory type and not per-test. Add uffd_test_case_ops which each test case can customize for its own needs regardless of the memory type being used. Pre- and post-allocation operations are added, some of which will be used in the next patch to implement test-specific operations like madvise after memory is allocated but before it is accessed. Link: https://lkml.kernel.org/r/20231206103702.3873743-5-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: Brian Geffon <bgeffon@google.com> Cc: Christian Brauner <brauner@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jann Horn <jannh@google.com> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Lokesh Gidra <lokeshgidra@google.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Nicolas Geoffray <ngeoffray@google.com> Cc: Peter Xu <peterx@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: ZhangPeng <zhangpeng362@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
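The hooks described above presumably look something like this sketch (the exact layout in uffd-common.h may differ):

  struct uffd_test_case_ops {
          int (*pre_alloc)(const char **errmsg);  /* before the test area is allocated */
          int (*post_alloc)(const char **errmsg); /* after allocation, before first access */
  };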
2023-12-29 selftests/mm: call uffd_test_ctx_clear at the end of the test (Suren Baghdasaryan)
uffd_test_ctx_clear() is being called from uffd_test_ctx_init() to unmap areas used in the previous test run. This approach is problematic because while unmapping areas uffd_test_ctx_clear() uses page_size and nr_pages which might differ from one test run to another. Fix this by calling uffd_test_ctx_clear() after each test is done. Link: https://lkml.kernel.org/r/20231206103702.3873743-4-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Axel Rasmussen <axelrasmussen@google.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Brian Geffon <bgeffon@google.com> Cc: Christian Brauner <brauner@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jann Horn <jannh@google.com> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Lokesh Gidra <lokeshgidra@google.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Nicolas Geoffray <ngeoffray@google.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: ZhangPeng <zhangpeng362@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-12-29 Merge branch 'topic/cs35l41' into for-next (Takashi Iwai)
Pull CS35L41 codec extension series. Signed-off-by: Takashi Iwai <tiwai@suse.de>
2023-12-29 RISC-V: KVM: selftests: Treat SBI ext regs like ISA ext regs (Andrew Jones)
SBI extension registers may not be present and indeed when running on a platform without sscofpmf the PMU SBI extension is not. Move the SBI extension registers from the base set of registers to the filter list. Individual configs should test for any that may or may not be present separately. Since the PMU extension may disappear and the DBCN extension is only present in later kernels, separate them from the rest into their own configs. The rest are lumped together into the same config. Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-29 KVM: riscv: selftests: Use register subtypes (Andrew Jones)
Always use register subtypes in the get-reg-list test when registers have them. The only registers neglecting to do so were ISA extension registers. While we don't really need to use KVM_REG_RISCV_ISA_SINGLE (since it's zero), the main purpose is to avoid confusion and to self-document the tests. Also add print support for the multi registers like SBI extensions have, even though they're only used for debugging. Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Haibo Xu <haibo1.xu@intel.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-29 KVM: riscv: selftests: Add RISCV_SBI_EXT_REG (Andrew Jones)
While adding RISCV_SBI_EXT_REG(), acknowledge that some registers have subtypes and extend __kvm_reg_id() to take a subtype field. Then, update all macros to set the new field appropriately. The general CSR macro gets renamed to include "GENERAL", but the other macros, like the new RISCV_SBI_EXT_REG, just use the SINGLE subtype. Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-29 KVM: riscv: selftests: Drop SBI multi registers (Andrew Jones)
These registers are no longer getting added to get-reg-list. We keep sbi_ext_multi_id_to_str() for printing, even though we don't expect it to normally be used, because it may be useful for debug. Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-29 KVM: riscv: selftests: Generate ISA extension reg_list using macros (Anup Patel)
Various ISA extension reg_lists follow a common pattern, so let us generate them using macros. We define two macros for this purpose:

1) KVM_ISA_EXT_SIMPLE_CONFIG - macro to generate the reg_list for an ISA extension without any additional ONE_REG registers
2) KVM_ISA_EXT_SUBLIST_CONFIG - macro to generate the reg_list for an ISA extension with additional ONE_REG registers

This patch also adds the missing config for svnapot.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
2023-12-29 crypto: tcrypt - add script tcrypt_speed_compare.py (WangJinchao)
Create a script for comparing tcrypt speed test logs. The script will systematically analyze differences item by item and provide a summary (average). This tool is useful for evaluating the stability of cryptographic module algorithms and assisting with performance optimization. Please note that for such a comparison, stability depends on whether we allow the frequency to float or pin the frequency.

The script produces comparisons for two scenarios:

1. For operations per second:

  ================================================================================
  rfc4106(gcm(aes)) (pcrypt(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) encryption
  --------------------------------------------------------------------------------
  bit key | byte blocks | base ops | new ops | differ(%)
      160 |          16 |    66439 |   63063 |     -5.08
      160 |          64 |    62220 |   57439 |     -7.68
  ...
      288 |        4096 |    15059 |   16278 |      8.09
      288 |        8192 |     9043 |    9526 |      5.34
  --------------------------------------------------------------------------------
  average differ(%s) | total_differ(%)
  --------------------------------------------------------------------------------
                5.70 |           -4.49
  ================================================================================

2. For average cycles per operation:

  ================================================================================
  rfc4106(gcm(aes)) (pcrypt(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) encryption
  --------------------------------------------------------------------------------
  bit key | byte blocks | base cycles | new cycles | differ(%)
      160 |          16 |       32500 |      35847 |      10.3
      160 |          64 |       33175 |      45808 |     38.08
  ...
      288 |        4096 |      131369 |     132132 |      0.58
      288 |        8192 |      229503 |     234581 |      2.21
  --------------------------------------------------------------------------------
  average differ(%s) | total_differ(%)
  --------------------------------------------------------------------------------
                8.41 |           -6.70
  ================================================================================

Signed-off-by: WangJinchao <wangjinchao@xfusion.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-28 sysctl: Clarify the results of selftest run (Joel Granados)
In some cases the results of tests were hidden inside stdout and it was difficult to identify when a test was skipped and why. List of changes:

1. Capitalize all the words that express a test result: "OK", "SKIPPED" and "FAIL".
2. Place all test result text at the end of the message. This prevents the result from being hidden when stdout is verbose.
3. Any other explanation that comes after the result text is placed on a new line.
4. All failures are marked as "FAIL".
5. Piped the failures to stderr in tests 8, 9 and 10.
6. Replaced a bogus "FAIL" with "SKIPPED" in test 0007.
7. All "..." are prefixed and followed by a space.

Signed-off-by: Joel Granados <j.granados@samsung.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2023-12-28 sysctl: Add a selftest for handling empty dirs (Joel Granados)
Basic test to ensure that empty directories can be registered and that they in turn can serve as a base dir for other registrations. Add one test to the sysctl selftest module. It first registers an empty directory under "empty_add" and then uses that as a base to register another empty dir. The sysctl bash script then checks that "empty_add" is present and that there is an empty directory within it. Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>