path: root/tools
2021-10-27  perf bench futex: Add support for 32-bit systems with 64-bit time_t  (Alistair Francis)
Some 32-bit architectures (such as 32-bit RISC-V) only have a 64-bit time_t and as such don't have the SYS_futex syscall. This patch allows us to use the SYS_futex_time64 syscall on those platforms. It also converts the futex calls to be y2038 safe (when built for a 5.1+ kernel).

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alistair Francis <alistair23@gmail.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Darren Hart <dvhart@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-riscv@lists.infradead.org
Link: http://lore.kernel.org/lkml/20211022013343.2262938-2-alistair.francis@opensource.wdc.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
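A minimal sketch of the approach, not the exact code from the patch (the 32-/64-bit timespec conversion the real wrapper also needs is omitted): prefer SYS_futex when the C library exposes it, and fall back to SYS_futex_time64 on 32-bit targets that only have the 64-bit time_t variant.

  #include <stdint.h>
  #include <time.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  static inline int futex_syscall(volatile uint32_t *uaddr, int op, uint32_t val,
                                  struct timespec *timeout, uint32_t *uaddr2, int val3)
  {
  #if defined(SYS_futex)
          return syscall(SYS_futex, uaddr, op, val, timeout, uaddr2, val3);
  #elif defined(SYS_futex_time64)
          /* Only reached on 32-bit platforms without SYS_futex, e.g. 32-bit RISC-V;
           * the timespec handed to the kernel must then use a 64-bit time_t. */
          return syscall(SYS_futex_time64, uaddr, op, val, timeout, uaddr2, val3);
  #else
  #error "no futex syscall available"
  #endif
  }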
2021-10-27  perf bench futex: Call the futex syscall from a function  (Alistair Francis)
In preparation for a more complex futex() function let's convert the current macro into two functions. We need two functions to avoid compiler failures as the macro is overloaded. This will allow us to include pre-processor conditionals in the futex syscall functions. Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alistair Francis <alistair23@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Atish Patra <atish.patra@wdc.com> Cc: Darren Hart <dvhart@infradead.org> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-riscv@lists.infradead.org Link: http://lore.kernel.org/lkml/20211022013343.2262938-1-alistair.francis@opensource.wdc.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-27  perf intel-pt: Support itrace d+o option to direct debug log to stdout  (Adrian Hunter)
It can be useful to see debug output in between normal output. Add support for AUXTRACE_LOG_FLG_USE_STDOUT to Intel PT. Reviewed-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: https://lore.kernel.org/r/20211027080334.365596-7-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-27  perf auxtrace: Add itrace d+o option to direct debug log to stdout  (Adrian Hunter)
It can be useful to see debug output in between normal output. Add 'o' to the flags of debug option 'd', so that '--itrace=d+o' can specify output of the debug log to stdout. Reviewed-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: https://lore.kernel.org/r/20211027080334.365596-6-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-27  perf dlfilter: Add dlfilter-show-cycles  (Adrian Hunter)
Add a new dlfilter to show cycles. Cycle counts are accumulated per CPU (or per thread if CPU is not recorded) from IPC information, and printed together with the change since the last print, at the start of each line. Separate counts are kept for branches, instructions or other events. Note also, the itrace A option can be useful to provide higher granularity cycle information. Example: $ perf record -e intel_pt/cyc/u uname Linux [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.044 MB perf.data ] $ perf script --itrace=A --call-trace --dlfilter dlfilter-show-cycles.so --deltatime | head 0 perf-exec 8509 [001] 0.000000000: psb offs: 0 0 perf-exec 8509 [001] 0.000000000: cbr: 42 freq: 4219 MHz (156%) 833 833 uname 8509 [001] 0.000047689: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _start 833 uname 8509 [001] 0.000003261: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start 2015 1182 uname 8509 [001] 0.000000282: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start 2676 661 uname 8509 [001] 0.000002629: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start 3612 936 uname 8509 [001] 0.000001232: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start 4579 967 uname 8509 [001] 0.000002519: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_start 6145 1566 uname 8509 [001] 0.000001050: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_setup_hash 6239 94 uname 8509 [001] 0.000000023: (/usr/lib/x86_64-linux-gnu/ld-2.31.so ) _dl_sysdep_start Reviewed-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: https://lore.kernel.org/r/20211027080334.365596-5-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-27  perf intel-pt: Support itrace A option to approximate IPC  (Adrian Hunter)
Normally, for cycle-accurate mode, IPC values are an exact number of instructions and cycles. Due to the granularity of timestamps, that happens only when a CYC packet correlates to the event. Support the itrace 'A' option to use, instead, the number of cycles associated with the current timestamp. This provides IPC information for every change of timestamp, but at the expense of accuracy: due to the granularity of timestamps, the actual number of cycles increases even though the reported cycle count does not. The number of instructions is known, but if IPC is reported, the cycle count can be too low and so the IPC too high. Note that the inaccuracy decreases as the sampling period increases, i.e. if the number of cycles is too low by a small amount, that becomes less significant when the number of cycles is large. Furthermore, this option can be used in conjunction with dlfilter-show-cycles.so to provide higher granularity cycle information.

Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20211027080334.365596-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-27  perf auxtrace: Add itrace A option to approximate IPC  (Adrian Hunter)
Add an option to specify that synthesized IPC can be approximate, rather than completely accurate. Reviewed-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: https://lore.kernel.org/r/20211027080334.365596-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-27  perf auxtrace: Add missing Z option to ITRACE_HELP  (Adrian Hunter)
ITRACE_HELP is used by perf commands to display help text for the --itrace option. Add missing Z option. Reviewed-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: https://lore.kernel.org/r/20211027080334.365596-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-27  selftests/bpf: Adding a namespace reset for tc_redirect  (Yucong Sun)
This patch deletes the ns_src/ns_dst/ns_redir namespaces before recreating them, making the test more robust.

Signed-off-by: Yucong Sun <sunyucong@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211025223345.2136168-5-fallentree@fb.com
2021-10-27  selftests/bpf: Fix attach_probe in parallel mode  (Yucong Sun)
This patch makes attach_probe use its own method as the attach point, avoiding conflicts with other tests like bpf_cookie.

Signed-off-by: Yucong Sun <sunyucong@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211025223345.2136168-4-fallentree@fb.com
2021-10-27  selftests/bpf: Update vmtest.sh defaults  (Yucong Sun)
Increase memory to 4G and use 8 SMP cores with host CPU passthrough. This makes the tests run faster in parallel mode and more likely to succeed.

Signed-off-by: Yucong Sun <sunyucong@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211025223345.2136168-2-fallentree@fb.com
2021-10-27  libbpf: Deprecate bpf_objects_list  (Joe Burton)
Add a flag to `enum libbpf_strict_mode' to disable the global `bpf_objects_list', preventing race conditions when concurrent threads call bpf_object__open() or bpf_object__close(). bpf_object__next() will return NULL if this option is set. Callers may achieve the same workflow by tracking bpf_objects in application code. [0] Closes: https://github.com/libbpf/libbpf/issues/293 Signed-off-by: Joe Burton <jevburton@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211026223528.413950-1-jevburton.kernel@gmail.com
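For callers that relied on bpf_object__next() iteration, a sketch of the suggested replacement is to keep the handles in application code; the helper below is illustrative, not part of libbpf:

  #include <bpf/libbpf.h>

  #define MAX_TRACKED_OBJS 32
  static struct bpf_object *tracked_objs[MAX_TRACKED_OBJS];
  static int nr_tracked_objs;

  /* Open a BPF object file and remember the handle; iterate tracked_objs[]
   * later instead of walking the (deprecated) global bpf_objects_list. */
  static struct bpf_object *open_and_track(const char *path)
  {
          struct bpf_object *obj = bpf_object__open_file(path, NULL);

          if (libbpf_get_error(obj))
                  return NULL;
          if (nr_tracked_objs < MAX_TRACKED_OBJS)
                  tracked_objs[nr_tracked_objs++] = obj;
          return obj;
  }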
2021-10-27  selftests/powerpc: Use date instead of EPOCHSECONDS in mitigation-patching.sh  (Russell Currey)
The EPOCHSECONDS environment variable was added in bash 5.0 (released 2019). Some distributions of the "stable" and "long-term" variety ship older versions of bash than this, so swap to using the date command instead. "%s" was added to coreutils `date` in 1993 so we should be good, but who knows, it is a GNU extension and not part of the POSIX spec for `date`. Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211025102436.19177-1-ruscur@russell.cc
2021-10-26  selftests/ftrace: Stop tracing while reading the trace file by default  (Masami Hiramatsu)
Stop tracing while reading the trace file by default, to prevent ongoing tracing from disturbing the results while they are being checked and to avoid taking a long time to check them. If any testcase wants to exercise tracing while reading the trace file, it should override this setting inside the test case. This also restores the pause-on-trace setting during cleanup.

Link: https://lkml.kernel.org/r/163529053143.690749.15365238954175942026.stgit@devnote2
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-10-26  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (Jakub Kicinski)
Daniel Borkmann says:

====================
pull-request: bpf 2021-10-26

We've added 12 non-merge commits during the last 7 day(s) which contain a total of 23 files changed, 118 insertions(+), 98 deletions(-).

The main changes are:

1) Fix potential race window in BPF tail call compatibility check, from Toke Høiland-Jørgensen.

2) Fix memory leak in cgroup fs due to missing cgroup_bpf_offline(), from Quanyang Wang.

3) Fix file descriptor reference counting in generic_map_update_batch(), from Xu Kuohai.

4) Fix bpf_jit_limit knob to the max supported limit by the arch's JIT, from Lorenz Bauer.

5) Fix BPF sockmap ->poll callbacks for UDP and AF_UNIX sockets, from Cong Wang and Yucong Sun.

6) Fix BPF sockmap concurrency issue in TCP on non-blocking sendmsg calls, from Liu Jian.

7) Fix build failure of INODE_STORAGE and TASK_STORAGE maps on !CONFIG_NET, from Tejun Heo.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  bpf: Fix potential race in tail call compatibility check
  bpf: Move BPF_MAP_TYPE for INODE_STORAGE and TASK_STORAGE outside of CONFIG_NET
  selftests/bpf: Use recv_timeout() instead of retries
  net: Implement ->sock_is_readable() for UDP and AF_UNIX
  skmsg: Extract and reuse sk_msg_is_readable()
  net: Rename ->stream_memory_read to ->sock_is_readable
  tcp_bpf: Fix one concurrency problem in the tcp_bpf_send_verdict function
  cgroup: Fix memory leak caused by missing cgroup_bpf_offline
  bpf: Fix error usage of map_fd and fdget() in generic_map_update_batch()
  bpf: Prevent increasing bpf_jit_limit above max
  bpf: Define bpf_jit_alloc_exec_limit for arm64 JIT
  bpf: Define bpf_jit_alloc_exec_limit for riscv JIT
====================

Link: https://lore.kernel.org/r/20211026201920.11296-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-26  selftests/bpf: Use recv_timeout() instead of retries  (Yucong Sun)
We use non-blocking sockets in those tests, retrying for EAGAIN is ugly because there is no upper bound for the packet arrival time, at least in theory. After we fix poll() on sockmap sockets, now we can switch to select()+recv(). Signed-off-by: Yucong Sun <sunyucong@gmail.com> Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211008203306.37525-5-xiyou.wangcong@gmail.com
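The idea, sketched below (the helper name mirrors the one mentioned above, but the signature here is illustrative): block in select() with an upper bound instead of spinning on EAGAIN against a non-blocking socket.

  #include <errno.h>
  #include <sys/types.h>
  #include <sys/select.h>
  #include <sys/socket.h>

  static ssize_t recv_timeout(int fd, void *buf, size_t len, int timeout_sec)
  {
          struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
          fd_set rfds;

          FD_ZERO(&rfds);
          FD_SET(fd, &rfds);
          /* Wait until the socket is readable or the timeout expires. */
          if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
                  return -1;
          return recv(fd, buf, len, 0);
  }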
2021-10-26  tools build: Drop needless slang include path in test-all  (John Keeping)
Commit cbefd24f0aee3a5d ("tools build: Add test to check if slang.h is in /usr/include/slang/") added a proper test to check whether slang.h is in a subdirectory, and commit 1955c8cf5e26b1f7 ("perf tools: Don't hardcode host include path for libslang") removed the include path for test-libslang.bin but missed test-all.bin. Apply the same change to test-all.bin. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Fixes: 1955c8cf5e26 ("perf tools: Don't hardcode host include path for libslang") Signed-off-by: John Keeping <john@metanate.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Nick Terrell <terrelln@fb.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20211025172314.3766032-1-john@metanate.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-26  perf tests: Improve temp file cleanup in test_arm_coresight.sh  (James Clark)
Clean up perf.data.old files, which are also dropped by perf; handle SIGINT and propagate it to the parent in case the test is run in a bash while loop; and don't create the temp files if the test will be skipped.

Reviewed-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: James Clark <james.clark@arm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20210921131009.390810-3-james.clark@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-26  perf tests: Fix trace+probe_vfs_getname.sh /tmp cleanup  (James Clark)
The temp file is only cleaned up if the test is not skipped, so delay making it until after the skip so it doesn't get left behind in /tmp. Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20210921131009.390810-2-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-26  perf test: Fix record+script_probe_vfs_getname.sh /tmp cleanup  (James Clark)
The temp files are only cleaned up if the test is not skipped, so delay making them until after the skip so they don't get left behind in /tmp. Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20210921131009.390810-1-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-26  Merge remote-tracking branch 'torvalds/master' into perf/core  (Arnaldo Carvalho de Melo)
To pick up the fixes from upstream. Fix simple conflict on session.c related to the file position fix that went upstream and is touched by the active decomp changes in perf/core. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-26  selftests: mlxsw: Remove deprecated test cases  (Danielle Ratson)
After adding the previous patches, the constraint that all the router interface MAC addresses have the same prefix is no longer relevant. Remove the test cases that validated that this constraint is honored. Signed-off-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-26  selftests: Add an occupancy test for RIF MAC profiles  (Danielle Ratson)
When all the RIF MAC profiles are in use, test that it is possible to change the MAC of a netdev (i.e., a RIF) when its MAC profile is not shared with other RIFs. Test that replacement fails when the MAC profile is shared. Signed-off-by: Danielle Ratson <danieller@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-26  selftests: mlxsw: Add forwarding test for RIF MAC profiles  (Danielle Ratson)
Verify that MAC profile changes are indeed applied and that packets are forwarded with the correct source MAC.

Output example:

  $ ./rif_mac_profiles.sh
  TEST: h1->h2: new mac profile      [ OK ]
  TEST: h2->h1: new mac profile      [ OK ]
  TEST: h1->h2: edit mac profile     [ OK ]
  TEST: h2->h1: edit mac profile     [ OK ]

Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-26  selftests: mlxsw: Add a scale test for RIF MAC profiles  (Danielle Ratson)
Query the maximum number of supported RIF MAC profiles using devlink-resource and verify that all available MAC profiles can be utilized and that an error is generated when user space tries to exceed this number.

Output example in Spectrum-2:

  $ TESTS='rif_mac_profile' ./resource_scale.sh
  TEST: 'rif_mac_profile' 4              [ OK ]
  TEST: 'rif_mac_profile' overflow 5     [ OK ]

Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-25  selftests/bpf: Guess function end for test_get_branch_snapshot  (Song Liu)
Functions in modules can appear in /proc/kallsyms in random order:

  ffffffffa02608a0 t bpf_testmod_loop_test
  ffffffffa02600c0 t __traceiter_bpf_testmod_test_writable_bare
  ffffffffa0263b60 d __tracepoint_bpf_testmod_test_write_bare
  ffffffffa02608c0 T bpf_testmod_test_read
  ffffffffa0260d08 t __SCT__tp_func_bpf_testmod_test_writable_bare
  ffffffffa0263300 d __SCK__tp_func_bpf_testmod_test_read
  ffffffffa0260680 T bpf_testmod_test_write
  ffffffffa0260860 t bpf_testmod_test_mod_kfunc

Therefore, we cannot reliably use kallsyms_find_next() to find the end of a function. Replace it with a simple guess (start + 128). This is good enough for this test.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022234814.318457-1-songliubraving@fb.com
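A self-contained illustration of the workaround (not the selftest's helper): look up the symbol's start address in /proc/kallsyms and take start + 128 as a good-enough upper bound, rather than trusting the next symbol in file order.

  #include <stdio.h>
  #include <string.h>

  static int guess_func_range(const char *name, unsigned long long *start,
                              unsigned long long *end)
  {
          unsigned long long addr;
          char type, sym[256], line[512];
          FILE *f = fopen("/proc/kallsyms", "r");

          if (!f)
                  return -1;
          while (fgets(line, sizeof(line), f)) {
                  if (sscanf(line, "%llx %c %255s", &addr, &type, sym) != 3)
                          continue;
                  if (!strcmp(sym, name)) {
                          *start = addr;
                          *end = addr + 128;  /* functions in the test module are short */
                          fclose(f);
                          return 0;
                  }
          }
          fclose(f);
          return -1;
  }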
2021-10-25  selftests/bpf: Skip all serial_test_get_branch_snapshot in vm  (Song Liu)
Skipping the second half of the test is not enough to silence the warning in dmesg. Skip the whole test until we can either properly silence the warning in the kernel or fix LBR snapshots for VMs.

Fixes: 025bd7c753aa ("selftests/bpf: Add test for bpf_get_branch_snapshot")
Fixes: aa67fdb46436 ("selftests/bpf: Skip the second half of get_branch_snapshot in vm")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026000733.477714-1-songliubraving@fb.com
2021-10-25  selftests/bpf: Fix test_core_reloc_mods on big-endian machines  (Ilya Leoshkevich)
This is the same as commit d164dd9a5c08 ("selftests/bpf: Fix test_core_autosize on big-endian machines"), but for test_core_reloc_mods. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211026010831.748682-7-iii@linux.ibm.com
2021-10-25  selftests/seccomp: Use __BYTE_ORDER__  (Ilya Leoshkevich)
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined __BYTE_ORDER for consistency. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211026010831.748682-6-iii@linux.ibm.com
2021-10-25  selftests/bpf: Use __BYTE_ORDER__  (Ilya Leoshkevich)
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined __BYTE_ORDER for consistency. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211026010831.748682-4-iii@linux.ibm.com
2021-10-25  libbpf: Use __BYTE_ORDER__  (Ilya Leoshkevich)
Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined __BYTE_ORDER for consistency. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211026010831.748682-3-iii@linux.ibm.com
2021-10-25  libbpf: Fix endianness detection in BPF_CORE_READ_BITFIELD_PROBED()  (Ilya Leoshkevich)
__BYTE_ORDER is supposed to be defined by the libc, and __BYTE_ORDER__ by the compiler. bpf_core_read.h checks __BYTE_ORDER == __LITTLE_ENDIAN, which is true if neither is defined, leading to incorrect behavior on big-endian hosts if libc headers are not included, which is often the case.

Fixes: ee26dade0e3b ("libbpf: Add support for relocatable bitfields")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211026010831.748682-2-iii@linux.ibm.com
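In isolation, the pitfall looks like this (a sketch, not the header's actual bitfield code; the TARGET_LITTLE_ENDIAN macro name is made up for illustration):

  /* Fragile check (the bug): with no <endian.h> included, __BYTE_ORDER and
   * __LITTLE_ENDIAN are both undefined, the preprocessor evaluates them as 0,
   * and "#if __BYTE_ORDER == __LITTLE_ENDIAN" is true even on s390x.
   *
   * Reliable check (the fix): the compiler always defines these macros,
   * so no libc header is needed.
   */
  #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
  #define TARGET_LITTLE_ENDIAN 1
  #elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
  #define TARGET_LITTLE_ENDIAN 0
  #else
  #error "unknown byte order"
  #endif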
2021-10-25  tools/latency-collector: Use correct size when writing queue_full_warning  (Viktor Rosendahl)
queue_full_warning is a pointer, so it is wrong to use sizeof to calculate the number of characters of the string it points to. The effect is that we only print out the first few characters of the warning string. The correct way is to use strlen(). We don't need to add 1 to the strlen() because we don't want to write the terminating null character to stdout. Link: https://lkml.kernel.org/r/20211019160701.15587-1-Viktor.Rosendahl@bmw.de Link: https://lore.kernel.org/r/8fd4bb65ef3da67feac9ce3258cdbe9824752cf1.1629198502.git.jing.yangyang@zte.com.cn Link: https://lore.kernel.org/r/20211012025424.180781-1-davidcomponentone@gmail.com Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Viktor Rosendahl <Viktor.Rosendahl@bmw.de> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
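The bug class in miniature (the warning text and wrapper function here are illustrative, not the latency-collector source):

  #include <string.h>
  #include <unistd.h>

  static const char *queue_full_warning = "queue full, dropping trace\n";

  static void warn_queue_full(void)
  {
          /* Wrong: sizeof(queue_full_warning) is the size of a pointer (4 or 8),
           * so only the first few characters would reach stdout. */
          /* write(STDOUT_FILENO, queue_full_warning, sizeof(queue_full_warning)); */

          /* Right: strlen() measures the string itself; the terminating NUL is
           * deliberately not written. */
          write(STDOUT_FILENO, queue_full_warning, strlen(queue_full_warning));
  }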
2021-10-25  libbpf: Deprecate ambiguously-named bpf_program__size() API  (Andrii Nakryiko)
The name of the API doesn't convey clearly that this size is in number of bytes (there needed to be a separate comment to make this clear in libbpf.h). Further, measuring the size of BPF program in bytes is not exactly the best fit, because BPF programs always consist of 8-byte instructions. As such, bpf_program__insn_cnt() is a better alternative in pretty much any imaginable case. So schedule bpf_program__size() deprecation starting from v0.7 and it will be removed in libbpf 1.0. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211025224531.1088894-5-andrii@kernel.org
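A sketch of the migration path for callers (the reporting helper is hypothetical): use the unambiguous instruction count, and derive bytes from it only if needed.

  #include <stdio.h>
  #include <bpf/libbpf.h>
  #include <linux/bpf.h>

  static void report_prog_size(const struct bpf_program *prog)
  {
          size_t insns = bpf_program__insn_cnt(prog);       /* number of 8-byte insns */
          size_t bytes = insns * sizeof(struct bpf_insn);   /* what bpf_program__size() reported */

          printf("%s: %zu instructions (%zu bytes)\n",
                 bpf_program__name(prog), insns, bytes);
  }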
2021-10-25  libbpf: Deprecate multi-instance bpf_program APIs  (Andrii Nakryiko)
Schedule deprecation of a set of APIs that are related to multi-instance bpf_programs: - bpf_program__set_prep() ([0]); - bpf_program__{set,unset}_instance() ([1]); - bpf_program__nth_fd(). These APIs are obscure, very niche, and don't seem to be used much in practice. bpf_program__set_prep() is pretty useless for anything but the simplest BPF programs, as it doesn't allow to adjust BPF program load attributes, among other things. In short, it already bitrotted and will bitrot some more if not removed. With bpf_program__insns() API, which gives access to post-processed BPF program instructions of any given entry-point BPF program, it's now possible to do whatever necessary adjustments were possible with set_prep() API before, but also more. Given any such use case is automatically an advanced use case, requiring users to stick to low-level bpf_prog_load() APIs and managing their own prog FDs is reasonable. [0] Closes: https://github.com/libbpf/libbpf/issues/299 [1] Closes: https://github.com/libbpf/libbpf/issues/300 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211025224531.1088894-4-andrii@kernel.org
2021-10-25  libbpf: Add ability to fetch bpf_program's underlying instructions  (Andrii Nakryiko)
Add APIs providing read-only access to bpf_program BPF instructions ([0]). This is useful for diagnostics purposes, but it also allows a cleaner support for cloning BPF programs after libbpf did all the FD resolution and CO-RE relocations, subprog instructions appending, etc. Currently, cloning BPF program is possible only through hijacking a half-broken bpf_program__set_prep() API, which doesn't really work well for anything but most primitive programs. For instance, set_prep() API doesn't allow adjusting BPF program load parameters which are necessary for loading fentry/fexit BPF programs (the case where BPF program cloning is a necessity if doing some sort of mass-attachment functionality). Given bpf_program__set_prep() API is set to be deprecated, having a cleaner alternative is a must. libbpf internally already keeps track of linear array of struct bpf_insn, so it's not hard to expose it. The only gotcha is that libbpf previously freed instructions array during bpf_object load time, which would make this API much less useful overall, because in between bpf_object__open() and bpf_object__load() a lot of changes to instructions are done by libbpf. So this patch makes libbpf hold onto prog->insns array even after BPF program loading. I think this is a small price for added functionality and improved introspection of BPF program code. See retsnoop PR ([1]) for how it can be used in practice and code savings compared to relying on bpf_program__set_prep(). [0] Closes: https://github.com/libbpf/libbpf/issues/298 [1] https://github.com/anakryiko/retsnoop/pull/1 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211025224531.1088894-3-andrii@kernel.org
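A sketch of how the read-only accessors can be used together for diagnostics (the dump function name is illustrative):

  #include <stdio.h>
  #include <bpf/libbpf.h>
  #include <linux/bpf.h>

  static void dump_opcodes(const struct bpf_program *prog)
  {
          const struct bpf_insn *insns = bpf_program__insns(prog);
          size_t i, cnt = bpf_program__insn_cnt(prog);

          /* insns points at libbpf's post-processed (relocated) instructions. */
          for (i = 0; i < cnt; i++)
                  printf("%zu: code=0x%02x dst=%d src=%d off=%d imm=%d\n",
                         i, insns[i].code, insns[i].dst_reg, insns[i].src_reg,
                         insns[i].off, insns[i].imm);
  }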
2021-10-25  libbpf: Fix off-by-one bug in bpf_core_apply_relo()  (Andrii Nakryiko)
Fix instruction index validity check which has off-by-one error. Fixes: 3ee4f5335511 ("libbpf: Split bpf_core_apply_relo() into bpf_program independent helper.") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211025224531.1088894-2-andrii@kernel.org
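The general shape of the bug, reduced to a toy check (illustrative only, not the libbpf code): an index equal to the instruction count already points one past the end.

  #include <stdbool.h>
  #include <stddef.h>

  static bool insn_idx_valid(size_t insn_idx, size_t insn_cnt)
  {
          /* Correct bound.  Writing "insn_idx <= insn_cnt" instead would accept
           * an index one past the last instruction -- the classic off-by-one. */
          return insn_idx < insn_cnt;
  }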
2021-10-25  bpftool: Switch to libbpf's hashmap for PIDs/names references  (Quentin Monnet)
In order to show PIDs and names for processes holding references to BPF programs, maps, links, or BTF objects, bpftool creates hash maps to store all relevant information. This commit is part of a set that transitions from the kernel's hash map implementation to the one coming with libbpf. The motivation is to make bpftool less dependent of kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has. This is the third and final step of the transition, in which we convert the hash maps used for storing the information about the processes holding references to BPF objects (programs, maps, links, BTF), and at last we drop the inclusion of tools/include/linux/hashtable.h. Note: Checkpatch complains about the use of __weak declarations, and the missing empty lines after the bunch of empty function declarations when compiling without the BPF skeletons (none of these were introduced in this patch). We want to keep things as they are, and the reports should be safe to ignore. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211023205154.6710-6-quentin@isovalent.com
2021-10-25  bpftool: Switch to libbpf's hashmap for programs/maps in BTF listing  (Quentin Monnet)
In order to show BPF programs and maps using BTF objects when the latter are being listed, bpftool creates hash maps to store all relevant items. This commit is part of a set that transitions from the kernel's hash map implementation to the one coming with libbpf. The motivation is to make bpftool less dependent of kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has. This commit focuses on the two hash maps used by bpftool when listing BTF objects to store references to programs and maps, and convert them to the libbpf's implementation. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211023205154.6710-5-quentin@isovalent.com
2021-10-25  bpftool: Switch to libbpf's hashmap for pinned paths of BPF objects  (Quentin Monnet)
In order to show pinned paths for BPF programs, maps, or links when listing them with the "-f" option, bpftool creates hash maps to store all relevant paths under the bpffs. So far, it would rely on the kernel implementation (from tools/include/linux/hashtable.h). We can make bpftool rely on libbpf's implementation instead. The motivation is to make bpftool less dependent of kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has. This commit is the first step of the conversion: the hash maps for pinned paths for programs, maps, and links are converted to libbpf's hashmap.{c,h}. Other hash maps used for the PIDs of process holding references to BPF objects are left unchanged for now. On the build side, this requires adding a dependency to a second header internal to libbpf, and making it a dependency for the bootstrap bpftool version as well. The rest of the changes are a rather straightforward conversion. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211023205154.6710-4-quentin@isovalent.com
2021-10-25  bpftool: Do not expose and init hash maps for pinned path in main.c  (Quentin Monnet)
BPF programs, maps, and links, can all be listed with their pinned paths by bpftool, when the "-f" option is provided. To do so, bpftool builds hash maps containing all pinned paths for each kind of objects. These three hash maps are always initialised in main.c, and exposed through main.h. There appear to be no particular reason to do so: we can just as well make them static to the files that need them (prog.c, map.c, and link.c respectively), and initialise them only when we want to show objects and the "-f" switch is provided. This may prevent unnecessary memory allocations if the implementation of the hash maps was to change in the future. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211023205154.6710-3-quentin@isovalent.com
2021-10-25  bpftool: Remove Makefile dep. on $(LIBBPF) for $(LIBBPF_INTERNAL_HDRS)  (Quentin Monnet)
The dependency is only useful to make sure that the $(LIBBPF_HDRS_DIR) directory is created before we try to install locally the required libbpf internal header. Let's create this directory properly instead. This is in preparation of making $(LIBBPF_INTERNAL_HDRS) a dependency to the bootstrap bpftool version, in which case we want no dependency on $(LIBBPF). Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211023205154.6710-2-quentin@isovalent.com
2021-10-25  selftests/bpf: Split out bpf_verif_scale selftests into multiple tests  (Andrii Nakryiko)
Instead of using subtests in the bpf_verif_scale selftest, turn each scale sub-test into its own test. Each subtest is completely independent and just reuses a bit of common test running logic, so the conversion is trivial. For convenience, keep all of the BPF verifier scale tests in one file.

This conversion shaves off a significant amount of time when running test_progs in parallel mode. E.g., just running the scale tests (-t verif_scale):

BEFORE
======
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED

real    0m22.894s
user    0m0.012s
sys     0m22.797s

AFTER
=====
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED

real    0m12.044s
user    0m0.024s
sys     0m27.869s

A ten second saving right there. test_progs -j is not yet ready to be turned on by default, unfortunately, and some tests fail almost every time, but this is a good improvement nevertheless. Ignoring a few failures, here are the sequential vs parallel run times when running all tests now:

SEQUENTIAL
==========
Summary: 206/953 PASSED, 4 SKIPPED, 0 FAILED

real    1m5.625s
user    0m4.211s
sys     0m31.650s

PARALLEL
========
Summary: 204/952 PASSED, 4 SKIPPED, 2 FAILED

real    0m35.550s
user    0m4.998s
sys     0m39.890s

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-5-andrii@kernel.org
2021-10-25  selftests/bpf: Mark tc_redirect selftest as serial  (Andrii Nakryiko)
It seems to cause a lot of harm to kprobe/tracepoint selftests. Yucong mentioned before that it does manipulate sysfs, which might be the reason. So let's mark it as serial, though ideally it would be less intrusive on the system under test.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-4-andrii@kernel.org
2021-10-25  selftests/bpf: Support multiple tests per file  (Andrii Nakryiko)
Revamp how test discovery works for test_progs and allow multiple test entries per file. Any global void function with no arguments and serial_test_ or test_ prefix is considered a test. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211022223228.99920-3-andrii@kernel.org
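In practice that means a selftest file can now define several entry points and nothing else is needed for them to be discovered; the function names below are made up for illustration:

  /* Picked up by test_progs and eligible to run in parallel (-j). */
  void test_example_feature(void)
  {
          /* ... ASSERT_* checks ... */
  }

  /* The serial_test_ prefix forces this one to run serially. */
  void serial_test_example_global_state(void)
  {
          /* ... */
  }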
2021-10-25  selftests/bpf: Normalize selftest entry points  (Andrii Nakryiko)
Ensure that all test entry points are global void functions with no input arguments. Mark a few subtest entry points as static.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-2-andrii@kernel.org
2021-10-25  kunit: tool: continue past invalid utf-8 output  (Daniel Latypov)
kunit.py currently crashes and fails to parse kernel output if it's not fully valid utf-8. This can come from memory corruption or just inadvertently printing out binary data as strings. E.g. adding this line into a kunit test

  pr_info("\x80")

will cause this exception:

  UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 1961: invalid start byte

We can tell Python how to handle errors, see https://docs.python.org/3/library/codecs.html#error-handlers. Unfortunately, it doesn't seem like there's a way to specify this in just one location, so we need to repeat ourselves quite a bit. Specify `errors='backslashreplace'` so we instead:
* print out the offending byte as '\x80'
* try and continue parsing the output
* as long as the TAP lines themselves are valid, we're fine

Fixed spelling/grammar in commit log: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-25  perf jevents: Fix some would-be warnings  (John Garry)
Before enabling warnings through HOSTCFLAGS, fix the would-be warnings: HOSTCC pmu-events/jevents.o pmu-events/jevents.c:74:22: warning: no previous prototype for ‘convert’ [-Wmissing-prototypes] 74 | enum aggr_mode_class convert(const char *aggr_mode) | ^~~~~~~ pmu-events/jevents.c: In function ‘print_events_table_entry’: pmu-events/jevents.c:373:8: warning: declaration of ‘topic’ shadows a global declaration [-Wshadow] 373 | char *topic = pd->topic; | ^~~~~ pmu-events/jevents.c:316:14: note: shadowed declaration is here 316 | static char *topic; | ^~~~~ pmu-events/jevents.c: In function ‘json_events’: pmu-events/jevents.c:554:9: warning: declaration of ‘func’ shadows a global declaration [-Wshadow] 554 | int (*func)(void *data, struct json_event *je), | ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ pmu-events/jevents.c:85:15: note: shadowed declaration is here 85 | typedef int (*func)(void *data, struct json_event *je); | ^~~~ pmu-events/jevents.c: In function ‘main’: pmu-events/jevents.c:1211:25: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers] 1211 | char *err_string_ext = ""; | ^~ pmu-events/jevents.c:1304:17: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers] 1304 | err_string_ext = " for std arch event"; | ^ Signed-off-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kajol Jain <kjain@linux.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/1634807805-40093-2-git-send-email-john.garry@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-10-25  perf dso: Fix /proc/kcore access on 32 bit systems  (James Clark)
Because _LARGEFILE64_SOURCE is set in perf, file offset sizes can be 64 bits. If a workflow needs to open /proc/kcore on a 32 bit system (for example to decode Arm ETM kernel trace) then the size value will be wrapped to 32 bits in the function file_size() at this line: dso->data.file_size = st.st_size; Setting the file_size member to be u64 fixes the issue and allows /proc/kcore to be opened. Reported-by: Denis Nikitin <denik@chromium.org> Signed-off-by: James Clark <james.clark@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lore.kernel.org/lkml/20211021112700.112499-1-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
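A reduced illustration of the failure mode (the struct and field are stand-ins for the real dso data, not perf's types): st_size is 64-bit because of _LARGEFILE64_SOURCE, and /proc/kcore reports an enormous size, so assigning it to a 32-bit member silently truncates it.

  #include <stdint.h>
  #include <sys/stat.h>

  struct file_data {
          uint64_t file_size;     /* was a 32-bit quantity; u64 preserves st_size */
  };

  static void cache_file_size(struct file_data *d, const struct stat *st)
  {
          /* With a 32-bit file_size this assignment wrapped, and later size
           * checks against the cached value failed for /proc/kcore. */
          d->file_size = st->st_size;
  }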
2021-10-25  perf build: Suppress 'rm dlfilter' build message  (Adrian Hunter)
The following build message: rm dlfilters/dlfilter-test-api-v0.o is unwanted. The object file is being treated as an intermediate file and being automatically removed. Mark the object file as .SECONDARY to prevent removal and hence the message. Requested-by: Arnaldo Carvalho de Melo <acme@kernel.org> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20210930062849.110416-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>