path: root/tools/lib/bpf
Age  Commit message  Author
2019-06-18  libbpf: identify maps by section index in addition to offset  (Andrii Nakryiko)
To support maps defined in multiple sections, it's important to identify a map not just by its offset within a section, but by the section index as well. This patch adds tracking of the section index. For global data, we record the section index of the corresponding .data/.bss/.rodata ELF section for uniformity, and thus don't need a special offset value for those maps. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: refactor map initialization  (Andrii Nakryiko)
User-defined and global data map initialization has gotten pretty complicated and unnecessarily convoluted. This patch splits out the logic for global data map and user-defined map initialization. It also removes the restriction of pre-calculating how many maps will be initialized, instead allowing new maps to keep being added as they are discovered, which will be used later for BTF-defined map definitions. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: streamline ELF parsing error-handling  (Andrii Nakryiko)
Simplify ELF parsing logic by exiting early, as there is no common clean up path to execute. That makes it unnecessary to track when err was set and when it was cleared. It also reduces nesting in some places. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: extract BTF loading logic  (Andrii Nakryiko)
As a preparation for adding BTF-based BPF map loading, extract .BTF and .BTF.ext loading logic. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-18  libbpf: add common min/max macro to libbpf_internal.h  (Andrii Nakryiko)
Multiple files in libbpf provide their own definitions of min/max. Let's define them once in libbpf_internal.h and use those everywhere. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
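A typical guarded definition looks roughly like the sketch below; the exact form in libbpf_internal.h may differ:
    #ifndef min
    # define min(x, y) ((x) < (y) ? (x) : (y))
    #endif
    #ifndef max
    # define max(x, y) ((x) < (y) ? (y) : (x))
    #endif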
2019-06-15  libbpf: fix check for presence of associated BTF for map creation  (Andrii Nakryiko)
The kernel internally checks that either the key or the value type ID is specified before using btf_fd. Do the same in libbpf's map creation code when determining whether to retry map creation without BTF. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Fixes: fba01a0689a9 ("libbpf: use negative fd to specify missing BTF") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-11  bpf: add a new API libbpf_num_possible_cpus()  (Hechao Li)
Add a new API, libbpf_num_possible_cpus(), that helps users with per-CPU map operations. Signed-off-by: Hechao Li <hechaol@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
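Per-CPU maps return one value per possible CPU, so lookup buffers are typically sized with this helper; a minimal usage sketch (error handling trimmed, map_fd assumed to be an existing per-CPU array map):
    int ncpus = libbpf_num_possible_cpus();     /* negative value on error */
    __u32 key = 0;
    __u64 *values = calloc(ncpus, sizeof(*values));

    /* the kernel fills one slot per possible CPU */
    bpf_map_lookup_elem(map_fd, &key, values);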
2019-06-10  libbpf: remove qidconf and better support external bpf programs.  (Jonathan Lemon)
Use the recent change to XSKMAP bpf_map_lookup_elem() to test if there is an xsk present in the map instead of duplicating the work with qidconf. Fix things so callers using XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD bypass any internal bpf maps, so that xsk_socket__{create|delete} works properly. Clean up the error handling path. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Acked-by: Song Liu <songliubraving@fb.com> Tested-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-06-06  bpf, libbpf: enable recvmsg attach types  (Daniel Borkmann)
Another trivial patch to libbpf in order to enable identifying and attaching programs to BPF_CGROUP_UDP{4,6}_RECVMSG by section name. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
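Assuming the new section prefixes follow the existing sendmsg4/sendmsg6 naming (i.e. "cgroup/recvmsg4" and "cgroup/recvmsg6"), a BPF program attached this way would look roughly like:
    SEC("cgroup/recvmsg4")
    int log_recvmsg4(struct bpf_sock_addr *ctx)
    {
            /* inspect or rewrite the address returned by recvmsg() */
            return 1;       /* 1 = allow */
    }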
2019-05-31  libbpf: Return btf_fd for load_sk_storage_btf  (Michal Rostecki)
Before this change, the function load_sk_storage_btf expected that libbpf__probe_raw_btf was returning a BTF descriptor, but in fact it was returning information about whether the probe was successful (0 or 1). load_sk_storage_btf was using that value as an argument of the close function, which resulted in closing stdout and thus terminating the process which called that function. That bug was visible in bpftool. The `bpftool feature` subcommand was always exiting too early (because of closed stdout) and it didn't display all requested probes. `bpftool -j feature` or `bpftool -p feature` were not returning a valid json object. This change renames the libbpf__probe_raw_btf function to libbpf__load_raw_btf, which now returns a BTF descriptor, as expected in load_sk_storage_btf. v2: - Fix typo in the commit message. v3: - Simplify BTF descriptor handling in bpf_object__probe_btf_* functions. - Rename libbpf__probe_raw_btf function to libbpf__load_raw_btf and return a BTF descriptor. v4: - Fix typo in the commit message. Fixes: d7c4b3980c18 ("libbpf: detect supported kernel BTF features and sanitize BTF") Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-30  libbpf: reduce unnecessary line wrapping  (Andrii Nakryiko)
There are a bunch of lines of code and comments that are unnecessarily wrapped into multiple lines. Fix that without violating any code guidelines. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-30  libbpf: typo and formatting fixes  (Andrii Nakryiko)
A bunch of typo and formatting fixes. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-30  libbpf: simplify two pieces of logic  (Andrii Nakryiko)
The extra check for type is unnecessary in the first case. The extra zeroing is unnecessary, as snprintf guarantees that it will zero-terminate the string. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-30  libbpf: use negative fd to specify missing BTF  (Andrii Nakryiko)
0 is a valid FD, so it's better to initialize btf_fd to -1, as is done in other places. Also, technically, BTF type ID 0 is valid (it's the VOID type), so it's more reliable to check btf_fd, instead of btf_key_type_id, to determine if there is any BTF associated with a map. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
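A sketch of the resulting convention (field names as in the description above; the surrounding code is simplified):
    /* at map-construction time */
    map->btf_fd = -1;                       /* no BTF associated yet */

    /* at map-creation time */
    if (map->btf_fd >= 0) {
            create_attr.btf_fd = map->btf_fd;
            create_attr.btf_key_type_id = map->btf_key_type_id;
            create_attr.btf_value_type_id = map->btf_value_type_id;
    }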
2019-05-30  libbpf: fix error code returned on corrupted ELF  (Andrii Nakryiko)
All of libbpf errors are negative, except this one. Fix it. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-30  libbpf: check map name retrieved from ELF  (Andrii Nakryiko)
Validate there was no error retrieving symbol name corresponding to a BPF map. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-30  libbpf: simplify endianness check  (Andrii Nakryiko)
Rewrite the endianness check to use the "more canonical" way, using compiler-defined macros, similar to a few other places in libbpf. It is also more obvious and shorter. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
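The compiler-macro approach looks roughly like the following sketch, comparing the host byte order against the ELF header's EI_DATA field; the actual libbpf code differs in detail:
    #if __BYTE_ORDER == __LITTLE_ENDIAN
            if (ehdr->e_ident[EI_DATA] == ELFDATA2LSB)
                    return 0;
    #elif __BYTE_ORDER == __BIG_ENDIAN
            if (ehdr->e_ident[EI_DATA] == ELFDATA2MSB)
                    return 0;
    #else
    # error "Unrecognized __BYTE_ORDER"
    #endif
            return -LIBBPF_ERRNO__ENDIAN;   /* host/object endianness mismatch */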
2019-05-30  libbpf: preserve errno before calling into user callback  (Andrii Nakryiko)
pr_warning may ultimately call into a user-provided callback function, which can clobber the errno value, so we need to save it before that. Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
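The print wrapper ends up looking roughly like this sketch, where __libbpf_pr stands for the user-settable print callback (the real function differs in detail):
    void libbpf_print(enum libbpf_print_level level, const char *format, ...)
    {
            va_list args;
            int old_errno;

            if (!__libbpf_pr)
                    return;

            old_errno = errno;              /* user callback may clobber errno */
            va_start(args, format);
            __libbpf_pr(level, format, args);
            va_end(args);
            errno = old_errno;
    }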
2019-05-30  libbpf: fix detection of corrupted BPF instructions section  (Andrii Nakryiko)
Ensure that the size of a section with BPF instructions is an exact multiple of the BPF instruction size. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-29  libbpf: prevent overwriting of log_level in bpf_object__load_progs()  (Quentin Monnet)
There are two functions in libbpf that support passing a log_level parameter for the verifier for loading programs: bpf_object__load_xattr() and bpf_prog_load_xattr(). Both accept an attribute object containing the log_level, and apply it to the programs to load. It turns out that to effectively load the programs, the latter function eventually relies on the former. This was not taken into account when adding support for log_level in bpf_object__load_xattr(), and the log_level passed to bpf_prog_load_xattr() later gets overwritten with a zero value, thus disabling verifier logs for the program in all cases:
    bpf_prog_load_xattr()                // prog->log_level = attr1->log_level;
    -> bpf_object__load()                // attr2->log_level = 0;
       -> bpf_object__load_xattr()       // <pass prog and attr2>
          -> bpf_object__load_progs()    // prog->log_level = attr2->log_level;
Fix this by OR-ing the log_level in bpf_object__load_progs(), instead of overwriting it. v2: Fix commit log description (confusion on function names in v1). Fixes: 60276f984998 ("libbpf: add bpf_object__load_xattr() API function to pass log_level") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
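The fix described above amounts to a one-line change in bpf_object__load_progs() (sketch):
    /* keep any log_level already set on the program (e.g. by
     * bpf_prog_load_xattr()) instead of overwriting it */
    prog->log_level |= attr->log_level;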
2019-05-28  libbpf: add bpf_object__load_xattr() API function to pass log_level  (Quentin Monnet)
libbpf was recently made aware of the log_level attribute for programs, used to specify the level of information expected to be dumped by the verifier. Function bpf_prog_load_xattr() got support for this log_level parameter. But some applications using libbpf rely on another function to load programs, bpf_object__load(), which does not accept any parameter for the log level. Create an API function based on bpf_object__load(), but accepting an "attr" object as a parameter. Then add a log_level field to that object, so that applications calling the new bpf_object__load_xattr() can pick the desired log level. v3: - Rewrite commit log. v2: - We are in a new cycle, bump libbpf extraversion number. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
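Callers that previously used bpf_object__load(obj) can switch to something like the sketch below to get verifier output (an already opened bpf_object *obj is assumed):
    struct bpf_object_load_attr load_attr = {
            .obj = obj,
            .log_level = 1,         /* ask the verifier for a basic log */
    };
    int err = bpf_object__load_xattr(&load_attr);
    if (err)
            fprintf(stderr, "failed to load object: %d\n", err);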
2019-05-28  libbpf: fix warning that PTR_ERR_OR_ZERO can be used  (Hariprasad Kelam)
Fix below warning reported by coccicheck: /tools/lib/bpf/libbpf.c:3461:1-3: WARNING: PTR_ERR_OR_ZERO can be used Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-24libbpf: add "prog_flags" to bpf_program/bpf_prog_load_attr/bpf_load_program_attrJiong Wang
libbpf doesn't allow passing "prog_flags" during bpf program load in a couple of load related APIs, "bpf_load_program_xattr", "load_program" and "bpf_prog_load_xattr". It makes sense to allow passing "prog_flags" which is useful for customizing program loading. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
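With the new field in place, a caller could request e.g. strict-alignment verification roughly like this (a sketch; insns/insns_cnt are assumed to be prepared elsewhere):
    char log_buf[4096];
    struct bpf_load_program_attr attr = {
            .prog_type  = BPF_PROG_TYPE_XDP,
            .insns      = insns,
            .insns_cnt  = insns_cnt,
            .license    = "GPL",
            .prog_flags = BPF_F_STRICT_ALIGNMENT,   /* now passed through to the kernel */
    };
    int prog_fd = bpf_load_program_xattr(&attr, log_buf, sizeof(log_buf));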
2019-05-24  libbpf: add btf_dump API for BTF-to-C conversion  (Andrii Nakryiko)
BTF contains enough type information to allow generating a valid, compilable C header with the correct layout of structs/unions and all the typedef/enum definitions. This patch adds a new "object" - btf_dump - to facilitate dumping BTF as valid C. btf_dump__dump_type() is the main API which takes care of dumping out (through a user-provided printf-like callback function) C definitions for a given type ID and its required dependencies. This allows for not just dumping out the entirety of BTF types, but also selective filtering based on user-provided criteria with a minimal set of dependent types. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
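Typical usage looks roughly like the sketch below (error handling trimmed; an already loaded struct btf *btf is assumed):
    static void print_cb(void *ctx, const char *fmt, va_list args)
    {
            vprintf(fmt, args);
    }

    struct btf_dump *d = btf_dump__new(btf, NULL, NULL, print_cb);
    __u32 id;

    for (id = 1; id <= btf__get_nr_types(btf); id++)
            btf_dump__dump_type(d, id);     /* emits C for id plus its deps, each once */
    btf_dump__free(d);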
2019-05-24  libbpf: switch btf_dedup() to hashmap for dedup table  (Andrii Nakryiko)
Utilize libbpf's hashmap as a multimap for the dedup_table implementation. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-24  libbpf: add resizable non-thread safe internal hashmap  (Andrii Nakryiko)
There is a need for fast point lookups inside libbpf for multiple use cases (e.g., name resolution for BTF-to-C conversion, by-name lookups in BTF for upcoming BPF CO-RE relocation support, etc). This patch implements a simple resizable non-thread safe hashmap using singly linked list chains. Four different insert strategies are supported:
    - HASHMAP_ADD - only add key/value if key doesn't exist yet;
    - HASHMAP_SET - add key/value pair if key doesn't exist yet; otherwise, update value;
    - HASHMAP_UPDATE - update value, if key already exists; otherwise, do nothing and return -ENOENT;
    - HASHMAP_APPEND - always add key/value pair, even if key already exists. This turns the hashmap into a multimap by allowing multiple values to be associated with the same key.
The most useful read API for such a hashmap is hashmap__for_each_key_entry() iteration. If hashmap__find() is still used, it will return the last inserted key/value entry (first in a bucket chain). For HASHMAP_SET and HASHMAP_UPDATE, the old key/value pair is returned, so that the calling code can handle proper memory management, if necessary. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
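A rough sketch of how this internal API is shaped, based on the names in the description above (the exact prototypes live in the internal hashmap.h and may differ):
    static size_t str_hash(const void *key, void *ctx)
    {
            size_t h = 0;                   /* any string hash will do for the sketch */
            for (const char *s = key; *s; s++)
                    h = h * 31 + *s;
            return h;
    }

    static bool str_equal(const void *a, const void *b, void *ctx)
    {
            return strcmp(a, b) == 0;
    }

    static void example(void)
    {
            struct hashmap *map = hashmap__new(str_hash, str_equal, NULL);
            struct hashmap_entry *cur;
            long v1 = 1, v2 = 2;

            hashmap__add(map, "a", (void *)v1);     /* fails with -EEXIST if "a" exists */
            hashmap__append(map, "a", (void *)v2);  /* multimap-style: always adds */
            hashmap__for_each_key_entry(map, cur, "a")
                    printf("%ld\n", (long)cur->value);
            hashmap__free(map);
    }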
2019-05-24  libbpf: add btf__parse_elf API to load .BTF and .BTF.ext  (Andrii Nakryiko)
Loading BTF and BTF.ext from an ELF file is a common need. Instead of requiring every user to re-implement it, let's provide this API from libbpf itself. It's mostly copy/paste from the `bpftool btf dump` implementation, which will be switched to libbpf's version in the next patch. btf__parse_elf allows loading BTF and, optionally, BTF.ext. This is also useful for tests that need to load/work with BTF loaded from test ELF files. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
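Basic usage of the new API (a sketch; the .BTF.ext pointer may be left NULL if that section is not needed, and an error result should be checked as usual):
    struct btf_ext *btf_ext = NULL;
    struct btf *btf;

    btf = btf__parse_elf("prog.o", &btf_ext);

    /* ... use btf / btf_ext ... */

    btf_ext__free(btf_ext);
    btf__free(btf);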
2019-05-24  libbpf: ensure libbpf.h is included along libbpf_internal.h  (Andrii Nakryiko)
libbpf_internal.h expects a bunch of stuff from libbpf.h to be defined. This patch makes sure that libbpf.h is always included. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-22  libbpf: emit diff of mismatched public API, if any  (Andrii Nakryiko)
It's easy to have a mismatch between the "intended to be public" and the really exposed API functions. While the Makefile does check for this mismatch, if it actually occurs it's not trivial to determine which functions are accidentally exposed. This patch dumps out a diff showing what's not supposed to be exposed, facilitating easier fixing. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-16  libbpf: move logging helpers into libbpf_internal.h  (Andrii Nakryiko)
libbpf_util.h header was recently exposed as public as a dependency of xsk.h. In addition to memory barriers, it contained logging helpers, which are not supposed to be exposed. This patch moves those into libbpf_internal.h, which is kept as an internal header. Cc: Stanislav Fomichev <sdf@google.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Fixes: 7080da890984 ("libbpf: add libbpf_util.h to header install.") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-16  libbpf: don't fail when feature probing fails  (Stanislav Fomichev)
Otherwise libbpf is unusable from an unprivileged process with kernel.unprivileged_bpf_disabled=1. All I get is EPERM from the probes, even if I just want to open an ELF object and look at what progs/maps it has. Instead of dying on probes, let's just pr_debug the error and try to continue. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-13  libbpf: detect supported kernel BTF features and sanitize BTF  (Andrii Nakryiko)
Depending on the versions of libbpf, Clang, and the kernel in use, it's possible to have valid BPF object files with valid BTF information that still won't load successfully, because Clang emits newer BTF features (e.g., BTF_KIND_FUNC, .BTF.ext's line_info/func_info, BTF_KIND_DATASEC, etc.) that are not yet supported by older kernels. This patch adds detection of BTF features and sanitizes a BPF object's BTF by substituting various supported BTF kinds, which have compatible layout:
    - BTF_KIND_FUNC -> BTF_KIND_TYPEDEF
    - BTF_KIND_FUNC_PROTO -> BTF_KIND_ENUM
    - BTF_KIND_VAR -> BTF_KIND_INT
    - BTF_KIND_DATASEC -> BTF_KIND_STRUCT
Replacement is done in such a way as to preserve as much information as possible (names, sizes, etc.) without violating the kernel's validation rules. v2->v3: - remove duplicate #defines from libbpf_util.h v1->v2: - add internal libbpf_internal.h w/ common stuff - switch SK storage BTF to use new libbpf__probe_raw_btf() Reported-by: Alexei Starovoitov <ast@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
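A conceptual sketch of just the kind substitution: the real sanitization also rewrites sizes and member/vlen data so the result still passes kernel BTF validation, and the INFO_ENC helper below is hand-rolled here for self-containment:
    #define INFO_ENC(kind, vlen)  (((kind) << 24) | ((vlen) & 0xffff))

    static void sanitize_kind(struct btf_type *t, bool has_func, bool has_datasec)
    {
            __u16 kind = BTF_INFO_KIND(t->info);
            __u16 vlen = BTF_INFO_VLEN(t->info);

            if (!has_func && kind == BTF_KIND_FUNC)
                    t->info = INFO_ENC(BTF_KIND_TYPEDEF, 0);
            else if (!has_func && kind == BTF_KIND_FUNC_PROTO)
                    t->info = INFO_ENC(BTF_KIND_ENUM, vlen);
            else if (!has_datasec && kind == BTF_KIND_VAR)
                    t->info = INFO_ENC(BTF_KIND_INT, 0);
            else if (!has_datasec && kind == BTF_KIND_DATASEC)
                    t->info = INFO_ENC(BTF_KIND_STRUCT, vlen);
    }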
2019-05-06  libbpf: remove unnecessary cast-to-void  (Björn Töpel)
The patches with the Fixes tags below added a cast-to-void in the places where the return value of a function was ignored. This is not common practice in the kernel, and is therefore removed in this patch. Reported-by: Daniel Borkmann <daniel@iogearbox.net> Fixes: 5750902a6e9b ("libbpf: proper XSKMAP cleanup") Fixes: 0e6741f09297 ("libbpf: fix invalid munmap call") Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-05  libbpf: add libbpf_util.h to header install.  (William Tu)
The libbpf_util.h is used by xsk.h, so add it to the install headers. Reported-by: Ben Pfaff <blp@ovn.org> Signed-off-by: William Tu <u9012063@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-05  tools/bpf: fix perf build error with uClibc (seen on ARC)  (Vineet Gupta)
When building perf for ARC recently, there was a build failure due to the lack of __NR_bpf:
    | Auto-detecting system features:
    |
    | ...                     get_cpuid: [ OFF ]
    | ...                           bpf: [ on  ]
    |
    | # error __NR_bpf not defined. libbpf does not support your arch.
    |   ^~~~~
    | bpf.c: In function 'sys_bpf':
    | bpf.c:66:17: error: '__NR_bpf' undeclared (first use in this function)
    |   return syscall(__NR_bpf, cmd, attr, size);
    |                  ^~~~~~~~
    |                  sys_bpf
Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-04  libbpf: proper XSKMAP cleanup  (Björn Töpel)
The bpf_map_update_elem() function, when used on an XSKMAP, will fail if a valid AF_XDP socket is not passed as the value. Therefore, this function cannot be used to clear the XSKMAP. Instead, the bpf_map_delete_elem() function should be used for that. This patch also simplifies the code by breaking up xsk_update_bpf_maps() into three smaller functions. Reported-by: William Tu <u9012063@gmail.com> Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets") Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: William Tu <u9012063@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
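In other words, clearing the slot for a queue id is done by removal rather than by an update with a dummy value (sketch; xsks_map_fd is assumed to be the XSKMAP's fd):
    __u32 queue_id = 0;

    /* an XSKMAP update requires a valid AF_XDP socket fd as the value,
     * so deleting the entry is how a slot gets "cleared" */
    bpf_map_delete_elem(xsks_map_fd, &queue_id);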
2019-05-04  libbpf: fix invalid munmap call  (Björn Töpel)
When unmapping the AF_XDP memory regions used for the rings, an invalid address was passed to the munmap() calls. Instead of passing the beginning of the memory region, the descriptor region was passed to munmap. When the userspace application tried to tear down an AF_XDP socket, the operation failed and the application would still have a reference to the socket it wished to get rid of. Reported-by: William Tu <u9012063@gmail.com> Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets") Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: William Tu <u9012063@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Three trivial overlapping conflicts. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-27  bpf: Support BPF_MAP_TYPE_SK_STORAGE in bpf map probing  (Martin KaFai Lau)
This patch supports probing for the new BPF_MAP_TYPE_SK_STORAGE. BPF_MAP_TYPE_SK_STORAGE enforces BTF usage, so the new probe also requires creating and loading a BTF. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-26  tools: sync bpf.h  (Matt Mullins)
This adds BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE and fixes up the "enumeration value ‘BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE’ not handled in switch [-Werror=switch-enum]" build errors it would otherwise cause in libbpf. Signed-off-by: Matt Mullins <mmullins@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-25  libbpf: add binary to gitignore  (Matteo Croce)
Some binaries are generated when building libbpf from tools/lib/bpf/, namely libbpf.so.0.0.2 and libbpf.so.0. Add them to the local .gitignore. Signed-off-by: Matteo Croce <mcroce@redhat.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-25  libbpf: fix samples/bpf build failure due to undefined UINT32_MAX  (Daniel T. Lee)
Currently, building the bpf samples causes the following error:
    ./tools/lib/bpf/bpf.h:132:27: error: 'UINT32_MAX' undeclared here (not in a function)
     ..
     #define BPF_LOG_BUF_SIZE (UINT32_MAX >> 8) /* verifier maximum in kernels <= 5.1 */
                               ^
    ./samples/bpf/bpf_load.h:31:25: note: in expansion of macro 'BPF_LOG_BUF_SIZE'
     extern char bpf_log_buf[BPF_LOG_BUF_SIZE];
                             ^~~~~~~~~~~~~~~~
Due to commit 4519efa6f8ea ("libbpf: fix BPF_LOG_BUF_SIZE off-by-one error"), the hard-coded size of BPF_LOG_BUF_SIZE has been replaced with UINT32_MAX, which is defined in the <stdint.h> header. Even with this change, the bpf selftests build fine, since they are built with clang, which also pulls in headers (-idirafter) from clang/6.0.0/include (it has <stdint.h>):
    clang -I. -I./include/uapi -I../../../include/uapi -idirafter /usr/local/include -idirafter /usr/include \
        -idirafter /usr/lib/llvm-6.0/lib/clang/6.0.0/include -idirafter /usr/include/x86_64-linux-gnu \
        -Wno-compare-distinct-pointer-types -O2 -target bpf -emit-llvm -c progs/test_sysctl_prog.c -o - | \
        llc -march=bpf -mcpu=generic -filetype=obj -o /linux/tools/testing/selftests/bpf/test_sysctl_prog.o
But the bpf samples are compiled with GCC, which only searches and includes the headers declared in the target file. As '#include <stdint.h>' hasn't been declared in tools/lib/bpf/bpf.h, it causes the build failure of the bpf samples:
    gcc -Wp,-MD,./samples/bpf/.sockex3_user.o.d -Wall -Wmissing-prototypes -Wstrict-prototypes \
        -O2 -fomit-frame-pointer -std=gnu89 -I./usr/include -I./tools/lib/ -I./tools/testing/selftests/bpf/ \
        -I./tools/lib/ -I./tools/include -I./tools/perf -c -o ./samples/bpf/sockex3_user.o ./samples/bpf/sockex3_user.c;
This commit adds the '#include <stdint.h>' declaration to tools/lib/bpf/bpf.h to fix this problem. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
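The fix itself boils down to one include in tools/lib/bpf/bpf.h; a sketch of the relevant lines:
    #include <stdint.h>     /* for UINT32_MAX */

    #define BPF_LOG_BUF_SIZE (UINT32_MAX >> 8) /* verifier maximum in kernels <= 5.1 */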
2019-04-25  bpf, libbpf: fix segfault in bpf_object__init_maps' pr_debug statement  (Daniel Borkmann)
Ran into it while testing; in bpf_object__init_maps(), data can be NULL in the case where no map section is present. Therefore we simply cannot access data->d_size before the NULL test. Move the pr_debug() to where it's safe to access it. Fixes: d859900c4c56 ("bpf, libbpf: support global data/bss/rodata sections") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-25  bpf, libbpf: handle old kernels more graceful wrt global data sections  (Daniel Borkmann)
Andrii reported a corner case where e.g. global static data is present in the BPF ELF file in the form of a .data/.bss/.rodata section, but without any relocations to it. Such programs could be loaded before commit d859900c4c56 ("bpf, libbpf: support global data/bss/rodata sections"), whereas afterwards, if the kernel lacks support, loading would fail. Add a probing mechanism which skips setting up libbpf internal maps in case of missing kernel support. In the presence of relocation entries, we abort the load attempt. Fixes: d859900c4c56 ("bpf, libbpf: support global data/bss/rodata sections") Reported-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-19  libbpf: fix BPF_LOG_BUF_SIZE off-by-one error  (McCabe, Robert J)
The BPF_PROG_LOAD condition for kernel version <= 5.1 is log->len_total > UINT_MAX >> 8 /* (16 * 1024 * 1024) - 1 */ Signed-off-by: McCabe, Robert J <robert.mccabe@rockwellcollins.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-18  libbpf: remove compile time warning from libbpf_util.h  (Magnus Karlsson)
Having a helpful compile time warning in libbpf_util.h is not a good idea since all warnings are treated as errors. Change this into a comment in the code instead. Fixes: b7e3a28019c9 ("libbpf: remove dependency on barrier.h in xsk.h") Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-16  libbpf: optimize barrier for XDP socket rings  (Magnus Karlsson)
The full memory barrier in the XDP socket rings on the consumer side, between the load of the data and the store of the consumer ring, is there to protect the store from being executed before the load of the data. If this was allowed to happen, the producer might overwrite the data field with a new entry before the consumer got the chance to read it. On x86, stores are guaranteed not to be reordered with older loads, so it does not need a full memory barrier here. A compile time barrier would be enough. This patch introduces a new primitive in libbpf_util.h that implements a new barrier type (libbpf_smp_rwmb) hindering stores from being reordered with older loads. It is then used in the XDP socket ring access code in libbpf to improve performance. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
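On x86 the new primitive can therefore degenerate to a compiler barrier; a rough sketch follows (the generic fallback shown here is an assumption, and the real libbpf_util.h carries per-architecture definitions):
    #if defined(__i386__) || defined(__x86_64__)
    /* stores are not reordered with older loads on x86, so preventing the
     * compiler from moving the store is enough */
    # define libbpf_smp_rwmb() asm volatile("" : : : "memory")
    #else
    # define libbpf_smp_rwmb() __sync_synchronize()     /* assumed generic fallback */
    #endif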
2019-04-16  libbpf: remove dependency on barrier.h in xsk.h  (Magnus Karlsson)
The use of smp_rmb() and smp_wmb() creates a Linux header dependency on barrier.h that is unnecessary in most parts. This patch implements the two small defines that are needed from barrier.h. As a bonus, the new implementations are faster than the default ones as they default to sfence and lfence for x86, while we only need a compiler barrier in our case. Just as it is when the same ring access code is compiled in the kernel. Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets") Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-16  libbpf: remove likely/unlikely in xsk.h  (Magnus Karlsson)
This patch removes the use of likely and unlikely in xsk.h since they create a dependency on Linux headers as reported by several users. There have also been reports that the use of these decreases performance as the compiler puts the code on two different cache lines instead of on a single one. All in all, I think we are better off without them. Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets") Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-16  libbpf: fix XDP socket ring buffer memory ordering  (Magnus Karlsson)
The ring buffer code of XDP sockets is missing a memory barrier on the consumer side between the load of the data and the write that signals that it is ok for the producer to put new data into the buffer. On architectures that do not guarantee that stores are not reordered with older loads, the producer might put data into the ring before the consumer had the chance to read it. As IA does guarantee this ordering, it would only need a compiler barrier here, but there are no primitives in barrier.h for this specific case (hindering writes from being ordered before older reads), so I had to add an smp_mb() here, which will translate into a run-time sync operation on IA. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>