| field | value |
|---|---|
| author | Alexei Starovoitov <ast@kernel.org> (2025-12-09 23:28:49 -0800) |
| committer | Alexei Starovoitov <ast@kernel.org> (2025-12-09 23:30:27 -0800) |
| commit | 297c3fba9d766b52b6b8e99fa53f0a85c5902909 |
| tree | f02df38dd46e62838ff4a59bac510defbd5e2a79 |
| parent | 189e5deb944a6f9c7992355d60bffd8ec2e54a9c |
| parent | 01bc3b6db18d6e0a2e93c37885996bf339bfe337 |
Merge branch 'bpf-x86-unwind-orc-support-reliable-unwinding-through-bpf-stack-frames'
Josh Poimboeuf says:
====================
bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames
Fix livepatch stalls that may be seen when a task is blocked with BPF
JIT-compiled frames on its kernel stack.
Changes since v1 (https://lore.kernel.org/cover.1764699074.git.jpoimboe@kernel.org):
- fix NULL ptr deref in __arch_prepare_bpf_trampoline()
====================
Link: https://patch.msgid.link/cover.1764818927.git.jpoimboe@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
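For context on the API change merged here: the new `fp_start`/`fp_end` fields in `struct bpf_ksym` describe, per BPF symbol, the instruction range in which the JIT-emitted frame pointer can be trusted, and `bpf_has_frame_pointer()` lets an unwinder query that range for a given instruction pointer. The sketch below is illustrative only, assuming the fields hold offsets relative to the ksym's `start` address and that a lookup helper such as `bpf_ksym_find()` is available; it is not the merged implementation.

```c
#include <linux/bpf.h>

/*
 * Illustrative sketch, not the upstream implementation.
 * Assumptions: fp_start/fp_end are offsets from ksym->start delimiting the
 * region where RBP holds a valid frame pointer, and bpf_ksym_find() (assumed
 * helper) maps an address to its BPF ksym, or returns NULL if none matches.
 */
bool bpf_has_frame_pointer(unsigned long ip)
{
	struct bpf_ksym *ksym = bpf_ksym_find(ip);

	if (!ksym)
		return false;

	/* RBP is only trustworthy between prologue setup and epilogue teardown. */
	return ip >= ksym->start + ksym->fp_start &&
	       ip <  ksym->start + ksym->fp_end;
}
```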
Diffstat (limited to 'include/linux')
 include/linux/bpf.h | 3 +++
 1 file changed, 3 insertions(+)
```diff
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 6498be4c44f8..e5be698256d1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1283,6 +1283,8 @@ struct bpf_ksym {
 	struct list_head	 lnode;
 	struct latch_tree_node	 tnode;
 	bool			 prog;
+	u32			 fp_start;
+	u32			 fp_end;
 };
 
 enum bpf_tramp_prog_type {
@@ -1511,6 +1513,7 @@ void bpf_image_ksym_add(struct bpf_ksym *ksym);
 void bpf_image_ksym_del(struct bpf_ksym *ksym);
 void bpf_ksym_add(struct bpf_ksym *ksym);
 void bpf_ksym_del(struct bpf_ksym *ksym);
+bool bpf_has_frame_pointer(unsigned long ip);
 int bpf_jit_charge_modmem(u32 size);
 void bpf_jit_uncharge_modmem(u32 size);
 bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
```
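To show how such a helper might be consumed, here is a hedged sketch of a frame-pointer fallback in the spirit of the series; the function name `bpf_frame_unwind()`, its frame layout assumptions (saved RBP at `frame[0]`, return address at `frame[1]`), and the direct stack dereference are all illustrative and not the ORC unwinder's actual code, which would use fault-safe stack accesses.

```c
#include <linux/bpf.h>

/*
 * Hypothetical fallback, illustrative only: when no ORC entry covers a BPF
 * JIT address, fall back to frame-pointer unwinding, but only where the JIT
 * guarantees RBP is a real frame pointer. A production unwinder would read
 * the stack with a fault-safe accessor rather than dereferencing directly.
 */
static bool bpf_frame_unwind(unsigned long ip, unsigned long bp,
			     unsigned long *ret_ip, unsigned long *ret_bp)
{
	unsigned long *frame = (unsigned long *)bp;

	if (!bpf_has_frame_pointer(ip))
		return false;	/* in prologue/epilogue: RBP not set up yet */

	*ret_bp = frame[0];	/* caller's saved RBP */
	*ret_ip = frame[1];	/* return address pushed by the call */
	return true;
}
```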
