25 Aug, 2020
1 commit
-
For the problem of increasing fragmentation of the bpf loader programs,
this commit refactors the existing kprobe tracing programs to use the
libbpf bpf loader instead of bpf_load.o, which is used in samples/bpf.

- For kprobe events pointing to system calls, the SYSCALL() macro in
trace_common.h was used.
- Adding a kprobe event and attaching a bpf program to it was done
through bpf_program__attach().
- Instead of using the existing BPF MAP definition, the MAP definition
has been refactored with the new BTF-defined MAP format.

Signed-off-by: Daniel T. Lee
Signed-off-by: Alexei Starovoitov
Link: https://lore.kernel.org/bpf/20200823085334.9413-3-danieltimlee@gmail.com
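The refactoring described above can be sketched as follows; the map and program names are illustrative (not the actual sample code), and the SEC()/SYSCALL() usage assumes the trace_common.h helper mentioned in the commit:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "trace_common.h"	/* provides the SYSCALL() macro */

/* BTF-defined map: replaces the old 'struct bpf_map_def SEC("maps")'
 * style; key/value types are described via BTF field annotations. */
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u32);
	__type(value, __u64);
} counts SEC(".maps");

/* SYSCALL() expands sys_write to the arch-specific symbol
 * (e.g. __x64_sys_write), so the kprobe resolves correctly. */
SEC("kprobe/" SYSCALL(sys_write))
int trace_write(struct pt_regs *ctx)
{
	__u32 key = bpf_get_current_pid_tgid();
	__u64 init = 1, *val;

	val = bpf_map_lookup_elem(&counts, &key);
	if (val)
		__sync_fetch_and_add(val, 1);
	else
		bpf_map_update_elem(&counts, &key, &init, BPF_ANY);
	return 0;
}
```

On the user-space side, a single bpf_program__attach() call creates the kprobe event and attaches the program, replacing the manual perf-event setup that bpf_load.c performed.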
21 Jan, 2020
1 commit
-
Fix all files in samples/bpf to include libbpf header files with the bpf/
prefix, to be consistent with external users of the library. Also ensure
that all includes of exported libbpf header files (those that are exported
on 'make install' of the library) use bracketed includes instead of quoted.

To make sure no new files are introduced that don't include the bpf/
prefix in their includes, remove tools/lib/bpf from the include path
entirely, and use tools/lib instead.

Fixes: 6910d7d3867a ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen
Signed-off-by: Alexei Starovoitov
Acked-by: Jesper Dangaard Brouer
Acked-by: Andrii Nakryiko
Link: https://lore.kernel.org/bpf/157952560911.1683545.8795966751309534150.stgit@toke.dk
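In practice the include convention change looks like this:

```c
/* Before: only resolved because tools/lib/bpf was on the include path */
#include "libbpf.h"
#include "bpf_helpers.h"

/* After: bracketed and bpf/-prefixed, resolved via tools/lib,
 * consistent with external users of the installed library */
#include <bpf/libbpf.h>
#include <bpf/bpf_helpers.h>
```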
09 Oct, 2019
1 commit
-
Split-off PT_REGS-related helpers into bpf_tracing.h header. Adjust
selftests and samples to include it where necessary.

Signed-off-by: Andrii Nakryiko
Signed-off-by: Daniel Borkmann
Acked-by: John Fastabend
Acked-by: Song Liu
Link: https://lore.kernel.org/bpf/20191008175942.1769476-5-andriin@fb.com
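After the split, tracing programs that use the PT_REGS helpers include the new header explicitly; a minimal sketch (the probed function is illustrative):

```c
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>	/* PT_REGS_PARM1() and friends now live here */

SEC("kprobe/do_sys_open")
int trace_open(struct pt_regs *ctx)
{
	/* arch-independent access to the first function argument */
	long arg1 = PT_REGS_PARM1(ctx);

	bpf_printk("do_sys_open arg1=%lx", arg1);
	return 0;
}
```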
07 Apr, 2016
1 commit
-
Add the necessary definitions for building bpf samples on ppc.
Since ppc doesn't store the function return address on the stack, modify
how PT_REGS_RET() and PT_REGS_FP() work.

Also, introduce PT_REGS_IP() to access the instruction pointer.
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: David S. Miller
Cc: Ananth N Mavinakayanahalli
Cc: Michael Ellerman
Signed-off-by: Naveen N. Rao
Acked-by: Alexei Starovoitov
Signed-off-by: David S. Miller
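A rough sketch of the per-arch distinction (register field names follow the kernel's ptrace headers; the exact sample definitions may differ):

```c
#include <linux/ptrace.h>

#ifdef __powerpc__
/* ppc keeps the return address in the link register, not on the stack */
#define PT_REGS_RET(x)	((x)->link)
#define PT_REGS_IP(x)	((x)->nip)	/* next instruction pointer */
#else /* x86_64 */
#define PT_REGS_RET(x)	((x)->sp)	/* return address sits on the stack */
#define PT_REGS_IP(x)	((x)->ip)
#endif
```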
09 Mar, 2016
2 commits
-
increase stress by also calling bpf_get_stackid() from
various *spin* functions

Signed-off-by: Alexei Starovoitov
Signed-off-by: David S. Miller
-
this test calls bpf programs from different contexts:
from inside of slub, from rcu, from pretty much everywhere,
since it kprobes all spin_lock functions.
It stresses the bpf hash and percpu map pre-allocation,
deallocation logic and call_rcu mechanisms.
The user space part adds more stress by walking and deleting map elements.

Note that due to the nature of bpf_load.c, the earlier kprobe+bpf
programs are already active while the loader loads new programs, creates
new kprobes and attaches them.

Signed-off-by: Alexei Starovoitov
Signed-off-by: David S. Miller
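The user-space walk-and-delete stress can be sketched roughly as follows, using modern libbpf call names (the original sample used the local wrappers shipped in samples/bpf at the time):

```c
#include <bpf/bpf.h>

/* Repeatedly drain the hash map while the kprobe programs keep
 * inserting new elements, stressing the map's pre-allocation,
 * deallocation and call_rcu paths. Passing NULL as the current key
 * to bpf_map_get_next_key() always fetches the first remaining key. */
static void stress_map(int map_fd)
{
	__u32 key;

	while (bpf_map_get_next_key(map_fd, NULL, &key) == 0)
		bpf_map_delete_elem(map_fd, &key);
}
```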