13 Mar, 2020

3 commits

  • These files are generated, so ignore them.

    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Link: https://lore.kernel.org/bpf/20200312182332.3953408-4-songliubraving@fb.com

    Song Liu
     
  • Add the dependency to libbpf, to fix build errors like:

    In file included from skeleton/profiler.bpf.c:5:
    .../bpf_helpers.h:5:10: fatal error: 'bpf_helper_defs.h' file not found
    #include "bpf_helper_defs.h"
    ^~~~~~~~~~~~~~~~~~~
    1 error generated.
    make: *** [skeleton/profiler.bpf.o] Error 1
    make: *** Waiting for unfinished jobs....

    Fixes: 47c09d6a9f67 ("bpftool: Introduce "prog profile" command")
    Suggested-by: Quentin Monnet
    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200312182332.3953408-3-songliubraving@fb.com

    Song Liu
     
  • bpftool-prog-profile requires clang to generate BTF for global variables.
    When building with an older clang that cannot do this, skip the command.
    This is achieved by adding a new feature, clang-bpf-global-var, to
    tools/build/feature.
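
    A feature probe of this kind boils down to a tiny C file that only a
    sufficiently new clang can compile with type info for a global variable;
    a minimal sketch (file name and exact build rule are illustrative):

    /* Sketch of a clang-bpf-global-var probe: the feature counts as
     * present if something like "clang -g -S -target bpf probe.c"
     * succeeds and emits type info for this global variable. */
    volatile int global_value_for_test = 1;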

    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Link: https://lore.kernel.org/bpf/20200312182332.3953408-2-songliubraving@fb.com

    Song Liu
     

12 Mar, 2020

1 commit

  • When compiling bpftool on a system where the /usr/include/asm symlink
    doesn't exist (e.g. on an Ubuntu system without gcc-multilib installed),
    the build fails with:

    CLANG skeleton/profiler.bpf.o
    In file included from skeleton/profiler.bpf.c:4:
    In file included from /usr/include/linux/bpf.h:11:
    /usr/include/linux/types.h:5:10: fatal error: 'asm/types.h' file not found
    #include <asm/types.h>
             ^~~~~~~~~~~~~
    1 error generated.
    make: *** [Makefile:123: skeleton/profiler.bpf.o] Error 1

    This indicates that the build is using linux/types.h from system headers
    instead of source tree headers.

    To fix this, adjust the clang search path to include the necessary
    headers from tools/testing/selftests/bpf/include/uapi and
    tools/include/uapi. Also use __bitwise__ instead of __bitwise in
    skeleton/profiler.h to avoid clashing with the definition in
    tools/testing/selftests/bpf/include/uapi/linux/types.h.

    Signed-off-by: Tobias Klauser
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Link: https://lore.kernel.org/bpf/20200312130330.32239-1-tklauser@distanz.ch

    Tobias Klauser
     

11 Mar, 2020

3 commits

  • Libbpf compiles and runs a subset of selftests on each PR in its GitHub
    mirror repository. To keep up-to-date selftests building against outdated
    kernel images, add the BPF_F_CURRENT_CPU definition back.

    N.B. BCC's runqslower version ([0]) doesn't need BPF_F_CURRENT_CPU because
    it uses a locally checked-in vmlinux.h, generated against a kernel with
    1aae4bdd7879 ("bpf: Switch BPF UAPI #define constants used from BPF
    program side to enums") applied.

    [0] https://github.com/iovisor/bcc/pull/2809

    Fixes: 367d82f17eff ("tools/runqslower: Drop copy/pasted BPF_F_CURRENT_CPU definiton")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200311043010.530620-1-andriin@fb.com

    Andrii Nakryiko
     
  • fmod_ret progs are emitted as:

    start = __bpf_prog_enter();
    call fmod_ret
    *(u64 *)(rbp - 8) = rax
    __bpf_prog_exit(, start);
    test eax, eax
    jne do_fexit

    That 'test eax, eax' works only by accident: the compiler is free to use
    rax inside __bpf_prog_exit() or inside functions that __bpf_prog_exit()
    calls. This caused "test_progs -t modify_return" to fail sporadically
    depending on compiler version and kconfig. Fix it by using
    'cmp [rbp - 8], 0' instead of 'test eax, eax'.

    Fixes: ae24082331d9 ("bpf: Introduce BPF_MODIFY_RETURN")
    Reported-by: Andrii Nakryiko
    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Andrii Nakryiko
    Acked-by: KP Singh
    Link: https://lore.kernel.org/bpf/20200311003906.3643037-1-ast@kernel.org

    Alexei Starovoitov
     
  • Add a bpf_link_new_file() API for cases where we need to ensure the
    anon_inode is successfully created before proceeding with the expensive
    BPF program attachment procedure; otherwise, a failure in anon_inode
    creation would force an equally (if not more) expensive and potentially
    failing compensating detachment. This API simplifies the code: first
    ensure the anon_inode is created, then attach the BPF program, and only
    then call fd_install(), which can't fail.

    After the anon_inode file is created, the link can no longer simply be
    kfree()'d, because its destruction is performed by the deferred
    file_operations->release call. For this, the bpf_link API requires
    specifying two separate operations: release() and dealloc(); the former
    performs detachment only, while the latter frees the memory used by
    bpf_link itself. dealloc() needs to be specified because struct bpf_link
    is frequently embedded into a link type-specific container struct (e.g.,
    struct bpf_raw_tp_link), so bpf_link itself doesn't know how to properly
    free the memory. If the anon_inode file was successfully created but the
    subsequent BPF attachment failed, the bpf_link is marked "defunct", so
    that the file's release() callback performs only memory deallocation and
    no detachment.

    Convert raw tracepoint and tracing attachment to the new API and
    eliminate detachment from the error handling path.
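
    A rough sketch of the attach-time pattern this enables (helper names
    other than bpf_link_new_file() and fd_install() are illustrative, not
    the exact kernel API):

    link_file = bpf_link_new_file(link, &link_fd);  /* may fail cheaply */
    if (IS_ERR(link_file)) {
            kfree(link);                    /* nothing attached yet */
            return PTR_ERR(link_file);
    }

    err = do_expensive_attach(prog, link);  /* illustrative attach step */
    if (err) {
            /* Mark the link defunct (hypothetical helper); the file's
             * release() will then only free memory, not detach. */
            bpf_link_mark_defunct(link);
            fput(link_file);
            put_unused_fd(link_fd);
            return err;
    }

    fd_install(link_fd, link_file);         /* can't fail */
    return link_fd;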

    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309231051.1270337-1-andriin@fb.com

    Andrii Nakryiko
     

10 Mar, 2020

16 commits

  • Fix bash completion for prog attach|detach: use _bpftool_get_prog_names
    instead of _bpftool_get_map_names.

    Fixes: 99f9863a0c45 ("bpftool: Match maps by name")
    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Link: https://lore.kernel.org/bpf/20200309173218.2739965-5-songliubraving@fb.com

    Song Liu
     
  • Add bash completion for "bpftool prog profile" command.

    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Link: https://lore.kernel.org/bpf/20200309173218.2739965-4-songliubraving@fb.com

    Song Liu
     
  • Add documentation for the new bpftool prog profile command.

    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Link: https://lore.kernel.org/bpf/20200309173218.2739965-3-songliubraving@fb.com

    Song Liu
     
  • With fentry/fexit programs, it is possible to profile a BPF program with
    hardware counters. Introduce bpftool "prog profile", which measures key
    metrics of a BPF program.

    The bpftool prog profile command creates per-cpu perf events, then
    attaches fentry/fexit programs to the target BPF program. The fentry
    program saves the perf event value to a map; the fexit program reads the
    perf event again and calculates the difference, which is the
    instructions/cycles used by the target program.
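
    A much-simplified sketch of the generated fentry side (the real source
    is the profiler skeleton shipped with bpftool; names here are
    illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
            __uint(key_size, sizeof(__u32));
            __uint(value_size, sizeof(int));
    } events SEC(".maps");

    struct {
            __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, struct bpf_perf_event_value);
    } fentry_readings SEC(".maps");

    SEC("fentry/XXX")               /* target program is set at load time */
    int snapshot_counter(void *ctx)
    {
            __u32 key = 0;
            struct bpf_perf_event_value *val;

            val = bpf_map_lookup_elem(&fentry_readings, &key);
            if (val)
                    bpf_perf_event_read_value(&events, BPF_F_CURRENT_CPU,
                                              val, sizeof(*val));
            return 0;
    }

    The matching fexit program reads the counter once more and accumulates
    the delta into an output map that bpftool dumps when the duration
    expires.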

    Example input and output:

    ./bpftool prog profile id 337 duration 3 cycles instructions llc_misses

    4228 run_cnt
    3403698 cycles (84.08%)
    3525294 instructions # 1.04 insn per cycle (84.05%)
    13 llc_misses # 3.69 LLC misses per million insns (83.50%)

    This command measures cycles and instructions for the BPF program with
    id 337 for 3 seconds. The program was triggered 4228 times. The rest of
    the output is similar to perf-stat. In this example, the counters were
    only counting ~84% of the time because of time multiplexing of perf
    counters.

    Note that this approach measures cycles and instructions in very small
    increments, so the fentry/fexit programs themselves introduce noticeable
    error into the measurement results.

    The fentry/fexit programs are generated with BPF skeletons, so bpftool is
    built twice: first, _bpftool is built without skeletons; then _bpftool is
    used to generate the skeletons; finally, bpftool is built with the
    skeletons.

    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Quentin Monnet
    Acked-by: Yonghong Song
    Link: https://lore.kernel.org/bpf/20200309173218.2739965-2-songliubraving@fb.com

    Song Liu
     
  • Add Jakub and myself as maintainers for sockmap related code.

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-13-lmb@cloudflare.com

    Lorenz Bauer
     
  • Remove the guard that disables UDP tests now that sockmap
    has support for them.

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-12-lmb@cloudflare.com

    Lorenz Bauer
     
  • Expand the TCP sockmap test suite to also check UDP sockets.

    Signed-off-by: Jakub Sitnicki
    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-11-lmb@cloudflare.com

    Lorenz Bauer
     
  • Most tests for TCP sockmap can be adapted to UDP sockmap if the
    listen call is skipped. Rename listen_loopback, etc. to socket_loopback
    and skip listen() for SOCK_DGRAM.
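
    A sketch of the resulting helper (illustrative, not the verbatim
    selftest code; bind and error handling trimmed for brevity):

    #include <sys/socket.h>
    #include <unistd.h>

    static int socket_loopback(int family, int sotype)
    {
            int fd = socket(family, sotype, 0);

            if (fd < 0)
                    return -1;
            /* ... bind to the loopback address here ... */
            if (sotype == SOCK_DGRAM)
                    return fd;              /* UDP: no listen() */
            if (listen(fd, SOMAXCONN) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }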

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-10-lmb@cloudflare.com

    Lorenz Bauer
     
  • Allow adding hashed UDP sockets to sockmaps.

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Jakub Sitnicki
    Signed-off-by: Daniel Borkmann
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-9-lmb@cloudflare.com

    Lorenz Bauer
     
  • Add basic psock hooks for UDP sockets. This allows adding and
    removing sockets, as well as automatic removal on unhash and close.

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Jakub Sitnicki
    Signed-off-by: Daniel Borkmann
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-8-lmb@cloudflare.com

    Lorenz Bauer
     
  • We can take advantage of the fact that both callers of
    sock_map_init_proto are holding an RCU read lock and have verified that
    psock is valid.

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-7-lmb@cloudflare.com

    Lorenz Bauer
     
  • The init, close and unhash handlers from TCP sockmap are generic,
    and can be reused by UDP sockmap. Move the helpers into the sockmap code
    base and expose them. This requires tcp_bpf_get_proto and tcp_bpf_clone to
    be conditional on BPF_STREAM_PARSER.

    The moved functions are unmodified, except that sk_psock_unlink is
    renamed to sock_map_unlink to better match its behaviour.

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-6-lmb@cloudflare.com

    Lorenz Bauer
     
  • tcp_bpf.c is only included in the build if CONFIG_NET_SOCK_MSG is
    selected. The declaration should therefore be guarded as such.
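
    The shape of the fix is a straightforward config guard in the header
    (sketch only; the function shown is just an example of such a
    declaration):

    #ifdef CONFIG_NET_SOCK_MSG
    int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
    #endif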

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-5-lmb@cloudflare.com

    Lorenz Bauer
     
  • We need to ensure that sk->sk_prot uses certain callbacks, so that code
    that directly calls e.g. tcp_sendmsg in certain corner cases works. To
    avoid spurious asserts, we must do this only if sk_psock_update_proto
    has not yet been called. The same invariants apply for
    tcp_bpf_check_v6_needs_rebuild, so move the call as well.

    Doing so allows us to merge tcp_bpf_init and tcp_bpf_reinit.

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-4-lmb@cloudflare.com

    Lorenz Bauer
     
  • Only update psock->saved_* if psock->sk_proto has not been initialized
    yet. This allows us to get rid of tcp_bpf_reinit_sk_prot.
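
    A sketch of the resulting check (field names follow struct sk_psock, but
    the exact hunk is illustrative):

    /* Capture the original callbacks only once, so a later update cannot
     * overwrite them with already-patched ones. */
    if (!psock->sk_proto) {
            psock->saved_unhash      = sk->sk_prot->unhash;
            psock->saved_close       = sk->sk_prot->close;
            psock->saved_write_space = sk->sk_write_space;
    }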

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-3-lmb@cloudflare.com

    Lorenz Bauer
     
  • The sock map code checks that a socket does not have an active upper
    layer protocol before inserting it into the map. This requires casting
    via inet_csk, which isn't valid for UDP sockets.

    Guard checks for ULP by checking inet_sk(sk)->is_icsk first.
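
    Conceptually the guard looks like this (the helper name is hypothetical
    and the exact condition is illustrative):

    /* inet_csk() is only valid for connection-oriented (icsk) sockets,
     * so check is_icsk before touching any ULP state. */
    static bool sock_has_ulp(struct sock *sk)
    {
            return inet_sk(sk)->is_icsk &&
                   inet_csk(sk)->icsk_ulp_ops != NULL;
    }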

    Signed-off-by: Lorenz Bauer
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Jakub Sitnicki
    Acked-by: John Fastabend
    Link: https://lore.kernel.org/bpf/20200309111243.6982-2-lmb@cloudflare.com

    Lorenz Bauer
     

06 Mar, 2020

2 commits

  • test_run.o is not built when CONFIG_NET is not set, so the reference to
    bpf_prog_test_run_tracing in bpf_trace.o causes a linker error:

    ld: kernel/trace/bpf_trace.o:(.rodata+0x38): undefined reference to
    `bpf_prog_test_run_tracing'

    Add a __weak function in bpf_trace.c to handle this.
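
    The fix boils down to a weak fallback definition, roughly like the
    sketch below (treat the exact signature as illustrative):

    /* Weak stub; the real implementation in net/bpf/test_run.c overrides
     * it when CONFIG_NET is enabled. */
    int __weak bpf_prog_test_run_tracing(struct bpf_prog *prog,
                                         const union bpf_attr *kattr,
                                         union bpf_attr __user *uattr)
    {
            return -ENOTSUPP;
    }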

    Fixes: da00d2f117a0 ("bpf: Add test ops for BPF_PROG_TYPE_TRACING")
    Signed-off-by: KP Singh
    Reported-by: Randy Dunlap
    Acked-by: Randy Dunlap
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20200305220127.29109-1-kpsingh@chromium.org

    KP Singh
     
  • While well intentioned, checking CAP_MAC_ADMIN for attaching
    BPF_MODIFY_RETURN tracing programs to "security_" functions is not
    necessary as tracing BPF programs already require CAP_SYS_ADMIN.

    Fixes: 6ba43b761c41 ("bpf: Attachment verification for BPF_MODIFY_RETURN")
    Signed-off-by: KP Singh
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20200305204955.31123-1-kpsingh@chromium.org

    KP Singh
     

05 Mar, 2020

15 commits

  • Add a new entry for the 32-bit RISC-V JIT to MAINTAINERS and change the
    mailing list to netdev and bpf, following the guidelines from
    commit e42da4c62abb ("docs/bpf: Update bpf development Q/A file").

    Signed-off-by: Xi Wang
    Signed-off-by: Luke Nelson
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Björn Töpel
    Acked-by: Björn Töpel
    Link: https://lore.kernel.org/bpf/20200305050207.4159-5-luke.r.nels@gmail.com

    Luke Nelson
     
  • Update filter.txt and admin-guide to mention the BPF JIT for RV32G.

    Co-developed-by: Xi Wang
    Signed-off-by: Xi Wang
    Signed-off-by: Luke Nelson
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Björn Töpel
    Acked-by: Björn Töpel
    Link: https://lore.kernel.org/bpf/20200305050207.4159-4-luke.r.nels@gmail.com

    Luke Nelson
     
  • This is an eBPF JIT for RV32G, adapted from the JIT for RV64G and
    the 32-bit ARM JIT.

    There are two main changes required for this to work compared to
    the RV64 JIT.

    First, eBPF registers are 64-bit, while RV32G registers are 32-bit. BPF
    registers either map directly to two RISC-V registers, or reside in
    stack scratch space and are saved and restored when used.

    Second, many 64-bit ALU operations do not trivially map to 32-bit
    operations. Operations that move bits between the high and low words,
    such as ADD, LSH, MUL and others, must emulate the 64-bit behavior in
    terms of 32-bit instructions.
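
    For example, a 64-bit ADD has to propagate the carry between the two
    halves of a register pair; in C terms (purely illustrative, not the
    emitted RISC-V code):

    /* How BPF_ALU64 ADD decomposes when a BPF register is held as a
     * (lo, hi) pair of 32-bit values. */
    static void emu_add64(u32 *dst_lo, u32 *dst_hi, u32 src_lo, u32 src_hi)
    {
            u32 old_lo = *dst_lo;

            *dst_lo += src_lo;
            *dst_hi += src_hi + (*dst_lo < old_lo); /* carry from low word */
    }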

    This patch also makes related changes to bpf_jit.h, such
    as adding RISC-V instructions required by the RV32 JIT.

    Supported features:

    The RV32 JIT supports the same features and instructions as the
    RV64 JIT, with the following exceptions:

    - ALU64 DIV/MOD: Requires loops to implement on 32-bit hardware.

    - BPF_XADD | BPF_DW: There's no 8-byte atomic instruction in RV32.

    These features are also unsupported on other BPF JITs for 32-bit
    architectures.

    Testing:

    - lib/test_bpf.c
    test_bpf: Summary: 378 PASSED, 0 FAILED, [349/366 JIT'ed]
    test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED

    The tests that are not JITed are all due to use of 64-bit div/mod
    or 64-bit xadd.

    - tools/testing/selftests/bpf/test_verifier.c
    Summary: 1415 PASSED, 122 SKIPPED, 43 FAILED

    Tested both with and without BPF JIT hardening.

    This is the same set of tests that pass using the BPF interpreter
    with the JIT disabled.

    Verification and synthesis:

    We developed the RV32 JIT using our automated verification tool,
    Serval. We have used Serval in the past to verify patches to the
    RV64 JIT. We also used Serval to superoptimize the resulting code
    through program synthesis.

    You can find the tool and a guide to the approach and results here:
    https://github.com/uw-unsat/serval-bpf/tree/rv32-jit-v5

    Co-developed-by: Xi Wang
    Signed-off-by: Xi Wang
    Signed-off-by: Luke Nelson
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Björn Töpel
    Acked-by: Björn Töpel
    Link: https://lore.kernel.org/bpf/20200305050207.4159-3-luke.r.nels@gmail.com

    Luke Nelson
     
  • This patch factors out code that can be used by both the RV64 and RV32
    BPF JITs to a common bpf_jit.h and bpf_jit_core.c.

    Move struct definitions and macro-like functions to header. Rename
    rv_sb_insn/rv_uj_insn to rv_b_insn/rv_j_insn to match the RISC-V
    specification.

    Move reusable functions emit_body() and bpf_int_jit_compile() to
    bpf_jit_core.c with minor simplifications. Rename emit_insn() and
    build_{prologue,epilogue}() to be prefixed with "bpf_jit_" as they are
    no longer static.

    Rename bpf_jit_comp.c to bpf_jit_comp64.c to be more explicit.

    Co-developed-by: Xi Wang
    Signed-off-by: Xi Wang
    Signed-off-by: Luke Nelson
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Björn Töpel
    Acked-by: Björn Töpel
    Link: https://lore.kernel.org/bpf/20200305050207.4159-2-luke.r.nels@gmail.com

    Luke Nelson
     
  • KP Singh says:

    ====================
    v3 -> v4:

    * Fix a memory leak noticed by Daniel.

    v2 -> v3:

    * bpf_trampoline_update_progs -> bpf_trampoline_get_progs + const
    qualification.
    * Typos in commit messages.
    * Added Andrii's Acks.

    v1 -> v2:

    * Addressed Andrii's feedback.
    * Fixed a bug that Alexei noticed about nop generation.
    * Rebase.

    This was brought up in the KRSI v4 discussion and found to be useful
    both for security and tracing programs.

    https://lore.kernel.org/bpf/20200225193108.GB22391@chromium.org/

    The modify_return programs are allowed for security hooks (with an
    extra CAP_MAC_ADMIN check) and functions whitelisted for error
    injection (ALLOW_ERROR_INJECTION).

    The "security_" check is expected to be cleaned up with the KRSI patch
    series.

    Here is an example of how a fmod_ret program behaves:

    int func_to_be_attached(int a, int b)
    {                                  <-- do_fentry

    do_fmod_ret:
        <update ret by calling fmod_ret programs>
        if (ret != 0)
            goto do_fexit;

    original_function:

        <side_effects_happen_here>

    }                                  <-- do_fexit

    Alexei Starovoitov
     
  • Test for two scenarios:

    * When the fmod_ret program returns 0, the original function should
    be called along with fentry and fexit programs.
    * When the fmod_ret program returns a non-zero value, the original
    function should not be called, no side effect should be observed and
    fentry and fexit programs should be called.

    The result from the kernel function call and whether a side-effect is
    observed is returned via the retval attr of the BPF_PROG_TEST_RUN (bpf)
    syscall.

    Signed-off-by: KP Singh
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200304191853.1529-8-kpsingh@chromium.org

    KP Singh
     
  • The current fexit and fentry tests rely on a different program to
    exercise the functions they attach to. Instead of doing this, implement
    the test operations for tracing which will also be used for
    BPF_MODIFY_RETURN in a subsequent patch.

    Also, clean up the fexit test to use the generated skeleton.

    Signed-off-by: KP Singh
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200304191853.1529-7-kpsingh@chromium.org

    KP Singh
     
  • Signed-off-by: KP Singh
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200304191853.1529-6-kpsingh@chromium.org

    KP Singh
     
  • - Allow BPF_MODIFY_RETURN attachment only to functions that are (see the
    sketch after this list):

    * Whitelisted for error injection by checking
    within_error_injection_list. Similar discussions happened for the
    bpf_override_return helper.

    * security hooks; this is expected to be cleaned up with the LSM
    changes after the KRSI patches introduce the LSM_HOOK macro:

    https://lore.kernel.org/bpf/20200220175250.10795-1-kpsingh@chromium.org/

    - The attachment is currently limited to functions that return an int.
    This can be extended to other types (e.g. PTR) later.
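
    A rough sketch of the combined check (names and placement are
    approximate; the real test lives in the verifier's attach-time
    validation):

    static int check_fmod_ret_target(const char *tname, unsigned long addr)
    {
            if (!within_error_injection_list(addr) &&
                strncmp("security_", tname, sizeof("security_") - 1) != 0)
                    return -EINVAL; /* neither injectable nor a security_ hook */
            return 0;
    }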

    Signed-off-by: KP Singh
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200304191853.1529-5-kpsingh@chromium.org

    KP Singh
     
  • When multiple programs are attached, each program receives the return
    value from the previous program on the stack and the last program
    provides the return value to the attached function.

    The fmod_ret bpf programs are run after the fentry programs and before
    the fexit programs. The original function is only called if all the
    fmod_ret programs return 0, to avoid any unintended side-effects. The
    success value, i.e. 0, is not currently configurable, but could be made
    so, with user-space specifying it at load time.

    For example:

    int func_to_be_attached(int a, int b)
    {                                  <-- do_fentry

    do_fmod_ret:
        <update ret by calling fmod_ret programs>
        if (ret != 0)
            goto do_fexit;

    original_function:

        <side_effects_happen_here>

    }                                  <-- do_fexit

    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200304191853.1529-4-kpsingh@chromium.org

    KP Singh
     
  • * Split the invoke_bpf program to prepare for special handling of
    fmod_ret programs introduced in a subsequent patch.
    * Move the definition of emit_cond_near_jump and emit_nops as they are
    needed for fmod_ret.
    * Refactor branch target alignment into its own generic helper function
    i.e. emit_align.

    Signed-off-by: KP Singh
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200304191853.1529-3-kpsingh@chromium.org

    KP Singh
     
  • As we need to introduce a third type of attachment for trampolines, the
    flattened signature of arch_prepare_bpf_trampoline gets even more
    complicated.

    Refactor the prog and count arguments of arch_prepare_bpf_trampoline
    into a bpf_tramp_progs container to simplify the addition of, and
    accounting for, new attachment types.
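
    The container is roughly the following (treat the exact layout as
    approximate):

    /* One program array per attachment kind (fentry, fmod_ret, fexit),
     * instead of loose prog/count arguments. */
    struct bpf_tramp_progs {
            struct bpf_prog *progs[BPF_MAX_TRAMP_PROGS];
            int nr_progs;
    };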

    Signed-off-by: KP Singh
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20200304191853.1529-2-kpsingh@chromium.org

    KP Singh
     
  • Add detection of an out-of-tree built vmlinux image for the purpose of
    VMLINUX_BTF detection. According to Documentation/kbuild/kbuild.rst, O
    takes precedence over KBUILD_OUTPUT.

    Also ensure that ~/path/to/build/dir works by relying on wildcard's
    resolution first, then applying $(abspath) at the end to handle
    O=../../whatever cases as well.

    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20200304184336.165766-1-andriin@fb.com

    Andrii Nakryiko
     
  • When CONFIG_DEBUG_INFO is enabled, the two kallsyms linking steps spend
    time collecting and writing the dwarf sections to the temporary output
    files. kallsyms does not need this information, and leaving it off
    halves their linking time. This is especially noticeable without
    CONFIG_DEBUG_INFO_REDUCED. The BTF linking stage, however, does still
    need those details.

    Refactor the BTF and kallsyms generation stages slightly for more
    regularized temporary names. Skip debug during kallsyms links.
    Additionally move "info BTF" to the correct place since commit
    8959e39272d6 ("kbuild: Parameterize kallsyms generation and correct
    reporting"), which added "info LD ..." to vmlinux_link calls.

    For a full debug info build with BTF, my link time goes from 1m06s to
    0m54s, saving about 12 seconds, or 18%.

    Signed-off-by: Kees Cook
    Signed-off-by: Daniel Borkmann
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/202003031814.4AEA3351@keescook

    Kees Cook
     
  • Andrii Nakryiko says:

    ====================
    Convert BPF-related UAPI constants, currently defined as #define macros,
    into anonymous enums. This makes no difference for how such constants
    are used in C code (they can still be used in all the compile-time
    contexts that `#define`s can), but they are recorded as part of DWARF
    type info and subsequently as part of the kernel's BTF type info. This
    allows those constants to be emitted as part of the auto-generated
    vmlinux.h header file and used from BPF programs, which is especially
    convenient for all kinds of BPF helper flags and makes CO-RE BPF
    programs nicer to write.

    libbpf's btf_dump logic currently assumes enum values are signed 32-bit
    values, but that doesn't match a typical case, so switch it to emit unsigned
    values. Once BTF encoding of BTF_KIND_ENUM is extended to capture signedness
    properly, this will be made more flexible.

    As an immediate validation of the approach, runqslower's copy of
    BPF_F_CURRENT_CPU #define is dropped in favor of its enum variant from
    vmlinux.h.

    v2->v3:
    - convert only constants usable from BPF programs (BPF helper flags, map
    create flags, etc) (Alexei);
    v1->v2:
    - fix up btf_dump test to use max 32-bit unsigned value instead of negative one.
    ====================
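
    For illustration, the conversion has this shape (shown for the well-known
    BPF_F_CURRENT_CPU constant; treat the exact hunk as a sketch):

    /* Before: a plain macro, invisible to DWARF/BTF and to vmlinux.h. */
    #define BPF_F_CURRENT_CPU       0xffffffffULL

    /* After: an anonymous enum, usable in the same compile-time contexts,
     * but now recorded in the kernel's BTF type info. */
    enum {
            BPF_F_CURRENT_CPU       = 0xffffffffULL,
    };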

    Signed-off-by: Daniel Borkmann

    Daniel Borkmann