16 Nov, 2019

13 commits

  • Add a few kernel functions with various numbers, types, and sizes of
    arguments for BPF trampoline testing, to cover different calling
    conventions.
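
    A minimal C sketch of the shape of such functions (names follow the
    bpf_fentry_test* convention used by the selftests; bodies here are
    illustrative):

    int noinline bpf_fentry_test1(int a)
    {
            return a + 1;
    }

    int noinline bpf_fentry_test2(int a, u64 b)
    {
            return a + b;
    }

    /* ...and so on, up to six arguments of mixed widths (char, short,
     * int, u64, pointers) to exercise different calling conventions. */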

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-9-ast@kernel.org

    Alexei Starovoitov
     
  • Add a simple test for fentry and fexit programs around eth_type_trans.
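
    A hedged sketch of what such a test program can look like (the
    trampoline passes the traced function's arguments as an array of u64
    in ctx; names are illustrative):

    SEC("fentry/eth_type_trans")
    int fentry_eth_type_trans(__u64 *ctx)
    {
            struct sk_buff *skb = (struct sk_buff *)ctx[0];       /* arg 1 */
            struct net_device *dev = (struct net_device *)ctx[1]; /* arg 2 */

            /* e.g. count calls or sample skb fields here */
            return 0;
    }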

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Andrii Nakryiko
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-8-ast@kernel.org

    Alexei Starovoitov
     
  • Teach libbpf to recognize tracing program types and attach them to
    fentry/fexit.
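
    A hedged usage sketch: the SEC() name selects the program type and the
    attach target, and the attach API added by this patch,
    bpf_program__attach_trace(), completes the hookup:

    struct bpf_object *obj = bpf_object__open("prog.o");
    struct bpf_program *prog;
    struct bpf_link *link;

    bpf_object__load(obj);                   /* BTF id of the target is */
    prog = bpf_program__next(NULL, obj);     /* resolved from the       */
    link = bpf_program__attach_trace(prog);  /* "fentry/..." name       */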

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Song Liu
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-7-ast@kernel.org

    Alexei Starovoitov
     
  • Introduce btf__find_by_name_kind() helper to search BTF by name and kind, since
    name alone can be ambiguous.
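
    A usage sketch:

    __s32 id = btf__find_by_name_kind(btf, "eth_type_trans",
                                      BTF_KIND_FUNC);
    if (id < 0)
            return id;  /* not found, or error */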

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Song Liu
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-6-ast@kernel.org

    Alexei Starovoitov
     
  • Introduce the BPF trampoline concept to allow kernel code to call into BPF
    programs with practically zero overhead. The trampoline generation logic is
    architecture dependent: it converts the native calling convention into the
    BPF calling convention. The BPF ISA is 64-bit (even on 32-bit
    architectures). Registers R1 to R5 are used to pass arguments into BPF
    functions, and the main BPF program accepts only a single argument, "ctx",
    in R1. CPU native calling conventions differ: x86-64 passes the first six
    arguments in registers and the rest on the stack; x86-32 passes the first
    three arguments in registers; sparc64 passes the first six in registers;
    and so on.

    Trampolines between BPF and the kernel already exist. The BPF_CALL_x
    macros in include/linux/filter.h statically compile trampolines from BPF
    into kernel helpers. They convert up to five u64 arguments into kernel C
    pointers and integers. On 64-bit architectures these BPF-to-kernel
    trampolines are nops; on 32-bit architectures they're meaningful.
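
    For reference, a BPF_CALL_x definition looks like this (an existing
    helper, roughly as in kernel/bpf/helpers.c; the macro interleaves
    argument types and names):

    BPF_CALL_2(bpf_map_lookup_elem, struct bpf_map *, map, void *, key)
    {
            WARN_ON_ONCE(!rcu_read_lock_held());
            return (unsigned long) map->ops->map_lookup_elem(map, key);
    }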

    The opposite job, kernel-to-BPF trampolines, is done by the CAST_TO_U64
    macros and __bpf_trace_##call() shim functions in
    include/trace/bpf_probe.h. They convert kernel function arguments into an
    array of u64s that the BPF program consumes via the R1=ctx pointer.

    This patch set does the same job as the __bpf_trace_##call() static
    trampolines, but dynamically for any kernel function. There are ~22k
    global kernel functions that are attachable via a nop at function entry.
    The function arguments and types are described in BTF. The job of the
    btf_distill_func_proto() function is to extract useful information from
    BTF into a "function model" that architecture-dependent trampoline
    generators use to generate assembly code that casts kernel function
    arguments into an array of u64s. For example, the kernel function
    eth_type_trans has two pointer arguments. They will be cast to u64 and
    stored on the stack of the generated trampoline, and a pointer to that
    stack space will be passed into the BPF program in R1. On x86-64 such a
    generated trampoline consumes 16 bytes of stack and two stores of %rdi
    and %rsi. The verifier makes sure that only two u64s are accessed
    read-only by the BPF program. The verifier also recognizes the precise
    type of the pointers being accessed and will not allow typecasting a
    pointer to a different type within the BPF program.
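
    A conceptual C model of what the generated trampoline does for
    eth_type_trans (a sketch of the mechanism, not the emitted assembly;
    'prog' stands for the attached BPF program):

    void eth_type_trans_trampoline(struct sk_buff *skb,
                                   struct net_device *dev)
    {
            u64 ctx[2];               /* the 16 bytes of stack */

            ctx[0] = (u64) skb;       /* store of %rdi */
            ctx[1] = (u64) dev;       /* store of %rsi */
            bpf_prog_run(prog, ctx);  /* BPF program gets &ctx[0] in R1 */
            /* execution then continues in the original function */
    }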

    The tracing use case in the datacenter demonstrated that certain key
    kernel functions (like tcp_retransmit_skb) have two or more kprobes that
    are always active, and other functions have both a kprobe and a
    kretprobe. So it is essential to keep both kernel code and BPF programs
    executing at maximum speed. Hence the generated BPF trampoline is
    regenerated every time a program is attached or detached, to maintain
    maximum performance.

    To avoid the high cost of retpolines, the attached BPF programs are called
    directly. __bpf_prog_enter/exit() are used to support per-program
    execution stats. In the future this logic will be optimized further by
    adding support for bpf_stats_enabled_key inside the generated assembly
    code. The introduction of preemptible and sleepable BPF programs will
    completely remove the need to call __bpf_prog_enter/exit().

    Detaching a BPF program from the trampoline should not fail. To avoid
    memory allocation in the detach path, half of the page is used as a
    reserve and flipped after each attach/detach. 2k bytes is enough to call
    40+ BPF programs directly, which is sufficient for BPF tracing use cases.
    This limit can be increased in the future.

    BPF_TRACE_FENTRY programs have access to the raw kernel function
    arguments, while BPF_TRACE_FEXIT programs have access to the kernel
    return value as well. Often a kprobe BPF program remembers function
    arguments in a map while the kretprobe fetches those arguments from the
    map and analyzes them together with the return value. BPF_TRACE_FEXIT
    accelerates this typical use case.
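
    Sketched as code (hedged; the return value is appended after the
    arguments in the fexit program's context):

    SEC("fexit/eth_type_trans")
    int fexit_eth_type_trans(__u64 *ctx)
    {
            __u64 skb = ctx[0];    /* arg 1 */
            __u64 dev = ctx[1];    /* arg 2 */
            __u64 ret = ctx[2];    /* return value */

            /* arguments and return value in a single invocation, so no
             * kprobe-to-kretprobe map round-trip is needed */
            return 0;
    }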

    Recursion prevention for kprobe BPF programs is done via a per-cpu
    bpf_prog_active counter. In practice that turned out to be a mistake: it
    caused programs to randomly skip execution, and the tracing tools missed
    the results they were looking for. Hence the BPF trampoline doesn't
    provide builtin recursion prevention. It is the job of the BPF program
    itself and will be addressed in follow-up patches.

    The BPF trampoline is intended to be used beyond tracing and fentry/fexit
    use cases in the future, for example to remove the retpoline cost from
    XDP programs.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Andrii Nakryiko
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-5-ast@kernel.org

    Alexei Starovoitov
     
  • Add the bpf_arch_text_poke() helper, used by the BPF trampoline logic to
    patch nops/calls in kernel text into calls into the BPF trampoline, and
    to patch calls/nops inside BPF programs as well.
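
    The helper's shape, sketched (modeled on the mainline signature; the
    poke-type enum in this exact patch may differ):

    enum bpf_text_poke_type {
            BPF_MOD_CALL,
            BPF_MOD_JUMP,
    };

    int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
                           void *old_addr, void *new_addr);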

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Song Liu
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-4-ast@kernel.org

    Alexei Starovoitov
     
  • Refactor x86 JITing of the LDX, STX and CALL instructions into separate
    helper functions. There are no functional changes in the LDX and STX
    helpers. There is a minor change in the CALL helper: it populates the
    target address correctly on the first JIT pass instead of the second,
    though that doesn't reduce the total number of JIT passes.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Song Liu
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20191114185720.1641606-3-ast@kernel.org

    Alexei Starovoitov
     
  • In preparation for static_call and variable size jump_label support,
    teach text_poke_bp() to emulate instructions, namely:

    JMP32, JMP8, CALL, NOP2, NOP_ATOMIC5, INT3

    The current text_poke_bp() takes a @handler argument which is used as
    a jump target when the temporary INT3 is hit by a different CPU.

    When patching CALL instructions, this doesn't work because we'd miss
    the PUSH of the return address. Instead, teach poke_int3_handler() to
    emulate an instruction, typically the instruction we're patching in.

    This fits almost all text_poke_bp() users, except
    arch_unoptimize_kprobe() which restores random text, and for that site
    we have to build an explicit emulate instruction.
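
    The resulting interface (per this commit; a NULL @emulate means
    "emulate the instruction being written"):

    void text_poke_bp(void *addr, const void *opcode, size_t len,
                      const void *emulate);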

    Tested-by: Alexei Starovoitov
    Tested-by: Steven Rostedt (VMware)
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Masami Hiramatsu
    Reviewed-by: Daniel Bristot de Oliveira
    Acked-by: Alexei Starovoitov
    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: H. Peter Anvin
    Cc: Josh Poimboeuf
    Cc: Linus Torvalds
    Cc: Steven Rostedt
    Cc: Thomas Gleixner
    Link: https://lkml.kernel.org/r/20191111132457.529086974@infradead.org
    Signed-off-by: Ingo Molnar
    (cherry picked from commit 8c7eebc10687af45ac8e40ad1bac0cf7893dba9f)
    Signed-off-by: Alexei Starovoitov

    Peter Zijlstra
     
  • The example code for the x86_64 JIT uses the wrong arguments when
    calling function bar().

    Signed-off-by: Mao Wenan
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191114034351.162740-1-maowenan@huawei.com

    Mao Wenan
     
  • Commit 743e568c1586 (samples/bpf: Add a "force" flag to XDP samples)
    introduced the '-F' option but missed adding it to the usage() and the
    'long_option' array.
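
    The shape of the fix, sketched (entries here follow common
    getopt_long() usage and are illustrative, not copied from the sample):

    #include <getopt.h>

    static const struct option long_options[] = {
            { "force", no_argument, NULL, 'F' },  /* the missing entry */
            { NULL, 0, NULL, 0 }
    };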

    Fixes: 743e568c1586 (samples/bpf: Add a "force" flag to XDP samples)
    Signed-off-by: Andre Guedes
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191114162847.221770-2-andre.guedes@intel.com

    Andre Guedes
     
  • The '-f' option is shown twice in the usage(). This patch removes the
    outdated version.

    Signed-off-by: Andre Guedes
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191114162847.221770-1-andre.guedes@intel.com

    Andre Guedes
     
  • The upcoming s390 branch length extension patches rely on the "passes do
    not increase code size" property in order to consistently choose between
    short and long branches. Currently this property does not hold between
    the first and second passes for register save/restore sequences, as well
    as for various code fragments that depend on SEEN_* flags.

    Generate the code during the first pass conservatively: assume register
    save/restore sequences have the maximum possible length, and that all
    SEEN_* flags are set.

    Also refuse to JIT if this happens anyway (e.g. due to a bug), as this
    might lead to verifier bypass once long branches are introduced.

    Signed-off-by: Ilya Leoshkevich
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191114151820.53222-1-iii@linux.ibm.com

    Ilya Leoshkevich
     
  • Currently passing alignment greater than 4 to bpf_jit_binary_alloc does
    not work: in such cases it silently aligns only to 4 bytes.

    On s390, in order to load a constant from memory in a large (>512k) BPF
    program, one must use lgrl instruction, whose memory operand must be
    aligned on an 8-byte boundary.

    This patch makes it possible to request 8-byte alignment from
    bpf_jit_binary_alloc, and also makes it issue a warning when an
    unsupported alignment is requested.

    Signed-off-by: Ilya Leoshkevich
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191115123722.58462-1-iii@linux.ibm.com

    Ilya Leoshkevich
     

12 Nov, 2019

1 commit

  • When installing kselftests to their own directory and running
    test_lwt_ip_encap.sh, it complains that test_lwt_ip_encap.o can't be
    found. The same goes for test_tc_edt.sh, which complains that
    test_tc_edt.o can't be found.

    $ ./test_lwt_ip_encap.sh
    starting egress IPv4 encap test
    Error opening object test_lwt_ip_encap.o: No such file or directory
    Object hashing failed!
    Cannot initialize ELF context!
    Failed to parse eBPF program: Invalid argument

    Rework to add test_lwt_ip_encap.o and test_tc_edt.o to TEST_FILES so
    that the object files get installed when installing kselftests.

    Fixes: 74b5a5968fe8 ("selftests/bpf: Replace test_progs and test_maps w/ general rule")
    Signed-off-by: Anders Roxell
    Signed-off-by: Daniel Borkmann
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/20191111161728.8854-1-anders.roxell@linaro.org

    Anders Roxell
     

11 Nov, 2019

14 commits

  • With the latest llvm compiler, running test_progs hits the following
    verifier failure for test_sysctl_loop1.o:

    libbpf: load bpf program failed: Permission denied
    libbpf: -- BEGIN DUMP LOG ---
    libbpf:
    invalid indirect read from stack var_off (0x0; 0xff)+196 size 7
    ...
    libbpf: -- END LOG --
    libbpf: failed to load program 'cgroup/sysctl'
    libbpf: failed to load object 'test_sysctl_loop1.o'

    The related bytecode looks as below:

    0000000000000308 LBB0_8:
    97: r4 = r10
    98: r4 += -288
    99: r4 += r7
    100: w8 &= 255
    101: r1 = r10
    102: r1 += -488
    103: r1 += r8
    104: r2 = 7
    105: r3 = 0
    106: call 106
    107: w1 = w0
    108: w1 += -1
    109: if w1 > 6 goto -24
    110: w0 += w8
    111: r7 += 8
    112: w8 = w0
    113: if r7 != 224 goto -17

    And source code:

    for (i = 0; i < ARRAY_SIZE(tcp_mem); ++i) {
            ret = bpf_strtoul(value + off, MAX_ULONG_STR_LEN, 0,
                              tcp_mem + i);
            if (ret <= 0 || ret > MAX_ULONG_STR_LEN)
                    return 0;
            off += ret & MAX_ULONG_STR_LEN;
    }

    The current verifier is not able to conclude that register w0 before the
    '+' at insn 110 has a range of 1 to 7, and instead thinks it is 0 to 255.
    This leads to a more conservative range for w8 at insn 112, and to the
    later verifier complaint.

    Let us work around this issue until we find a compiler and/or verifier
    solution. The workaround in this patch is to make the variable 'ret'
    volatile, which forces a reload and then an '&' operation to ensure a
    better value range. With this patch, I got the below byte code for the
    loop:

    0000000000000328 LBB0_9:
    101: r4 = r10
    102: r4 += -288
    103: r4 += r7
    104: w8 &= 255
    105: r1 = r10
    106: r1 += -488
    107: r1 += r8
    108: r2 = 7
    109: r3 = 0
    110: call 106
    111: *(u32 *)(r10 - 64) = r0
    112: r1 = *(u32 *)(r10 - 64)
    113: if w1 s< 1 goto -28
    114: r1 = *(u32 *)(r10 - 64)
    115: if w1 s> 7 goto -30
    116: r1 = *(u32 *)(r10 - 64)
    117: w1 &= 7
    118: w1 += w8
    119: r7 += 8
    120: w8 = w1
    121: if r7 != 224 goto -21

    Insn 117 did the '&' operation and we got a more precise value range
    for 'w8' at insn 120. The test is happy then:

    #3/17 test_sysctl_loop1.o:OK
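
    The workaround itself, sketched (hedged; the surrounding selftest code
    is omitted):

    volatile int ret;  /* volatile forces the spill/reload, keeping the
                        * bounds checks and the '&' mask in the bytecode */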

    Signed-off-by: Yonghong Song
    Signed-off-by: Daniel Borkmann
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/20191107170045.2503480-1-yhs@fb.com

    Yonghong Song
     
  • Magnus Karlsson says:

    ====================
    This patch set extends libbpf and the xdpsock sample program to
    demonstrate the shared umem mode (XDP_SHARED_UMEM) as well as Rx-only
    and Tx-only sockets. This in order for users to have an example to use
    as a blue print and also so that these modes will be exercised more
    frequently.

    Note that the user needs to supply an XDP program with the
    XDP_SHARED_UMEM mode that distributes the packets over the sockets
    according to some policy. There is an example supplied with the
    xdpsock program, but there is no default one in libbpf similarly to
    when XDP_SHARED_UMEM is not used. The reason for this is that I felt
    that supplying one that would work for all users in this mode is
    futile. There are just tons of ways to distribute packets, so whatever
    I come up with and build into libbpf would be wrong in most cases.

    This patch has been applied against commit 30ee348c1267 ("Merge branch 'bpf-libbpf-fixes'")

    Structure of the patch set:

    Patch 1: Adds shared umem support to libbpf
    Patch 2: Shared umem support and example XDP program added to xdpsock sample
    Patch 3: Adds Rx-only and Tx-only support to libbpf
    Patch 4: Uses Rx-only sockets for rxdrop and Tx-only sockets for txpush in
    the xdpsock sample
    Patch 5: Add documentation entries for these two features
    ====================

    Signed-off-by: Alexei Starovoitov

    Alexei Starovoitov
     
  • Add more documentation about the new Rx-only and Tx-only sockets in
    libbpf, and also about how libbpf can now support shared umems. Two
    pieces of the existing text that could be improved were also found and
    fixed in this commit.

    Signed-off-by: Magnus Karlsson
    Signed-off-by: Alexei Starovoitov
    Tested-by: William Tu
    Acked-by: Jonathan Lemon
    Link: https://lore.kernel.org/bpf/1573148860-30254-6-git-send-email-magnus.karlsson@intel.com

    Magnus Karlsson
     
  • Use Rx-only sockets for the rxdrop sample and Tx-only sockets for the
    txpush sample in the xdpsock application, so that we exercise and
    showcase these socket types too.

    Signed-off-by: Magnus Karlsson
    Signed-off-by: Alexei Starovoitov
    Tested-by: William Tu
    Acked-by: Jonathan Lemon
    Link: https://lore.kernel.org/bpf/1573148860-30254-5-git-send-email-magnus.karlsson@intel.com

    Magnus Karlsson
     
  • The libbpf AF_XDP code is extended to allow for the creation of Rx-only
    or Tx-only sockets. Previously it returned an error if the socket was
    not initialized for both Rx and Tx.
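
    Usage sketch (hedged; umem and cfg setup elided): an Rx-only socket is
    created by passing NULL for the Tx ring, and vice versa:

    struct xsk_socket *xsk;
    struct xsk_ring_cons rx;
    int err;

    /* Rx-only: no Tx ring supplied */
    err = xsk_socket__create(&xsk, "eth0", 0 /* queue */, umem,
                             &rx, NULL /* tx */, &cfg);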

    Signed-off-by: Magnus Karlsson
    Signed-off-by: Alexei Starovoitov
    Tested-by: William Tu
    Acked-by: Jonathan Lemon
    Link: https://lore.kernel.org/bpf/1573148860-30254-4-git-send-email-magnus.karlsson@intel.com

    Magnus Karlsson
     
  • Add support for the XDP_SHARED_UMEM mode to the xdpsock sample
    application. As libbpf does not have a built-in XDP program for this
    mode, we use an explicitly loaded XDP program. This also serves as an
    example of how to write your own XDP program that can route to an
    AF_XDP socket.
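
    The supplied example follows this pattern (a hedged sketch of a
    round-robin distribution program; MAX_SOCKS and the map sizing are
    illustrative):

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    #define MAX_SOCKS 4

    struct {
            __uint(type, BPF_MAP_TYPE_XSKMAP);
            __uint(max_entries, MAX_SOCKS);
            __uint(key_size, sizeof(int));
            __uint(value_size, sizeof(int));
    } xsks_map SEC(".maps");

    static unsigned int rr;

    SEC("xdp_sock")
    int xdp_sock_prog(struct xdp_md *ctx)
    {
            rr = (rr + 1) & (MAX_SOCKS - 1);
            return bpf_redirect_map(&xsks_map, rr, XDP_DROP);
    }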

    Signed-off-by: Magnus Karlsson
    Signed-off-by: Alexei Starovoitov
    Tested-by: William Tu
    Acked-by: Jonathan Lemon
    Link: https://lore.kernel.org/bpf/1573148860-30254-3-git-send-email-magnus.karlsson@intel.com

    Magnus Karlsson
     
  • Add support in libbpf for creating multiple sockets that share a single
    umem. Note that an external XDP program needs to be supplied that routes
    the incoming traffic to the desired sockets. So you need to supply the
    libbpf_flag XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD and load your own XDP
    program.
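
    A hedged usage sketch (umem creation and error handling elided; the
    additional socket is bound to the same interface/queue, sharing the
    umem of an existing socket):

    struct xsk_socket_config cfg = {
            .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
            .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
            .libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD,
    };

    /* a second socket sharing the umem of an existing socket */
    err = xsk_socket__create(&xsk2, "eth0", 0 /* queue */, umem,
                             &rx2, &tx2, &cfg);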

    Signed-off-by: Magnus Karlsson
    Signed-off-by: Alexei Starovoitov
    Tested-by: William Tu
    Acked-by: Jonathan Lemon
    Link: https://lore.kernel.org/bpf/1573148860-30254-2-git-send-email-magnus.karlsson@intel.com

    Magnus Karlsson
     
  • Toke Høiland-Jørgensen says:

    ====================
    This series fixes a few bugs in libbpf that I discovered while playing around
    with the new auto-pinning code, and writing the first utility in xdp-tools[0]:

    - If object loading fails, libbpf does not clean up the pinnings created by the
    auto-pinning mechanism.
    - EPERM is not propagated to the caller on program load
    - Netlink functions write error messages directly to stderr

    In addition, libbpf currently only has a somewhat limited getter function for
    XDP link info, which makes it impossible to discover whether an attached program
    is in SKB mode or not. So the last patch in the series adds a new getter for XDP
    link info which returns all the information returned via netlink (and which can
    be extended later).

    Finally, add a getter for BPF program size, which can be used by the caller to
    estimate the amount of locked memory needed to load a program.

    A selftest is added for the pinning change, while the other features were tested
    in the xdp-filter tool from the xdp-tools repo. The 'new-libbpf-features' branch
    contains the commits that make use of the new XDP getter and the corrected EPERM
    error code.

    [0] https://github.com/xdp-project/xdp-tools

    Changelog:

    v4:
    - Don't do any size checks on struct xdp_info, just copy (and/or zero)
    whatever size the caller supplied.

    v3:
    - Pass through all kernel error codes on program load (instead of just EPERM).
    - No new bpf_object__unload() variant, just do the loop at the caller
    - Don't reject struct xdp_info sizes that are bigger than what we expect.
    - Add a comment noting that bpf_program__size() returns the size in bytes

    v2:
    - Keep function names in libbpf.map sorted properly
    ====================

    Signed-off-by: Alexei Starovoitov

    Alexei Starovoitov
     
  • This adds a new getter for the BPF program size (in bytes). This is useful
    for a caller that is trying to predict how much memory will be locked by
    loading a BPF object into the kernel.
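
    A usage sketch: summing sizes across an object to estimate the memory
    that will be locked on load:

    struct bpf_program *prog;
    size_t total = 0;

    bpf_object__for_each_program(prog, obj)
            total += bpf_program__size(prog);  /* size in bytes */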

    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: David S. Miller
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/157333185272.88376.10996937115395724683.stgit@toke.dk

    Toke Høiland-Jørgensen
     
  • Currently, libbpf only provides a function to get a single ID for the
    XDP program attached to the interface. However, it can be useful to get
    the full set of program IDs attached, along with the attachment mode, in
    one go. Add a new getter function to support this, using an extendible
    structure to carry the information. Express the old bpf_get_link_xdp_id()
    function in terms of the new function.
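
    A hedged usage sketch (field names per the patch's struct
    xdp_link_info):

    struct xdp_link_info info = {};
    int err;

    err = bpf_get_link_xdp_info(ifindex, &info, sizeof(info), 0);
    if (!err && info.attach_mode == XDP_ATTACHED_SKB)
            printf("prog id %u is attached in skb mode\n",
                   info.skb_prog_id);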

    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Alexei Starovoitov
    Acked-by: David S. Miller
    Acked-by: Song Liu
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/157333185164.88376.7520653040667637246.stgit@toke.dk

    Toke Høiland-Jørgensen
     
  • The netlink functions were using fprintf(stderr, ...) directly to print
    out error messages, instead of going through the usual logging macros.
    This makes it impossible for the calling application to silence or
    redirect those error messages. Fix this by switching to pr_warn() in
    nlattr.c and netlink.c.

    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: David S. Miller
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/157333185055.88376.15999360127117901443.stgit@toke.dk

    Toke Høiland-Jørgensen
     
  • When loading an eBPF program, libbpf overrides the return code for EPERM
    errors instead of returning it to the caller. This makes it hard to figure
    out what went wrong on load.

    In particular, EPERM is returned when the system rlimit is too low to lock
    the memory required for the BPF program. Previously, this was somewhat
    obscured because the rlimit error would be hit on map creation (which does
    return it correctly). However, since maps can now be reused, object load
    can proceed all the way to loading programs without hitting the error;
    propagating it even in this case makes it possible for the caller to react
    appropriately (and, e.g., attempt to raise the rlimit before retrying).
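
    What this enables at the call site, sketched (hedged; a real tool would
    likely re-open the object and raise the limit more carefully):

    #include <sys/resource.h>

    err = bpf_object__load(obj);
    if (err == -EPERM) {
            struct rlimit r = { RLIM_INFINITY, RLIM_INFINITY };

            if (!setrlimit(RLIMIT_MEMLOCK, &r))
                    err = bpf_object__load(obj);
    }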

    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: David S. Miller
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/157333184946.88376.11768171652794234561.stgit@toke.dk

    Toke Høiland-Jørgensen
     
  • This adds tests for the different variations of automatic map unpinning
    on load failure.

    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: David S. Miller
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/157333184838.88376.8243704248624814775.stgit@toke.dk

    Toke Høiland-Jørgensen
     
  • Since the automatic map-pinning happens during load, it will leave pinned
    maps around if the load fails at a later stage. Fix this by unpinning any
    pinned maps on cleanup. To avoid unpinning pinned maps that were reused
    rather than newly pinned, add a new boolean property on struct bpf_map to
    keep track of whether that map was reused or not; and only unpin those maps
    that were not reused.

    Fixes: 57a00f41644f ("libbpf: Add auto-pinning of maps when loading BPF objects")
    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Acked-by: David S. Miller
    Acked-by: Song Liu
    Link: https://lore.kernel.org/bpf/157333184731.88376.9992935027056165873.stgit@toke.dk

    Toke Høiland-Jørgensen
     

09 Nov, 2019

2 commits

  • Since the new syntax for BTF-defined maps was introduced, the map syntax
    used under the samples directory has become mixed up: some samples are
    already using the new syntax, while others still use the existing syntax,
    referring to it as 'legacy'.

    As stated in commit abd29c931459 ("libbpf: allow specifying map
    definitions using BTF"), BTF-defined maps are more compatible with
    extending the supported map definition features.

    This commit doesn't replace all of the maps with new BTF-defined maps,
    because some of the samples still use bpf_load instead of libbpf, which
    can't properly create BTF-defined maps.

    It only updates the samples that use the libbpf API for loading bpf
    programs (e.g. bpf_prog_load_xattr).
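
    For reference, the two styles side by side (a generic array map, not
    one from a specific sample):

    /* legacy definition, understood by bpf_load: */
    struct bpf_map_def SEC("maps") my_map = {
            .type = BPF_MAP_TYPE_ARRAY,
            .key_size = sizeof(__u32),
            .value_size = sizeof(long),
            .max_entries = 64,
    };

    /* BTF-defined equivalent, requiring libbpf: */
    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __type(key, __u32);
            __type(value, long);
            __uint(max_entries, 64);
    } my_map SEC(".maps");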

    Signed-off-by: Daniel T. Lee
    Acked-by: Andrii Nakryiko
    Signed-off-by: Alexei Starovoitov

    Daniel T. Lee
     
  • Currently, under samples, several methods are used to load bpf programs.

    Since using libbpf is the preferred solution, lots of previously used
    'load_bpf_file' calls from bpf_load have been replaced with
    'bpf_prog_load_xattr' from libbpf.

    But some of the error messages still show up as 'load_bpf_file' instead
    of 'bpf_prog_load_xattr'.

    This commit fixes outdated error messages under samples and fixes some
    code style issues.

    Signed-off-by: Daniel T. Lee
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20191107005153.31541-2-danieltimlee@gmail.com

    Daniel T. Lee
     

08 Nov, 2019

2 commits

  • Access the skb->cb[] in the kfree_skb test.

    Signed-off-by: Martin KaFai Lau
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20191107180905.4097871-1-kafai@fb.com

    Martin KaFai Lau
     
  • This patch adds array support to btf_struct_access(). It supports arrays
    of int, arrays of struct, and multidimensional arrays.

    It also allows using u8[] as a scratch space. For example, it allows
    accessing the "char cb[48]" with a size larger than the array's element
    type "char". Another potential use case is "u64 icsk_ca_priv[]" in the
    tcp congestion control.

    btf_resolve_size() is added to resolve the size of any type.
    It will follow the modifier if there is any. Please
    see the function comment for details.

    This patch also adds the "off < moff" check at the beginning
    of the for loop. It is to reject cases when "off" is pointing
    to a "hole" in a struct.
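
    A hedged sketch of the skb->cb[] use case this enables (struct and
    program names are illustrative, in the style of the kfree_skb test):

    struct trace_kfree_skb {
            struct sk_buff *skb;
            void *location;
    };

    SEC("tp_btf/kfree_skb")
    int trace_kfree_skb(struct trace_kfree_skb *ctx)
    {
            struct sk_buff *skb = ctx->skb;
            /* char cb[48] read as a u64 scratch slot */
            __u64 meta = *(__u64 *)&skb->cb[8];

            bpf_printk("cb: %llx", meta);
            return 0;
    }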

    Signed-off-by: Martin KaFai Lau
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20191107180903.4097702-1-kafai@fb.com

    Martin KaFai Lau
     

07 Nov, 2019

8 commits

  • Andrii Nakryiko says:

    ====================
    Github's mirror of libbpf got LGTM and Coverity static analysis running
    against it, which spotted a few real bugs and a few potential issues.
    This patch series fixes the found issues.
    ====================

    Signed-off-by: Daniel Borkmann

    Daniel Borkmann
     
  • If we get an ELF file with a "maps" section but no symbols pointing to
    it, we'll end up with a division by zero. Add a check against this
    situation and exit early with an error. Found by a Coverity scan against
    the Github libbpf sources.

    Fixes: bf82927125dd ("libbpf: refactor map initialization")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191107020855.3834758-6-andriin@fb.com

    Andrii Nakryiko
     
  • Always perform the size check in btf__resolve_size. This makes the logic
    a bit more robust against corrupted BTF and silences LGTM/Coverity
    complaints about the always-true (size < 0) check.

    Fixes: 69eaab04c675 ("btf: extract BTF type size calculation")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191107020855.3834758-5-andriin@fb.com

    Andrii Nakryiko
     
  • Fix a few issues found by Coverity and LGTM.

    Fixes: b053b439b72a ("bpf: libbpf: bpftool: Print bpf_line_info during prog dump")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191107020855.3834758-4-andriin@fb.com

    Andrii Nakryiko
     
  • Fix a potential overflow issue found by LGTM analysis, based on Github libbpf
    source code.

    Fixes: 3d65014146c6 ("bpf: libbpf: Add btf_line_info support to libbpf")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191107020855.3834758-3-andriin@fb.com

    Andrii Nakryiko
     
  • A Coverity scan against the Github libbpf code found an issue of memory
    not being freed and of already freed memory still being referenced from
    bpf_program. Fix it by re-assigning the successfully reallocated memory
    sooner.

    Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191107020855.3834758-2-andriin@fb.com

    Andrii Nakryiko
     
  • Fix issue reported by static analysis (Coverity). If bpf_prog_get_fd_by_id()
    fails, xsk_lookup_bpf_maps() will fail as well and clean-up code will attempt
    close() with fd=-1. Fix by checking bpf_prog_get_fd_by_id() return result and
    exiting early.

    Fixes: 10a13bb40e54 ("libbpf: remove qidconf and better support external bpf programs.")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191107054059.313884-1-andriin@fb.com

    Andrii Nakryiko
     
  • We don't need them since commit e1cf4befa297 ("bpf, s390x: remove
    ld_abs/ld_ind") and commit a3212b8f15d8 ("bpf, s390x: remove obsolete
    exception handling from div/mod").

    Also, use BIT(n) instead of 1 << n, because checkpatch says so.

    Signed-off-by: Ilya Leoshkevich
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191107114033.90505-1-iii@linux.ibm.com

    Ilya Leoshkevich