14 May, 2017

1 commit

  • [ Upstream commit ddc665a4bb4b728b4e6ecec8db1b64efa9184b9c ]

    When the instruction right before the branch destination is
    a 64-bit load immediate, we currently calculate the wrong
    jump offset in the ctx->offset[] array, as we only account
    for one instruction slot for the 64-bit load immediate even
    though it uses two BPF instructions. Fix it up by setting the
    offset into the right slot after we have incremented the index.
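
    The two-slot bookkeeping can be sketched as a small Python model
    (the helper names and the emitted-length callback are illustrative,
    not the arm64 JIT's actual code):

```python
LDIMM64 = "ldimm64"     # first slot of a two-slot 64-bit load immediate
ZERO    = "zero"        # second (all-zero) slot of an ldimm64
OTHER   = "other"       # ordinary single-slot instruction

def build_offsets(prog, emitted_len):
    """Map each BPF insn slot to the native-code position just past it."""
    offsets = [0] * (len(prog) + 1)
    i = 0
    native_pos = 0
    while i < len(prog):
        insn = prog[i]
        native_pos += emitted_len(insn)
        if insn == LDIMM64:
            i += 1              # skip the second slot of the 16-byte insn
        i += 1                  # bump the index first ...
        offsets[i] = native_pos # ... then record into the *following* slot
    return offsets

# ldimm64 expands to four native insns (mov + 3x movk), others to one:
prog = [OTHER, LDIMM64, ZERO, OTHER]
offs = build_offsets(prog, lambda insn: 4 if insn == LDIMM64 else 1)
```

    A branch whose destination is the slot right after the ldimm64 now
    resolves to the position past all four emitted mov/movk instructions;
    the buggy variant recorded the offset before bumping the index,
    leaving that slot stale.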

    Before (ldimm64 test 1):

    [...]
    00000020: 52800007 mov w7, #0x0 // #0
    00000024: d2800060 mov x0, #0x3 // #3
    00000028: d2800041 mov x1, #0x2 // #2
    0000002c: eb01001f cmp x0, x1
    00000030: 54ffff82 b.cs 0x00000020
    00000034: d29fffe7 mov x7, #0xffff // #65535
    00000038: f2bfffe7 movk x7, #0xffff, lsl #16
    0000003c: f2dfffe7 movk x7, #0xffff, lsl #32
    00000040: f2ffffe7 movk x7, #0xffff, lsl #48
    00000044: d29dddc7 mov x7, #0xeeee // #61166
    00000048: f2bdddc7 movk x7, #0xeeee, lsl #16
    0000004c: f2ddddc7 movk x7, #0xeeee, lsl #32
    00000050: f2fdddc7 movk x7, #0xeeee, lsl #48
    [...]

    After (ldimm64 test 1):

    [...]
    00000020: 52800007 mov w7, #0x0 // #0
    00000024: d2800060 mov x0, #0x3 // #3
    00000028: d2800041 mov x1, #0x2 // #2
    0000002c: eb01001f cmp x0, x1
    00000030: 540000a2 b.cs 0x00000044
    00000034: d29fffe7 mov x7, #0xffff // #65535
    00000038: f2bfffe7 movk x7, #0xffff, lsl #16
    0000003c: f2dfffe7 movk x7, #0xffff, lsl #32
    00000040: f2ffffe7 movk x7, #0xffff, lsl #48
    00000044: d29dddc7 mov x7, #0xeeee // #61166
    00000048: f2bdddc7 movk x7, #0xeeee, lsl #16
    0000004c: f2ddddc7 movk x7, #0xeeee, lsl #32
    00000050: f2fdddc7 movk x7, #0xeeee, lsl #48
    [...]

    Also, add a couple of test cases to make sure JITs pass
    this test. Tested on Cavium ThunderX ARMv8. The added
    test cases all pass after the fix.

    Fixes: 8eee539ddea0 ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
    Reported-by: David S. Miller
    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Cc: Xi Wang
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Daniel Borkmann
     

21 Oct, 2016

1 commit

  • After commit 636c2628086e ("net: skbuff: Remove errornous length
    validation in skb_vlan_pop()") the mentioned test case stopped
    working, returning -12 (ENOMEM). The issue, however, is not due to
    636c2628086e itself, but rather due to a buggy test case that was
    uncovered by the change in behaviour in 636c2628086e.

    The data_size of the skb in that test case was set to 1. In the
    bpf_fill_ld_abs_vlan_push_pop() handler, BPF insns are generated that
    loop over: reading skb data, pushing 68 tags, reading skb data,
    popping 68 tags, reading skb data, and so on, in order to force an
    skb expansion and thus trigger the JITs' recaching of skb->data. The
    problem is that the initial data_size is too small.

    Before 636c2628086e, the test silently bailed out, returning 0 due to
    the skb->len < VLAN_ETH_HLEN check; now it throws an error from a
    failing skb_ensure_writable(). Set at least ETH_HLEN as the initial
    length so that after the first push of data, the equivalent pop will
    succeed.
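
    A minimal Python model of the length arithmetic at play, using the
    usual kernel header sizes (ETH_HLEN = 14, VLAN_HLEN = 4); the helper
    name is made up for illustration:

```python
ETH_HLEN = 14
VLAN_HLEN = 4
VLAN_ETH_HLEN = ETH_HLEN + VLAN_HLEN    # 18: Ethernet header plus one tag

def pop_succeeds_after_push(data_size):
    """Hypothetical helper: can the first vlan_pop undo the first vlan_push?"""
    length = data_size + VLAN_HLEN       # skb_vlan_push() grew the frame by a tag
    return length >= VLAN_ETH_HLEN       # skb_vlan_pop() needs a full VLAN header
```

    With data_size = 1 the frame stays too short for the pop even after a
    push; starting from ETH_HLEN, the first push brings the frame to
    exactly VLAN_ETH_HLEN and the pop can succeed.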

    Fixes: 4d9c5c53ac99 ("test_bpf: add bpf_skb_vlan_push/pop() tests")
    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

16 Sep, 2016

1 commit

  • Commit d5709f7ab776 ("flow_dissector: For stripped vlan, get vlan
    info from skb->vlan_tci") made flow dissector look at vlan_proto
    when vlan is present. Since test_bpf sets skb->vlan_tci to ~0
    (including VLAN_TAG_PRESENT) we have to populate skb->vlan_proto.

    Fixes false negative on test #24:
    test_bpf: #24 LD_PAYLOAD_OFF jited:0 175 ret 0 != 42 FAIL (1 times)

    Signed-off-by: Jakub Kicinski
    Reviewed-by: Dinan Gunawardena
    Acked-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Jakub Kicinski
     

17 May, 2016

1 commit

  • Since the blinding is strictly only called from inside eBPF JITs, we
    need to change the signatures of bpf_int_jit_compile() and
    bpf_prog_select_runtime() first, in order to prepare for the fact
    that the eBPF program we're dealing with can change underneath us.
    Hence, for call sites, we need to return the latest prog. No
    functional change in this patch.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

07 Apr, 2016

4 commits

  • Some of these tests proved useful with the powerpc eBPF JIT port due
    to sign-extended 16-bit immediate loads. Though some of these aspects
    get covered in other tests, it is better to have explicit tests so as
    to quickly pinpoint the precise problem.

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     
  • BPF_ALU32 and BPF_ALU64 tests for adding two 32-bit values that
    result in a 32-bit overflow.
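
    The overflow semantics these tests pin down can be modelled in a few
    lines of Python (a sketch of the BPF ALU behaviour, not kernel code):

```python
U32_MAX = (1 << 32) - 1
U64_MAX = (1 << 64) - 1

def alu32_add(a, b):
    return (a + b) & U32_MAX    # 32-bit op: result truncated, upper half zero

def alu64_add(a, b):
    return (a + b) & U64_MAX    # 64-bit op: full-width result

a = b = 0x80000000              # each operand fits in 32 bits; the sum does not
```

    The same addition yields 0 under BPF_ALU32 but 0x100000000 under
    BPF_ALU64, which is exactly the distinction a JIT can get wrong.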

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     
  • Unsigned Jump-if-Greater-Than.

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     
  • The JMP_JSET tests incorrectly used BPF_JNE; fix them to actually
    test BPF_JSET.
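
    A Python sketch of the two branch conditions involved (illustrative
    helpers, not the interpreter's code): BPF_JSET is a bit test, BPF_JNE
    an inequality test, so they diverge e.g. whenever dst equals imm.

```python
def jset_taken(dst, imm):
    return (dst & imm) != 0     # BPF_JSET: branch if any tested bit is set

def jne_taken(dst, imm):
    return dst != imm           # BPF_JNE: branch if the values differ
```

    For dst == imm == 0x8, JSET takes the branch while JNE does not, so a
    JSET test wired to BPF_JNE checks the wrong condition entirely.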

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     

19 Dec, 2015

1 commit

  • Add a couple of test cases for the interpreter but also the JITs,
    e.g. to test that when imm32 moves are done, the upper 32 bits of the
    registers are zero-extended.
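
    The zero-extension property under test, modelled in Python
    (illustrative, not the interpreter):

```python
def mov32_imm(imm32):
    """BPF_MOV | BPF_ALU (32-bit): writes imm and zeroes the upper 32 bits."""
    return imm32 & 0xFFFFFFFF

reg = 0xFFFFFFFFFFFFFFFF        # the register previously held all ones
reg = mov32_imm(0xDEADBEEF)     # after a 32-bit move the upper half must be 0
```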

    Without JIT:

    [...]
    [ 1114.129301] test_bpf: #43 MOV REG64 jited:0 128 PASS
    [ 1114.130626] test_bpf: #44 MOV REG32 jited:0 139 PASS
    [ 1114.132055] test_bpf: #45 LD IMM64 jited:0 124 PASS
    [...]

    With JIT (generated code can as usual be nicely verified with the help of
    bpf_jit_disasm tool):

    [...]
    [ 1062.726782] test_bpf: #43 MOV REG64 jited:1 6 PASS
    [ 1062.726890] test_bpf: #44 MOV REG32 jited:1 6 PASS
    [ 1062.726993] test_bpf: #45 LD IMM64 jited:1 6 PASS
    [...]

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

05 Nov, 2015

1 commit

  • When running a "mod X" operation, if X is 0 the filter has to halt.
    Add new test cases to cover A = A mod X when X is 0, and A = A mod 1.
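
    A Python model of the semantics the new tests cover (the helper is
    hypothetical; classic BPF conventionally halts with a return of 0 on
    division by zero rather than faulting):

```python
def run_mod_filter(a, x):
    """Hypothetical helper modelling a 'ret A mod X' filter."""
    if x == 0:
        return 0        # division by zero: the filter halts, returning 0
    return a % x
```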

    CC: Xi Wang
    CC: Zi Shen Lim
    Signed-off-by: Yang Shi
    Acked-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Acked-by: Zi Shen Lim
    Acked-by: Xi Wang
    Signed-off-by: David S. Miller

    Yang Shi
     

07 Aug, 2015

6 commits


31 Jul, 2015

1 commit

  • As JITs start to perform optimizations on whether to clear A and X
    in the prologue of eBPF programs, we should actually assign a program
    type to the native eBPF test cases. It doesn't really matter which
    program type, as these instructions don't go through the verifier,
    but it needs to be a type != BPF_PROG_TYPE_UNSPEC. This reflects eBPF
    programs loaded via the bpf(2) system call (type != unspec) vs.
    classic BPF to eBPF migrations (type == unspec).

    Signed-off-by: Daniel Borkmann
    Cc: Michael Holzheu
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

21 Jul, 2015

1 commit

  • Improve the accuracy of timing in test_bpf and add two stress tests:
    - {skb->data[0], get_smp_processor_id} repeated 2k times
    - {skb->data[0], vlan_push} x 68 followed by {skb->data[0], vlan_pop} x 68

    The 1st test is useful to test the performance of the JIT
    implementation of BPF_LD_ABS together with BPF_CALL instructions.
    The 2nd test stresses the skb_vlan_push/pop logic together with
    skb->data access via the BPF_LD_ABS insn, which checks that
    re-caching of skb->data is done correctly.

    In order to call bpf_skb_vlan_push() from test_bpf.ko, we have to add
    three EXPORT_SYMBOL_GPL entries.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     

09 Jul, 2015

1 commit

  • Currently the "ALU_END_FROM_BE 32" and "ALU_END_FROM_LE 32" tests do
    not check whether the upper bits of the result are zero (the arm64
    JIT had such bugs). Extend the two tests to catch this.
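
    The invariant being added to the tests, modelled in Python for the
    byte-swapping case (whether a 32-bit BPF_END swaps or is a no-op
    depends on host endianness; either way the upper 32 bits of the
    destination register must come out zero):

```python
def end32(reg):
    """Model of a 32-bit BPF_END byte swap on a 64-bit register."""
    low = reg & 0xFFFFFFFF                                   # only the low word takes part
    return int.from_bytes(low.to_bytes(4, "little"), "big")  # swapped; upper bits zero
```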

    Acked-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: Xi Wang
    Signed-off-by: David S. Miller

    Xi Wang
     

28 May, 2015

1 commit

  • While 3b52960266a3 ("test_bpf: add more eBPF jump torture cases")
    added the int3 bug test case only for eBPF, which needs exactly 11
    passes to converge, here's a version for classic BPF with 11 passes,
    and one that would need 70 passes on x86_64 to actually converge and
    be successfully JITed. Effectively, all jumps are optimized out,
    resulting in a JIT image of just 89 bytes (from originally the
    maximum number of BPF insns), only returning K.

    Might be useful as a recipe for folks wanting to craft a test case
    when backporting the fix in commit 3f7352bf21f8 ("x86: bpf_jit: fix
    compilation of large bpf programs") while not having eBPF. The 2nd
    one is delegated to the interpreter, as the last pass still results
    in shrinking; in other words, this one won't be JITed on x86_64.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

25 May, 2015

1 commit

  • Add two more eBPF test cases for JITs; the second one revealed a bug
    in the x86_64 JIT compiler, where only an int3-filled image from the
    allocator was emitted and later wrongly set by the compiler as the
    bpf_func program code, since the optimization pass boundary was
    surpassed without actually emitting opcodes.

    Interpreter:

    [ 45.782892] test_bpf: #242 BPF_MAXINSNS: Very long jump backwards jited:0 11 PASS
    [ 45.783062] test_bpf: #243 BPF_MAXINSNS: Edge hopping nuthouse jited:0 14705 PASS

    After x86_64 JIT (fixed):

    [ 80.495638] test_bpf: #242 BPF_MAXINSNS: Very long jump backwards jited:1 6 PASS
    [ 80.495957] test_bpf: #243 BPF_MAXINSNS: Edge hopping nuthouse jited:1 17157 PASS

    Reference: http://thread.gmane.org/gmane.linux.network/364729
    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

23 May, 2015

1 commit

  • Currently the testsuite does not have a test case with a backward jump.
    The s390x JIT (kernel 4.0) had a bug in that area.
    So add one new test case for this now.

    Signed-off-by: Michael Holzheu
    Signed-off-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Michael Holzheu
     

15 May, 2015

2 commits

  • Fix several sparse warnings like:
    lib/test_bpf.c:1824:25: sparse: constant 4294967295 is so big it is long
    lib/test_bpf.c:1878:25: sparse: constant 0x0000ffffffff0000 is so big it is long

    Fixes: cffc642d93f9 ("test_bpf: add 173 new testcases for eBPF")
    Reported-by: Fengguang Wu
    Signed-off-by: Michael Holzheu
    Signed-off-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Michael Holzheu
     
  • A couple of torture test cases related to the bug fixed in
    0b59d8806a31 ("ARM: net: delegate filter to kernel interpreter when
    imm_offset() return value can't fit into 12bits.").

    I've added a helper to allocate and fill the insn space. Output on
    x86_64 from my laptop:

    test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:0 7 PASS
    test_bpf: #234 BPF_MAXINSNS: Single literal jited:0 8 PASS
    test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:0 11553 PASS
    test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
    test_bpf: #237 BPF_MAXINSNS: Very long jump jited:0 9 PASS
    test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:0 20329 20398 PASS
    test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:0 32178 32475 PASS
    test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:0 10518 PASS

    test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:1 4 PASS
    test_bpf: #234 BPF_MAXINSNS: Single literal jited:1 4 PASS
    test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:1 1625 PASS
    test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
    test_bpf: #237 BPF_MAXINSNS: Very long jump jited:1 8 PASS
    test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:1 3301 3174 PASS
    test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:1 24107 23491 PASS
    test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:1 8651 PASS

    Signed-off-by: Daniel Borkmann
    Cc: Alexei Starovoitov
    Cc: Nicolas Schichan
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

13 May, 2015

1 commit


11 May, 2015

1 commit

  • Extend the testcase to catch a signedness bug in the arm64 JIT:

    test_bpf: #58 load 64-bit immediate jited:1 ret -1 != 1 FAIL (1 times)

    This is useful to ensure other JITs won't have a similar bug.

    Link: https://lkml.org/lkml/2015/5/8/458
    Cc: Alexei Starovoitov
    Cc: Will Deacon
    Signed-off-by: Xi Wang
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Xi Wang
     

01 May, 2015

1 commit

  • This is useful to verify whether a filter could be JITed or not in
    the case of bpf_jit_enable >= 1, which the test suite otherwise
    doesn't tell apart from a good peek at the performance numbers.

    Nicolas Schichan reported a bug in the ARM JIT compiler that rejected
    the filter and waved it through to the interpreter although it
    shouldn't have. The test nevertheless passes as expected, but such
    information is not visible.

    It's useful, e.g., for the remaining classic JITs, but also for
    implementing remaining opcodes that are not yet present in eBPF JITs
    (e.g. ARM64 waves some of them through to the interpreter). This
    minor patch allows grepping through dmesg to find those accordingly,
    and also provides a total summary, i.e.: [/53 JIT'ed]

    # echo 1 > /proc/sys/net/core/bpf_jit_enable
    # insmod lib/test_bpf.ko
    # dmesg | grep "jited:0"

    dmesg example on the ARM issue with JIT rejection:

    [...]
    [ 67.925387] test_bpf: #2 ADD_SUB_MUL_K jited:1 24 PASS
    [ 67.930889] test_bpf: #3 DIV_MOD_KX jited:0 794 PASS
    [ 67.943940] test_bpf: #4 AND_OR_LSH_K jited:1 20 20 PASS
    [...]

    Signed-off-by: Daniel Borkmann
    Cc: Nicolas Schichan
    Cc: Alexei Starovoitov
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

09 Dec, 2014

1 commit


31 Oct, 2014

1 commit

  • nmap generates classic BPF programs to filter ARP packets with a
    given target MAC, which triggered a bug in the eBPF x64 JIT. The bug
    was fixed in commit e0ee9c12157d ("x86: bpf_jit: fix two bugs in eBPF
    JIT compiler"). This patch adds a test case with eBPF instructions
    (as generated by the classic->eBPF converter) to be processed by the
    JIT. The test primarily targets the JIT compiler.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     

23 Sep, 2014

1 commit

  • The old gcc 4.2 used by the avr32 architecture produces warnings:

    lib/test_bpf.c:1741: warning: integer constant is too large for 'long' type
    lib/test_bpf.c:1741: warning: integer constant is too large for 'long' type
    lib/test_bpf.c: In function '__run_one':
    lib/test_bpf.c:1897: warning: 'ret' may be used uninitialized in this function

    Silence these warnings.

    Fixes: 02ab695bb37e ("net: filter: add "load 64-bit immediate" eBPF instruction")
    Reported-by: Fengguang Wu
    Signed-off-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     

10 Sep, 2014

1 commit

  • Add a BPF_LD_IMM64 instruction to load a 64-bit immediate value into
    a register. All previous instructions were 8 bytes; this is the first
    16-byte instruction. Two consecutive 'struct bpf_insn' blocks are
    interpreted as a single instruction:
    insn[0].code = BPF_LD | BPF_DW | BPF_IMM
    insn[0].dst_reg = destination register
    insn[0].imm = lower 32-bit
    insn[1].code = 0
    insn[1].imm = upper 32-bit
    All unused fields must be zero.

    Classic BPF has a similar instruction, BPF_LD | BPF_W | BPF_IMM,
    which loads a 32-bit immediate value into a register.

    x64 JITs it as a single 'movabsq %rax, imm64'; arm64 may JIT it as a
    sequence of four 'movk x0, #imm16, lsl #shift' insns.

    Note that old eBPF programs are binary compatible with the new
    interpreter.

    It helps eBPF programs load a 64-bit constant into a register with
    one instruction instead of using two registers and 4 instructions:
    BPF_MOV32_IMM(R1, imm32)
    BPF_ALU64_IMM(BPF_LSH, R1, 32)
    BPF_MOV32_IMM(R2, imm32)
    BPF_ALU64_REG(BPF_OR, R1, R2)
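
    The two-slot encoding above can be sketched in Python (field names
    follow struct bpf_insn; BPF_LD | BPF_DW | BPF_IMM evaluates to
    opcode 0x18):

```python
BPF_LD, BPF_DW, BPF_IMM = 0x00, 0x18, 0x00   # class / size / mode constants

def bpf_ld_imm64(dst_reg, imm64):
    """Encode one 64-bit value as the two consecutive insn slots described above."""
    return [
        {"code": BPF_LD | BPF_DW | BPF_IMM, "dst_reg": dst_reg,
         "src_reg": 0, "off": 0, "imm": imm64 & 0xFFFFFFFF},        # lower 32 bits
        {"code": 0, "dst_reg": 0, "src_reg": 0, "off": 0,
         "imm": (imm64 >> 32) & 0xFFFFFFFF},                        # upper 32 bits
    ]

def decode_ld_imm64(pair):
    """Reassemble the immediate from the two slots, as the interpreter does."""
    return pair[0]["imm"] | (pair[1]["imm"] << 32)
```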

    User space generated programs will use this instruction to load constants only.

    To tell kernel that user space needs a pointer the _pseudo_ variant of
    this instruction may be added later, which will use extra bits of encoding
    to indicate what type of pointer user space is asking kernel to provide.
    For example 'off' or 'src_reg' fields can be used for such purpose.
    src_reg = 1 could mean that user space is asking kernel to validate and
    load in-kernel map pointer.
    src_reg = 2 could mean that user space needs readonly data section pointer
    src_reg = 3 could mean that user space needs a pointer to per-cpu local data
    All such future pseudo instructions will not be carrying the actual pointer
    as part of the instruction, but rather will be treated as a request to kernel
    to provide one. The kernel will verify the request_for_a_pointer, then
    will drop _pseudo_ marking and will store actual internal pointer inside
    the instruction, so the end result is the interpreter and JITs never
    see pseudo BPF_LD_IMM64 insns and only operate on generic BPF_LD_IMM64 that
    loads 64-bit immediate into a register. User space never operates on direct
    pointers and verifier can easily recognize request_for_pointer vs other
    instructions.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     

06 Sep, 2014

1 commit

  • With eBPF getting more extended and its exposure to user space on
    its way, hardening the memory range the interpreter uses to steer
    its command flow seems appropriate. This patch moves the
    to-be-interpreted bytecode to read-only pages.

    In case we execute a corrupted BPF interpreter image for some reason,
    e.g. caused by an attacker who got past a verifier stage, it would
    not only provide arbitrary read/write memory access but arbitrary
    function calls as well. After setting up the BPF interpreter image,
    its contents do not change until destruction time, thus we can set up
    the image on pages made immutable in order to mitigate modifications
    to that code. The idea is derived from commit 314beb9bcabf ("x86:
    bpf_jit_comp: secure bpf jit against spraying attacks").

    This is possible because bpf_prog is not part of sk_filter anymore.
    After setup bpf_prog cannot be altered during its life-time. This prevents
    any modifications to the entire bpf_prog structure (incl. function/JIT
    image pointer).

    Every eBPF program (including migrated classic BPF ones) has to call
    bpf_prog_select_runtime() to select either the interpreter or a JIT
    image as a last setup step, and they are all freed via
    bpf_prog_free(), including non-JIT ones. Therefore, we can easily
    integrate this into the eBPF life-time, and since we directly
    allocate a bpf_prog, we have no performance penalty.

    Tested with seccomp and test_bpf testsuite in JIT/non-JIT mode and manual
    inspection of kernel_page_tables. Brad Spengler proposed the same idea
    via Twitter during development of this patch.

    Joint work with Hannes Frederic Sowa.

    Suggested-by: Brad Spengler
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Hannes Frederic Sowa
    Cc: Alexei Starovoitov
    Cc: Kees Cook
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

26 Aug, 2014

1 commit


03 Aug, 2014

1 commit

  • clean up names related to socket filtering and bpf in the following way:
    - everything that deals with sockets keeps 'sk_*' prefix
    - everything that is pure BPF is changed to 'bpf_*' prefix

    split 'struct sk_filter' into

        struct sk_filter {
                atomic_t        refcnt;
                struct rcu_head rcu;
                struct bpf_prog *prog;
        };

    and

        struct bpf_prog {
                u32             jited:1,
                                len:31;
                struct sock_fprog_kern *orig_prog;
                unsigned int    (*bpf_func)(const struct sk_buff *skb,
                                            const struct bpf_insn *filter);
                union {
                        struct sock_filter insns[0];
                        struct bpf_insn insnsi[0];
                        struct work_struct work;
                };
        };
    so that 'struct bpf_prog' can be used independent of sockets and cleans up
    'unattached' bpf use cases

    split SK_RUN_FILTER macro into:
    SK_RUN_FILTER to be used with 'struct sk_filter *' and
    BPF_PROG_RUN to be used with 'struct bpf_prog *'

    __sk_filter_release(struct sk_filter *) gains
    __bpf_prog_release(struct bpf_prog *) helper function

    also perform related renames for the functions that work
    with 'struct bpf_prog *', since they're on the same lines:

    sk_filter_size -> bpf_prog_size
    sk_filter_select_runtime -> bpf_prog_select_runtime
    sk_filter_free -> bpf_prog_free
    sk_unattached_filter_create -> bpf_prog_create
    sk_unattached_filter_destroy -> bpf_prog_destroy
    sk_store_orig_filter -> bpf_prog_store_orig_filter
    sk_release_orig_filter -> bpf_release_orig_filter
    __sk_migrate_filter -> bpf_migrate_filter
    __sk_prepare_filter -> bpf_prepare_filter

    API for attaching classic BPF to a socket stays the same:
    sk_attach_filter(prog, struct sock *)/sk_detach_filter(struct sock *)
    and SK_RUN_FILTER(struct sk_filter *, ctx) to execute a program
    which is used by sockets, tun, af_packet

    API for 'unattached' BPF programs becomes:
    bpf_prog_create(struct bpf_prog **)/bpf_prog_destroy(struct bpf_prog *)
    and BPF_PROG_RUN(struct bpf_prog *, ctx) to execute a program
    which is used by isdn, ppp, team, seccomp, ptp, xt_bpf, cls_bpf, test_bpf

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     

25 Jul, 2014

1 commit


11 Jun, 2014

1 commit


03 Jun, 2014

1 commit

  • The current behavior of probe_filter_length() (the function that
    calculates the length of a test BPF filter) is to declare the end of
    the filter as soon as it finds {0, *, *, 0}. This is actually a valid
    insn ("ld #0"), so any filter which includes "BPF_STMT(BPF_LD |
    BPF_IMM, 0)" fails (its length is cut short).

    We change probe_filter_length() to start from the end and declare the
    end of the filter at the first instruction which is not {0, *, *, 0}.
    This solution produces a simpler patch than the alternative of using
    an explicit end-of-filter mark. It is technically incorrect if your
    filter ends with "ld #0", but that should not happen anyway.
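
    The backward scan can be sketched in Python, with each insn modelled
    as a (code, jt, jf, k) tuple mirroring struct sock_filter (a model,
    not the kernel code):

```python
def probe_filter_length(filt):
    """Scan backwards; the first insn that is not {0, *, *, 0} ends the filter."""
    for idx in range(len(filt) - 1, -1, -1):
        code, _jt, _jf, k = filt[idx]
        if code != 0 or k != 0:
            return idx + 1
    return 0

# A filter containing "ld #0" mid-stream is no longer cut short:
filt = [
    (0x00, 0, 0, 0),      # ld #0 -- encodes as {0, 0, 0, 0}
    (0x06, 0, 0, 0xFFFF), # ret #0xffff
]
```

    Trailing all-zero padding is still trimmed, but an embedded ld #0 no
    longer terminates the probe early.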

    We also add a new test (LD_IMM_0) that includes ld #0 (does not work
    without this patch).

    Signed-off-by: Chema Gonzalez
    Acked-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Chema Gonzalez
     

02 Jun, 2014

2 commits

  • This check tests that overloading BPF_LD | BPF_ABS with an
    always-invalid BPF extension, namely SKF_AD_MAX, fails, to make sure
    classic BPF behaviour is correct in the filter checker.

    Also, we add a test for loading at packet offset SKF_AD_OFF-1, which
    should pass the filter but later on fail during runtime.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • Also add a test for the scratch memory store that first fills all
    slots and then successively reads all of them back, adding them up
    into A, and eventually returning A. This and the previous M[] test
    with alternating fill/spill will detect possible JIT errors on M[].
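
    The shape of the added test, sketched in Python (classic BPF provides
    16 scratch slots, BPF_MEMWORDS; the concrete fill values here are
    illustrative):

```python
BPF_MEMWORDS = 16               # classic BPF scratch memory slots M[0..15]

def fill_then_sum():
    M = [0] * BPF_MEMWORDS
    for i in range(BPF_MEMWORDS):
        M[i] = i + 1            # st M[i]  -- fill every slot first
    a = 0
    for i in range(BPF_MEMWORDS):
        a += M[i]               # ld M[i]; add to A -- then read them all back
    return a                    # ret A
```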

    Suggested-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

31 May, 2014

1 commit

  • This patch converts the raw opcodes of the tcpdump tests into
    BPF_STMT()/BPF_JUMP() combinations, which brings them into conformity
    with the rest of the patches and also makes it easier to grasp what's
    going on in these particular test cases should they ever fail. Also,
    arrange the payload of the jump+holes test in the same way as the
    other packet payloads in the test suite.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann