17 Jul, 2020

1 commit

  • This reverts commit 3203c9010060 ("test_bpf: flag tests that cannot
    be jited on s390").

    The s390 bpf JIT previously had a restriction on the maximum program
    size, which required some tests in test_bpf to be flagged as expected
    failures. The program size limitation has been removed, and the tests
    now pass, so these tests should no longer be flagged.

    Fixes: d1242b10ff03 ("s390/bpf: Remove JITed image size limitations")
    Signed-off-by: Seth Forshee
    Signed-off-by: Daniel Borkmann
    Reviewed-by: Ilya Leoshkevich
    Link: https://lore.kernel.org/bpf/20200716143931.330122-1-seth.forshee@canonical.com

    Seth Forshee
     

25 Feb, 2020

1 commit

  • Replace the preemption disable/enable with migrate_disable/enable() to
    reflect the actual requirement and to allow PREEMPT_RT to substitute it
    with an actual migration disable mechanism which does not disable
    preemption.

    [ tglx: Switched it over to migrate disable ]

    Signed-off-by: David S. Miller
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20200224145643.785306549@linutronix.de

    David Miller
     

30 Oct, 2019

2 commits

  • Following reports of skb_segment() hitting a BUG_ON when working on
    GROed skbs which have their gso_size mangled (e.g. after a
    bpf_skb_change_proto call), add a reproducer test that mimics the
    input skbs that lead to the mentioned BUG_ON as in [1] and validates the
    fix submitted in [2].

    [1] https://lists.openwall.net/netdev/2019/08/26/110
    [2] commit 3dcbdb134f32 ("net: gso: Fix skb_segment splat when splitting gso_size mangled skb having linear-headed frag_list")

    Signed-off-by: Shmulik Ladkani
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191025134223.2761-3-shmulik.ladkani@gmail.com

    Shmulik Ladkani
     
  • Currently, test_skb_segment() builds a single test skb and runs
    skb_segment() on it.

    Extend test_skb_segment() so it processes an array of numerous
    skb/feature pairs to test.

    Signed-off-by: Shmulik Ladkani
    Signed-off-by: Daniel Borkmann
    Link: https://lore.kernel.org/bpf/20191025134223.2761-2-shmulik.ladkani@gmail.com

    Shmulik Ladkani
     

20 Aug, 2019

1 commit

  • r369217 in clang added a new warning about potential misuse of the xor
    operator as an exponentiation operator:

    ../lib/test_bpf.c:870:13: warning: result of '10 ^ 300' is 294; did you mean '1e300'? [-Wxor-used-as-pow]
    { { 4, 10 ^ 300 }, { 20, 10 ^ 300 } },
           ~~~^~~~~
           1e300
    ../lib/test_bpf.c:870:13: note: replace expression with '0xA ^ 300' to silence this warning
    ../lib/test_bpf.c:870:31: warning: result of '10 ^ 300' is 294; did you mean '1e300'? [-Wxor-used-as-pow]
    { { 4, 10 ^ 300 }, { 20, 10 ^ 300 } },
                             ~~~^~~~~
                             1e300
    ../lib/test_bpf.c:870:31: note: replace expression with '0xA ^ 300' to silence this warning

    The commit message for this new warning lays out good reasoning for
    adding it, but this instance appears to be a false positive. Adopt
    its suggestion to silence the warning without changing what the code
    computes. According to the differential review link in the clang
    commit, GCC may eventually adopt this warning as well.
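
    As a quick userspace illustration (an editor's sketch, not part of
    the patch), the value clang computes shows why the warning exists:

    ```c
    #include <assert.h>

    int main(void)
    {
            /* '^' is bitwise XOR, not exponentiation: 10 ^ 300 == 294. */
            assert((10 ^ 300) == 294);
            /* Writing the operand in hex keeps the same value but tells
             * clang the XOR is intentional, silencing -Wxor-used-as-pow. */
            assert((0xA ^ 300) == 294);
            return 0;
    }
    ```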

    Link: https://github.com/ClangBuiltLinux/linux/issues/643
    Link: https://github.com/llvm/llvm-project/commit/920890e26812f808a74c60ebc14cc636dac661c1
    Signed-off-by: Nathan Chancellor
    Acked-by: Yonghong Song
    Signed-off-by: Daniel Borkmann

    Nathan Chancellor
     

05 Jun, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of version 2 of the gnu general public license as
    published by the free software foundation this program is
    distributed in the hope that it will be useful but without any
    warranty without even the implied warranty of merchantability or
    fitness for a particular purpose see the gnu general public license
    for more details

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 64 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Alexios Zavras
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190529141901.894819585@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

26 Feb, 2019

1 commit

  • When running the BPF test suite the following splat occurs:

    [ 415.930950] test_bpf: #0 TAX jited:0
    [ 415.931067] BUG: assuming atomic context at lib/test_bpf.c:6674
    [ 415.946169] in_atomic(): 0, irqs_disabled(): 0, pid: 11556, name: modprobe
    [ 415.953176] INFO: lockdep is turned off.
    [ 415.957207] CPU: 1 PID: 11556 Comm: modprobe Tainted: G W 5.0.0-rc7-next-20190220 #1
    [ 415.966328] Hardware name: HiKey Development Board (DT)
    [ 415.971592] Call trace:
    [ 415.974069] dump_backtrace+0x0/0x160
    [ 415.977761] show_stack+0x24/0x30
    [ 415.981104] dump_stack+0xc8/0x114
    [ 415.984534] __cant_sleep+0xf0/0x108
    [ 415.988145] test_bpf_init+0x5e0/0x1000 [test_bpf]
    [ 415.992971] do_one_initcall+0x90/0x428
    [ 415.996837] do_init_module+0x60/0x1e4
    [ 416.000614] load_module+0x1de0/0x1f50
    [ 416.004391] __se_sys_finit_module+0xc8/0xe0
    [ 416.008691] __arm64_sys_finit_module+0x24/0x30
    [ 416.013255] el0_svc_common+0x78/0x130
    [ 416.017031] el0_svc_handler+0x38/0x78
    [ 416.020806] el0_svc+0x8/0xc

    Rework so that preemption is disabled when we loop over function
    'BPF_PROG_RUN(...)'.

    Fixes: 568f196756ad ("bpf: check that BPF programs run with preemption disabled")
    Suggested-by: Arnd Bergmann
    Signed-off-by: Anders Roxell
    Signed-off-by: Daniel Borkmann

    Anders Roxell
     

17 Nov, 2018

1 commit

  • Replace VLAN_TAG_PRESENT with a single-bit flag and free up the
    VLAN.CFI overload. Now VLAN.CFI is visible in the networking stack
    and can be passed around intact.

    Signed-off-by: Michał Mirosław
    Signed-off-by: David S. Miller

    Michał Mirosław
     

28 Sep, 2018

1 commit

  • Latest changes in __skb_flow_dissect() assume skb->dev has a valid
    nd_net. However, this is not true for test_bpf. As a result,
    test_bpf.ko crashes the system with the following stack trace:

    [ 1133.716622] BUG: unable to handle kernel paging request at 0000000000001030
    [ 1133.716623] PGD 8000001fbf7ee067
    [ 1133.716624] P4D 8000001fbf7ee067
    [ 1133.716624] PUD 1f6c1cf067
    [ 1133.716625] PMD 0
    [ 1133.716628] Oops: 0000 [#1] SMP PTI
    [ 1133.716630] CPU: 7 PID: 40473 Comm: modprobe Kdump: loaded Not tainted 4.19.0-rc5-00805-gca11cc92ccd2 #1167
    [ 1133.716631] Hardware name: Wiwynn Leopard-Orv2/Leopard-DDR BW, BIOS LBM12.5 12/06/2017
    [ 1133.716638] RIP: 0010:__skb_flow_dissect+0x83/0x1680
    [ 1133.716639] Code: 04 00 00 41 0f b7 44 24 04 48 85 db 4d 8d 14 07 0f 84 01 02 00 00 48 8b 43 10 48 85 c0 0f 84 e5 01 00 00 48 8b 80 a8 04 00 00 8b 90 30 10 00 00 48 85 d2 0f 84 dd 01 00 00 31 c0 b9 05 00 00
    [ 1133.716640] RSP: 0018:ffffc900303c7a80 EFLAGS: 00010282
    [ 1133.716642] RAX: 0000000000000000 RBX: ffff881fea0b7400 RCX: 0000000000000000
    [ 1133.716643] RDX: ffffc900303c7bb4 RSI: ffffffff8235c3e0 RDI: ffff881fea0b7400
    [ 1133.716643] RBP: ffffc900303c7b80 R08: 0000000000000000 R09: 000000000000000e
    [ 1133.716644] R10: ffffc900303c7bb4 R11: ffff881fb6840400 R12: ffffffff8235c3e0
    [ 1133.716645] R13: 0000000000000008 R14: 000000000000001e R15: ffffc900303c7bb4
    [ 1133.716646] FS: 00007f54e75d3740(0000) GS:ffff881fff5c0000(0000) knlGS:0000000000000000
    [ 1133.716648] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 1133.716649] CR2: 0000000000001030 CR3: 0000001f6c226005 CR4: 00000000003606e0
    [ 1133.716649] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [ 1133.716650] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    [ 1133.716651] Call Trace:
    [ 1133.716660] ? sched_clock_cpu+0xc/0xa0
    [ 1133.716662] ? sched_clock_cpu+0xc/0xa0
    [ 1133.716665] ? log_store+0x1b5/0x260
    [ 1133.716667] ? up+0x12/0x60
    [ 1133.716669] ? skb_get_poff+0x4b/0xa0
    [ 1133.716674] ? __kmalloc_reserve.isra.47+0x2e/0x80
    [ 1133.716675] skb_get_poff+0x4b/0xa0
    [ 1133.716680] bpf_skb_get_pay_offset+0xa/0x10
    [ 1133.716686] ? test_bpf_init+0x578/0x1000 [test_bpf]
    [ 1133.716690] ? netlink_broadcast_filtered+0x153/0x3d0
    [ 1133.716695] ? free_pcppages_bulk+0x324/0x600
    [ 1133.716696] ? 0xffffffffa0279000
    [ 1133.716699] ? do_one_initcall+0x46/0x1bd
    [ 1133.716704] ? kmem_cache_alloc_trace+0x144/0x1a0
    [ 1133.716709] ? do_init_module+0x5b/0x209
    [ 1133.716712] ? load_module+0x2136/0x25d0
    [ 1133.716715] ? __do_sys_finit_module+0xba/0xe0
    [ 1133.716717] ? __do_sys_finit_module+0xba/0xe0
    [ 1133.716719] ? do_syscall_64+0x48/0x100
    [ 1133.716724] ? entry_SYSCALL_64_after_hwframe+0x44/0xa9

    This patch fixes test_bpf by using init_net in the dummy dev.

    Fixes: d58e468b1112 ("flow_dissector: implements flow dissector BPF hook")
    Reported-by: Eric Dumazet
    Cc: Willem de Bruijn
    Cc: Petar Penkov
    Signed-off-by: Song Liu
    Reviewed-by: Eric Dumazet
    Acked-by: Willem de Bruijn
    Signed-off-by: Daniel Borkmann

    Song Liu
     

04 May, 2018

1 commit

  • Remove all eBPF tests involving LD_ABS/LD_IND from test_bpf.ko. Reason
    is that the eBPF tests from test_bpf module do not go via BPF verifier
    and therefore any instruction rewrites from verifier cannot take place.

    Therefore, move them into test_verifier, which runs out of user
    space, so that the verifier can rewrite LD_ABS/LD_IND internally in
    upcoming patches. It will have the same effect since runtime tests
    are also performed from there. This also allows us to finally
    unexport bpf_skb_vlan_{push,pop}_proto and keep it internal to the
    core kernel.

    Additionally, also add further cBPF LD_ABS/LD_IND test coverage into
    test_bpf.ko suite.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: Alexei Starovoitov

    Daniel Borkmann
     

26 Mar, 2018

1 commit

  • Without the previous commit,
    "modprobe test_bpf" fails with the following errors:
    ...
    [ 98.149165] ------------[ cut here ]------------
    [ 98.159362] kernel BUG at net/core/skbuff.c:3667!
    [ 98.169756] invalid opcode: 0000 [#1] SMP PTI
    [ 98.179370] Modules linked in:
    [ 98.179371] test_bpf(+)
    ...
    which triggers the bug the previous commit intends to fix.

    The skbs are constructed to mimic what mlx5 may generate.
    The packet size/header may not mimic real cases in production. But
    the processing flow is similar.

    Signed-off-by: Yonghong Song
    Signed-off-by: David S. Miller

    Yonghong Song
     

21 Mar, 2018

1 commit

  • Function bpf_fill_maxinsns11 is designed so that it cannot be JITed
    on x86_64. It therefore fails when CONFIG_BPF_JIT_ALWAYS_ON=y, and
    commit 09584b406742 ("bpf: fix selftests/bpf test_kmod.sh failure when
    CONFIG_BPF_JIT_ALWAYS_ON=y") makes sure that failure is detected in
    that case.

    However, it does not fail on other architectures, which have a different
    JIT compiler design. So, test_bpf has started to fail to load on those.

    After this fix, test_bpf loads fine on both x86_64 and ppc64el.

    Fixes: 09584b406742 ("bpf: fix selftests/bpf test_kmod.sh failure when CONFIG_BPF_JIT_ALWAYS_ON=y")
    Signed-off-by: Thadeu Lima de Souza Cascardo
    Reviewed-by: Yonghong Song
    Signed-off-by: Daniel Borkmann

    Thadeu Lima de Souza Cascardo
     

01 Mar, 2018

1 commit

  • For tests that use the maximal number of BPF instructions, each run
    takes 20 usec. Looping 10,000 times over them totals 200 ms, which
    is bad when the loop is not preemptible.

    test_bpf: #264 BPF_MAXINSNS: Call heavy transformations jited:1 19248
    18548 PASS
    test_bpf: #269 BPF_MAXINSNS: ld_abs+get_processor_id jited:1 20896 PASS

    Let's divide the number of iterations by ten, so that the max
    latency is 20 ms. We could use need_resched() to break the loop
    earlier if we believe 20 ms is still too much.
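
    The arithmetic, spelled out (numbers taken from the message above;
    an editor's sketch):

    ```c
    #include <assert.h>

    int main(void)
    {
            /* ~20 usec per run, 10,000 runs in a non-preemptible loop: */
            unsigned int usec_per_run = 20, runs = 10000;
            assert(usec_per_run * runs == 200000);          /* 200 ms total */
            /* After dividing the iteration count by ten: */
            assert(usec_per_run * (runs / 10) == 20000);    /* 20 ms max */
            return 0;
    }
    ```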

    Signed-off-by: Eric Dumazet
    Signed-off-by: Daniel Borkmann

    Eric Dumazet
     

05 Feb, 2018

1 commit

  • With CONFIG_BPF_JIT_ALWAYS_ON defined in the config file,
    tools/testing/selftests/bpf/test_kmod.sh fails like below:
    [root@localhost bpf]# ./test_kmod.sh
    sysctl: setting key "net.core.bpf_jit_enable": Invalid argument
    [ JIT enabled:0 hardened:0 ]
    [ 132.175681] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 132.458834] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:0 ]
    [ 133.456025] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 133.730935] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:1 ]
    [ 134.769730] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 135.050864] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:2 ]
    [ 136.442882] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 136.821810] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [root@localhost bpf]#

    test_kmod.sh loads/removes test_bpf.ko multiple times with different
    settings for the sysctls net.core.bpf_jit_{enable,harden}. The
    failing test #297 of test_bpf.ko is designed such that JIT always
    fails.

    Commit 290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
    introduced the following tightening logic:
    ...
    if (!bpf_prog_is_dev_bound(fp->aux)) {
            fp = bpf_int_jit_compile(fp);
    #ifdef CONFIG_BPF_JIT_ALWAYS_ON
            if (!fp->jited) {
                    *err = -ENOTSUPP;
                    return fp;
            }
    #endif
    ...
    With this logic, Test #297 always gets return value -ENOTSUPP
    when CONFIG_BPF_JIT_ALWAYS_ON is defined, causing the test failure.

    This patch fixes the failure by marking test #297 as an expected
    failure when CONFIG_BPF_JIT_ALWAYS_ON is defined.
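
    For reference, the err=-524 in the log corresponds to the
    kernel-internal ENOTSUPP; a tiny sketch, with the value hardcoded
    here since ENOTSUPP is not exported to userspace <errno.h>:

    ```c
    #include <assert.h>

    int main(void)
    {
            /* ENOTSUPP is the kernel-internal errno 524, which is why
             * the failing prog_create reports err=-524 in the log above. */
            const int enotsupp = 524;
            assert(-enotsupp == -524);
            return 0;
    }
    ```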

    Fixes: 290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
    Signed-off-by: Yonghong Song
    Signed-off-by: Daniel Borkmann

    Yonghong Song
     

20 Jan, 2018

1 commit

  • Add a couple of test cases for interpreter and JIT that are
    related to an issue we faced some time ago in Cilium [1],
    which is fixed in LLVM with commit e53750e1e086 ("bpf: fix
    bug on silently truncating 64-bit immediate").

    Test cases were run-time checking kernel to behave as intended
    which should also provide some guidance for current or new
    JITs in case they should trip over this. Added for cBPF and
    eBPF.

    [1] https://github.com/cilium/cilium/pull/2162

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: Alexei Starovoitov

    Daniel Borkmann
     

10 Jan, 2018

1 commit

  • The BPF interpreter has been used as part of the Spectre variant 2
    attack CVE-2017-5715.

    A quote from the Google Project Zero blog:
    "At this point, it would normally be necessary to locate gadgets in
    the host kernel code that can be used to actually leak data by reading
    from an attacker-controlled location, shifting and masking the result
    appropriately and then using the result of that as offset to an
    attacker-controlled address for a load. But piecing gadgets together
    and figuring out which ones work in a speculation context seems annoying.
    So instead, we decided to use the eBPF interpreter, which is built into
    the host kernel - while there is no legitimate way to invoke it from inside
    a VM, the presence of the code in the host kernel's text section is sufficient
    to make it usable for the attack, just like with ordinary ROP gadgets."

    To make the attacker's job harder, introduce a BPF_JIT_ALWAYS_ON
    config option that removes the interpreter from the kernel in favor
    of a JIT-only mode. So far the eBPF JIT is supported by:
    x64, arm64, arm32, sparc64, s390, powerpc64, mips64

    The start of JITed program is randomized and code page is marked as read-only.
    In addition "constant blinding" can be turned on with net.core.bpf_jit_harden

    v2->v3:
    - move __bpf_prog_ret0 under ifdef (Daniel)

    v1->v2:
    - fix init order, test_bpf and cBPF (Daniel's feedback)
    - fix offloaded bpf (Jakub's feedback)
    - add 'return 0' dummy in case something can invoke prog->bpf_func
    - retarget bpf tree. For bpf-next the patch would need one extra hunk.
    It will be sent when the trees are merged back to net-next

    Considered doing:
    int bpf_jit_enable __read_mostly = BPF_EBPF_JIT_DEFAULT;
    but it seems better to land the patch as-is and in bpf-next remove
    bpf_jit_enable global variable from all JITs, consolidate in one place
    and remove this jit_init() function.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann

    Alexei Starovoitov
     

16 Dec, 2017

1 commit

  • Add a test that i) uses LD_ABS, ii) zeroing R6 before call, iii) calls
    a helper that triggers reload of cached skb data, iv) uses LD_ABS again.
    It's added for test_bpf in order to do runtime testing after JITing as
    well as test_verifier to test that the sequence is allowed.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: Alexei Starovoitov

    Daniel Borkmann
     

10 Aug, 2017

1 commit

  • Currently, eBPF only understands BPF_JGT (>), BPF_JGE (>=),
    BPF_JSGT (s>), BPF_JSGE (s>=) instructions, this means that
    particularly *JLT/*JLE counterparts involving immediates need
    to be rewritten from e.g. X < [IMM] by swapping arguments into
    [IMM] > X, meaning the immediate first is required to be loaded
    into a register Y := [IMM], such that then we can compare with
    Y > X. Note that the destination operand is always required to
    be a register.

    This has the downside of having unnecessarily increased register
    pressure, meaning complex program would need to spill other
    registers temporarily to stack in order to obtain an unused
    register for the [IMM]. Loading to registers will thus also
    affect state pruning since we need to account for that register
    use and potentially those registers that had to be spilled/filled
    again. As a consequence slightly more stack space might have
    been used due to spilling, and BPF programs are a bit longer
    due to extra code involving the register load and potentially
    required spill/fills.
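
    The operand swap described above can be sketched in plain C (a
    hypothetical illustration of the rewrite, not the verifier's actual
    code):

    ```c
    #include <assert.h>

    /* Without a jump-if-less-than instruction, "if (X < IMM)" must be
     * emitted as "if (IMM > X)", which first forces IMM into a scratch
     * register Y, increasing register pressure. */
    static int branch_taken_without_jlt(long long x, long long imm)
    {
            long long y = imm;      /* Y := [IMM], the extra register */
            return y > x;           /* BPF_JSGT with swapped operands */
    }

    int main(void)
    {
            assert(branch_taken_without_jlt(3, 5) == 1);    /* 3 < 5 */
            assert(branch_taken_without_jlt(5, 5) == 0);
            assert(branch_taken_without_jlt(7, 5) == 0);
            return 0;
    }
    ```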

    Thus, add BPF_JLT (<), BPF_JLE (<=), BPF_JSLT (s<) and BPF_JSLE
    (s<=) to the eBPF instruction set, so that such comparisons can be
    emitted directly.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

21 Jun, 2017

1 commit

  • Following Johannes Berg, apply the semantic patch below:
    @@
    identifier p, p2;
    expression len;
    expression skb;
    type t, t2;
    @@
    (
    -p = __skb_put(skb, len);
    +p = __skb_put_zero(skb, len);
    |
    -p = (t)__skb_put(skb, len);
    +p = __skb_put_zero(skb, len);
    )
    ... when != p
    (
    p2 = (t2)p;
    -memset(p2, 0, len);
    |
    -memset(p, 0, len);
    )

    @@
    identifier p;
    expression len;
    expression skb;
    type t;
    @@
    (
    -t p = __skb_put(skb, len);
    +t p = __skb_put_zero(skb, len);
    )
    ... when != p
    (
    -memset(p, 0, len);
    )

    @@
    type t, t2;
    identifier p, p2;
    expression skb;
    @@
    t *p;
    ...
    (
    -p = __skb_put(skb, sizeof(t));
    +p = __skb_put_zero(skb, sizeof(t));
    |
    -p = (t *)__skb_put(skb, sizeof(t));
    +p = __skb_put_zero(skb, sizeof(t));
    )
    ... when != p
    (
    p2 = (t2)p;
    -memset(p2, 0, sizeof(*p));
    |
    -memset(p, 0, sizeof(*p));
    )

    @@
    expression skb, len;
    @@
    -memset(__skb_put(skb, len), 0, len);
    +__skb_put_zero(skb, len);

    @@
    expression skb, len, data;
    @@
    -memcpy(__skb_put(skb, len), data, len);
    +__skb_put_data(skb, data, len);

    @@
    expression SKB, C, S;
    typedef u8;
    identifier fn = {__skb_put};
    fresh identifier fn2 = fn ## "_u8";
    @@
    - *(u8 *)fn(SKB, S) = C;
    + fn2(SKB, C);

    Signed-off-by: yuan linyu
    Signed-off-by: David S. Miller

    yuan linyu
     

15 Jun, 2017

1 commit

  • On MIPS, conditional branches can only span 32k instructions. To
    exceed this limit in the JIT with the BPF maximum of 4k insns, we need
    to choose eBPF insns that expand to more than 8 machine instructions.
    Use BPF_LD_ABS as it is quite complex. This forces the JIT to invert
    the sense of the branch to branch around a long jump to the end.

    This (somewhat) verifies that the branch inversion logic and target
    address calculation of the long jumps are done correctly.
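
    The arithmetic behind the choice (an editor's sketch; the 4k figure
    is the BPF program limit mentioned above):

    ```c
    #include <assert.h>

    int main(void)
    {
            /* MIPS conditional branches span at most 32k instructions; with
             * the BPF limit of 4k insns, each eBPF insn must expand to more
             * than 8 machine instructions to push a branch past that range. */
            unsigned int bpf_max_insns = 4096, branch_span = 32768;
            assert(bpf_max_insns * 8 == branch_span);  /* exactly at the limit */
            assert(bpf_max_insns * 9 > branch_span);   /* LD_ABS expansion exceeds it */
            return 0;
    }
    ```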

    Signed-off-by: David Daney
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    David Daney
     

03 May, 2017

3 commits

  • On 32-bit:

    lib/test_bpf.c:4772: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4772: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4773: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4773: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4787: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4787: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4801: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4801: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4802: warning: integer constant is too large for ‘unsigned long’ type
    lib/test_bpf.c:4802: warning: integer constant is too large for ‘unsigned long’ type

    On 32-bit systems, "long" is only 32-bit.
    Replace the "UL" suffix by "ULL" to fix this.
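
    A minimal sketch of the suffix semantics (assuming a typical
    toolchain; the width of plain "long" is what differs between 32-bit
    and 64-bit targets):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* A ULL constant is guaranteed at least 64 bits on any target,
             * while a UL constant is only as wide as 'long', which is 32
             * bits on 32-bit systems; hence 64-bit test values need ULL. */
            assert(sizeof(1ULL) * 8 >= 64);
            assert(0xffffffffffffffffULL == UINT64_MAX);
            return 0;
    }
    ```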

    Fixes: 85f68fe898320575 ("bpf, arm64: implement jiting of BPF_XADD")
    Signed-off-by: Geert Uytterhoeven
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Geert Uytterhoeven
     
  • When the instruction right before the branch destination is
    a 64 bit load immediate, we currently calculate the wrong
    jump offset in the ctx->offset[] array as we only account
    one instruction slot for the 64 bit load immediate although
    it uses two BPF instructions. Fix it up by setting the offset
    into the right slot after we incremented the index.

    Before (ldimm64 test 1):

    [...]
    00000020: 52800007 mov w7, #0x0 // #0
    00000024: d2800060 mov x0, #0x3 // #3
    00000028: d2800041 mov x1, #0x2 // #2
    0000002c: eb01001f cmp x0, x1
    00000030: 54ffff82 b.cs 0x00000020
    00000034: d29fffe7 mov x7, #0xffff // #65535
    00000038: f2bfffe7 movk x7, #0xffff, lsl #16
    0000003c: f2dfffe7 movk x7, #0xffff, lsl #32
    00000040: f2ffffe7 movk x7, #0xffff, lsl #48
    00000044: d29dddc7 mov x7, #0xeeee // #61166
    00000048: f2bdddc7 movk x7, #0xeeee, lsl #16
    0000004c: f2ddddc7 movk x7, #0xeeee, lsl #32
    00000050: f2fdddc7 movk x7, #0xeeee, lsl #48
    [...]

    After (ldimm64 test 1):

    [...]
    00000020: 52800007 mov w7, #0x0 // #0
    00000024: d2800060 mov x0, #0x3 // #3
    00000028: d2800041 mov x1, #0x2 // #2
    0000002c: eb01001f cmp x0, x1
    00000030: 540000a2 b.cs 0x00000044
    00000034: d29fffe7 mov x7, #0xffff // #65535
    00000038: f2bfffe7 movk x7, #0xffff, lsl #16
    0000003c: f2dfffe7 movk x7, #0xffff, lsl #32
    00000040: f2ffffe7 movk x7, #0xffff, lsl #48
    00000044: d29dddc7 mov x7, #0xeeee // #61166
    00000048: f2bdddc7 movk x7, #0xeeee, lsl #16
    0000004c: f2ddddc7 movk x7, #0xeeee, lsl #32
    00000050: f2fdddc7 movk x7, #0xeeee, lsl #48
    [...]

    Also, add a couple of test cases to make sure JITs pass
    this test. Tested on Cavium ThunderX ARMv8. The added
    test cases all pass after the fix.

    Fixes: 8eee539ddea0 ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
    Reported-by: David S. Miller
    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Cc: Xi Wang
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • This work adds BPF_XADD for BPF_W/BPF_DW to the arm64 JIT and therefore
    completes JITing of all BPF instructions, meaning we can thus also remove
    the 'notyet' label and do not need to fall back to the interpreter when
    BPF_XADD is used in a program!

    This now also brings arm64 JIT in line with x86_64, s390x, ppc64, sparc64,
    where all current eBPF features are supported.

    BPF_W example from test_bpf:

    .u.insns_int = {
            BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
            BPF_ST_MEM(BPF_W, R10, -40, 0x10),
            BPF_STX_XADD(BPF_W, R10, R0, -40),
            BPF_LDX_MEM(BPF_W, R0, R10, -40),
            BPF_EXIT_INSN(),
    },

    [...]
    00000020: 52800247 mov w7, #0x12 // #18
    00000024: 928004eb mov x11, #0xffffffffffffffd8 // #-40
    00000028: d280020a mov x10, #0x10 // #16
    0000002c: b82b6b2a str w10, [x25,x11]
    // start of xadd mapping:
    00000030: 928004ea mov x10, #0xffffffffffffffd8 // #-40
    00000034: 8b19014a add x10, x10, x25
    00000038: f9800151 prfm pstl1strm, [x10]
    0000003c: 885f7d4b ldxr w11, [x10]
    00000040: 0b07016b add w11, w11, w7
    00000044: 880b7d4b stxr w11, w11, [x10]
    00000048: 35ffffab cbnz w11, 0x0000003c
    // end of xadd mapping:
    [...]

    BPF_DW example from test_bpf:

    .u.insns_int = {
            BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
            BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
            BPF_STX_XADD(BPF_DW, R10, R0, -40),
            BPF_LDX_MEM(BPF_DW, R0, R10, -40),
            BPF_EXIT_INSN(),
    },

    [...]
    00000020: 52800247 mov w7, #0x12 // #18
    00000024: 928004eb mov x11, #0xffffffffffffffd8 // #-40
    00000028: d280020a mov x10, #0x10 // #16
    0000002c: f82b6b2a str x10, [x25,x11]
    // start of xadd mapping:
    00000030: 928004ea mov x10, #0xffffffffffffffd8 // #-40
    00000034: 8b19014a add x10, x10, x25
    00000038: f9800151 prfm pstl1strm, [x10]
    0000003c: c85f7d4b ldxr x11, [x10]
    00000040: 8b07016b add x11, x11, x7
    00000044: c80b7d4b stxr w11, x11, [x10]
    00000048: 35ffffab cbnz w11, 0x0000003c
    // end of xadd mapping:
    [...]

    Tested on Cavium ThunderX ARMv8, test suite results after the patch:

    No JIT: [ 3751.855362] test_bpf: Summary: 311 PASSED, 0 FAILED, [0/303 JIT'ed]
    With JIT: [ 3573.759527] test_bpf: Summary: 311 PASSED, 0 FAILED, [303/303 JIT'ed]

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

21 Oct, 2016

1 commit

  • After commit 636c2628086e ("net: skbuff: Remove errornous length
    validation in skb_vlan_pop()") the mentioned test case stopped
    working, throwing a -12 (ENOMEM) return code. The issue, however, is
    not due to 636c2628086e, but rather due to a buggy test case that
    was uncovered by the change in behaviour in 636c2628086e.

    The data_size of that test case for the skb was set to 1. In the
    bpf_fill_ld_abs_vlan_push_pop() handler bpf insns are generated that
    loop with: reading skb data, pushing 68 tags, reading skb data,
    popping 68 tags, reading skb data, etc, in order to force a skb
    expansion and thus trigger that JITs recache skb->data. Problem is
    that initial data_size is too small.

    Before 636c2628086e, the test silently bailed out due to the
    skb->len < VLAN_ETH_HLEN check, returning 0; now it throws an error
    from the failing skb_ensure_writable(). Set at least a minimum of
    ETH_HLEN as the initial length so that, on the first push of data,
    the equivalent pop will succeed.

    Fixes: 4d9c5c53ac99 ("test_bpf: add bpf_skb_vlan_push/pop() tests")
    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

16 Sep, 2016

1 commit

  • Commit d5709f7ab776 ("flow_dissector: For stripped vlan, get vlan
    info from skb->vlan_tci") made flow dissector look at vlan_proto
    when vlan is present. Since test_bpf sets skb->vlan_tci to ~0
    (including VLAN_TAG_PRESENT) we have to populate skb->vlan_proto.

    Fixes false negative on test #24:
    test_bpf: #24 LD_PAYLOAD_OFF jited:0 175 ret 0 != 42 FAIL (1 times)

    Signed-off-by: Jakub Kicinski
    Reviewed-by: Dinan Gunawardena
    Acked-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Jakub Kicinski
     

17 May, 2016

1 commit

  • Since the blinding is strictly only called from inside eBPF JITs,
    we need to change signatures for bpf_int_jit_compile() and
    bpf_prog_select_runtime() first in order to prepare that the
    eBPF program we're dealing with can change underneath. Hence,
    for call sites, we need to return the latest prog. No functional
    change in this patch.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

07 Apr, 2016

4 commits

  • Some of these tests proved useful with the powerpc eBPF JIT port due to
    sign-extended 16-bit immediate loads. Though some of these aspects get
    covered in other tests, it is better to have explicit tests so as to
    quickly tag the precise problem.
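
    The sign-extension pitfall those tests target can be shown in a
    small userspace sketch (illustrative only, not the test vectors
    themselves):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* A 16-bit immediate is sign-extended when loaded into a wider
             * register, so 0x8000 becomes -32768, not 32768. */
            int16_t imm = (int16_t)0x8000;
            int64_t reg = imm;                      /* sign-extending load */
            assert(reg == -32768);
            assert((uint64_t)reg == 0xffffffffffff8000ULL);
            return 0;
    }
    ```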

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     
  • BPF_ALU32 and BPF_ALU64 tests for adding two 32-bit values that
    result in 32-bit overflow.
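
    The wrap-around under test, sketched with fixed-width types (an
    editor's illustration, not the test vectors themselves):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* Adding two 32-bit values that overflow 32 bits: BPF_ALU32
             * wraps the result to 32 bits, while BPF_ALU64 keeps the carry. */
            uint32_t a32 = 0x80000000u, b32 = 0x80000000u;
            uint64_t a64 = 0x80000000u, b64 = 0x80000000u;
            assert((uint32_t)(a32 + b32) == 0);     /* ALU32: wraps to zero */
            assert(a64 + b64 == 0x100000000ULL);    /* ALU64: keeps the carry */
            return 0;
    }
    ```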

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     
  • Unsigned Jump-if-Greater-Than.

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     
  • JMP_JSET tests incorrectly used BPF_JNE. Fix them to use BPF_JSET.

    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: "David S. Miller"
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Signed-off-by: Naveen N. Rao
    Acked-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Naveen N. Rao
     

19 Dec, 2015

1 commit

  • Add a couple of test cases for the interpreter but also JITs, e.g.
    to test that when imm32 moves are done, the upper 32 bits of the
    registers are zero extended.

    Without JIT:

    [...]
    [ 1114.129301] test_bpf: #43 MOV REG64 jited:0 128 PASS
    [ 1114.130626] test_bpf: #44 MOV REG32 jited:0 139 PASS
    [ 1114.132055] test_bpf: #45 LD IMM64 jited:0 124 PASS
    [...]

    With JIT (generated code can as usual be nicely verified with the help of
    bpf_jit_disasm tool):

    [...]
    [ 1062.726782] test_bpf: #43 MOV REG64 jited:1 6 PASS
    [ 1062.726890] test_bpf: #44 MOV REG32 jited:1 6 PASS
    [ 1062.726993] test_bpf: #45 LD IMM64 jited:1 6 PASS
    [...]
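
    The zero-extension property the MOV REG32 test checks can be
    mimicked in C (a sketch, not the kernel test itself):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* A 32-bit move must zero extend into the upper 32 bits of the
             * 64-bit destination register, regardless of its old contents. */
            uint64_t reg = 0xffffffffffffffffULL;   /* dirty upper half */
            reg = (uint32_t)0x12345678;             /* imm32 move */
            assert(reg == 0x12345678ULL);           /* upper 32 bits now zero */
            return 0;
    }
    ```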

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

05 Nov, 2015

1 commit

  • When running a "mod X" operation, if X is 0 the filter has to halt.
    Add new test cases to cover A = A mod X where X is 0, and A = A mod 1.
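
    A sketch of the semantics under test, assuming cBPF's convention
    that division or modulus by zero terminates the filter with return
    value 0 (an editor's illustration):

    ```c
    #include <assert.h>
    #include <stdint.h>

    static uint32_t run_mod(uint32_t a, uint32_t x)
    {
            if (x == 0)
                    return 0;       /* mod by zero: filter halts, returns 0 */
            return a % x;
    }

    int main(void)
    {
            assert(run_mod(42, 0) == 0);    /* A mod 0: halt */
            assert(run_mod(42, 1) == 0);    /* A mod 1 is always 0 */
            assert(run_mod(42, 5) == 2);    /* ordinary modulus */
            return 0;
    }
    ```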

    CC: Xi Wang
    CC: Zi Shen Lim
    Signed-off-by: Yang Shi
    Acked-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Acked-by: Zi Shen Lim
    Acked-by: Xi Wang
    Signed-off-by: David S. Miller

    Yang Shi
     

07 Aug, 2015

1 commit