24 Aug, 2020

1 commit

  • Replace the existing /* fall through */ comments and their variants with
    the new pseudo-keyword macro fallthrough[1]. Also, remove fall-through
    markings where they are unnecessary.

    [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
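
    For illustration, a minimal before/after sketch on a hypothetical
    switch statement (the fallthrough macro comes from
    <linux/compiler_attributes.h>):

    /* Before */
    case MODE_A:
            setup_a();
            /* fall through */
    case MODE_B:

    /* After */
    case MODE_A:
            setup_a();
            fallthrough;
    case MODE_B: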

    Signed-off-by: Gustavo A. R. Silva

    Gustavo A. R. Silva
     

10 Dec, 2019

1 commit

  • Replace all the occurrences of FIELD_SIZEOF() with sizeof_field() except
    at places where these are defined. Later patches will remove the unused
    definition of FIELD_SIZEOF().

    This patch was generated using the following script:

    EXCLUDE_FILES="include/linux/stddef.h|include/linux/kernel.h"

    git grep -l -e "\bFIELD_SIZEOF\b" | while read file;
    do
        if [[ "$file" =~ $EXCLUDE_FILES ]]; then
            continue
        fi
        sed -i -e 's/\bFIELD_SIZEOF\b/sizeof_field/g' "$file";
    done
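
    For reference, the two spellings are equivalent; a small sketch with a
    hypothetical struct:

    struct packet {
            u32 len;
            u8  data[64];
    };

    /* Both spellings evaluate to the size of the member:
     *   FIELD_SIZEOF(struct packet, data) == 64
     *   sizeof_field(struct packet, data) == 64
     * sizeof_field() expands to sizeof(((struct packet *)0)->data).
     */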

    Signed-off-by: Pankaj Bharadiya
    Link: https://lore.kernel.org/r/20190924105839.110713-3-pankaj.laxminarayan.bharadiya@intel.com
    Co-developed-by: Kees Cook
    Signed-off-by: Kees Cook
    Acked-by: David Miller # for net

    Pankaj Bharadiya
     

25 May, 2019

1 commit


21 May, 2019

1 commit


20 Dec, 2018

1 commit


29 Nov, 2018

1 commit


27 Nov, 2018

3 commits

  • Move all arguments into output registers from input registers.

    This path is exercised by test_verifier.c's "calls: two calls with
    args" test. Adjust BPF_TAILCALL_PROLOGUE_SKIP as needed.

    Let's also make the prologue length a constant size regardless of
    the combination of ->saw_frame_pointer and ->saw_tail_call
    settings.

    Signed-off-by: David S. Miller
    Signed-off-by: Daniel Borkmann

    David Miller
     
  • We need to initialize the frame pointer register not just if it is
    seen as a source operand, but also if it is seen as the destination
    operand of a store or an atomic instruction (which effectively is a
    source operand).

    This is exercised by test_verifier's "non-invalid fp arithmetic" tests.
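
    A hedged sketch of the scan described above (illustrative, not the
    verbatim sparc64 JIT source):

    static bool insn_uses_fp(const struct bpf_insn *insn)
    {
            u8 class = BPF_CLASS(insn->code);

            if (insn->src_reg == BPF_REG_FP)
                    return true;
            /* Stores and atomics write through dst_reg, so the frame
             * pointer as destination is effectively a use as well. */
            if ((class == BPF_ST || class == BPF_STX) &&
                insn->dst_reg == BPF_REG_FP)
                    return true;
            return false;
    }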

    Signed-off-by: David S. Miller
    Signed-off-by: Alexei Starovoitov

    David Miller
     
  • On T4 and later sparc64 cpus we can use the fused compare and branch
    instruction.

    However, it can only be used if the branch destination is in the range
    of a signed 10-bit immediate offset. This amounts to 1024
    instructions forwards or backwards.

    After the commit referenced in the Fixes: tag, the size of the largest
    possible program seen by the JIT explodes by a significant factor.

    As a result, convergence takes many more passes, since the expanded
    "BPF_LDX | BPF_MSH | BPF_B" code sequence, for example, contains
    several embedded branch-on-condition instructions.

    On each pass, new fused compare-and-branch instances suddenly become
    valid, which brings thousands more into range for the next pass, and
    so on.

    This is most greatly exemplified by "BPF_MAXINSNS: exec all MSH" which
    takes 35 passes to converge, and shrinks the image by about 64K.

    To reduce the cost of this many convergence passes, perform them
    before the program image is allocated, just like other JITs (such as
    x86) do.
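
    A sketch of the scheme, with hypothetical helper names (emit_program()
    and alloc_jit_image() are illustrative, not the actual JIT API):

    unsigned int prev_len = 0, len = 0;
    int pass;

    for (pass = 0; pass < 16; pass++) {
            len = emit_program(prog, NULL);  /* sizing pass, emits nothing */
            if (len == prev_len)
                    break;                   /* image size converged */
            prev_len = len;
    }
    image = alloc_jit_image(len);            /* allocate once, size known */
    emit_program(prog, image);               /* final emission pass */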

    Fixes: e0cea7ce988c ("bpf: implement ld_abs/ld_ind in native bpf")
    Signed-off-by: David S. Miller
    Signed-off-by: Alexei Starovoitov

    David Miller
     

17 Nov, 2018

2 commits


13 Jun, 2018

1 commit

  • The kmalloc() function has a 2-factor argument form, kmalloc_array(). This
    patch replaces cases of:

    kmalloc(a * b, gfp)

    with:
    kmalloc_array(a, b, gfp)

    as well as handling cases of:

    kmalloc(a * b * c, gfp)

    with:

    kmalloc(array3_size(a, b, c), gfp)

    as it's slightly less ugly than:

    kmalloc_array(array_size(a, b), c, gfp)

    This does, however, attempt to ignore constant size factors like:

    kmalloc(4 * 1024, gfp)

    though any constants defined via macros get caught up in the conversion.

    Any factors with a sizeof() of "unsigned char", "char", and "u8" were
    dropped, since they're redundant.

    The tools/ directory was manually excluded, since it has its own
    implementation of kmalloc().

    The Coccinelle script used for this was:

    // Fix redundant parens around sizeof().
    @@
    type TYPE;
    expression THING, E;
    @@

    (
    kmalloc(
    - (sizeof(TYPE)) * E
    + sizeof(TYPE) * E
    , ...)
    |
    kmalloc(
    - (sizeof(THING)) * E
    + sizeof(THING) * E
    , ...)
    )

    // Drop single-byte sizes and redundant parens.
    @@
    expression COUNT;
    typedef u8;
    typedef __u8;
    @@

    (
    kmalloc(
    - sizeof(u8) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(__u8) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(char) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(unsigned char) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(u8) * COUNT
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(__u8) * COUNT
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(char) * COUNT
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(unsigned char) * COUNT
    + COUNT
    , ...)
    )

    // 2-factor product with sizeof(type/expression) and identifier or constant.
    @@
    type TYPE;
    expression THING;
    identifier COUNT_ID;
    constant COUNT_CONST;
    @@

    (
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * (COUNT_ID)
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * COUNT_ID
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * (COUNT_CONST)
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * COUNT_CONST
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * (COUNT_ID)
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * COUNT_ID
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * (COUNT_CONST)
    + COUNT_CONST, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * COUNT_CONST
    + COUNT_CONST, sizeof(THING)
    , ...)
    )

    // 2-factor product, only identifiers.
    @@
    identifier SIZE, COUNT;
    @@

    - kmalloc
    + kmalloc_array
    (
    - SIZE * COUNT
    + COUNT, SIZE
    , ...)

    // 3-factor product with 1 sizeof(type) or sizeof(expression), with
    // redundant parens removed.
    @@
    expression THING;
    identifier STRIDE, COUNT;
    type TYPE;
    @@

    (
    kmalloc(
    - sizeof(TYPE) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    )

    // 3-factor product with 2 sizeof(variable), with redundant parens removed.
    @@
    expression THING1, THING2;
    identifier COUNT;
    type TYPE1, TYPE2;
    @@

    (
    kmalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kmalloc(
    - sizeof(THING1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kmalloc(
    - sizeof(THING1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    )

    // 3-factor product, only identifiers, with redundant parens removed.
    @@
    identifier STRIDE, SIZE, COUNT;
    @@

    (
    kmalloc(
    - (COUNT) * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - (COUNT) * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - (COUNT) * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - (COUNT) * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    )

    // Any remaining multi-factor products, first at least 3-factor products,
    // when they're not all constants...
    @@
    expression E1, E2, E3;
    constant C1, C2, C3;
    @@

    (
    kmalloc(C1 * C2 * C3, ...)
    |
    kmalloc(
    - (E1) * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kmalloc(
    - (E1) * (E2) * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kmalloc(
    - (E1) * (E2) * (E3)
    + array3_size(E1, E2, E3)
    , ...)
    |
    kmalloc(
    - E1 * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    )

    // And then all remaining 2 factors products when they're not all constants,
    // keeping sizeof() as the second factor argument.
    @@
    expression THING, E1, E2;
    type TYPE;
    constant C1, C2, C3;
    @@

    (
    kmalloc(sizeof(THING) * C2, ...)
    |
    kmalloc(sizeof(TYPE) * C2, ...)
    |
    kmalloc(C1 * C2 * C3, ...)
    |
    kmalloc(C1 * C2, ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * (E2)
    + E2, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * E2
    + E2, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * (E2)
    + E2, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * E2
    + E2, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - (E1) * E2
    + E1, E2
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - (E1) * (E2)
    + E1, E2
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - E1 * E2
    + E1, E2
    , ...)
    )
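
    The net effect, on a hypothetical call site:

    /* Before: the multiplication can overflow silently. */
    buf = kmalloc(count * sizeof(struct item), GFP_KERNEL);

    /* After: kmalloc_array() returns NULL if count * sizeof(struct item)
     * would overflow. */
    buf = kmalloc_array(count, sizeof(struct item), GFP_KERNEL);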

    Signed-off-by: Kees Cook

    Kees Cook
     

15 May, 2018

1 commit


04 May, 2018

1 commit

  • Since LD_ABS/LD_IND instructions are now removed from the core and
    reimplemented through a combination of inlined BPF instructions and
    a slow-path helper, we can get rid of the complexity in the sparc64 JIT.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Acked-by: David S. Miller
    Signed-off-by: Alexei Starovoitov

    Daniel Borkmann
     

27 Jan, 2018

1 commit


20 Jan, 2018

1 commit

  • Having a pure_initcall() callback just to permanently enable BPF
    JITs under CONFIG_BPF_JIT_ALWAYS_ON is unnecessary and could leave
    a small race window in future where JIT is still disabled on boot.
    Since we know about the setting at compilation time anyway, just
    initialize it properly there. Also consolidate all the individual
    bpf_jit_enable variables into a single one and move them under one
    location. Moreover, don't allow for setting unspecified garbage
    values on them.
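
    The consolidated definition plausibly reduces to a single line (a
    sketch, not necessarily the verbatim patch):

    int bpf_jit_enable __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_ALWAYS_ON);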

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: Alexei Starovoitov

    Daniel Borkmann
     

28 Dec, 2017

1 commit

  • Daniel Borkmann says:

    ====================
    pull-request: bpf-next 2017-12-28

    The following pull-request contains BPF updates for your *net-next* tree.

    The main changes are:

    1) Fix incorrect state pruning related to recognition of zero initialized
    stack slots, where stacksafe exploration would mistakenly return a
    positive pruning verdict too early ignoring other slots, from Gianluca.

    2) Various BPF to BPF calls related follow-up fixes. Fix an off-by-one
    in maximum call depth check, and rework maximum stack depth tracking
    logic to fix a bypass of the total stack size check reported by Jann.
    Also fix a bug in arm64 JIT where prog->jited_len was uninitialized.
    Addition of various test cases to BPF selftests, from Alexei.

    3) Addition of a BPF selftest to test_verifier that is related to BPF to
    BPF calls which demonstrates a late caller stack size increase and
    thus out of bounds access. Fixed above in 2). Test case from Jann.

    4) Addition of correlating BPF helper calls, BPF to BPF calls as well
    as BPF maps to bpftool xlated dump in order to allow for better
    BPF program introspection and debugging, from Daniel.

    5) Fixing several bugs in BPF to BPF calls kallsyms handling in order
    to get it actually to work for subprogs, from Daniel.

    6) Extending sparc64 JIT support for BPF to BPF calls and fix a couple
    of build errors for libbpf on sparc64, from David.

    7) Allow narrower context access for BPF dev cgroup typed programs in
    order to adapt to LLVM code generation. Also adjust memlock rlimit
    in the test_dev_cgroup BPF selftest, from Yonghong.

    8) Add netdevsim Kconfig entry to BPF selftests since test_offload.py
    relies on netdevsim device being available, from Jakub.

    9) Reduce scope of xdp_do_generic_redirect_map() to being static,
    from Xiongwei.

    10) Minor cleanups and spelling fixes in BPF verifier, from Colin.
    ====================

    Signed-off-by: David S. Miller

    David S. Miller
     

23 Dec, 2017

2 commits

  • Modelled strongly upon the arm64 implementation.

    Signed-off-by: David S. Miller
    Signed-off-by: Daniel Borkmann

    David Miller
     
  • Lots of overlapping changes. Also, on the net-next side the XDP
    state management is handled more in the generic layers, so undo
    the 'net' nfp fix which isn't applicable in net-next.

    Include a necessary change by Jakub Kicinski, with log message:

    ====================
    cls_bpf no longer takes care of offload tracking. Make sure
    netdevsim performs necessary checks. This fixes a warning
    caused by TC trying to remove a filter it has not added.

    Signed-off-by: Jakub Kicinski
    Reviewed-by: Quentin Monnet
    ====================

    Signed-off-by: David S. Miller

    David S. Miller
     

18 Dec, 2017

1 commit

    The global bpf_jit_enable variable is tested multiple times in the
    JITs, blinding, and verifier core. A malicious root user can try to
    toggle it while programs are being loaded. This race condition was
    accounted for and there should be no issues, but it's safer to
    avoid it entirely.

    Signed-off-by: Alexei Starovoitov
    Acked-by: Daniel Borkmann
    Signed-off-by: Daniel Borkmann

    Alexei Starovoitov
     

16 Dec, 2017

1 commit

  • When LD_ABS/IND is used in the program, and we have a BPF helper
    call that changes packet data (bpf_helper_changes_pkt_data() returns
    true), then in the case of the sparc JIT, we try to reload cached skb
    data from bpf2sparc[BPF_REG_6]. However, there is no guarantee or
    assumption that the skb sits in R6 at this point; all helpers changing
    skb data only guarantee that the skb sits in R1. Therefore, store BPF
    R1 in L7 temporarily and, after the procedure call, use L7 to reload
    the cached skb data. The skb sitting in R6 is only true at the time
    when LD_ABS/IND is executed.
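
    A hedged sketch of the call-site handling described above (register
    names as in the description; the surrounding emitter code is
    hypothetical):

    if (seen_ld_abs_ind && bpf_helper_changes_pkt_data(func)) {
            /* skb is only guaranteed to sit in R1 here, so stash it */
            emit_reg_move(bpf2sparc[BPF_REG_1], L7, ctx);
            emit_call(func, ctx);
            load_skb_regs(ctx, L7);  /* reload cached skb data from L7 */
    } else {
            emit_call(func, ctx);
    }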

    Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
    Signed-off-by: Daniel Borkmann
    Acked-by: David S. Miller
    Acked-by: Alexei Starovoitov
    Signed-off-by: Alexei Starovoitov

    Daniel Borkmann
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.
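
    For a C source file, the identifier is a single comment line at the
    top of the file, for example:

    // SPDX-License-Identifier: GPL-2.0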

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if <5
      lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

10 Aug, 2017

1 commit


07 Jun, 2017

1 commit


01 Jun, 2017

2 commits


02 May, 2017

1 commit

  • Like other JITs, sparc64 maintains an array of instruction offsets but
    stores the entries off by one. This is done because jumps to the
    exit block are indexed to one past the last BPF instruction.

    So if we size the array by the program length, we need to record
    the previous instruction in order to stay within the array bounds.

    This is explained in arm64 JIT commit 8eee539ddea0 ("arm64: bpf: fix
    out-of-bounds read in bpf2a64_offset()").

    But this scheme requires a little bit of careful handling when the
    instruction before the branch destination is a 64-bit load immediate,
    which takes up 2 BPF instruction slots.

    Therefore, we have to fill in the array entry for the second half of
    the 64-bit load immediate rather than the entry for the beginning of
    that instruction.
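
    A hedged sketch of the resulting fill loop (names illustrative, not
    the verbatim sparc64 JIT):

    for (i = 0; i < prog->len; i++) {
            int ret = build_insn(&prog->insnsi[i], ctx);

            if (ret > 0) {
                    /* BPF_LD | BPF_IMM | BPF_DW consumed two slots:
                     * record the offset for the second half. */
                    i++;
                    ctx->offset[i] = ctx->idx;
                    continue;
            }
            ctx->offset[i] = ctx->idx;
    }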

    Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
    Signed-off-by: David S. Miller

    David S. Miller
     

25 Apr, 2017

2 commits

  • Doing a full 64-bit decomposition is really stupid, especially for
    simple values like 0 and -1.

    But if we are going to optimize this, go all the way and try for all 2
    and 3 instruction sequences not requiring a temporary register as
    well.

    First we do the easy cases where it's a zero or sign extended 32-bit
    number (sethi+or, sethi+xor, respectively).

    Then we try to find a range of set bits we can load simply then shift
    up into place, in various ways.

    Then we try negating the constant and see if we can do a simple
    sequence using that with an xor at the end (e.g. when the range of set
    bits can't be loaded simply, but for the negated value it can).

    The final optimized strategy involves 4-instruction sequences not
    needing a temporary register.

    Otherwise, we sadly fall back to a full decomposition using a
    temporary register.

    Example, from ALU64_XOR_K: 0x0000ffffffff0000 ^ 0x0 = 0x0000ffffffff0000:

    0000000000000000 :
    0: 9d e3 bf 50 save %sp, -176, %sp
    4: 01 00 00 00 nop
    8: 90 10 00 18 mov %i0, %o0
    c: 13 3f ff ff sethi %hi(0xfffffc00), %o1
    10: 92 12 63 ff or %o1, 0x3ff, %o1 ! ffffffff
    14: 93 2a 70 10 sllx %o1, 0x10, %o1
    18: 15 3f ff ff sethi %hi(0xfffffc00), %o2
    1c: 94 12 a3 ff or %o2, 0x3ff, %o2 ! ffffffff
    20: 95 2a b0 10 sllx %o2, 0x10, %o2
    24: 92 1a 60 00 xor %o1, 0, %o1
    28: 12 e2 40 8a cxbe %o1, %o2, 38
    2c: 9a 10 20 02 mov 2, %o5
    30: 10 60 00 03 b,pn %xcc, 3c
    34: 01 00 00 00 nop
    38: 9a 10 20 01 mov 1, %o5 ! 1
    3c: 81 c7 e0 08 ret
    40: 91 eb 40 00 restore %o5, %g0, %o0

    Signed-off-by: David S. Miller

    David S. Miller
     
  • cbcond combines a compare with a branch into a single instruction.

    The limitations are:

    1) Only newer chips support it

    2) For immediate compares we are limited to 5-bit signed immediate
    values

    3) The branch displacement is limited to 10-bit signed

    4) We cannot use it for JSET

    Also, cbcond (unlike all other sparc control transfers) lacks a delay
    slot.

    Currently we don't have a useful instruction we can push into the
    delay slot of normal branches. So using cbcond pretty much always
    increases code density, and is therefore a win.

    Signed-off-by: David S. Miller

    David S. Miller
     

23 Apr, 2017

2 commits


21 Mar, 2016

1 commit


06 Jan, 2016

1 commit

  • The SKF_AD_ALU_XOR_X ancillary is not like the other ancillary data
    instructions since it XORs A with X while all the others replace A with
    some loaded value. All the BPF JITs fail to clear A if this is used as
    the first instruction in a filter. This was found using american fuzzy
    lop.

    Add a helper to determine if A needs to be cleared given the first
    instruction in a filter, and use this in the JITs. Except for ARM, the
    rest have only been compile-tested.
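
    A sketch of such a helper, along the lines described above (assuming
    the classic BPF opcodes from <linux/filter.h>; not necessarily the
    exact version that was merged):

    static inline bool bpf_needs_clear_a(const struct sock_filter *first)
    {
            switch (first->code) {
            case BPF_RET | BPF_K:
            case BPF_LD | BPF_W | BPF_LEN:
                    return false;   /* A is never read */
            case BPF_LD | BPF_W | BPF_ABS:
            case BPF_LD | BPF_H | BPF_ABS:
            case BPF_LD | BPF_B | BPF_ABS:
                    /* the ancillary XOR reads A before writing it */
                    return first->k == SKF_AD_OFF + SKF_AD_ALU_XOR_X;
            default:
                    return true;
            }
    }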

    Fixes: 3480593131e0 ("net: filter: get rid of BPF_S_* enum")
    Signed-off-by: Rabin Vincent
    Acked-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Rabin Vincent
     

03 Oct, 2015

1 commit

  • As we need to add further flags to the bpf_prog structure, let's migrate
    both bools to a bitfield representation. The size of the base structure
    (excluding insns) remains unchanged at 40 bytes.

    Also add annotations for kmemcheck, so that it doesn't throw false
    positives. Even if gcc were to generate suboptimal code, the bitfield
    is not accessed in performance-critical paths.
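
    A sketch of the kind of change involved (field names illustrative,
    not the full struct bpf_prog):

    struct bpf_prog {
            u16 pages;
            /* before: bool jited; bool gpl_compatible; */
            kmemcheck_bitfield_begin(meta);
            u16 jited:1,
                gpl_compatible:1;
            kmemcheck_bitfield_end(meta);
            /* ... */
    };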

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

31 Jul, 2015

1 commit

  • When bpf_jit_compile() got split into two functions via commit
    f3c2af7ba17a ("net: filter: x86: split bpf_jit_compile()"), bpf_jit_dump()
    was changed to always show 0 as number of compiler passes. Change it to
    dump the actual number. Also on sparc, we count passes starting from 0,
    so add 1 for the debug dump as well.
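
    The adjusted call is plausibly of the form (a sketch; bpf_jit_dump()
    takes the pass count as its third argument):

    if (bpf_jit_enable > 1)
            bpf_jit_dump(flen, proglen, pass + 1, image);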

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

20 Jan, 2015

1 commit

  • Nothing needs the module pointer any more, and the next patch will
    call it from RCU, where the module itself might no longer exist.
    Removing the arg is the safest approach.

    This just codifies the use of the module_alloc/module_free pattern
    which ftrace and bpf use.

    Signed-off-by: Rusty Russell
    Acked-by: Alexei Starovoitov
    Cc: Mikael Starvik
    Cc: Jesper Nilsson
    Cc: Ralf Baechle
    Cc: Ley Foon Tan
    Cc: Benjamin Herrenschmidt
    Cc: Chris Metcalf
    Cc: Steven Rostedt
    Cc: x86@kernel.org
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: Masami Hiramatsu
    Cc: linux-cris-kernel@axis.com
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-mips@linux-mips.org
    Cc: nios2-dev@lists.rocketboards.org
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: sparclinux@vger.kernel.org
    Cc: netdev@vger.kernel.org

    Rusty Russell
     

27 Sep, 2014

1 commit


25 Sep, 2014

2 commits

  • David S. Miller
     
  • - fix BPF_LD|ABS|IND from negative offsets:
    make sure to sign extend the lower 32 bits in a 64-bit register
    before calling C helpers from JITed code, otherwise the 'int k'
    argument of the bpf_internal_load_pointer_neg_helper() function
    will be passed as a large unsigned integer, causing the packet size
    check to trigger and abort the program.

    It's worth noting that JITed code for 'A = A op K' will affect the
    upper 32 bits differently depending on whether K is simm13 or not,
    since small constants are sign extended whereas large constants are
    stored in a temp register and zero extended.
    That is ok and we don't have to pay the penalty of sign extension
    for every sethi, since all classic BPF instructions have 32-bit
    semantics and we only need to set the correct upper bits when
    transitioning from JITed code into C.

    - though the instructions 'A &= 0' and 'A *= 0' are odd, the JIT
    compiler should not optimize them out

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     

24 Sep, 2014

1 commit