17 Jan, 2021

1 commit

  • commit 4a443a51776ca9847942523cf987a330894d3a31 upstream.

    To pick the changes from:

    3ceb6543e9cf6ed8 ("fscrypt: remove kernel-internal constants from UAPI header")

    That doesn't result in any changes in tooling, just addressing this perf
    build warning:

    Warning: Kernel ABI header at 'tools/include/uapi/linux/fscrypt.h' differs from latest version at 'include/uapi/linux/fscrypt.h'
    diff -u tools/include/uapi/linux/fscrypt.h include/uapi/linux/fscrypt.h

    Cc: Adrian Hunter
    Cc: Eric Biggers
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: Greg Kroah-Hartman

    Arnaldo Carvalho de Melo
     

06 Jan, 2021

1 commit

  • commit 7ddcdea5b54492f54700f427f58690cf1e187e5e upstream.

    To pick up the changes in:

    a85cbe6159ffc973 ("uapi: move constants from <linux/kernel.h> to <linux/const.h>")

    That causes no changes in tooling, just addresses this perf build
    warning:

    Warning: Kernel ABI header at 'tools/include/uapi/linux/const.h' differs from latest version at 'include/uapi/linux/const.h'
    diff -u tools/include/uapi/linux/const.h include/uapi/linux/const.h

    Cc: Adrian Hunter
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Namhyung Kim
    Cc: Petr Vorel
    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: Greg Kroah-Hartman

    Arnaldo Carvalho de Melo
     

12 Dec, 2020

1 commit

  • Remove the bpf_ prefix, which causes these helpers to be reported in the
    verifier dump as bpf_bpf_this_cpu_ptr() and bpf_bpf_per_cpu_ptr(),
    respectively. Let's fix it while it is still possible, before the UAPI
    freezes on these helpers.

    Fixes: eaa6bcb71ef6 ("bpf: Introduce bpf_per_cpu_ptr()")
    Signed-off-by: Andrii Nakryiko
    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Linus Torvalds

    Andrii Nakryiko
     

03 Nov, 2020

9 commits

  • To pick the changes from:

    dab741e0e02bd3c4 ("Add a "nosymfollow" mount option.")

    That ends up adding support for the new MS_NOSYMFOLLOW mount flag:

    $ tools/perf/trace/beauty/mount_flags.sh > before
    $ cp include/uapi/linux/mount.h tools/include/uapi/linux/mount.h
    $ tools/perf/trace/beauty/mount_flags.sh > after
    $ diff -u before after
    --- before 2020-11-03 08:51:28.117997454 -0300
    +++ after 2020-11-03 08:51:38.992218869 -0300
    @@ -7,6 +7,7 @@
    [32 ? (ilog2(32) + 1) : 0] = "REMOUNT",
    [64 ? (ilog2(64) + 1) : 0] = "MANDLOCK",
    [128 ? (ilog2(128) + 1) : 0] = "DIRSYNC",
    + [256 ? (ilog2(256) + 1) : 0] = "NOSYMFOLLOW",
    [1024 ? (ilog2(1024) + 1) : 0] = "NOATIME",
    [2048 ? (ilog2(2048) + 1) : 0] = "NODIRATIME",
    [4096 ? (ilog2(4096) + 1) : 0] = "BIND",
    $

    So now one can use it in --filter expressions for tracepoints.
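
    As a quick illustration (a minimal userspace sketch, not from this commit;
    the device and mount point are made up, and older libc headers may not
    define the constant yet), the new flag is simply OR'ed into the mount(2)
    flags:

    #include <stdio.h>
    #include <sys/mount.h>

    #ifndef MS_NOSYMFOLLOW
    #define MS_NOSYMFOLLOW 256   /* bit value shown in the table above */
    #endif

    int main(void)
    {
        /* illustrative source/target; any valid ones would do */
        if (mount("/dev/sda1", "/mnt", "ext4", MS_NOSYMFOLLOW | MS_NODEV, NULL))
            perror("mount");
        return 0;
    }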

    This silences this perf build warning:

    Warning: Kernel ABI header at 'tools/include/uapi/linux/mount.h' differs from latest version at 'include/uapi/linux/mount.h'
    diff -u tools/include/uapi/linux/mount.h include/uapi/linux/mount.h

    Cc: Adrian Hunter
    Cc: Al Viro
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Mattias Nissler
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     
  • The diff is just tabs versus spaces, trivial.

    This silences this perf tools build warning:

    Warning: Kernel ABI header at 'tools/include/uapi/linux/perf_event.h' differs from latest version at 'include/uapi/linux/perf_event.h'
    diff -u tools/include/uapi/linux/perf_event.h include/uapi/linux/perf_event.h

    Cc: Adrian Hunter
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     
  • Some should cause changes in tooling, like the one adding LAST_EXCP, but
    the way it is structured ends up not making that happen.

    The new SVM_EXIT_INVPCID should get used by arch/x86/util/kvm-stat.c,
    in the svm_exit_reasons table.

    The tools/perf/trace/beauty part has scripts to catch changes and
    automagically create tables, like tools/perf/trace/beauty/kvm_ioctl.sh,
    but changes are needed to make tools/perf/arch/x86/util/kvm-stat.c catch
    those automatically.

    These were handled by the existing scripts:

    $ tools/perf/trace/beauty/kvm_ioctl.sh > before
    $ cp include/uapi/linux/kvm.h tools/include/uapi/linux/kvm.h
    $ tools/perf/trace/beauty/kvm_ioctl.sh > after
    $ diff -u before after
    --- before 2020-11-03 08:43:52.910728608 -0300
    +++ after 2020-11-03 08:44:04.273959984 -0300
    @@ -89,6 +89,7 @@
    [0xbf] = "SET_NESTED_STATE",
    [0xc0] = "CLEAR_DIRTY_LOG",
    [0xc1] = "GET_SUPPORTED_HV_CPUID",
    + [0xc6] = "X86_SET_MSR_FILTER",
    [0xe0] = "CREATE_DEVICE",
    [0xe1] = "SET_DEVICE_ATTR",
    [0xe2] = "GET_DEVICE_ATTR",
    $
    $ tools/perf/trace/beauty/vhost_virtio_ioctl.sh > before
    $ cp include/uapi/linux/vhost.h tools/include/uapi/linux/vhost.h
    $
    $ tools/perf/trace/beauty/vhost_virtio_ioctl.sh > after
    $ diff -u before after
    --- before 2020-11-03 08:45:55.522225198 -0300
    +++ after 2020-11-03 08:46:12.881578666 -0300
    @@ -37,4 +37,5 @@
    [0x71] = "VDPA_GET_STATUS",
    [0x73] = "VDPA_GET_CONFIG",
    [0x76] = "VDPA_GET_VRING_NUM",
    + [0x78] = "VDPA_GET_IOVA_RANGE",
    };
    $

    This addresses these perf build warnings:

    Warning: Kernel ABI header at 'tools/arch/arm64/include/uapi/asm/kvm.h' differs from latest version at 'arch/arm64/include/uapi/asm/kvm.h'
    diff -u tools/arch/arm64/include/uapi/asm/kvm.h arch/arm64/include/uapi/asm/kvm.h
    Warning: Kernel ABI header at 'tools/arch/s390/include/uapi/asm/sie.h' differs from latest version at 'arch/s390/include/uapi/asm/sie.h'
    diff -u tools/arch/s390/include/uapi/asm/sie.h arch/s390/include/uapi/asm/sie.h
    Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/kvm.h' differs from latest version at 'arch/x86/include/uapi/asm/kvm.h'
    diff -u tools/arch/x86/include/uapi/asm/kvm.h arch/x86/include/uapi/asm/kvm.h
    Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/svm.h' differs from latest version at 'arch/x86/include/uapi/asm/svm.h'
    diff -u tools/arch/x86/include/uapi/asm/svm.h arch/x86/include/uapi/asm/svm.h
    Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
    diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
    Warning: Kernel ABI header at 'tools/include/uapi/linux/vhost.h' differs from latest version at 'include/uapi/linux/vhost.h'
    diff -u tools/include/uapi/linux/vhost.h include/uapi/linux/vhost.h

    Cc: Adrian Hunter
    Cc: Alexander Yarygin
    Cc: Borislav Petkov
    Cc: Christian Borntraeger
    Cc: Cornelia Huck
    Cc: David Ahern
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Joerg Roedel
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     
  • To pick the changes in:

    e47168f3d1b14af5 ("powerpc/8xx: Support 16k hugepages with 4k pages")

    That doesn't cause any changes in tooling, just addresses this perf build
    warning:

    Warning: Kernel ABI header at 'tools/include/uapi/linux/mman.h' differs from latest version at 'include/uapi/linux/mman.h'
    diff -u tools/include/uapi/linux/mman.h include/uapi/linux/mman.h

    Cc: Adrian Hunter
    Cc: Christophe Leroy
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Michael Ellerman
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     
  • To get the changes from:

    c7f0207b613033c5 ("fscrypt: make "#define fscrypt_policy" user-only")

    That doesn't cause any changes in tools/perf, only addresses this perf
    tools build warning:

    Warning: Kernel ABI header at 'tools/include/uapi/linux/fscrypt.h' differs from latest version at 'include/uapi/linux/fscrypt.h'
    diff -u tools/include/uapi/linux/fscrypt.h include/uapi/linux/fscrypt.h

    Cc: Adrian Hunter
    Cc: Eric Biggers
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     
  • To pick the changes in:

    13149e8bafc46572 ("drm/i915: add syncobj timeline support")
    cda9edd02425d790 ("drm/i915: introduce a mechanism to extend execbuf2")

    Those don't result in any changes in tooling, just silence this perf
    build warning:

    Warning: Kernel ABI header at 'tools/include/uapi/drm/i915_drm.h' differs from latest version at 'include/uapi/drm/i915_drm.h'
    diff -u tools/include/uapi/drm/i915_drm.h include/uapi/drm/i915_drm.h

    Cc: Adrian Hunter
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Lionel Landwerlin
    Cc: Namhyung Kim
    Cc: Rodrigo Vivi
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     
  • To get the changes in:

    1c101da8b971a366 ("arm64: mte: Allow user control of the tag check mode via prctl()")
    af5ce95282dc99d0 ("arm64: mte: Allow user control of the generated random tags via prctl()")

    Which don't cause any change in tooling, only address this perf build
    warning:

    Warning: Kernel ABI header at 'tools/include/uapi/linux/prctl.h' differs from latest version at 'include/uapi/linux/prctl.h'
    diff -u tools/include/uapi/linux/prctl.h include/uapi/linux/prctl.h

    Cc: Adrian Hunter
    Cc: Catalin Marinas
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     
  • The GCC-specific __attribute__((optimize)) attribute does not do what is
    commonly expected, and the GCC people explicitly recommend against using
    it in production code.

    Unlike what is often expected, it doesn't add to the optimization flags,
    but fully replaces them, losing any and all optimization flags provided
    on the compiler command line.

    The only guaranteed means of inhibiting tail-calls is placing a volatile
    asm with side-effects after the call, such that the tail-call simply
    cannot be done.

    Given the original commit wasn't specific on which calls were the problem, this
    removal might re-introduce the problem, which can then be re-analyzed and cured
    properly.

    Signed-off-by: Peter Zijlstra
    Acked-by: Ard Biesheuvel
    Acked-by: Miguel Ojeda
    Cc: Alexei Starovoitov
    Cc: Arnd Bergmann
    Cc: Arvind Sankar
    Cc: Daniel Borkmann
    Cc: Geert Uytterhoeven
    Cc: Ian Rogers
    Cc: Josh Poimboeuf
    Cc: Kees Cook
    Cc: Martin Liška
    Cc: Nick Desaulniers
    Cc: Randy Dunlap
    Cc: Thomas Gleixner
    Link: http://lore.kernel.org/lkml/20201028081123.GT2628@hirez.programming.kicks-ass.net
    Signed-off-by: Arnaldo Carvalho de Melo

    Peter Zijlstra
     
  • To pick the changes from:

    ecb8ac8b1f146915 ("mm/madvise: introduce process_madvise() syscall: an external memory hinting API")

    That addresses these perf build warnings:

    Warning: Kernel ABI header at 'tools/include/uapi/asm-generic/unistd.h' differs from latest version at 'include/uapi/asm-generic/unistd.h'
    diff -u tools/include/uapi/asm-generic/unistd.h include/uapi/asm-generic/unistd.h
    Warning: Kernel ABI header at 'tools/perf/arch/x86/entry/syscalls/syscall_64.tbl' differs from latest version at 'arch/x86/entry/syscalls/syscall_64.tbl'
    diff -u tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl

    Cc: Adrian Hunter
    Cc: Ian Rogers
    Cc: Jiri Olsa
    Cc: Linus Torvalds
    Cc: Minchan Kim
    Cc: Namhyung Kim
    Signed-off-by: Arnaldo Carvalho de Melo

    Arnaldo Carvalho de Melo
     

26 Oct, 2020

1 commit

  • Use a more generic form for __section that requires quotes to avoid
    complications with clang and gcc differences.

    Remove the quote operator # from compiler_attributes.h __section macro.

    Convert all unquoted __section(foo) uses to quoted __section("foo").
    Also convert __attribute__((section("foo"))) uses to __section("foo")
    even if the __attribute__ has multiple list entry forms.

    Conversion done using the script at:

    https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl
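
    A minimal sketch of what the conversion looks like (the macro body is the
    post-change definition from include/linux/compiler_attributes.h; the
    variable is made up):

    /* after this change the section name is a plain string literal */
    #define __section(section) __attribute__((__section__(section)))

    /* old, unquoted form:  static int boot_flag __section(.init.data);   */
    /* new, quoted form: */
    static int boot_flag __section(".init.data") = 1;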

    Signed-off-by: Joe Perches
    Reviewed-by: Nick Desaulniers
    Reviewed-by: Miguel Ojeda
    Signed-off-by: Linus Torvalds

    Joe Perches
     

22 Oct, 2020

1 commit

  • Based on the discussion in [0], update the bpf_redirect_neigh() helper to
    accept an optional parameter specifying the nexthop information. This makes
    it possible to combine bpf_fib_lookup() and bpf_redirect_neigh() without
    incurring a duplicate FIB lookup - since the FIB lookup helper will return
    the nexthop information even if no neighbour is present, this can simply
    be passed on to bpf_redirect_neigh() if bpf_fib_lookup() returns
    BPF_FIB_LKUP_RET_NO_NEIGH. Thus fix & extend it before helper API is frozen.

    [0] https://lore.kernel.org/bpf/393e17fc-d187-3a8d-2f0d-a627c7c63fca@iogearbox.net/
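
    A rough tc BPF sketch of that combination (assuming IPv4 only and that the
    FIB lookup key is filled from the packet headers elsewhere; this is not
    code from the patch itself):

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    #ifndef AF_INET
    #define AF_INET 2
    #endif

    SEC("tc")
    int fwd(struct __sk_buff *skb)
    {
        struct bpf_fib_lookup fib = {};
        struct bpf_redir_neigh nh = {};
        int ret;

        fib.family  = AF_INET;
        fib.ifindex = skb->ingress_ifindex;
        /* ... fill fib.ipv4_src/ipv4_dst/tos etc. from the packet ... */

        ret = bpf_fib_lookup(skb, &fib, sizeof(fib), 0);
        if (ret == BPF_FIB_LKUP_RET_NO_NEIGH) {
            /* reuse the nexthop the FIB lookup already resolved instead
             * of letting bpf_redirect_neigh() repeat the lookup */
            nh.nh_family = AF_INET;
            nh.ipv4_nh   = fib.ipv4_dst;
            return bpf_redirect_neigh(fib.ifindex, &nh, sizeof(nh), 0);
        }
        if (ret == BPF_FIB_LKUP_RET_SUCCESS)
            /* neighbour already known; no nexthop parameter needed */
            return bpf_redirect_neigh(fib.ifindex, NULL, 0, 0);

        return TC_ACT_OK;
    }

    char LICENSE[] SEC("license") = "GPL";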

    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Daniel Borkmann
    Reviewed-by: David Ahern
    Link: https://lore.kernel.org/bpf/160322915615.32199.1187570224032024535.stgit@toke.dk

    Toke Høiland-Jørgensen
     

16 Oct, 2020

1 commit

  • Pull networking updates from Jakub Kicinski:

    - Add redirect_neigh() BPF packet redirect helper, allowing to limit
    stack traversal in common container configs and improving TCP
    back-pressure.

    Daniel reports ~10Gbps => ~15Gbps single stream TCP performance gain.

    - Expand netlink policy support and improve policy export to user
    space. (Ge)netlink core performs request validation according to
    declared policies. Expand the expressiveness of those policies
    (min/max length and bitmasks). Allow dumping policies for particular
    commands. This is used for feature discovery by user space (instead
    of kernel version parsing or trial and error).

    - Support IGMPv3/MLDv2 multicast listener discovery protocols in
    bridge.

    - Allow more than 255 IPv4 multicast interfaces.

    - Add support for Type of Service (ToS) reflection in SYN/SYN-ACK
    packets of TCPv6.

    - In Multipath TCP (MPTCP), support concurrent transmission of data on
    multiple subflows in a load balancing scenario. Enhance advertising
    addresses via the RM_ADDR/ADD_ADDR options.

    - Support SMC-Dv2 version of SMC, which enables multi-subnet
    deployments.

    - Allow more calls to same peer in RxRPC.

    - Support two new Controller Area Network (CAN) protocols - CAN-FD and
    ISO 15765-2:2016.

    - Add xfrm/IPsec compat layer, solving the 32bit user space on 64bit
    kernel problem.

    - Add TC actions for implementing MPLS L2 VPNs.

    - Improve nexthop code - e.g. handle various corner cases when nexthop
    objects are removed from groups better, skip unnecessary
    notifications and make it easier to offload nexthops into HW by
    converting to a blocking notifier.

    - Support adding and consuming TCP header options by BPF programs,
    opening the doors for easy experimental and deployment-specific TCP
    option use.

    - Reorganize TCP congestion control (CC) initialization to simplify
    life of TCP CC implemented in BPF.

    - Add support for shipping BPF programs with the kernel and loading
    them early on boot via the User Mode Driver mechanism, hence reusing
    all the user space infra we have.

    - Support sleepable BPF programs, initially targeting LSM and tracing.

    - Add bpf_d_path() helper for returning full path for given 'struct
    path'.

    - Make bpf_tail_call compatible with bpf-to-bpf calls.

    - Allow BPF programs to call map_update_elem on sockmaps.

    - Add BPF Type Format (BTF) support for type and enum discovery, as
    well as support for using BTF within the kernel itself (current use
    is for pretty printing structures).

    - Support listing and getting information about bpf_links via the bpf
    syscall.

    - Enhance kernel interfaces around NIC firmware update. Allow
    specifying overwrite mask to control if settings etc. are reset
    during update; report expected max time operation may take to users;
    support firmware activation without machine reboot incl. limits of
    how much impact reset may have (e.g. dropping link or not).

    - Extend ethtool configuration interface to report IEEE-standard
    counters, to limit the need for per-vendor logic in user space.

    - Adopt or extend devlink use for debug, monitoring, fw update in many
    drivers (dsa loop, ice, ionic, sja1105, qed, mlxsw, mv88e6xxx,
    dpaa2-eth).

    - In mlxsw expose critical and emergency SFP module temperature alarms.
    Refactor port buffer handling to make the defaults more suitable and
    support setting these values explicitly via the DCBNL interface.

    - Add XDP support for Intel's igb driver.

    - Support offloading TC flower classification and filtering rules to
    mscc_ocelot switches.

    - Add PTP support for Marvell Octeontx2 and PP2.2 hardware, as well as
    fixed interval period pulse generator and one-step timestamping in
    dpaa-eth.

    - Add support for various auth offloads in WiFi APs, e.g. SAE (WPA3)
    offload.

    - Add Lynx PHY/PCS MDIO module, and convert various drivers which have
    this HW to use it. Convert mvpp2 to split PCS.

    - Support Marvell Prestera 98DX3255 24-port switch ASICs, as well as
    7-port Mediatek MT7531 IP.

    - Add initial support for QCA6390 and IPQ6018 in ath11k WiFi driver,
    and wcn3680 support in wcn36xx.

    - Improve performance for packets which don't require much offloads on
    recent Mellanox NICs by 20% by making multiple packets share a
    descriptor entry.

    - Move chelsio inline crypto drivers (for TLS and IPsec) from the
    crypto subtree to drivers/net. Move MDIO drivers out of the phy
    directory.

    - Clean up a lot of W=1 warnings; reportedly the actively developed
    parts of networking drivers now build W=1 warning-free.

    - Make sure drivers don't use in_interrupt() to dynamically adapt their
    code. Convert tasklets to use new tasklet_setup API (sadly this
    conversion is not yet complete).

    * tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2583 commits)
    Revert "bpfilter: Fix build error with CONFIG_BPFILTER_UMH"
    net, sockmap: Don't call bpf_prog_put() on NULL pointer
    bpf, selftest: Fix flaky tcp_hdr_options test when adding addr to lo
    bpf, sockmap: Add locking annotations to iterator
    netfilter: nftables: allow re-computing sctp CRC-32C in 'payload' statements
    net: fix pos incrementment in ipv6_route_seq_next
    net/smc: fix invalid return code in smcd_new_buf_create()
    net/smc: fix valid DMBE buffer sizes
    net/smc: fix use-after-free of delayed events
    bpfilter: Fix build error with CONFIG_BPFILTER_UMH
    cxgb4/ch_ipsec: Replace the module name to ch_ipsec from chcr
    net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
    bpf: Fix register equivalence tracking.
    rxrpc: Fix loss of final ack on shutdown
    rxrpc: Fix bundle counting for exclusive connections
    netfilter: restore NF_INET_NUMHOOKS
    ibmveth: Identify ingress large send packets.
    ibmveth: Switch order of ibmveth_helper calls.
    cxgb4: handle 4-tuple PEDIT to NAT mode translation
    selftests: Add VRF route leaking tests
    ...

    Linus Torvalds
     

15 Oct, 2020

1 commit

  • Pull objtool updates from Ingo Molnar:
    "Most of the changes are cleanups and reorganization to make the
    objtool code more arch-agnostic. This is in preparation for non-x86
    support.

    Other changes:

    - KASAN fixes

    - Handle unreachable trap after call to noreturn functions better

    - Ignore unreachable fake jumps

    - Misc smaller fixes & cleanups"

    * tag 'objtool-core-2020-10-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (21 commits)
    perf build: Allow nested externs to enable BUILD_BUG() usage
    objtool: Allow nested externs to enable BUILD_BUG()
    objtool: Permit __kasan_check_{read,write} under UACCESS
    objtool: Ignore unreachable trap after call to noreturn functions
    objtool: Handle calling non-function symbols in other sections
    objtool: Ignore unreachable fake jumps
    objtool: Remove useless tests before save_reg()
    objtool: Decode unwind hint register depending on architecture
    objtool: Make unwind hint definitions available to other architectures
    objtool: Only include valid definitions depending on source file type
    objtool: Rename frame.h -> objtool.h
    objtool: Refactor jump table code to support other architectures
    objtool: Make relocation in alternative handling arch dependent
    objtool: Abstract alternative special case handling
    objtool: Move macros describing structures to arch-dependent code
    objtool: Make sync-check consider the target architecture
    objtool: Group headers to check in a single list
    objtool: Define 'struct orc_entry' only when needed
    objtool: Skip ORC entry creation for non-text sections
    objtool: Move ORC logic out of check()
    ...

    Linus Torvalds
     

13 Oct, 2020

3 commits

  • Pull compat mount cleanups from Al Viro:
    "The last remnants of mount(2) compat buried by Christoph.

    Buried into NFS, that is.

    Generally I'm less enthusiastic about "let's use in_compat_syscall()
    deep in call chain" kind of approach than Christoph seems to be, but
    in this case it's warranted - that had been an NFS-specific wart,
    hopefully not to be repeated in any other filesystems (read: any new
    filesystem introducing non-text mount options will get NAKed even if
    it doesn't mess the layout up).

    IOW, not worth trying to grow an infrastructure that would avoid that
    use of in_compat_syscall()..."

    * 'compat.mount' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    fs: remove compat_sys_mount
    fs,nfs: lift compat nfs4 mount data handling into the nfs code
    nfs: simplify nfs4_parse_monolithic

    Linus Torvalds
     
  • Pull compat iovec cleanups from Al Viro:
    "Christoph's series around import_iovec() and compat variant thereof"

    * 'work.iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    security/keys: remove compat_keyctl_instantiate_key_iov
    mm: remove compat_process_vm_{readv,writev}
    fs: remove compat_sys_vmsplice
    fs: remove the compat readv/writev syscalls
    fs: remove various compat readv/writev helpers
    iov_iter: transparently handle compat iovecs in import_iovec
    iov_iter: refactor rw_copy_check_uvector and import_iovec
    iov_iter: move rw_copy_check_uvector() into lib/iov_iter.c
    compat.h: fix a spelling error in <linux/compat.h>

    Linus Torvalds
     
  • Pull static call support from Ingo Molnar:
    "This introduces static_call(), which is the idea of static_branch()
    applied to indirect function calls. Remove a data load (indirection)
    by modifying the text.

    They give the flexibility of function pointers, but with better
    performance. (This is especially important for cases where retpolines
    would otherwise be used, as retpolines can be pretty slow.)

    API overview:

    DECLARE_STATIC_CALL(name, func);
    DEFINE_STATIC_CALL(name, func);
    DEFINE_STATIC_CALL_NULL(name, typename);

    static_call(name)(args...);
    static_call_cond(name)(args...);
    static_call_update(name, func);

    x86 is supported via text patching, otherwise basic indirect calls are
    used, with function pointers.

    There's a second variant using inline code patching, inspired by
    jump-labels, implemented on x86 as well.

    The new APIs are utilized in the x86 perf code, a heavy user of
    function pointers, where static calls speed up the PMU handler by
    4.2% (!).

    The generic implementation is not really exercised on other
    architectures, outside of the trivial test_static_call_init()
    self-test"

    * tag 'core-static_call-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (21 commits)
    static_call: Fix return type of static_call_init
    tracepoint: Fix out of sync data passing by static caller
    tracepoint: Fix overly long tracepoint names
    x86/perf, static_call: Optimize x86_pmu methods
    tracepoint: Optimize using static_call()
    static_call: Allow early init
    static_call: Add some validation
    static_call: Handle tail-calls
    static_call: Add static_call_cond()
    x86/alternatives: Teach text_poke_bp() to emulate RET
    static_call: Add simple self-test for static calls
    x86/static_call: Add inline static call implementation for x86-64
    x86/static_call: Add out-of-line static call implementation
    static_call: Avoid kprobes on inline static_call()s
    static_call: Add inline static call infrastructure
    static_call: Add basic static call infrastructure
    compiler.h: Make __ADDRESSABLE() symbol truly unique
    jump_label,module: Fix module lifetime for __jump_label_mod_text_reserved()
    module: Properly propagate MODULE_STATE_COMING failure
    module: Fix up module_notifier return values
    ...

    Linus Torvalds
     

12 Oct, 2020

3 commits

  • Recent work in f4d05259213f ("bpf: Add map_meta_equal map ops") and 134fede4eecf
    ("bpf: Relax max_entries check for most of the inner map types") added support
    for dynamic inner max elements for most map-in-map types. Exceptions were maps
    like array or prog array where the map_gen_lookup() callback uses the maps'
    max_entries field as a constant when emitting instructions.

    We recently implemented Maglev consistent hashing into Cilium's load balancer
    which uses map-in-map with an outer map being hash and inner being array holding
    the Maglev backend table for each service. This has been designed this way in
    order to reduce overall memory consumption given the outer hash map allows to
    avoid preallocating a large, flat memory area for all services. Also, the
    number of service mappings is not always known a-priori.

    The use case for dynamic inner array map entries is to further reduce memory
    overhead, for example, some services might just have a small number of back
    ends while others could have a large number. Right now the Maglev backend table
    for small and large number of backends would need to have the same inner array
    map entries which adds a lot of unneeded overhead.

    Dynamic inner array map entries can be realized by avoiding the inlined code
    generation for their lookup. The lookup will still be efficient since it will
    be calling into array_map_lookup_elem() directly and thus avoiding retpoline.
    The patch adds a BPF_F_INNER_MAP flag to map creation which therefore skips
    inline code generation and relaxes array_map_meta_equal() check to ignore both
    maps' max_entries. This also still allows to have faster lookups for map-in-map
    when BPF_F_INNER_MAP is not specified and hence dynamic max_entries not needed.
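
    A rough user-space sketch of differently sized inner arrays under one
    outer hash-of-maps (assuming libbpf's bpf_map_create() API; names and
    sizes are illustrative, not Cilium's actual code):

    #include <bpf/bpf.h>

    static int make_backend_array(__u32 nbackends)
    {
        /* BPF_F_INNER_MAP skips the inlined lookup, so max_entries may
         * differ between inner maps */
        LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_INNER_MAP);

        return bpf_map_create(BPF_MAP_TYPE_ARRAY, "backends",
                              sizeof(__u32), sizeof(__u32), nbackends, &opts);
    }

    int setup_services(void)
    {
        int tmpl = make_backend_array(1);     /* template for the outer map */
        LIBBPF_OPTS(bpf_map_create_opts, oopts, .inner_map_fd = tmpl);
        int outer = bpf_map_create(BPF_MAP_TYPE_HASH_OF_MAPS, "services",
                                   sizeof(__u32), sizeof(__u32), 1024, &oopts);
        int small = make_backend_array(16);
        int big   = make_backend_array(4096);
        __u32 svc1 = 1, svc2 = 2;

        /* both inserts succeed despite different inner max_entries */
        bpf_map_update_elem(outer, &svc1, &small, BPF_ANY);
        bpf_map_update_elem(outer, &svc2, &big, BPF_ANY);
        return outer;
    }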

    Example code generation where inner map is dynamic sized array:

    # bpftool p d x i 125
    int handle__sys_enter(void * ctx):
    ; int handle__sys_enter(void *ctx)
    0: (b4) w1 = 0
    ; int key = 0;
    1: (63) *(u32 *)(r10 -4) = r1
    2: (bf) r2 = r10
    ;
    3: (07) r2 += -4
    ; inner_map = bpf_map_lookup_elem(&outer_arr_dyn, &key);
    4: (18) r1 = map[id:468]
    6: (07) r1 += 272
    7: (61) r0 = *(u32 *)(r2 +0)
    8: (35) if r0 >= 0x3 goto pc+5
    9: (67) r0 <
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20201010234006.7075-4-daniel@iogearbox.net

    Daniel Borkmann
     
  • Add an efficient ingress to ingress netns switch that can be used out of tc BPF
    programs in order to redirect traffic from host ns ingress into a container
    veth device ingress without having to go via CPU backlog queue [0]. For local
    containers this can also be utilized, and the path via the CPU backlog
    queue only needs to be taken once, not twice. On a high level this
    borrows from ipvlan, which does a similar switch in
    __netif_receive_skb_core() and then iterates via another_round. This
    helps to reduce latency for the mentioned use cases.
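
    A minimal tc BPF sketch of the helper (the ifindex plumbing is
    illustrative, not code from this commit):

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    /* ifindex of the host-side veth whose peer sits in the container netns;
     * assumed to be filled in by the loader */
    const volatile __u32 host_veth_ifindex = 0;

    SEC("tc")
    int host_ingress(struct __sk_buff *skb)
    {
        /* jump straight to the peer device's ingress in the other netns,
         * bypassing the per-CPU backlog queue */
        return bpf_redirect_peer(host_veth_ifindex, 0);
    }

    char LICENSE[] SEC("license") = "GPL";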

    Pod to remote pod with redirect(), TCP_RR [1]:

    # percpu_netperf 10.217.1.33
    RT_LATENCY: 122.450 (per CPU: 122.666 122.401 122.333 122.401 )
    MEAN_LATENCY: 121.210 (per CPU: 121.100 121.260 121.320 121.160 )
    STDDEV_LATENCY: 120.040 (per CPU: 119.420 119.910 125.460 115.370 )
    MIN_LATENCY: 46.500 (per CPU: 47.000 47.000 47.000 45.000 )
    P50_LATENCY: 118.500 (per CPU: 118.000 119.000 118.000 119.000 )
    P90_LATENCY: 127.500 (per CPU: 127.000 128.000 127.000 128.000 )
    P99_LATENCY: 130.750 (per CPU: 131.000 131.000 129.000 132.000 )

    TRANSACTION_RATE: 32666.400 (per CPU: 8152.200 8169.842 8174.439 8169.897 )

    Pod to remote pod with redirect_peer(), TCP_RR:

    # percpu_netperf 10.217.1.33
    RT_LATENCY: 44.449 (per CPU: 43.767 43.127 45.279 45.622 )
    MEAN_LATENCY: 45.065 (per CPU: 44.030 45.530 45.190 45.510 )
    STDDEV_LATENCY: 84.823 (per CPU: 66.770 97.290 84.380 90.850 )
    MIN_LATENCY: 33.500 (per CPU: 33.000 33.000 34.000 34.000 )
    P50_LATENCY: 43.250 (per CPU: 43.000 43.000 43.000 44.000 )
    P90_LATENCY: 46.750 (per CPU: 46.000 47.000 47.000 47.000 )
    P99_LATENCY: 52.750 (per CPU: 51.000 54.000 53.000 53.000 )

    TRANSACTION_RATE: 90039.500 (per CPU: 22848.186 23187.089 22085.077 21919.130 )

    [0] https://linuxplumbersconf.org/event/7/contributions/674/attachments/568/1002/plumbers_2020_cilium_load_balancer.pdf
    [1] https://github.com/borkmann/netperf_scripts/blob/master/percpu_netperf

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20201010234006.7075-3-daniel@iogearbox.net

    Daniel Borkmann
     
  • Follow-up to address David's feedback that we should better describe internals
    of the bpf_redirect_neigh() helper.

    Suggested-by: David Ahern
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Alexei Starovoitov
    Reviewed-by: David Ahern
    Link: https://lore.kernel.org/bpf/20201010234006.7075-2-daniel@iogearbox.net

    Daniel Borkmann
     

03 Oct, 2020

6 commits

  • Now that import_iovec handles compat iovecs, the native syscalls
    can be used for the compat case as well.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
     
  • Now that import_iovec handles compat iovecs, the native vmsplice syscall
    can be used for the compat case as well.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
     
  • Now that import_iovec handles compat iovecs, the native readv and writev
    syscalls can be used for the compat case as well.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
     
  • Add bpf_this_cpu_ptr() to help access percpu var on this cpu. This
    helper always returns a valid pointer, therefore no need to check
    returned value for NULL. Also note that all programs run with
    preemption disabled, which means that the returned pointer is stable
    during all the execution of the program.
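
    A short BPF C sketch (assuming a vmlinux.h that contains the kernel's
    per-CPU 'runqueues' variable and a toolchain with typed ksym support):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    extern const struct rq runqueues __ksym;   /* kernel per-CPU variable */

    SEC("raw_tp/sched_switch")
    int this_cpu_rq(void *ctx)
    {
        /* always a valid pointer for the current CPU, no NULL check needed */
        struct rq *rq = bpf_this_cpu_ptr(&runqueues);

        bpf_printk("nr_running on this cpu: %u", rq->nr_running);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";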

    Signed-off-by: Hao Luo
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20200929235049.2533242-6-haoluo@google.com

    Hao Luo
     
  • Add bpf_per_cpu_ptr() to help bpf programs access percpu vars.
    bpf_per_cpu_ptr() has the same semantic as per_cpu_ptr() in the kernel
    except that it may return NULL. This happens when the cpu parameter is
    out of range. So the caller must check the returned value.
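
    A short BPF C sketch of the NULL-checked variant (same assumptions as the
    bpf_this_cpu_ptr() example above):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    extern const struct rq runqueues __ksym;

    SEC("raw_tp/sched_switch")
    int peek_cpu1(void *ctx)
    {
        /* may be NULL if CPU 1 does not exist, so the check is mandatory */
        struct rq *rq = bpf_per_cpu_ptr(&runqueues, 1);

        if (!rq)
            return 0;
        bpf_printk("nr_running on cpu1: %u", rq->nr_running);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";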

    Signed-off-by: Hao Luo
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20200929235049.2533242-5-haoluo@google.com

    Hao Luo
     
  • Pseudo_btf_id is a type of ld_imm insn that associates a btf_id to a
    ksym so that further dereferences on the ksym can use the BTF info
    to validate accesses. Internally, when seeing a pseudo_btf_id ld insn,
    the verifier reads the btf_id stored in the insn[0]'s imm field and
    marks the dst_reg as PTR_TO_BTF_ID. The btf_id points to a VAR_KIND,
    which is encoded in btf_vminux by pahole. If the VAR is not of a struct
    type, the dst reg will be marked as PTR_TO_MEM instead of PTR_TO_BTF_ID
    and the mem_size is resolved to the size of the VAR's type.

    From the VAR btf_id, the verifier can also read the address of the
    ksym's corresponding kernel var from kallsyms and use that to fill
    dst_reg.

    Therefore, the proper functionality of pseudo_btf_id depends on (1)
    kallsyms and (2) the encoding of kernel global VARs in pahole, which
    should be available since pahole v1.18.

    Signed-off-by: Hao Luo
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20200929235049.2533242-2-haoluo@google.com

    Hao Luo
     

01 Oct, 2020

3 commits

  • Currently, a perf event in a perf event array is removed from the array
    when the map fd used to add the event is closed. This behavior makes it
    difficult to share perf events with a perf event array.

    Introduce a new flag, BPF_F_PRESERVE_ELEMS, that keeps the perf events
    open. With this flag set, perf events in the array are not removed when
    the original map fd is closed. Instead, the perf event will stay in the
    map until 1) it is explicitly removed from the array; or 2) the array is
    freed.
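
    In BPF C this is just a map flag on the perf event array (a sketch; the
    map name is made up):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* elements stay in the array even after the fd that populated them is
     * closed, because of BPF_F_PRESERVE_ELEMS */
    struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(map_flags, BPF_F_PRESERVE_ELEMS);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
    } shared_events SEC(".maps");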

    Signed-off-by: Song Liu
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20200930224927.1936644-2-songliubraving@fb.com

    Song Liu
     
  • Add a redirect_neigh() helper as redirect() drop-in replacement
    for the xmit side. Main idea for the helper is to be very similar
    in semantics to the latter just that the skb gets injected into
    the neighboring subsystem in order to let the stack do the work
    it knows best anyway to populate the L2 addresses of the packet
    and then hand over to dev_queue_xmit() as redirect() does.

    This solves two bigger items: i) skbs don't need to go up to the
    stack on the host facing veth ingress side for traffic egressing
    the container to achieve the same for populating L2 which also
    has the huge advantage that ii) the skb->sk won't get orphaned in
    ip_rcv_core() when entering the IP routing layer on the host stack.

    Given that skb->sk neither gets orphaned when crossing the netns
    as per 9c4c325252c5 ("skbuff: preserve sock reference when scrubbing
    the skb.") the helper can then push the skbs directly to the phys
    device where FQ scheduler can do its work and TCP stack gets proper
    backpressure given we hold on to skb->sk as long as skb is still
    residing in queues.

    With the helper used in BPF data path to then push the skb to the
    phys device, I observed a stable/consistent TCP_STREAM improvement
    on veth devices for traffic going container -> host -> host ->
    container from ~10Gbps to ~15Gbps for a single stream in my test
    environment.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Alexei Starovoitov
    Reviewed-by: David Ahern
    Acked-by: Martin KaFai Lau
    Cc: David Ahern
    Link: https://lore.kernel.org/bpf/f207de81629e1724899b73b8112e0013be782d35.1601477936.git.daniel@iogearbox.net

    Daniel Borkmann
     
  • Similarly to 5a52ae4e32a6 ("bpf: Allow to retrieve cgroup v1 classid
    from v2 hooks"), add a helper to retrieve cgroup v1 classid solely
    based on the skb->sk, so it can be used as key as part of BPF map
    lookups out of tc from host ns, in particular given the skb->sk is
    retained these days when crossing net ns thanks to 9c4c325252c5
    ("skbuff: preserve sock reference when scrubbing the skb."). This
    is similar to bpf_skb_cgroup_id() which implements the same for v2.
    Kubernetes ecosystem is still operating on v1 however, hence net_cls
    needs to be used there until this can be dropped in with the v2
    helper of bpf_skb_cgroup_id().

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Alexei Starovoitov
    Acked-by: Martin KaFai Lau
    Link: https://lore.kernel.org/bpf/ed633cf27a1c620e901c5aa99ebdefb028dce600.1601477936.git.daniel@iogearbox.net

    Daniel Borkmann
     

30 Sep, 2020

1 commit

  • This enables support for attaching freplace programs to multiple attach
    points. It does this by amending the UAPI for bpf_link_create with a target
    btf ID that can be used to supply the new attachment point along with the
    target program fd. The target must be compatible with the target that was
    supplied at program load time.

    The implementation reuses the checks that were factored out of
    check_attach_btf_id() to ensure compatibility between the BTF types of the
    old and new attachment. If these match, a new bpf_tracing_link will be
    created for the new attach target, allowing multiple attachments to
    co-exist simultaneously.

    The code could theoretically support multiple-attach of other types of
    tracing programs as well, but since I don't have a use case for any of
    those, there is no API support for doing so.
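
    From user space this roughly looks as follows (a sketch assuming libbpf's
    bpf_link_create() opts; the fds and the target btf id are obtained
    elsewhere):

    #include <bpf/bpf.h>

    int attach_to_new_target(int freplace_prog_fd, int target_prog_fd,
                             __u32 target_btf_id)
    {
        LIBBPF_OPTS(bpf_link_create_opts, opts,
                    .target_btf_id = target_btf_id);  /* new attach point */

        /* returns a link fd; can be repeated for additional targets */
        return bpf_link_create(freplace_prog_fd, target_prog_fd, 0, &opts);
    }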

    Signed-off-by: Toke Høiland-Jørgensen
    Signed-off-by: Alexei Starovoitov
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/160138355169.48470.17165680973640685368.stgit@toke.dk

    Toke Høiland-Jørgensen
     

29 Sep, 2020

3 commits

  • A helper is added to allow seq file writing of kernel data
    structures using vmlinux BTF. Its signature is

    long bpf_seq_printf_btf(struct seq_file *m, struct btf_ptr *ptr,
    u32 btf_ptr_size, u64 flags);

    Flags and struct btf_ptr definitions/use are identical to the
    bpf_snprintf_btf helper, and the helper returns 0 on success
    or a negative error value.
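
    A sketch of how this might be used from a task iterator program (assuming
    vmlinux.h and libbpf's bpf_core_type_id_kernel(); not taken from the
    patch):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    SEC("iter/task")
    int dump_tasks(struct bpf_iter__task *ctx)
    {
        struct seq_file *seq = ctx->meta->seq;
        struct task_struct *task = ctx->task;
        struct btf_ptr p = {};

        if (!task)
            return 0;

        p.ptr = task;
        /* BTF id of struct task_struct, resolved at load time */
        p.type_id = bpf_core_type_id_kernel(struct task_struct);
        bpf_seq_printf_btf(seq, &p, sizeof(p), 0);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";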

    Suggested-by: Alexei Starovoitov
    Signed-off-by: Alan Maguire
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/1601292670-1616-8-git-send-email-alan.maguire@oracle.com

    Alan Maguire
     
  • A helper is added to support tracing kernel type information in BPF
    using the BPF Type Format (BTF). Its signature is

    long bpf_snprintf_btf(char *str, u32 str_size, struct btf_ptr *ptr,
    u32 btf_ptr_size, u64 flags);

    struct btf_ptr * specifies

    - a pointer to the data to be traced
    - the BTF id of the type of data pointed to
    - a flags field is provided for future use; these flags
    are not to be confused with the BTF_F_* flags
    below that control how the btf_ptr is displayed; the
    flags member of the struct btf_ptr may be used to
    disambiguate types in kernel versus module BTF, etc;
    the main distinction is the flags relate to the type
    and information needed in identifying it; not how it
    is displayed.

    For example a BPF program with a struct sk_buff *skb
    could do the following:

    static struct btf_ptr b = { };

    b.ptr = skb;
    b.type_id = __builtin_btf_type_id(struct sk_buff, 1);
    bpf_snprintf_btf(str, sizeof(str), &b, sizeof(b), 0);

    Default output looks like this:

    (struct sk_buff){
     .transport_header = (__u16)65535,
     .mac_header = (__u16)65535,
     .end = (sk_buff_data_t)192,
     .head = (unsigned char *)0x000000007524fd8b,
     .data = (unsigned char *)0x000000007524fd8b,
     .truesize = (unsigned int)768,
     .users = (refcount_t){
      .refs = (atomic_t){
       .counter = (int)1,
      },
     },
    }

    Flags modifying display are as follows:

    - BTF_F_COMPACT: no formatting around type information
    - BTF_F_NONAME: no struct/union member names/types
    - BTF_F_PTR_RAW: show raw (unobfuscated) pointer values;
    equivalent to %px.
    - BTF_F_ZERO: show zero-valued struct/union members;
    they are not displayed by default

    Signed-off-by: Alan Maguire
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/1601292670-1616-4-git-send-email-alan.maguire@oracle.com

    Alan Maguire
     
  • Add .test_run for raw_tracepoint. Also, introduce a new feature that runs
    the target program on a specific CPU. This is achieved by a new flag in
    bpf_attr.test, BPF_F_TEST_RUN_ON_CPU. When this flag is set, the program
    is triggered on cpu with id bpf_attr.test.cpu. This feature is needed for
    BPF programs that handle perf_event and other percpu resources, as the
    program can access these resources locally.
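
    From user space the flag is passed via the test_run opts (a sketch
    assuming libbpf's bpf_prog_test_run_opts(); prog_fd comes from an
    already loaded program):

    #include <bpf/bpf.h>

    int run_once_on_cpu2(int prog_fd)
    {
        LIBBPF_OPTS(bpf_test_run_opts, opts,
                    .flags = BPF_F_TEST_RUN_ON_CPU,
                    .cpu   = 2);   /* trigger the program on CPU 2 */

        return bpf_prog_test_run_opts(prog_fd, &opts);
    }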

    Signed-off-by: Song Liu
    Signed-off-by: Daniel Borkmann
    Acked-by: John Fastabend
    Acked-by: Andrii Nakryiko
    Link: https://lore.kernel.org/bpf/20200925205432.1777-2-songliubraving@fb.com

    Song Liu
     

26 Sep, 2020

3 commits

  • This patch changes bpf_sk_assign() to take
    ARG_PTR_TO_BTF_ID_SOCK_COMMON such that it will also work with the
    pointer returned by the bpf_skc_to_*() helpers.

    The bpf_sk_lookup_assign() is taking ARG_PTR_TO_SOCKET_"OR_NULL". Meaning
    it specifically takes a literal NULL. ARG_PTR_TO_BTF_ID_SOCK_COMMON
    does not allow a literal NULL, so another ARG type is required
    for this purpose and another follow-up patch can be used if
    there is such need.

    Signed-off-by: Martin KaFai Lau
    Signed-off-by: Alexei Starovoitov
    Link: https://lore.kernel.org/bpf/20200925000415.3857374-1-kafai@fb.com

    Martin KaFai Lau
     
  • This patch changes the bpf_tcp_*_syncookie() to take
    ARG_PTR_TO_BTF_ID_SOCK_COMMON such that they will work with the pointer
    returned by the bpf_skc_to_*() helpers also.

    Signed-off-by: Martin KaFai Lau
    Signed-off-by: Alexei Starovoitov
    Acked-by: Lorenz Bauer
    Link: https://lore.kernel.org/bpf/20200925000409.3856725-1-kafai@fb.com

    Martin KaFai Lau
     
  • This patch changes the bpf_sk_storage_*() to take
    ARG_PTR_TO_BTF_ID_SOCK_COMMON such that they will work with the pointer
    returned by the bpf_skc_to_*() helpers also.

    A micro benchmark has been done on a "cgroup_skb/egress" bpf program
    which does a bpf_sk_storage_get(). It was driven by netperf doing
    a 4096-connection UDP_STREAM test with 64-byte packets.
    The stats from "kernel.bpf_stats_enabled" shows no meaningful difference.

    The sk_storage_get_btf_proto, sk_storage_delete_btf_proto,
    btf_sk_storage_get_proto, and btf_sk_storage_delete_proto are
    no longer needed, so they are removed.

    Signed-off-by: Martin KaFai Lau
    Signed-off-by: Alexei Starovoitov
    Acked-by: Lorenz Bauer
    Link: https://lore.kernel.org/bpf/20200925000402.3856307-1-kafai@fb.com

    Martin KaFai Lau