31 Mar, 2014

6 commits

  • This patch replaces/reworks the kernel-internal BPF interpreter with
    an optimized BPF instruction set format that is modelled more closely
    on native instruction sets and is designed to be JITed with a
    one-to-one mapping. As a result, the new interpreter is noticeably
    faster than the current implementation of sk_run_filter(), mainly for
    two reasons:

    1. Fall-through jumps:

    BPF jump instructions are forced to go to either the 'true' or the
    'false' branch, which causes a branch-miss penalty. The new BPF jump
    instructions have only one branch target and fall through otherwise,
    which fits the CPU branch predictor logic better. `perf stat`
    shows a drastic difference in branch-misses between the old and
    new code.

    2. Jump-threaded implementation of interpreter vs switch
    statement:

    Instead of a single table-jump at the top of the 'switch' statement,
    gcc will now generate multiple table-jump instructions, which
    helps the CPU branch predictor logic.
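
    As an aside, the jump-threading idea can be illustrated with a tiny,
    self-contained user-space dispatcher using GCC's labels-as-values
    extension; this is only a sketch of the technique, not the kernel's
    actual interpreter code:

    #include <stdio.h>

    enum { OP_INC, OP_DEC, OP_HALT };

    static int run(const unsigned char *prog)
    {
            /* one dispatch table entry per opcode */
            static void *jt[] = { &&do_inc, &&do_dec, &&do_halt };
            int acc = 0;

            goto *jt[*prog];                /* first dispatch */
    do_inc:
            acc++;
            goto *jt[*++prog];              /* each handler re-dispatches */
    do_dec:
            acc--;
            goto *jt[*++prog];
    do_halt:
            return acc;
    }

    int main(void)
    {
            unsigned char prog[] = { OP_INC, OP_INC, OP_DEC, OP_HALT };

            printf("%d\n", run(prog));      /* prints 1 */
            return 0;
    }

    Every handler ends in its own indirect jump, so the CPU can learn a
    separate branch-target history per opcode instead of funnelling all
    dispatches through one jump at the top of a switch.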

    Note that the verification of filters is still being done through
    sk_chk_filter() in classical BPF format, so filters from user- or
    kernel space are verified in the same way as we do now, and same
    restrictions/constraints hold as well.

    We reuse the current BPF JIT compilers in a way that this upgrade is
    fine even as is, but it nevertheless allows for a successive upgrade
    of the BPF JIT compilers to the new format.

    The internal instruction set migration is being done after the
    probing for JIT compilation, so in case JIT compilers are able to
    create a native opcode image, we're going to use that, and in all
    other cases we're doing a follow-up migration of the BPF program's
    instruction set, so that it can be transparently run in the new
    interpreter.

    In short, the *internal* format extends BPF in the following way (more
    details can be taken from the appended documentation):

    - Number of registers increases from 2 to 10
    - Register width increases from 32-bit to 64-bit
    - Conditional jt/jf targets replaced with jt/fall-through
    - Adds signed > and >= insns
    - 16 4-byte stack slots for register spill-fill replaced
    with up to 512 bytes of multi-use stack space
    - Introduction of bpf_call insn and register passing convention
    for zero overhead calls from/to other kernel functions
    - Adds arithmetic right shift and endianness conversion insns
    - Adds atomic_add insn
    - Old tax/txa insns are replaced with 'mov dst,src' insn
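
    A rough sketch of the resulting fixed-size 8-byte instruction encoding
    (field names and exact layout here are illustrative assumptions, not
    necessarily the ones used in the patch):

    #include <stdint.h>

    struct bpf_insn_sketch {
            uint8_t  code;          /* opcode */
            uint8_t  dst_reg:4;     /* destination register (one of 10) */
            uint8_t  src_reg:4;     /* source register */
            int16_t  off;           /* signed jump/branch offset */
            int32_t  imm;           /* signed 32-bit immediate constant */
    };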

    Performance of two BPF filters, generated by libpcap and bpf_asm
    respectively, was measured on x86_64, i386 and arm32 (other libpcap
    programs have similar performance differences):

    fprog #1 is taken from Documentation/networking/filter.txt:
    tcpdump -i eth0 port 22 -dd

    fprog #2 is taken from 'man tcpdump':
    tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -dd

    Raw performance data from BPF micro-benchmark: SK_RUN_FILTER on the
    same SKB (cache-hit) or 10k SKBs (cache-miss); time in ns per call,
    smaller is better:

    --x86_64--
                    fprog #1   fprog #1   fprog #2   fprog #2
                    cache-hit  cache-miss cache-hit  cache-miss
    old BPF             90        101        192        202
    new BPF             31         71         47         97
    old BPF jit         12         34         17         44
    new BPF jit        TBD

    --i386--
                    fprog #1   fprog #1   fprog #2   fprog #2
                    cache-hit  cache-miss cache-hit  cache-miss
    old BPF            107        136        227        252
    new BPF             40        119         69        172

    --arm32--
                    fprog #1   fprog #1   fprog #2   fprog #2
                    cache-hit  cache-miss cache-hit  cache-miss
    old BPF            202        300        475        540
    new BPF            180        270        330        470
    old BPF jit         26        182         37        202
    new BPF jit        TBD

    Thus, without changing any userland BPF filters, applications on
    top of AF_PACKET (or other families) such as libpcap/tcpdump, cls_bpf
    classifier, netfilter's xt_bpf, team driver's load-balancing mode,
    and many more will have better interpreter filtering performance.
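
    To illustrate that the userland ABI is untouched, here is a minimal
    sketch of attaching a classic BPF filter (the `tcpdump -dd ip` program;
    the exact return/snaplen constant may differ between tcpdump versions)
    to a packet socket exactly as before:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/filter.h>

    int main(void)
    {
            struct sock_filter code[] = {
                    { 0x28, 0, 0, 0x0000000c },  /* ldh [12] (EtherType)   */
                    { 0x15, 0, 1, 0x00000800 },  /* jeq #ETH_P_IP, L1, L2  */
                    { 0x06, 0, 0, 0x00040000 },  /* L1: ret #262144        */
                    { 0x06, 0, 0, 0x00000000 },  /* L2: ret #0 (drop)      */
            };
            struct sock_fprog bpf = {
                    .len    = sizeof(code) / sizeof(code[0]),
                    .filter = code,
            };
            int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

            if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
                                     &bpf, sizeof(bpf)) < 0)
                    perror("socket/setsockopt");
            return 0;
    }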

    While we are replacing the internal BPF interpreter, we also need
    to convert seccomp BPF in the same step to make use of the new
    internal structure since it makes use of lower-level API details
    without being further decoupled through higher-level calls like
    sk_unattached_filter_{create,destroy}(), for example.

    Just as for normal socket filtering, also seccomp BPF experiences
    a time-to-verdict speedup:

    05-sim-long_jumps.c of libseccomp was used as micro-benchmark:

    seccomp_rule_add_exact(ctx,...
    seccomp_rule_add_exact(ctx,...

    rc = seccomp_load(ctx);

    for (i = 0; i < 10000000; i++)
    syscall(199, 100);

    'short filter' has 2 rules
    'large filter' has 200 rules
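
    A hypothetical minimal libseccomp setup along these lines (not the
    exact 05-sim-long_jumps.c code; link with -lseccomp) could look like:

    #include <errno.h>
    #include <unistd.h>
    #include <seccomp.h>

    int main(void)
    {
            scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);

            if (!ctx)
                    return 1;
            /* one deny rule; the 'large filter' case adds ~200 of these */
            if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM),
                                 SCMP_SYS(fchmod), 0) ||
                seccomp_load(ctx))
                    return 1;

            for (int i = 0; i < 1000000; i++)
                    getuid();       /* cheap, allowed syscall in a loop */

            seccomp_release(ctx);
            return 0;
    }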

    'short filter' performance is slightly better on x86_64/i386/arm32
    'large filter' is much faster on x86_64 and i386 and shows no
    difference on arm32

    --x86_64-- short filter
    old BPF: 2.7 sec
    39.12% bench libc-2.15.so [.] syscall
    8.10% bench [kernel.kallsyms] [k] sk_run_filter
    6.31% bench [kernel.kallsyms] [k] system_call
    5.59% bench [kernel.kallsyms] [k] trace_hardirqs_on_caller
    4.37% bench [kernel.kallsyms] [k] trace_hardirqs_off_caller
    3.70% bench [kernel.kallsyms] [k] __secure_computing
    3.67% bench [kernel.kallsyms] [k] lock_is_held
    3.03% bench [kernel.kallsyms] [k] seccomp_bpf_load
    new BPF: 2.58 sec
    42.05% bench libc-2.15.so [.] syscall
    6.91% bench [kernel.kallsyms] [k] system_call
    6.25% bench [kernel.kallsyms] [k] trace_hardirqs_on_caller
    6.07% bench [kernel.kallsyms] [k] __secure_computing
    5.08% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp

    --arm32-- short filter
    old BPF: 4.0 sec
    39.92% bench [kernel.kallsyms] [k] vector_swi
    16.60% bench [kernel.kallsyms] [k] sk_run_filter
    14.66% bench libc-2.17.so [.] syscall
    5.42% bench [kernel.kallsyms] [k] seccomp_bpf_load
    5.10% bench [kernel.kallsyms] [k] __secure_computing
    new BPF: 3.7 sec
    35.93% bench [kernel.kallsyms] [k] vector_swi
    21.89% bench libc-2.17.so [.] syscall
    13.45% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp
    6.25% bench [kernel.kallsyms] [k] __secure_computing
    3.96% bench [kernel.kallsyms] [k] syscall_trace_exit

    --x86_64-- large filter
    old BPF: 8.6 seconds
    73.38% bench [kernel.kallsyms] [k] sk_run_filter
    10.70% bench libc-2.15.so [.] syscall
    5.09% bench [kernel.kallsyms] [k] seccomp_bpf_load
    1.97% bench [kernel.kallsyms] [k] system_call
    new BPF: 5.7 seconds
    66.20% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp
    16.75% bench libc-2.15.so [.] syscall
    3.31% bench [kernel.kallsyms] [k] system_call
    2.88% bench [kernel.kallsyms] [k] __secure_computing

    --i386-- large filter
    old BPF: 5.4 sec
    new BPF: 3.8 sec

    --arm32-- large filter
    old BPF: 13.5 sec
    73.88% bench [kernel.kallsyms] [k] sk_run_filter
    10.29% bench [kernel.kallsyms] [k] vector_swi
    6.46% bench libc-2.17.so [.] syscall
    2.94% bench [kernel.kallsyms] [k] seccomp_bpf_load
    1.19% bench [kernel.kallsyms] [k] __secure_computing
    0.87% bench [kernel.kallsyms] [k] sys_getuid
    new BPF: 13.5 sec
    76.08% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp
    10.98% bench [kernel.kallsyms] [k] vector_swi
    5.87% bench libc-2.17.so [.] syscall
    1.77% bench [kernel.kallsyms] [k] __secure_computing
    0.93% bench [kernel.kallsyms] [k] sys_getuid

    BPF filters generated by seccomp are very branchy, so the new
    internal BPF performance is better than the old one. Performance
    gains will be even higher when BPF JIT is committed for the
    new structure, which is planned in future work (as successive
    JIT migrations).

    BPF has also been stress-tested with trinity's BPF fuzzer.

    Joint work with Daniel Borkmann.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Cc: Hagen Paul Pfeifer
    Cc: Kees Cook
    Cc: Paul Moore
    Cc: Ingo Molnar
    Cc: H. Peter Anvin
    Cc: linux-kernel@vger.kernel.org
    Acked-by: Kees Cook
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     
  • There are currently pch_gbe, cpts, and ixp4xx_eth drivers that open-code
    and reimplement a BPF classifier for the PTP protocol. Since all of them
    effectively do the very same thing and load the very same PTP/BPF filter,
    we can just consolidate that code by introducing ptp_classify_raw() in
    the time-stamping core framework which can be used in drivers.

    As drivers get initialized after bootstrapping the core networking
    subsystem, they can make use of ptp_insns wrapped through
    ptp_classify_raw(), which allows us to simplify and remove the PTP
    classifier setup code in drivers.
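
    A hypothetical driver-side sketch of the consolidated usage (kernel
    context, not compilable on its own; assumes linux/ptp_classify.h):

    static bool my_skb_is_ptp(struct sk_buff *skb)
    {
            /* shared classifier instead of a private, open-coded BPF prog */
            return ptp_classify_raw(skb) != PTP_CLASS_NONE;
    }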

    Joint work with Alexei Starovoitov.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Alexei Starovoitov
    Cc: Richard Cochran
    Cc: Jiri Benc
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • This patch migrates an open-coded sk_run_filter() implementation to
    proper use of the BPF API, that is, sk_unattached_filter_create(). This
    migration is needed, as we will be internally transforming the filter
    to a different representation, and the code therefore needs to be
    decoupled.

    It is okay to do so as skb_timestamping_init() is called during
    initialization of the network stack in core initcall via sock_init().
    This would effectively also allow for PTP filters to be JIT-compiled if
    bpf_jit_enable is set.

    For better readability, some newlines are also introduced; note that
    ptp_classify.h is used only in kernel space.

    Joint work with Alexei Starovoitov.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Alexei Starovoitov
    Cc: Richard Cochran
    Cc: Jiri Benc
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • This patch basically does two things, i) removes the extern keyword
    from the include/linux/filter.h file to be more consistent with the
    rest of Joe's changes, and ii) moves filter accounting into the filter
    core framework.

    Filter accounting, mainly done through sk_filter_{un,}charge(), takes
    care of the case when sockets are being cloned through sk_clone_lock()
    so that removal of the filter on one socket won't result in eviction
    as it's still referenced by the other.

    These functions actually belong to net/core/filter.c and not
    include/net/sock.h as we want to keep all that in a central place.
    It's also not in the fast path, so uninlining them is fine and even
    allows us to get rid of sk_filter_release_rcu()'s EXPORT_SYMBOL and a
    forward declaration.

    Joint work with Alexei Starovoitov.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Alexei Starovoitov
    Cc: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • In order to open up the possibility to internally transform a BPF program
    into an alternative and possibly non-trivial reversible representation, we
    need to keep the original BPF program around, so that it can be passed back
    to user space w/o the need of a complex decoder.

    The reason for that use case resides in commit a8fc92778080 ("sk-filter:
    Add ability to get socket filter program (v2)"), that is, the ability
    to retrieve the currently attached BPF filter from a given socket used
    mainly by the checkpoint-restore project, for example.

    Therefore, we add two helpers sk_{store,release}_orig_filter for taking
    care of that. In the sk_unattached_filter_create() case, there's no such
    possibility/requirement to retrieve a loaded BPF program. Therefore, we
    can spare us the work in that case.

    This approach will simplify and slightly speed up both the sk_get_filter()
    and sock_diag_put_filterinfo() handlers, as we won't need to successively
    decode filters anymore through sk_decode_filter(). As we still need
    sk_decode_filter() later on, we're keeping it around.

    Joint work with Alexei Starovoitov.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Cc: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • This patch adds a jited flag into sk_filter struct in order to indicate
    whether a filter is currently jited or not. The size of sk_filter is
    not being expanded as the 32 bit 'len' member allows upper bits to be
    reused since a filter can currently only grow as large as BPF_MAXINSNS.

    Therefore, there's also enough room for other flags needed in the future
    to reuse the 'len' field if necessary. The jited flag also allows for
    having alternative interpreter functions running, as currently we can
    only detect JIT-compiled filters by testing whether fp->bpf_func does
    not equal the address of sk_run_filter().
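
    As a rough illustration of the approach (layout here is an assumption,
    not the exact struct from the patch), folding the flag into the upper
    bits of 'len' via a bitfield keeps the struct size unchanged:

    #include <stdint.h>

    struct filter_sketch {
            uint32_t jited:1,       /* set once a JIT image was emitted   */
                     len:31;        /* number of blocks (<= BPF_MAXINSNS) */
    };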

    Joint work with Alexei Starovoitov.

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: Daniel Borkmann
    Cc: Pablo Neira Ayuso
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

30 Mar, 2014

10 commits

  • Conflicts:
    drivers/net/ethernet/marvell/mvneta.c

    The mvneta.c conflict is a case of overlapping changes,
    a conversion to devm_ioremap_resource() vs. a conversion
    to netdev_alloc_pcpu_stats.

    Signed-off-by: David S. Miller

    David S. Miller
     
  • ERROR: "(foo*)" should be "(foo *)"
    ERROR: "foo * bar" should be "foo *bar"

    Suggested-by: Sergei Shtylyov
    Signed-off-by: Wang Yufen
    Acked-by: Sergei Shtylyov
    Signed-off-by: David S. Miller

    Wang Yufen
     
  • ERROR: open brace '{' following enum go on the same line
    ERROR: open brace '{' following struct go on the same line
    ERROR: trailing statements should be on next line

    Signed-off-by: Wang Yufen
    Signed-off-by: David S. Miller

    Wang Yufen
     
  • WARNING: please, no space before tabs
    WARNING: please, no spaces at the start of a line
    ERROR: spaces required around that ':' (ctx:VxW)
    ERROR: spaces required around that '>' (ctx:VxV)
    ERROR: spaces required around that '>=' (ctx:VxV)

    Signed-off-by: Wang Yufen
    Signed-off-by: David S. Miller

    Wang Yufen
     
  • Stop taking the transmit lock when a network device has specified
    NETIF_F_LLTX.

    If no locks are needed to transmit a packet, this is the ideal scenario
    for netpoll, as all packets can be transmitted immediately.

    Even if some locks are needed in ndo_start_xmit, skipping any unnecessary
    serialization is desirable for netpoll, as it makes it more likely that a
    debugging packet may be transmitted immediately instead of being
    deferred until later.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • Remove the assumption that the skbs that make it to
    netpoll_send_skb_on_dev are allocated with find_skb, such that
    skb->users == 1 and nothing is attached that would prevent the skbs from
    being freed from hard irq context.

    Remove this assumption by replacing __kfree_skb on error paths with
    dev_kfree_skb_irq (in hard irq context) and kfree_skb (in process
    context).
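
    The rule of thumb behind the change can be sketched as follows (kernel
    context, illustrative only; this is the same idiom dev_kfree_skb_any()
    is built around, not a copy of the netpoll code):

    if (in_irq() || irqs_disabled())
            dev_kfree_skb_irq(skb);   /* defer the actual free via softirq */
    else
            kfree_skb(skb);           /* safe to free directly */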

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • The netpoll_rx_enable and netpoll_rx_disable functions have always
    controlled polling of the network driver's transmit and receive queues.

    Rename them to netpoll_poll_enable and netpoll_poll_disable to make
    their functionality clear.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • Today netpoll_rx_enable and netpoll_rx_disable are called from
    dev_close and __dev_close, and not from dev_close_many.

    Move the calls into __dev_close_many so that we have a single call
    site to maintain, and so that dev_close_many gains this protection as
    well. Which importantly makes batched network device deletes safe.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • Factor out the code that needs to surround ndo_start_xmit
    from netpoll_send_skb_on_dev into netpoll_start_xmit.

    It is an unfortunate fact that, as the netpoll code has been maintained,
    the primary call site of ndo_start_xmit learned how to handle vlans
    and timestamps, but the second call of ndo_start_xmit in queue_process
    did not.

    With the introduction of netpoll_start_xmit this associated logic now
    happens at both call sites of ndo_start_xmit and should make it easy
    for that to continue into the future.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • The gfp parameter was added in:
    commit 47be03a28cc6c80e3aa2b3e8ed6d960ff0c5c0af
    Author: Amerigo Wang
    Date: Fri Aug 10 01:24:37 2012 +0000

    netpoll: use GFP_ATOMIC in slave_enable_netpoll() and __netpoll_setup()

    slave_enable_netpoll() and __netpoll_setup() may be called
    with read_lock() held, so should use GFP_ATOMIC to allocate
    memory. Eric suggested to pass gfp flags to __netpoll_setup().

    Cc: Eric Dumazet
    Cc: "David S. Miller"
    Reported-by: Dan Carpenter
    Signed-off-by: Eric Dumazet
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    The reason for the gfp parameter was removed in:
    commit c4cdef9b7183159c23c7302aaf270d64c549f557
    Author: dingtianhong
    Date: Tue Jul 23 15:25:27 2013 +0800

    bonding: don't call slave_xxx_netpoll under spinlocks

    The slave_xxx_netpoll will call synchronize_rcu_bh(),
    so the function may schedule and sleep, it shouldn't be
    called under spinlocks.

    bond_netpoll_setup() and bond_netpoll_cleanup() are always
    protected by rtnl lock, it is no need to take the read lock,
    as the slave list couldn't be changed outside rtnl lock.

    Signed-off-by: Ding Tianhong
    Cc: Jay Vosburgh
    Cc: Andy Gospodarek
    Signed-off-by: David S. Miller

    Nothing else that calls __netpoll_setup or ndo_netpoll_setup
    requires a gfp parameter, so remove the gfp parameter from both
    of these functions, making the code clearer.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     

29 Mar, 2014

12 commits

  • Some drivers incorrectly assign vlan acceleration features to
    vlan_features thus causing issues for Q-in-Q vlan configurations.
    Warn the user of such cases.

    Signed-off-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Vlad Yasevich
     
  • When vlan filtering is enabled on the bridge, but
    the filter is not configured on the bridge device itself,
    running tcpdump on the bridge device will result in
    an Oops with a NULL pointer dereference. The reason
    is that br_pass_frame_up() will bypass the vlan
    check because the promisc flag is set. It will then try
    to get the table pointer and process the packet based
    on the table. Since the table pointer is NULL, we oops.
    Catch this special condition in br_handle_vlan().

    Reported-by: Toshiaki Makita
    CC: Toshiaki Makita
    Signed-off-by: Vlad Yasevich
    Acked-by: Toshiaki Makita
    Signed-off-by: David S. Miller

    Vlad Yasevich
     
  • skb_network_protocol() already accounts for multiple vlan
    headers that may be present in the skb. However, skb_mac_gso_segment()
    doesn't know anything about it and assumes that skb->mac_len
    is set correctly to skip all mac headers. That may not
    always be the case. If we are simply forwarding the packet (via
    bridge or macvtap), all vlan headers may not be accounted for.

    A simple solution is to allow skb_network_protocol to return
    the vlan depth it has calculated. This way skb_mac_gso_segment
    will correctly skip all mac headers.
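
    The idea can be sketched in plain user-space C (illustrative only, not
    the kernel function; frame_protocol() and its callers are hypothetical):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <arpa/inet.h>

    #define ETH_P_8021Q_SK   0x8100
    #define ETH_P_8021AD_SK  0x88A8

    /* Walk stacked 802.1Q/802.1ad tags and report both the inner protocol
     * and the full MAC header length, so callers can skip it correctly. */
    static uint16_t frame_protocol(const uint8_t *frame, size_t len, int *depth)
    {
            size_t off = 12;        /* EtherType of the outer header */
            uint16_t raw, proto;

            for (;;) {
                    if (off + 2 > len)
                            return 0;
                    memcpy(&raw, frame + off, 2);
                    proto = ntohs(raw);
                    if (proto != ETH_P_8021Q_SK && proto != ETH_P_8021AD_SK)
                            break;
                    off += 4;       /* skip one 4-byte VLAN tag */
            }
            *depth = off + 2;       /* 14 without tags, 18 with one, ... */
            return proto;
    }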

    Signed-off-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Vlad Yasevich
     
  • addrconf_join_solict and addrconf_join_anycast may cause actions which
    need rtnl locked, especially on first address creation.

    A new DAD state is introduced which defers processing of the initial
    DAD processing into a workqueue.

    To get rtnl lock we need to push the code paths which depend on those
    calls up to workqueues, specifically addrconf_verify and the DAD
    processing.
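
    The general pattern is the usual "defer to process context" one (a
    kernel-context sketch, not the actual ipv6 code; names are made up):

    static struct delayed_work dad_work;

    static void dad_work_fn(struct work_struct *work)
    {
            rtnl_lock();
            /* DAD / address configuration steps that need RTNL go here */
            rtnl_unlock();
    }

    static void dad_defer(void)
    {
            INIT_DELAYED_WORK(&dad_work, dad_work_fn);
            schedule_delayed_work(&dad_work, 0);
    }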

    (v2)
    addrconf_dad_failure needs to be queued up to the workqueue, too. This
    patch introduces a new DAD state and stops the DAD processing in the
    workqueue (this is because of the possible ipv6_del_addr processing
    which removes the solicited multicast address from the device).

    addrconf_verify_lock is removed, too. After the transition it is not
    needed any more.

    As we are not processing in bottom half context anymore, we need to be a
    bit more careful about disabling bottom halves when we take spin_locks
    which are also used in bh context.

    Relevant backtrace:
    [ 541.030090] RTNL: assertion failed at net/core/dev.c (4496)
    [ 541.031143] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G O 3.10.33-1-amd64-vyatta #1
    [ 541.031145] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
    [ 541.031146] ffffffff8148a9f0 000000000000002f ffffffff813c98c1 ffff88007c4451f8
    [ 541.031148] 0000000000000000 0000000000000000 ffffffff813d3540 ffff88007fc03d18
    [ 541.031150] 0000880000000006 ffff88007c445000 ffffffffa0194160 0000000000000000
    [ 541.031152] Call Trace:
    [ 541.031153] [] ? dump_stack+0xd/0x17
    [ 541.031180] [] ? __dev_set_promiscuity+0x101/0x180
    [ 541.031183] [] ? __hw_addr_create_ex+0x60/0xc0
    [ 541.031185] [] ? __dev_set_rx_mode+0xaa/0xc0
    [ 541.031189] [] ? __dev_mc_add+0x61/0x90
    [ 541.031198] [] ? igmp6_group_added+0xfc/0x1a0 [ipv6]
    [ 541.031208] [] ? kmem_cache_alloc+0xcb/0xd0
    [ 541.031212] [] ? ipv6_dev_mc_inc+0x267/0x300 [ipv6]
    [ 541.031216] [] ? addrconf_join_solict+0x2e/0x40 [ipv6]
    [ 541.031219] [] ? ipv6_dev_ac_inc+0x159/0x1f0 [ipv6]
    [ 541.031223] [] ? addrconf_join_anycast+0x92/0xa0 [ipv6]
    [ 541.031226] [] ? __ipv6_ifa_notify+0x11e/0x1e0 [ipv6]
    [ 541.031229] [] ? ipv6_ifa_notify+0x33/0x50 [ipv6]
    [ 541.031233] [] ? addrconf_dad_completed+0x28/0x100 [ipv6]
    [ 541.031241] [] ? task_cputime+0x2d/0x50
    [ 541.031244] [] ? addrconf_dad_timer+0x136/0x150 [ipv6]
    [ 541.031247] [] ? addrconf_dad_completed+0x100/0x100 [ipv6]
    [ 541.031255] [] ? call_timer_fn.isra.22+0x2a/0x90
    [ 541.031258] [] ? addrconf_dad_completed+0x100/0x100 [ipv6]

    Hunks and backtrace stolen from a patch by Stephen Hemminger.

    Reported-by: Stephen Hemminger
    Signed-off-by: Stephen Hemminger
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Hannes Frederic Sowa
     
  • Dropping packets in __dev_queue_xmit() when the transmit queue
    is stopped (NIC TX ring buffer full or BQL limit reached) currently
    outputs a syslog message.

    It would be better to get a precise count of such events available in
    netdevice stats so that monitoring tools can have a clue.

    This extends the work done in caf586e5f23ce
    ("net: add a core netdev->rx_dropped counter")

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Quite often it can be useful to test with dummy or similar
    devices as a blackhole sink for skbs. Such devices are only
    equipped with a single txq, but marked as NETIF_F_LLTX as
    they do not require locking their internal queues on xmit
    (or implement locking themselves). Therefore, use the
    HARD_TX_{UN,}LOCK API instead, so that NETIF_F_LLTX will be respected.
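
    A kernel-context sketch of the transmit path with these helpers
    (illustrative; assumes dev, skb and a selected txq are already at hand):

    HARD_TX_LOCK(dev, txq, smp_processor_id());
    if (!netif_xmit_frozen_or_stopped(txq))
            ret = dev->netdev_ops->ndo_start_xmit(skb, dev);
    HARD_TX_UNLOCK(dev, txq);

    For a NETIF_F_LLTX device, HARD_TX_{UN,}LOCK simply skip taking the
    per-queue _xmit_lock.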

    trafgen mmap/TX_RING example against dummy device with config
    foo: { fill(0xff, 64) } results in the following performance
    improvements for such scenarios on an ordinary Core i7/2.80GHz:

    Before:

    Performance counter stats for 'trafgen -i foo -o du0 -n100000000' (10 runs):

    160,975,944,159 instructions:k # 0.55 insns per cycle ( +- 0.09% )
    293,319,390,278 cycles:k # 0.000 GHz ( +- 0.35% )
    192,501,104 branch-misses:k ( +- 1.63% )
    831 context-switches:k ( +- 9.18% )
    7 cpu-migrations:k ( +- 7.40% )
    69,382 cache-misses:k # 0.010 % of all cache refs ( +- 2.18% )
    671,552,021 cache-references:k ( +- 1.29% )

    22.856401569 seconds time elapsed ( +- 0.33% )

    After:

    Performance counter stats for 'trafgen -i foo -o du0 -n100000000' (10 runs):

    133,788,739,692 instructions:k # 0.92 insns per cycle ( +- 0.06% )
    145,853,213,256 cycles:k # 0.000 GHz ( +- 0.17% )
    59,867,100 branch-misses:k ( +- 4.72% )
    384 context-switches:k ( +- 3.76% )
    6 cpu-migrations:k ( +- 6.28% )
    70,304 cache-misses:k # 0.077 % of all cache refs ( +- 1.73% )
    90,879,408 cache-references:k ( +- 1.35% )

    11.719372413 seconds time elapsed ( +- 0.24% )

    Signed-off-by: Daniel Borkmann
    Cc: Jesper Dangaard Brouer
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • It seems I missed one change in get_timewait4_sock() to compute
    the remaining time before deletion of IPV4 timewait socket.

    This could result in wrong output in /proc/net/tcp for tm->when field.

    Fixes: 96f817fedec4 ("tcp: shrink tcp6_timewait_sock by one cache line")
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • There are two problematic situations.

    A deadlock can happen when is_percpu is false because it can get
    interrupted while holding the spinlock. Then it executes
    ovs_flow_stats_update() in softirq context which tries to get
    the same lock.

    The second situation is that when is_percpu is true, the code
    correctly disables BH but only for the local CPU, so the
    following can happen when locking the remote CPU without
    disabling BH:

         CPU#0                               CPU#1
    ovs_flow_stats_get()
      stats_read()
     +->spin_lock remote CPU#1          ovs_flow_stats_get()
     |                                    stats_read()
     |  ...                              +--> spin_lock remote CPU#0
     |                                   |
     | ovs_flow_stats_update()           | ...
     |   spin_lock local CPU#0

    =================================
    [ INFO: inconsistent lock state ]
    3.14.0-rc8-00007-g632b06a #1 Tainted: G I
    ---------------------------------
    inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
    swapper/0/0 [HC0[0]:SC1[5]:HE1:SE0] takes:
    (&(&cpu_stats->lock)->rlock){+.?...}, at: [] ovs_flow_stats_update+0x51/0xd0 [openvswitch]
    {SOFTIRQ-ON-W} state was registered at:
    [] __lock_acquire+0x68f/0x1c40
    [] lock_acquire+0xa2/0x1d0
    [] _raw_spin_lock+0x3e/0x80
    [] ovs_flow_stats_get+0xc4/0x1e0 [openvswitch]
    [] ovs_flow_cmd_fill_info+0x185/0x360 [openvswitch]
    [] ovs_flow_cmd_build_info.constprop.27+0x55/0x90 [openvswitch]
    [] ovs_flow_cmd_new_or_set+0x4dd/0x570 [openvswitch]
    [] genl_family_rcv_msg+0x1cd/0x3f0
    [] genl_rcv_msg+0x8e/0xd0
    [] netlink_rcv_skb+0xa9/0xc0
    [] genl_rcv+0x28/0x40
    [] netlink_unicast+0x100/0x1e0
    [] netlink_sendmsg+0x347/0x770
    [] sock_sendmsg+0x9c/0xe0
    [] ___sys_sendmsg+0x3a9/0x3c0
    [] __sys_sendmsg+0x51/0x90
    [] SyS_sendmsg+0x12/0x20
    [] system_call_fastpath+0x16/0x1b
    irq event stamp: 1740726
    hardirqs last enabled at (1740726): [] ip6_finish_output2+0x4f0/0x840
    hardirqs last disabled at (1740725): [] ip6_finish_output2+0x4ab/0x840
    softirqs last enabled at (1740674): [] _local_bh_enable+0x22/0x50
    softirqs last disabled at (1740675): [] irq_exit+0xc5/0xd0

    other info that might help us debug this:
    Possible unsafe locking scenario:

    CPU0
    ----
    lock(&(&cpu_stats->lock)->rlock);

    lock(&(&cpu_stats->lock)->rlock);

    *** DEADLOCK ***

    5 locks held by swapper/0/0:
    #0: (((&ifa->dad_timer))){+.-...}, at: [] call_timer_fn+0x5/0x320
    #1: (rcu_read_lock){.+.+..}, at: [] mld_sendpack+0x5/0x4a0
    #2: (rcu_read_lock_bh){.+....}, at: [] ip6_finish_output2+0x59/0x840
    #3: (rcu_read_lock_bh){.+....}, at: [] __dev_queue_xmit+0x5/0x9b0
    #4: (rcu_read_lock){.+.+..}, at: [] internal_dev_xmit+0x5/0x110 [openvswitch]

    stack backtrace:
    CPU: 0 PID: 0 Comm: swapper/0 Tainted: G I 3.14.0-rc8-00007-g632b06a #1
    Hardware name: /DX58SO, BIOS SOX5810J.86A.5599.2012.0529.2218 05/29/2012
    0000000000000000 0fcf20709903df0c ffff88042d603808 ffffffff817cfe3c
    ffffffff81c134c0 ffff88042d603858 ffffffff817cb6da 0000000000000005
    ffffffff00000001 ffff880400000000 0000000000000006 ffffffff81c134c0
    Call Trace:
    [] dump_stack+0x4d/0x66
    [] print_usage_bug+0x1f4/0x205
    [] ? check_usage_backwards+0x180/0x180
    [] mark_lock+0x223/0x2b0
    [] __lock_acquire+0x623/0x1c40
    [] ? __lock_is_held+0x57/0x80
    [] ? masked_flow_lookup+0x236/0x250 [openvswitch]
    [] lock_acquire+0xa2/0x1d0
    [] ? ovs_flow_stats_update+0x51/0xd0 [openvswitch]
    [] _raw_spin_lock+0x3e/0x80
    [] ? ovs_flow_stats_update+0x51/0xd0 [openvswitch]
    [] ovs_flow_stats_update+0x51/0xd0 [openvswitch]
    [] ovs_dp_process_received_packet+0x84/0x120 [openvswitch]
    [] ? __lock_acquire+0x347/0x1c40
    [] ovs_vport_receive+0x2a/0x30 [openvswitch]
    [] internal_dev_xmit+0x68/0x110 [openvswitch]
    [] ? internal_dev_xmit+0x5/0x110 [openvswitch]
    [] dev_hard_start_xmit+0x2e6/0x8b0
    [] __dev_queue_xmit+0x417/0x9b0
    [] ? __dev_queue_xmit+0x5/0x9b0
    [] ? ip6_finish_output2+0x4f0/0x840
    [] dev_queue_xmit+0x10/0x20
    [] ip6_finish_output2+0x551/0x840
    [] ? ip6_finish_output+0x9a/0x220
    [] ip6_finish_output+0x9a/0x220
    [] ip6_output+0x4f/0x1f0
    [] mld_sendpack+0x1d9/0x4a0
    [] mld_send_initial_cr.part.32+0x88/0xa0
    [] ? addrconf_dad_completed+0x220/0x220
    [] ipv6_mc_dad_complete+0x31/0x50
    [] addrconf_dad_completed+0x147/0x220
    [] ? addrconf_dad_completed+0x220/0x220
    [] addrconf_dad_timer+0x19f/0x1c0
    [] call_timer_fn+0x99/0x320
    [] ? call_timer_fn+0x5/0x320
    [] ? addrconf_dad_completed+0x220/0x220
    [] run_timer_softirq+0x254/0x3b0
    [] __do_softirq+0x12d/0x480

    Signed-off-by: Flavio Leitner
    Signed-off-by: David S. Miller

    Flavio Leitner
     
  • If a bridge with vlan_filtering enabled receives frames with stacked
    vlan tags, i.e., they have two vlan tags, br_vlan_untag() strips not
    only the outer tag but also the inner tag.

    br_vlan_untag() is called only from br_handle_vlan(), and in this case,
    it is enough to set skb->vlan_tci to 0 here, because vlan_tci has already
    been set before calling br_handle_vlan().

    Signed-off-by: Toshiaki Makita
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Toshiaki Makita
     
  • Bridge vlan code (br_vlan_get_tag()) assumes that all frames have vlan_tci
    if they are tagged, but if vlan tx offload is manually disabled on the
    bridge device and frames are sent from a vlan device on the bridge
    device, the tags
    are embedded in skb->data and they break this assumption.
    Extract embedded vlan tags and move them to vlan_tci at ingress.

    Signed-off-by: Toshiaki Makita
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Toshiaki Makita
     
  • The node discovery domain is assigned when a bearer is enabled.
    In the previous commit we reflect this attribute directly in the
    bearer structure since it's needed to reinitialize the node
    discovery mechanism after a hardware address change.

    There's no need to replicate this attribute anywhere else, so we
    remove it from the tipc_link_req structure.

    Signed-off-by: Erik Hugne
    Reviewed-by: Ying Xue
    Signed-off-by: David S. Miller

    Erik Hugne
     
  • If the hardware address of an underlying netdevice is changed, it is
    not enough to simply reset the bearer/links over this device. We
    also need to reflect this change in the TIPC bearer and node
    discovery structures as well.

    This patch adds the necessary reinitialization of the node discovery
    mechanism following a hardware address change so that the correct
    originating media address is advertised in the discovery messages.

    Signed-off-by: Erik Hugne
    Reported-by: Dong Liu
    Reviewed-by: Ying Xue
    Signed-off-by: David S. Miller

    Erik Hugne
     

28 Mar, 2014

6 commits

  • skb_zerocopy can copy elements of the frags array between skbs, but it doesn't
    orphan them. Also, it doesn't handle errors, so this patch takes care of that
    as well, and modifies the callers accordingly. skb_tx_error() is also added to
    the callers so they will signal the failed delivery towards the creator of the
    skb.

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David S. Miller

    Zoltan Kiss
     
  • Use del_timer_sync to ensure that the timer is stopped on all CPUs before
    the driver exits.

    This change was suggested by Thomas Gleixner.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @r@
    declarer name module_exit;
    identifier ex;
    @@

    module_exit(ex);

    @@
    identifier r.ex;
    @@

    ex(...) {

    }
    //

    Signed-off-by: Julia Lawall
    Signed-off-by: David S. Miller

    Julia Lawall
     
  • Use del_timer_sync to ensure that the timer is stopped on all CPUs before
    the driver exits.

    This change was suggested by Thomas Gleixner.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @r@
    declarer name module_exit;
    identifier ex;
    @@

    module_exit(ex);

    @@
    identifier r.ex;
    @@

    ex(...) {

    }
    //

    Signed-off-by: Julia Lawall
    Signed-off-by: David S. Miller

    Julia Lawall
     
  • There is no need to allocate 15 bytes in excess for a SYNACK packet,
    as it contains no data, only headers.

    SYNACKs are always generated in softirq context and contain a single
    segment, so we can use TCP_INC_STATS_BH().

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • If an IPv6 host route with metrics exists, an attempt to add a
    new route for the same target with different metrics fails but
    rewrites the metrics anyway:

    12sp0:~ # ip route add fec0::1 dev eth0 rto_min 1000
    12sp0:~ # ip -6 route show
    fe80::/64 dev eth0 proto kernel metric 256
    fec0::1 dev eth0 metric 1024 rto_min lock 1s
    12sp0:~ # ip route add fec0::1 dev eth0 rto_min 1500
    RTNETLINK answers: File exists
    12sp0:~ # ip -6 route show
    fe80::/64 dev eth0 proto kernel metric 256
    fec0::1 dev eth0 metric 1024 rto_min lock 1.5s

    This is caused by all IPv6 host routes using the metrics in
    their inetpeer (or the shared default). This also holds for the
    new route created in ip6_route_add() which shares the metrics
    with the already existing route and thus ip6_route_add()
    rewrites the metrics even if the new route ends up not being
    used at all.

    Another problem is that old metrics in inetpeer can reappear
    unexpectedly for a new route, e.g.

    12sp0:~ # ip route add fec0::1 dev eth0 rto_min 1000
    12sp0:~ # ip route del fec0::1
    12sp0:~ # ip route add fec0::1 dev eth0
    12sp0:~ # ip route change fec0::1 dev eth0 hoplimit 10
    12sp0:~ # ip -6 route show
    fe80::/64 dev eth0 proto kernel metric 256
    fec0::1 dev eth0 metric 1024 hoplimit 10 rto_min lock 1s

    Resolve the first problem by moving the setting of metrics down
    into fib6_add_rt2node() to the point we are sure we are
    inserting the new route into the tree. Second problem is
    addressed by introducing new flag DST_METRICS_FORCE_OVERWRITE
    which is set for a new host route in ip6_route_add() and makes
    ipv6_cow_metrics() always overwrite the metrics in inetpeer
    (even if they are not "new"); it is reset after that.

    v5: use a flag in _metrics member rather than one in flags

    v4: fix a typo making a condition always true (thanks to Hannes
    Frederic Sowa)

    v3: rewritten based on David Miller's idea to move setting the
    metrics (and allocation in non-host case) down to the point we
    already know the route is to be inserted. Also rebased to
    net-next as it is quite late in the cycle.

    Signed-off-by: Michal Kubecek
    Acked-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Michal Kubeček
     
  • Currently, if the card supports CTAG acceleration we do not
    account for the vlan header even if we are configuring an
    8021AD vlan. This may not be best since we'll do software
    tagging for 8021AD, which will cause a data copy on skb head expansion.
    Configure the length based on available hw offload capabilities and
    vlan protocol.

    CC: Patrick McHardy
    Signed-off-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Vlad Yasevich