01 Apr, 2014
11 commits
-
This allows monitoring carrier on/off transitions and detecting link
flapping issues:
- new /sys/class/net/X/carrier_changes
- new rtnetlink IFLA_CARRIER_CHANGES (getlink)
Tested:
- grep . /sys/class/net/*/carrier_changes
  + ip link set dev X down/up
  + plug/unplug cable
- updated iproute2: prints IFLA_CARRIER_CHANGES
- iproute2 20121211-2 (debian): unchanged behavior
Signed-off-by: David Decotigny
Signed-off-by: David S. Miller
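As a point of reference, the counter can be consumed from userspace by
reading the new attribute; a minimal C sketch (the interface name
"eth0" is an arbitrary example, not part of the patch):

  #include <stdio.h>

  int main(void)
  {
          unsigned long changes;
          FILE *f = fopen("/sys/class/net/eth0/carrier_changes", "r");

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          if (fscanf(f, "%lu", &changes) == 1)
                  printf("eth0 saw %lu carrier transitions\n", changes);
          fclose(f);
          return 0;
  }

-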
The issue arises when adding a policy route that specifies a particular
NIC as oif: the policy route does not take effect. The reason is
that fl6.oif is not set, so the route lookup fails. In the
tcp_v6_send_response function, fl6.oif is set if the bound address is
link-local, but not for a global address.
Acked-by: Hannes Frederic Sowa
Signed-off-by: Wang Yufen
Signed-off-by: David S. Miller
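A hedged sketch of the shape of the fix (the helper name is invented
and its exact placement in tcp_v6_send_response() is an assumption for
illustration; flowi6_oif and sk_bound_dev_if are the real fields):

  /* ensure the policy-route lookup can match on oif for global
   * addresses too, not just link-local ones */
  static void set_flow_oif(struct flowi6 *fl6, const struct sock *sk)
  {
          if (!fl6->flowi6_oif && sk)
                  fl6->flowi6_oif = sk->sk_bound_dev_if;
  }

-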
Move the whole of rt6_need_strict into ip6_route.h as a static inline,
so that it can be reused.
Signed-off-by: Wang Yufen
Signed-off-by: David S. Miller
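For reference, the helper being moved, reproduced as a sketch from this
era's net/ipv6/route.c (treat the exact flag set as from memory): a
destination needs a strict, scope-aware lookup when it is multicast,
link-local or loopback.

  static inline bool rt6_need_strict(const struct in6_addr *daddr)
  {
          return ipv6_addr_type(daddr) &
                 (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL |
                  IPV6_ADDR_LOOPBACK);
  }

-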
Signed-off-by: Wang Yufen
Signed-off-by: David S. Miller
-
Florian Fainelli says:
====================
net: document sysfs entries
This patchset attempts to document the basic set of sysfs entries that
are exposed by netdevices in /sys/class/net/<iface>/
I did not go back beyond the pre-git era, so the oldest entries are
marked with the 2.6.12 kernel version and dated April 2005.
Future patches will document the queues/ and statistics/ directories as
well.
====================
Signed-off-by: David S. Miller
-
NET_ADDR_* values are exported in the
/sys/class/net/<iface>/addr_assign_type sysfs attribute, and as such
constitute a user-space ABI. Move the NET_ADDR_* definitions from
include/linux/netdevice.h to include/uapi/linux/netdevice.h
Signed-off-by: Florian Fainelli
Signed-off-by: David S. Miller
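For reference, the definitions being moved (a sketch of the values as
they exist at this point):

  #define NET_ADDR_PERM    0  /* address is permanent (default)        */
  #define NET_ADDR_RANDOM  1  /* address is generated randomly         */
  #define NET_ADDR_STOLEN  2  /* address is stolen from another device */
  #define NET_ADDR_SET     3  /* address set via dev_set_mac_address() */

-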
Add sysfs attribute Documentation entries for the basic set of
attributes that are exposed by a network device in
/sys/class/net/<iface>/
Signed-off-by: Florian Fainelli
Signed-off-by: David S. Miller
-
This device provides QMI and ethernet functionality via a standard CDC
ethernet descriptor. But when driven by cdc_ether, the QMI
functionality is unavailable because only cdc_ether can claim the USB
interface. Thus blacklist the device in cdc_ether and add its IDs to
qmi_wwan, which enables both QMI and ethernet simultaneously.
Signed-off-by: Yegor Yefremov
Signed-off-by: David S. Miller
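A hedged sketch of the usual dual-table pattern for such a move (the
vendor/product IDs 0x1234/0x5678 are placeholders, not this patch's
IDs; qmi_wwan_info stands in for the driver's shared-interface info):

  /* cdc_ether usb_device_id table: blacklist entry, so the CDC
   * ethernet driver matches the device but declines to drive it */
  {
          USB_DEVICE_AND_INTERFACE_INFO(0x1234, 0x5678, USB_CLASS_COMM,
                                        USB_CDC_SUBCLASS_ETHERNET,
                                        USB_CDC_PROTO_NONE),
          .driver_info = 0,
  },

  /* qmi_wwan table: claim the very same interface */
  {
          USB_DEVICE_AND_INTERFACE_INFO(0x1234, 0x5678, USB_CLASS_COMM,
                                        USB_CDC_SUBCLASS_ETHERNET,
                                        USB_CDC_PROTO_NONE),
          .driver_info = (unsigned long)&qmi_wwan_info,
  },

-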
Vlad Yasevich says:
====================
bridge: Fix forwarding of 8021AD frames
The bridge has its own way to determine if a packet is forwardable, and
it doesn't support 8021ad tags correctly. Instead, just allow the
bridge to use the existing is_skb_forwardable() function.
v2: Fix missing hunk in patch 2/2 to make it build.
v3: Fix indent for is_skb_forwardable
====================
Signed-off-by: David S. Miller
-
Use existing function instead of trying to use our own.
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller
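For reference, the helper now reused, reproduced from memory of this
era's net/core/dev.c as a sketch: a frame is forwardable when the
device is up and the frame fits the MTU (with an allowance for the
link header and a VLAN tag), or when it is GSO and can be segmented
before transmission.

  bool is_skb_forwardable(struct net_device *dev, struct sk_buff *skb)
  {
          unsigned int len;

          if (!(dev->flags & IFF_UP))
                  return false;

          len = dev->mtu + dev->hard_header_len + VLAN_HLEN;
          if (skb->len <= len)
                  return true;

          /* with GSO the frame can be segmented before transmission */
          if (skb_is_gso(skb))
                  return true;

          return false;
  }

-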
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller
31 Mar, 2014
10 commits
-
Daniel Borkmann says:
====================
BPF updates
We sat down and have heavily reworked the whole previous patchset
from v10 [1] to address all comments/concerns. This patchset therefore
*replaces* the internal BPF interpreter with the new layout as
discussed in [1], and migrates some exotic callers to properly use the
BPF API for a transparent upgrade. All other callers that already use
the BPF API in the way it should be used need no further changes to run
the new internals. We also removed the sysctl knob entirely and do not
expose any structure to userland, so that implementation details only
reside in kernel space. Since we are replacing the interpreter we had
to migrate seccomp in one patch along with the interpreter to not break
anything. When attaching a new filter, the flow can be described as
follows: i) test if the jit compiler is enabled and can compile the
user BPF, ii) if so, then go for it, iii) if not, then transparently
migrate the filter into the new representation, and run it in the
interpreter.
Also, we have scratched the jit flag from the len attribute and made
that the initial patch in this series, as Pablo suggested in the last
feedback; thanks. For details, please refer to the patches themselves.
We did extensive testing of BPF and seccomp on the new interpreter
itself and also on the user ABIs and could not find any issues; new
performance numbers as posted in patch 8 are also still the same.
Please find more details in the patches themselves.
For all the previous history from v1 to v10, see [1]. We have decided
to drop the v11 as we have pedantically reworked the set, but of course
included all previous feedback.
v3 -> v4:
- Applied feedback from Dave regarding swap insns
- Rebased on net-next
v2 -> v3:
- Rebased to latest net-next (i.e. w/ rxhash->hash rename)
- Fixed patch 8/9 commit message/doc as suggested by Dave
- Rest is unchanged
v1 -> v2:
- Rebased to latest net-next
- Added static to ptp_filter as suggested by Dave
- Fixed a typo in patch 8's commit message
- Rest unchanged
Thanks !
[1] http://thread.gmane.org/gmane.linux.kernel/1665858
====================
Signed-off-by: David S. Miller
-
Further extend the current BPF documentation to document new BPF
engine internals. Joint work with Daniel Borkmann.
Signed-off-by: Alexei Starovoitov
Signed-off-by: Daniel Borkmann
Signed-off-by: David S. Miller
-
This patch replaces/reworks the kernel-internal BPF interpreter with
an optimized BPF instruction set format that is modelled to closely
mimic native instruction sets and is designed to be JITed with a
one-to-one mapping. Thus, the new interpreter is noticeably faster
than the current implementation of sk_run_filter(), mainly for two
reasons (see the sketch after this list):
1. Fall-through jumps:
   Classic BPF jump instructions are forced to go to either the 'true'
   or the 'false' branch, which causes branch-miss penalties. The new
   BPF jump instructions have only one branch and fall through
   otherwise, which fits the CPU branch predictor logic better. `perf
   stat` shows a drastic difference in branch-misses between the old
   and the new code.
2. Jump-threaded implementation of the interpreter vs a switch
   statement:
   Instead of a single table-jump at the top of the 'switch' statement,
   gcc will now generate multiple table-jump instructions, which helps
   CPU branch predictor logic.
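Both effects can be seen in a tiny standalone demo of the technique
(plain C using the gcc/clang computed-goto extension; the toy opcodes,
program and semantics are invented for the example, this is not the
kernel code):

  #include <stdio.h>

  enum { OP_ADDI, OP_JGE, OP_EXIT };

  struct insn { int code, imm, off; };

  static long run(const struct insn *insn)
  {
          /* jump-threaded dispatch: a table of label addresses and one
           * indirect goto per executed instruction */
          static const void *jumptable[] = {
                  [OP_ADDI] = &&ADDI,
                  [OP_JGE]  = &&JGE,
                  [OP_EXIT] = &&EXIT,
          };
          long acc = 0;

  #define CONT do { insn++; goto *jumptable[insn->code]; } while (0)
          goto *jumptable[insn->code];
  ADDI:   acc += insn->imm;
          CONT;
  JGE:    /* single branch target; fall through when not taken */
          if (acc >= insn->imm)
                  insn += insn->off;
          CONT;
  EXIT:
  #undef CONT
          return acc;
  }

  int main(void)
  {
          const struct insn prog[] = {
                  { OP_ADDI, 5,   0 },  /* acc += 5             */
                  { OP_JGE,  3,   1 },  /* if (acc >= 3) skip 1 */
                  { OP_ADDI, 100, 0 },  /* skipped              */
                  { OP_EXIT, 0,   0 },
          };

          printf("result = %ld\n", run(prog));  /* prints "result = 5" */
          return 0;
  }
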
Note that the verification of filters is still being done through
sk_chk_filter() in classical BPF format, so filters from user or
kernel space are verified in the same way as we do now, and the same
restrictions/constraints hold as well.
We reuse the current BPF JIT compilers in a way that this upgrade would
even be fine as is, but nevertheless allows for a successive upgrade
of BPF JIT compilers to the new format.
The internal instruction set migration is being done after the
probing for JIT compilation, so in case JIT compilers are able to
create a native opcode image, we're going to use that, and in all
other cases we're doing a follow-up migration of the BPF program's
instruction set, so that it can be transparently run in the new
interpreter.
In short, the *internal* format extends BPF in the following ways (more
details can be taken from the appended documentation):
- Number of registers increases from 2 to 10
- Register width increases from 32-bit to 64-bit
- Conditional jt/jf targets replaced with jt/fall-through
- Adds signed > and >= insns
- 16 4-byte stack slots for register spill-fill replaced
  with up to 512 bytes of multi-use stack space
- Introduction of bpf_call insn and register passing convention
  for zero-overhead calls from/to other kernel functions
- Adds arithmetic right shift and endianness conversion insns
- Adds atomic_add insn
- Old tax/txa insns are replaced with 'mov dst,src' insn
Performance of two BPF filters generated by libpcap and bpf_asm,
respectively, was measured on x86_64, i386 and arm32 (other libpcap
programs have similar performance differences):
fprog #1 is taken from Documentation/networking/filter.txt:
  tcpdump -i eth0 port 22 -dd
fprog #2 is taken from 'man tcpdump':
  tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
    ((tcp[12]&0xf0)>>2)) != 0)' -dd
Raw performance data from the BPF micro-benchmark: SK_RUN_FILTER on
the same SKB (cache-hit) or 10k SKBs (cache-miss); time in ns per
call, smaller is better:
--x86_64--
              fprog #1   fprog #1    fprog #2   fprog #2
              cache-hit  cache-miss  cache-hit  cache-miss
old BPF           90        101         192        202
new BPF           31         71          47         97
old BPF jit       12         34          17         44
new BPF jit      TBD
--i386--
              fprog #1   fprog #1    fprog #2   fprog #2
              cache-hit  cache-miss  cache-hit  cache-miss
old BPF          107        136         227        252
new BPF           40        119          69        172
--arm32--
              fprog #1   fprog #1    fprog #2   fprog #2
              cache-hit  cache-miss  cache-hit  cache-miss
old BPF          202        300         475        540
new BPF          180        270         330        470
old BPF jit       26        182          37        202
new BPF jit      TBD
Thus, without changing any userland BPF filters, applications on
top of AF_PACKET (or other families) such as libpcap/tcpdump, cls_bpf
classifier, netfilter's xt_bpf, team driver's load-balancing mode,
and many more will have better interpreter filtering performance.
While we are replacing the internal BPF interpreter, we also need
to convert seccomp BPF in the same step to make use of the new
internal structure, since it makes use of lower-level API details
without being further decoupled through higher-level calls like
sk_unattached_filter_{create,destroy}(), for example.
Just as for normal socket filtering, seccomp BPF also experiences
a time-to-verdict speedup.
05-sim-long_jumps.c of libseccomp was used as a micro-benchmark:
  seccomp_rule_add_exact(ctx,...
  seccomp_rule_add_exact(ctx,...
  rc = seccomp_load(ctx);
  for (i = 0; i < 10000000; i++)
          syscall(199, 100);
'short filter' has 2 rules
'large filter' has 200 rules
'short filter' performance is slightly better on x86_64/i386/arm32;
'large filter' is much faster on x86_64 and i386 and shows no
difference on arm32.
--x86_64-- short filter
old BPF: 2.7 sec
39.12% bench libc-2.15.so [.] syscall
8.10% bench [kernel.kallsyms] [k] sk_run_filter
6.31% bench [kernel.kallsyms] [k] system_call
5.59% bench [kernel.kallsyms] [k] trace_hardirqs_on_caller
4.37% bench [kernel.kallsyms] [k] trace_hardirqs_off_caller
3.70% bench [kernel.kallsyms] [k] __secure_computing
3.67% bench [kernel.kallsyms] [k] lock_is_held
3.03% bench [kernel.kallsyms] [k] seccomp_bpf_load
new BPF: 2.58 sec
42.05% bench libc-2.15.so [.] syscall
6.91% bench [kernel.kallsyms] [k] system_call
6.25% bench [kernel.kallsyms] [k] trace_hardirqs_on_caller
6.07% bench [kernel.kallsyms] [k] __secure_computing
5.08% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp
--arm32-- short filter
old BPF: 4.0 sec
39.92% bench [kernel.kallsyms] [k] vector_swi
16.60% bench [kernel.kallsyms] [k] sk_run_filter
14.66% bench libc-2.17.so [.] syscall
5.42% bench [kernel.kallsyms] [k] seccomp_bpf_load
5.10% bench [kernel.kallsyms] [k] __secure_computing
new BPF: 3.7 sec
35.93% bench [kernel.kallsyms] [k] vector_swi
21.89% bench libc-2.17.so [.] syscall
13.45% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp
6.25% bench [kernel.kallsyms] [k] __secure_computing
3.96% bench [kernel.kallsyms] [k] syscall_trace_exit
--x86_64-- large filter
old BPF: 8.6 seconds
73.38% bench [kernel.kallsyms] [k] sk_run_filter
10.70% bench libc-2.15.so [.] syscall
5.09% bench [kernel.kallsyms] [k] seccomp_bpf_load
1.97% bench [kernel.kallsyms] [k] system_call
new BPF: 5.7 seconds
66.20% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp
16.75% bench libc-2.15.so [.] syscall
3.31% bench [kernel.kallsyms] [k] system_call
2.88% bench [kernel.kallsyms] [k] __secure_computing
--i386-- large filter
old BPF: 5.4 sec
new BPF: 3.8 sec
--arm32-- large filter
old BPF: 13.5 sec
73.88% bench [kernel.kallsyms] [k] sk_run_filter
10.29% bench [kernel.kallsyms] [k] vector_swi
6.46% bench libc-2.17.so [.] syscall
2.94% bench [kernel.kallsyms] [k] seccomp_bpf_load
1.19% bench [kernel.kallsyms] [k] __secure_computing
0.87% bench [kernel.kallsyms] [k] sys_getuid
new BPF: 13.5 sec
76.08% bench [kernel.kallsyms] [k] sk_run_filter_int_seccomp
10.98% bench [kernel.kallsyms] [k] vector_swi
5.87% bench libc-2.17.so [.] syscall
1.77% bench [kernel.kallsyms] [k] __secure_computing
0.93% bench [kernel.kallsyms] [k] sys_getuid
BPF filters generated by seccomp are very branchy, so the new
internal BPF performance is better than the old one. Performance
gains will be even higher when BPF JIT is committed for the
new structure, which is planned in future work (as successive
JIT migrations).
BPF has also been stress-tested with trinity's BPF fuzzer.
Joint work with Daniel Borkmann.
Signed-off-by: Alexei Starovoitov
Signed-off-by: Daniel Borkmann
Cc: Hagen Paul Pfeifer
Cc: Kees Cook
Cc: Paul Moore
Cc: Ingo Molnar
Cc: H. Peter Anvin
Cc: linux-kernel@vger.kernel.org
Acked-by: Kees Cook
Signed-off-by: David S. Miller
-
Similarly as in ppp, we need to migrate the ISDN/PPP code to make use
of the sk_unattached_filter API in order to decouple it from direct
filter structure access. By using sk_unattached_filter_{create,destroy},
we also allow for the possibility of jit compiling filters for faster
filter verdicts.
Joint work with Alexei Starovoitov.
Signed-off-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
Cc: Karsten Keil
Cc: isdn4linux@listserv.isdn4linux.de
Signed-off-by: David S. Miller
-
For the ppp driver, there are currently two open-coded BPF filters in use,
that is, pass_filter and active_filter. Migrate both to make proper use
of sk_unattached_filter_{create,destroy} API so that the actual BPF code
is decoupled from direct access, and filters can be jited as a side-effect
by the internal filter compiler.
Joint work with Alexei Starovoitov.
Signed-off-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
Cc: Paul Mackerras
Cc: linux-ppp@vger.kernel.org
Signed-off-by: David S. Miller
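A hedged sketch of the migration pattern (API names from this series;
'code' and 'len' stand in for the ppp filter fields, and error
handling is trimmed):

  static struct sk_filter *setup_filter(struct sock_filter *code, int len)
  {
          struct sock_fprog fprog = {
                  .len    = len,   /* number of classic BPF insns  */
                  .filter = code,  /* the struct sock_filter array */
          };
          struct sk_filter *filter = NULL;

          if (sk_unattached_filter_create(&filter, &fprog))
                  return NULL;
          /* per packet: SK_RUN_FILTER(filter, skb), zero means no match;
           * teardown: sk_unattached_filter_destroy(filter) */
          return filter;
  }

-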
There are currently pch_gbe, cpts, and ixp4xx_eth drivers that open-code
and reimplement a BPF classifier for the PTP protocol. Since all of them
effectively do the very same thing and load the very same PTP/BPF filter,
we can just consolidate that code by introducing ptp_classify_raw() in
the time-stamping core framework which can be used in drivers.
As drivers get initialized after bootstrapping the core networking
subsystem, they can make use of ptp_insns wrapped through
ptp_classify_raw(), which allows simplifying and removing PTP
classifier setup code in drivers.
Joint work with Alexei Starovoitov.
Signed-off-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
Cc: Richard Cochran
Cc: Jiri Benc
Signed-off-by: David S. Miller
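A hedged sketch of what per-driver classification reduces to with the
consolidated helper (PTP_CLASS_NONE is from linux/ptp_classify.h; the
surrounding driver logic is illustrative):

  static bool wants_hw_timestamp(struct sk_buff *skb)
  {
          unsigned int type = ptp_classify_raw(skb);

          if (type == PTP_CLASS_NONE)
                  return false;  /* not a PTP frame */

          /* ... match 'type' against the configured hw filter ... */
          return true;
  }

-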
This patch replaces an open-coded sk_run_filter() implementation with
proper use of the BPF API, that is, sk_unattached_filter_create(). This
migration is needed, as we will be internally transforming the filter
to a different representation, and it therefore needs to be decoupled.
It is okay to do so as skb_timestamping_init() is called during
initialization of the network stack in core initcall via sock_init().
This would effectively also allow PTP filters to be jit compiled if
bpf_jit_enable is set.
For better readability, some newlines are also introduced; note that
ptp_classify.h is only used in kernel space.
Joint work with Alexei Starovoitov.
Signed-off-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
Cc: Richard Cochran
Cc: Jiri Benc
Signed-off-by: David S. Miller
-
This patch basically does two things: i) removes the extern keyword
from the include/linux/filter.h file to be more consistent with the
rest of Joe's changes, and ii) moves filter accounting into the filter
core framework.
Filter accounting, mainly done through sk_filter_{un,}charge(), takes
care of the case when sockets are being cloned through sk_clone_lock()
so that removal of the filter on one socket won't result in eviction
while it's still referenced by the other.
These functions actually belong to net/core/filter.c and not
include/net/sock.h, as we want to keep all of that in a central place.
It's also not in the fast path, so uninlining them is fine and even
allows us to get rid of sk_filter_release_rcu()'s EXPORT_SYMBOL and a
forward declaration.
Joint work with Alexei Starovoitov.
Signed-off-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
Cc: Pavel Emelyanov
Signed-off-by: David S. Miller
-
In order to open up the possibility of internally transforming a BPF
program into an alternative and possibly non-trivially reversible
representation, we need to keep the original BPF program around, so
that it can be passed back to user space without the need for a
complex decoder.
The reason for that use case resides in commit a8fc92778080 ("sk-filter:
Add ability to get socket filter program (v2)"), that is, the ability
to retrieve the currently attached BPF filter from a given socket, used
mainly by the checkpoint-restore project, for example.
Therefore, we add the two helpers sk_{store,release}_orig_filter for
taking care of that. In the sk_unattached_filter_create() case, there's
no such possibility/requirement to retrieve a loaded BPF program, so we
can spare ourselves the work in that case.
This approach will simplify and slightly speed up both the
sk_get_filter() and sock_diag_put_filterinfo() handlers, as we won't
need to successively decode filters anymore through sk_decode_filter().
As we still need sk_decode_filter() later on, we're keeping it around.
Joint work with Alexei Starovoitov.
Signed-off-by: Alexei Starovoitov
Signed-off-by: Daniel Borkmann
Cc: Pavel Emelyanov
Signed-off-by: David S. Miller
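A hedged userspace sketch of the use case this preserves: reading the
classic BPF program back from a socket, as checkpoint-restore does
('fd' is an already-filtered socket; error handling is trimmed, and
the optlen unit being instructions follows this getsockopt's quirky
ABI):

  #include <linux/filter.h>
  #include <sys/socket.h>

  #ifndef SO_GET_FILTER
  #define SO_GET_FILTER SO_ATTACH_FILTER  /* same option value */
  #endif

  static int dump_filter(int fd)
  {
          struct sock_filter insns[BPF_MAXINSNS];
          socklen_t optlen = BPF_MAXINSNS;  /* in/out: insn count */

          if (getsockopt(fd, SOL_SOCKET, SO_GET_FILTER, insns, &optlen) < 0)
                  return -1;
          return (int)optlen;  /* number of instructions returned */
  }

-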
This patch adds a jited flag to the sk_filter struct in order to
indicate whether a filter is currently jited or not. The size of
sk_filter is not being expanded, as the 32 bit 'len' member allows
upper bits to be reused, since a filter can currently only grow as
large as BPF_MAXINSNS.
Therefore, there's also enough room for other flags needed in the
future to reuse the 'len' field if necessary. The jited flag also
allows for having alternative interpreter functions running, as
currently we can only detect jit compiled filters by testing that
fp->bpf_func does not equal the address of sk_run_filter().
Joint work with Alexei Starovoitov.
Signed-off-by: Alexei Starovoitov
Signed-off-by: Daniel Borkmann
Cc: Pablo Neira Ayuso
Signed-off-by: David S. Miller
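A hedged sketch of the layout idea (field order from memory; the point
is that 'len' only ever needs to count up to BPF_MAXINSNS, i.e. 4096,
so 31 bits are plenty and one bit can be stolen):

  struct sk_filter {
          atomic_t  refcnt;
          u32       jited:1,  /* was the filter jit compiled? */
                    len:31;   /* number of filter blocks      */
          /* ... remaining members unchanged ... */
  };
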
30 Mar, 2014
19 commits
-
Conflicts:
drivers/net/ethernet/marvell/mvneta.c
The mvneta.c conflict is a case of overlapping changes,
a conversion to devm_ioremap_resource() vs. a conversion
to netdev_alloc_pcpu_stats().
Signed-off-by: David S. Miller
-
The at8031 can work in polling mode and interrupt mode.
Add ack_interrupt and config_intr functions to enable
interrupt mode for it.
Signed-off-by: Zhao Qiang
Signed-off-by: David S. Miller
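A hedged sketch of how a PHY driver advertises interrupt support (the
callback slots are real struct phy_driver members; the register names,
mask and function bodies here are illustrative, not this patch):

  static int at803x_ack_interrupt(struct phy_device *phydev)
  {
          /* reading the status register clears pending interrupts */
          int err = phy_read(phydev, AT803X_INTR_STATUS);

          return (err < 0) ? err : 0;
  }

  static int at803x_config_intr(struct phy_device *phydev)
  {
          /* unmask or mask sources depending on phydev->interrupts */
          if (phydev->interrupts == PHY_INTERRUPT_ENABLED)
                  return phy_write(phydev, AT803X_INTR_ENABLE,
                                   AT803X_INTR_MASK);
          return phy_write(phydev, AT803X_INTR_ENABLE, 0);
  }

  static struct phy_driver at8031_driver = {
          /* ... existing fields ... */
          .ack_interrupt = at803x_ack_interrupt,
          .config_intr   = at803x_config_intr,
  };

-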
ERROR: "(foo*)" should be "(foo *)"
ERROR: "foo * bar" should be "foo *bar"Suggested-by: Sergei Shtylyov
Signed-off-by: Wang Yufen
Acked-by: Sergei Shtylyov
Signed-off-by: David S. Miller
-
ERROR: open brace '{' following enum go on the same line
ERROR: open brace '{' following struct go on the same line
ERROR: trailing statements should be on next line
Signed-off-by: Wang Yufen
Signed-off-by: David S. Miller
-
WARNING: please, no space before tabs
WARNING: please, no spaces at the start of a line
ERROR: spaces required around that ':' (ctx:VxW)
ERROR: spaces required around that '>' (ctx:VxV)
ERROR: spaces required around that '>=' (ctx:VxV)
Signed-off-by: Wang Yufen
Signed-off-by: David S. Miller
-
Eric W. Biederman says:
====================
netpoll: Cleanups and fixes
This should be a small set of safe cleanups and fixes to netpoll.
The fixes are: vlan headers are now always inserted when needed, and
napi polling is always avoided when network devices are closed.
There are a bunch of little cleanups removing unnecessary code, fixing
function naming, not taking unnecessary locks and removing general
silliness.
====================
Signed-off-by: David S. Miller
-
Stop taking the transmit lock when a network device has specified
NETIF_F_LLTX.
If no locks are needed to transmit a packet, this is the ideal scenario
for netpoll, as all packets can be transmitted immediately.
Even if some locks are needed in ndo_start_xmit, skipping any
unnecessary serialization is desirable for netpoll, as it makes it more
likely that a debugging packet may be transmitted immediately instead
of being deferred until later.
Signed-off-by: "Eric W. Biederman"
Signed-off-by: David S. Miller
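A hedged sketch of the locking rule involved, mirroring the
HARD_TX_LOCK() logic in netdevice.h of this era (queue selection and
the transmit itself are elided):

  static void netpoll_tx_lock_sketch(struct net_device *dev,
                                     struct netdev_queue *txq, int cpu)
  {
          if ((dev->features & NETIF_F_LLTX) == 0) {
                  __netif_tx_lock(txq, cpu);
                  /* ... ndo_start_xmit() ... */
                  __netif_tx_unlock(txq);
          } else {
                  /* lock-less TX: the driver serializes internally,
                   * so netpoll may transmit immediately */
                  /* ... ndo_start_xmit() ... */
          }
  }

-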
Remove the assumption that the skbs that make it to
netpoll_send_skb_on_dev are allocated with find_skb, such that
skb->users == 1 and nothing is attached that would prevent the skbs from
being freed from hard irq context.
Remove this assumption by replacing __kfree_skb on error paths with
dev_kfree_skb_irq (in hard irq context) and kfree_skb (in process
context).
Signed-off-by: "Eric W. Biederman"
Signed-off-by: David S. Miller
-
The netpoll_rx_enable and netpoll_rx_disable functions have always
controlled polling of the network driver's transmit and receive queues.
Rename them to netpoll_poll_enable and netpoll_poll_disable to make
their functionality clear.
Signed-off-by: "Eric W. Biederman"
Signed-off-by: David S. Miller
-
Today netpoll_rx_enable and netpoll_rx_disable are called from
dev_close and __dev_close, but not from dev_close_many.
Move the calls into __dev_close_many so that we have a single call
site to maintain, and so that dev_close_many gains this protection as
well, which importantly makes batched network device deletes safe.
Signed-off-by: "Eric W. Biederman"
Signed-off-by: David S. Miller
-
Factor out the code that needs to surround ndo_start_xmit
from netpoll_send_skb_on_dev into netpoll_start_xmit.
It is an unfortunate fact that, as the netpoll code has been
maintained, the primary call site of ndo_start_xmit learned how to
handle vlans and timestamps, but the second call of ndo_start_xmit in
queue_process did not.
With the introduction of netpoll_start_xmit, this associated logic now
happens at both call sites of ndo_start_xmit, which should make it easy
for that to continue into the future.
Signed-off-by: "Eric W. Biederman"
Signed-off-by: David S. Miller
-
The gfp parameter was added in:
commit 47be03a28cc6c80e3aa2b3e8ed6d960ff0c5c0af
Author: Amerigo Wang
Date: Fri Aug 10 01:24:37 2012 +0000
    netpoll: use GFP_ATOMIC in slave_enable_netpoll() and __netpoll_setup()
    slave_enable_netpoll() and __netpoll_setup() may be called
    with read_lock() held, so should use GFP_ATOMIC to allocate
    memory. Eric suggested to pass gfp flags to __netpoll_setup().
    Cc: Eric Dumazet
    Cc: "David S. Miller"
    Reported-by: Dan Carpenter
    Signed-off-by: Eric Dumazet
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller
The reason for the gfp parameter was removed in:
commit c4cdef9b7183159c23c7302aaf270d64c549f557
Author: dingtianhong
Date: Tue Jul 23 15:25:27 2013 +0800
    bonding: don't call slave_xxx_netpoll under spinlocks
    The slave_xxx_netpoll will call synchronize_rcu_bh(),
    so the function may schedule and sleep, it shouldn't be
    called under spinlocks.
    bond_netpoll_setup() and bond_netpoll_cleanup() are always
    protected by the rtnl lock, so there is no need to take the read
    lock, as the slave list can't be changed outside the rtnl lock.
    Signed-off-by: Ding Tianhong
    Cc: Jay Vosburgh
    Cc: Andy Gospodarek
    Signed-off-by: David S. Miller
Nothing else that calls __netpoll_setup or ndo_netpoll_setup
requires a gfp parameter, so remove the gfp parameter from both
of these functions, making the code clearer.
Signed-off-by: "Eric W. Biederman"
Signed-off-by: David S. Miller
-
Francois Romieu says:
====================
remove open-coded skb_cow_head.
As per http://marc.info/?l=linux-netdev&m=139440579104701.
====================
Signed-off-by: David S. Miller
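The transformation each patch applies, sketched for reference (the
drop label and surrounding driver code are illustrative, not any one
driver's):

  /* before: open-coded header un-sharing */
  if (skb_header_cloned(skb) &&
      pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
          goto drop;

  /* after: the existing helper does the same check */
  if (skb_cow_head(skb, 0))
          goto drop;
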
-
Signed-off-by: Francois Romieu
Cc: Inaky Perez-Gonzalez
Signed-off-by: David S. Miller
-
Signed-off-by: Francois Romieu
Cc: Nithin Nayak Sujir
Cc: Michael Chan
Signed-off-by: David S. Miller
-
Signed-off-by: Francois Romieu
Cc: Rasesh Mody
Signed-off-by: David S. Miller
-
Signed-off-by: Francois Romieu
Cc: Jitendra Kalsaria
Cc: Shahed Shaikh
Cc: Ron Mercer
Signed-off-by: David S. Miller
-
Signed-off-by: Francois Romieu
Cc: Guo-Fu Tseng
Signed-off-by: David S. Miller
-
Signed-off-by: Francois Romieu
Cc: Chris Snook
Cc: Jay Cliburn
Signed-off-by: David S. Miller