12 May, 2011
1 commit
-
Conflicts:
drivers/net/benet/be_main.c
10 May, 2011
1 commit
-
commit 255d0dc34068a976 (netfilter: x_table: speedup compat operations)
broke ebtables:
1) xt_compat_calc_jump() is not an exact match lookup
2) compat_table_info() has a typo in its xt_compat_init_offsets() call
3) compat_do_replace() misses an xt_compat_init_offsets() call
Reported-by: dann frazier
Signed-off-by: Eric Dumazet
Signed-off-by: Patrick McHardy
04 Apr, 2011
1 commit
-
We currently use a percpu spinlock to 'protect' rule bytes/packets
counters, after various attempts to use RCU instead.
Lately we added a seqlock so that get_counters() can run without
blocking BH or 'writers'. But we really only need the seqcount in it.
The spinlock itself is only locked by the current/owner cpu, so we can
remove it completely.
This cleans up the API, using correct 'writer' vs 'reader' semantics.
At replace time, the get_counters() call makes sure all cpus are done
using the old table.
Signed-off-by: Eric Dumazet
Cc: Jan Engelhardt
Signed-off-by: Patrick McHardy
15 Mar, 2011
1 commit
-
As Stephen correctly points out, we need to return -ENOENT in
xt_find_match()/xt_find_target() after the patch "netfilter: x_tables:
misuse of try_then_request_module" in order to properly indicate
a non-existent module to the caller.
Signed-off-by: Patrick McHardy
09 Mar, 2011
1 commit
-
Since xt_find_match() returns ERR_PTR(xx) on error, not NULL,
the try_then_request_module macro won't work correctly here.
The macro expects its first argument to be zero when the condition
fails, but ERR_PTR(-ENOENT) is not zero.
The correct solution is to propagate the error value back.
Found by inspection, and compile tested only.
Signed-off-by: Stephen Hemminger
Signed-off-by: Patrick McHardy
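The failure mode above can be shown with a minimal userspace sketch. The ERR_PTR/IS_ERR helpers below are simplified re-implementations for illustration (the real ones live in include/linux/err.h), and find_match() is a hypothetical stand-in for xt_find_match(): an ERR_PTR return is non-NULL, so a plain truthiness test, which is what try_then_request_module relies on, mistakes it for success.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified userspace analogs of the kernel's ERR_PTR/IS_ERR helpers
 * (for illustration only; not the kernel definitions). */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* errors are encoded in the top 4095 values of the address space */
	return (unsigned long)ptr >= (unsigned long)-4095;
}

/* Hypothetical stand-in for xt_find_match(): reports "not found"
 * as ERR_PTR(-ENOENT) rather than NULL. */
static void *find_match(int exists)
{
	return exists ? (void *)0x1 : ERR_PTR(-ENOENT);
}
```

Callers therefore must check IS_ERR() and propagate PTR_ERR() instead of testing the pointer for NULL.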
20 Jan, 2011
1 commit
17 Jan, 2011
1 commit
-
The setsockopt() syscall to replace tables is already recorded
in the audit logs. This patch stores additional information
such as the table name and netfilter protocol.
Cc: Patrick McHardy
Cc: Eric Paris
Cc: Al Viro
Signed-off-by: Thomas Graf
Signed-off-by: Patrick McHardy
13 Jan, 2011
1 commit
-
One iptables invocation with 135000 rules takes 35 seconds of cpu time
on a recent server, using a 32bit distro and a 64bit kernel.
We eventually trigger the NMI/RCU watchdog:
INFO: rcu_sched_state detected stall on CPU 3 (t=6000 jiffies)
COMPAT mode has quadratic behavior and consumes 16 bytes of memory per
rule.
Switch the xt_compat algos to use an array instead of a list, and use a
binary search to locate an offset in the sorted array.
This halves the memory needed (8 bytes per rule) and removes the
quadratic behavior [ O(N*N) -> O(N*log2(N)) ].
The iptables run time goes from 35 s to 150 ms.
Signed-off-by: Eric Dumazet
Signed-off-by: Pablo Neira Ayuso
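The list-to-array change above can be sketched as follows. This is a hypothetical simplification (struct and function names are illustrative, not the kernel's): each entry stores an (offset, delta) pair kept sorted by offset, and the compat jump is found by binary search, turning the per-rule lookup from O(N) into O(log2 N).

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of the sorted-array scheme; names are
 * hypothetical, not the kernel's actual compat structures. */
struct compat_delta {
	unsigned int offset;	/* rule offset in the original table */
	int delta;		/* accumulated 32/64-bit size adjustment */
};

static int calc_jump(const struct compat_delta *tbl, size_t n,
		     unsigned int offset)
{
	size_t lo = 0, hi = n;

	/* lower-bound binary search: O(log2 n) instead of a list walk */
	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		if (tbl[mid].offset < offset)
			lo = mid + 1;
		else
			hi = mid;
	}
	/* exact-match lookup: unknown offsets yield no adjustment */
	return (lo < n && tbl[lo].offset == offset) ? tbl[lo].delta : 0;
}
```

Note the exact-match behavior at the end: this is precisely the property the later ebtables fix (12 May, 2011 above) had to account for, since xt_compat_calc_jump() was not an exact-match lookup.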
11 Jan, 2011
1 commit
-
Using "iptables -L" with a lot of rules has too high a BH latency.
Jesper mentioned ~6 ms and worried about frame drops.
Switch to a per_cpu seqlock scheme, so that taking a snapshot of
counters doesn't need to block BH (for this cpu, but also other cpus).
This adds two increments of the seqlock sequence per ipt_do_table()
call; it's a reasonable cost for allowing "iptables -L" not to block BH
processing.
Reported-by: Jesper Dangaard Brouer
Signed-off-by: Eric Dumazet
CC: Patrick McHardy
Acked-by: Stephen Hemminger
Acked-by: Jesper Dangaard Brouer
Signed-off-by: Pablo Neira Ayuso
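The seqlock protocol the patch relies on can be sketched in userspace (this is a single-threaded illustration of the sequence-counter idea only; the kernel's seqlock_t additionally carries memory barriers and a writer spinlock, which are omitted here): the writer bumps the sequence to an odd value while updating, and the reader retries until it observes an even, unchanged sequence.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the seqcount protocol; not the kernel API.
 * Memory barriers required for real concurrent use are omitted. */
struct seq_counter {
	unsigned int seq;
	uint64_t packets, bytes;
};

static void writer_update(struct seq_counter *c, uint64_t pkts, uint64_t b)
{
	c->seq++;		/* odd: update in progress */
	c->packets += pkts;
	c->bytes += b;
	c->seq++;		/* even again: update done */
}

static void read_snapshot(const struct seq_counter *c,
			  uint64_t *pkts, uint64_t *b)
{
	unsigned int start;

	do {
		start = c->seq;	/* retry while a writer is active */
		*pkts = c->packets;
		*b = c->bytes;
	} while ((start & 1) || start != c->seq);
}
```

The reader never blocks the writer, which is why get_counters() no longer needs to disable BH on other cpus.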
05 Oct, 2010
1 commit
-
Since we register nf hooks, matches and targets in order, we'd better
unregister them in the reverse order.
Signed-off-by: Changli Gao
Signed-off-by: Patrick McHardy
31 May, 2010
2 commits
-
commit f3c5c1bfd4 (netfilter: xtables: make ip_tables reentrant)
introduced a performance regression, because stackptr array is shared by
all cpus, adding cache line ping pongs. (16 cpus share a 64-byte cache
line.)
Fix this using alloc_percpu().
Signed-off-by: Eric Dumazet
Acked-By: Jan Engelhardt
Signed-off-by: Patrick McHardy
-
In xt_register_table, xt_jumpstack_alloc is called first; later,
xt_replace_table is used. But xt_replace_table calls xt_jumpstack_alloc
again, so the memory allocated by the first xt_jumpstack_alloc is
leaked. We can simply remove the first call, because there are no
users of newinfo between xt_jumpstack_alloc and xt_replace_table.
Signed-off-by: Xiaotian Feng
Cc: Patrick McHardy
Cc: "David S. Miller"
Cc: Jan Engelhardt
Cc: Andrew Morton
Cc: Rusty Russell
Cc: Alexey Dobriyan
Acked-By: Jan Engelhardt
Signed-off-by: Patrick McHardy
21 Apr, 2010
1 commit
-
The jumpstack allocation needs to be moved out of the critical region.
Corrects this notice:
BUG: sleeping function called from invalid context at mm/slub.c:1705
[ 428.295762] in_atomic(): 1, irqs_disabled(): 0, pid: 9111, name: iptables
[ 428.295771] Pid: 9111, comm: iptables Not tainted 2.6.34-rc1 #2
[ 428.295776] Call Trace:
[ 428.295791] [] __might_sleep+0xe5/0xed
[ 428.295801] [] __kmalloc+0x92/0xfc
[ 428.295825] [] ? xt_jumpstack_alloc+0x36/0xff [x_tables]
Signed-off-by: Jan Engelhardt
Signed-off-by: Patrick McHardy
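The shape of the fix is a general pattern: do any allocation that may sleep before entering the non-sleepable critical section, so the critical section only swaps pointers. A minimal userspace sketch (all names hypothetical; the in_atomic_section flag stands in for "lock held / BH disabled"):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative userspace analog of the fix; names are hypothetical. */
static int in_atomic_section;	/* stands in for "inside the lock" */

static void **jumpstack_alloc(size_t entries)
{
	/* may sleep: must never run inside the critical section */
	assert(!in_atomic_section);
	return calloc(entries, sizeof(void *));
}

static int replace_table(void ***slot, size_t entries)
{
	void **stack = jumpstack_alloc(entries);	/* before the lock */
	if (!stack)
		return -1;

	in_atomic_section = 1;	/* critical region: pointer swap only */
	*slot = stack;
	in_atomic_section = 0;
	return 0;
}
```

Had the allocation stayed inside the critical region, the kernel's might_sleep check would fire exactly as in the trace above.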
20 Apr, 2010
1 commit
-
Conflicts:
Documentation/feature-removal-schedule.txt
net/ipv6/netfilter/ip6t_REJECT.c
net/netfilter/xt_limit.c
Signed-off-by: Patrick McHardy
19 Apr, 2010
1 commit
-
Currently, the table traverser stores return addresses in the ruleset
itself (struct ip6t_entry->comefrom). This has a well-known drawback:
the jumpstack is overwritten on reentry, making it necessary for
targets to return absolute verdicts. Also, the ruleset (which might
be heavy memory-wise) needs to be replicated for each CPU that can
possibly invoke ip6t_do_table.
This patch decouples the jumpstack from struct ip6t_entry and instead
puts it into xt_table_info. Not being restricted by 'comefrom'
anymore, we can set up a stack as needed. By default, there is room
allocated for two entries into the traverser.
arp_tables is not touched though, because there are just one or two
modules and further patches seek to collapse the table traverser
anyhow.
Signed-off-by: Jan Engelhardt
Signed-off-by: Patrick McHardy
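The data-structure change can be sketched as follows (a hypothetical simplification; the kernel keeps one such stack per cpu inside xt_table_info, not in each rule): return positions move out of the per-rule 'comefrom' slot into a small explicit stack in the table metadata.

```c
#include <assert.h>

/* Illustrative sketch; names and layout are hypothetical. */
#define STACK_SIZE 2	/* room for two nested jumps by default */

struct table_info_sketch {
	unsigned int stackptr;		/* per-cpu in the real kernel */
	unsigned int jumpstack[STACK_SIZE];
};

/* On a jump to a user-defined chain, remember where to come back to. */
static void push_return(struct table_info_sketch *t, unsigned int pos)
{
	t->jumpstack[t->stackptr++] = pos;
}

/* On RETURN, resume traversal at the saved position. */
static unsigned int pop_return(struct table_info_sketch *t)
{
	return t->jumpstack[--t->stackptr];
}
```

Because the stack lives outside the ruleset, reentry no longer clobbers saved return addresses, and the ruleset itself need not be replicated per cpu.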
30 Mar, 2010
1 commit
-
…it slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. i.e. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to the implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
25 Mar, 2010
4 commits
-
Part of the transition is done by this semantic patch:
//
@ rule1 @
struct xt_target ops;
identifier check;
@@
ops.checkentry = check;

@@
identifier rule1.check;
@@
check(...) { }

@@
identifier rule1.check;
@@
check(...) { }
//
Signed-off-by: Jan Engelhardt
-
The following semantic patch does part of the transformation:
//
@ rule1 @
struct xt_match ops;
identifier check;
@@
ops.checkentry = check;

@@
identifier rule1.check;
@@
check(...) { }

@@
identifier rule1.check;
@@
check(...) { }
//
Signed-off-by: Jan Engelhardt
-
Signed-off-by: Jan Engelhardt
-
Signed-off-by: Jan Engelhardt
18 Mar, 2010
1 commit
-
Signed-off-by: Jan Engelhardt
16 Feb, 2010
1 commit
-
With 32-bit userland and 64-bit kernels, it is unlikely but possible
that insertion of new rules fails even though there are only about 2000
iptables rules.
This happens because the compat delta is stored in a short int.
Easily reproducible via "iptables -m limit"; after about 2050
rules, inserting new ones fails with -ELOOP.
Note that compat_delta included 2 bytes of padding on x86_64, so the
structure size remains the same.
Signed-off-by: Florian Westphal
Signed-off-by: Patrick McHardy
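The arithmetic behind the ~2050-rule limit can be checked directly. The per-rule delta of 16 bytes below is illustrative (the actual delta depends on the matches used), but with it the accumulated total crosses SHRT_MAX (32767) right around 2050 rules:

```c
#include <assert.h>
#include <limits.h>

/* Sketch of the overflow condition: does the accumulated compat
 * delta still fit in a short int?  delta_per_rule is illustrative. */
static int delta_fits_in_short(int rules, int delta_per_rule)
{
	long total = (long)rules * delta_per_rule;

	return total >= SHRT_MIN && total <= SHRT_MAX;
}
```

Widening the field (the actual fix) removes the limit without growing the structure, since the short was followed by 2 bytes of padding on x86_64.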
15 Feb, 2010
2 commits
-
Signed-off-by: Jan Engelhardt
-
Print which revision has been used and which sizes are which
(kernel/user) for easier debugging.
Signed-off-by: Jan Engelhardt
11 Feb, 2010
3 commits
-
The static initial tables are pretty large, and after the net
namespace has been instantiated, they just hang around for nothing.
This commit removes them and creates tables on-demand at runtime when
needed.
Size shrinks by 7735 bytes (x86_64).
Signed-off-by: Jan Engelhardt
-
The respective xt_table structures already have most of the metadata
needed for hook setup. Add a 'priority' field to struct xt_table so
that xt_hook_link() can be called with a reduced number of arguments.
Should we have more tables in the future, they come at no
static cost (only runtime, as before) - space saved:
6807373->6806555.
Signed-off-by: Jan Engelhardt
10 Feb, 2010
1 commit
-
Rewrite COMPAT_XT_ALIGN in terms of dummy structure hack.
Compat counters logically have nothing to do with it.
Use the ALIGN() macro while I'm at it for the same types.
Signed-off-by: Alexey Dobriyan
Signed-off-by: Patrick McHardy
22 Sep, 2009
1 commit
-
Sizing of memory allocations shouldn't depend on the number of physical
pages found in a system, as that generally includes (perhaps a huge amount
of) non-RAM pages. The amount of memory actually usable as storage
should instead be used as a basis here.
Some of the calculations (i.e. those not intending to use high memory)
should likely even use (totalram_pages - totalhigh_pages).
Signed-off-by: Jan Beulich
Acked-by: Rusty Russell
Acked-by: Ingo Molnar
Cc: Dave Airlie
Cc: Kyle McMartin
Cc: Jeremy Fitzhardinge
Cc: Pekka Enberg
Cc: Hugh Dickins
Cc: "David S. Miller"
Cc: Patrick McHardy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 Aug, 2009
1 commit
-
The input table is never modified, so it should be considered const.
Signed-off-by: Jan Engelhardt
Signed-off-by: Patrick McHardy
13 Jun, 2009
1 commit
-
Signed-off-by: Joe Perches
Signed-off-by: Patrick McHardy
08 May, 2009
1 commit
-
Users cannot make anything of these numbers. Let's just tell them
directly.
Signed-off-by: Jan Engelhardt
29 Apr, 2009
1 commit
-
The x_tables are organized with a table structure and per-cpu copies
of the counters and rules. On older kernels there was a reader/writer
lock per table which was a performance bottleneck. In 2.6.30-rc, this
was converted to use RCU for the counters/rules, which solved the
performance problems for do_table but made replacing rules much slower
because of the necessary RCU grace period.
This version uses a per-cpu set of spinlocks and counters to allow
table processing to proceed without the cache thrashing of a global
reader lock, and keeps the same performance for table updates.
Signed-off-by: Stephen Hemminger
Acked-by: Linus Torvalds
Signed-off-by: David S. Miller
25 Mar, 2009
1 commit
20 Feb, 2009
1 commit
-
The reader/writer lock in ip_tables is acquired in the critical path of
processing packets and is one of the reasons just loading iptables can cause
a 20% performance loss. The rwlock serves two functions:
1) it prevents changes to table state (xt_replace) while the table is in use.
This is now handled by doing RCU on the xt_table. When a table is
replaced, the new table(s) are put in place and the old table(s) are freed
after an RCU grace period.
2) it provides synchronization when accessing the counter values.
This is now handled by swapping in new table_info entries for each cpu,
then summing the old values and putting the result back onto one
cpu. On a busy system it may cause sampling to occur at different
times on each cpu, but no packet/byte counts are lost in the process.
Signed-off-by: Stephen Hemminger
Successfully tested on my dual quad core machine too, but iptables only (no ipv6 here).
BTW, my new "tbench 8" result is 2450 MB/s (it was 2150 MB/s not so long ago).
Acked-by: Eric Dumazet
Signed-off-by: Patrick McHardy
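The counter-summation half of the scheme can be sketched as follows (an illustrative simplification with hypothetical names, not the kernel's get_counters(): the kernel additionally swaps in fresh per-cpu table_info entries before summing): each cpu keeps private packet/byte counters, and a snapshot simply sums them.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of summing per-cpu counters into one result.
 * Sampling may happen at different times per cpu, but no counts are
 * lost in the sum. */
#define NR_CPUS 4

struct counters_sketch {
	uint64_t pcnt;	/* packets */
	uint64_t bcnt;	/* bytes   */
};

static void sum_counters(const struct counters_sketch percpu[NR_CPUS],
			 struct counters_sketch *total)
{
	total->pcnt = total->bcnt = 0;
	for (size_t cpu = 0; cpu < NR_CPUS; cpu++) {
		total->pcnt += percpu[cpu].pcnt;
		total->bcnt += percpu[cpu].bcnt;
	}
}
```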
18 Feb, 2009
1 commit
-
When extensions were moved to the NFPROTO_UNSPEC wildcard in
ab4f21e6fb1c09b13c4c3cb8357babe8223471bd, they disappeared from the
procfs files.
Signed-off-by: Jan Engelhardt
Signed-off-by: Patrick McHardy
13 Jan, 2009
1 commit
-
Commit 55b69e91 (netfilter: implement NFPROTO_UNSPEC as a wildcard
for extensions) broke revision probing for matches and targets that
are registered with NFPROTO_UNSPEC.
Fix by continuing the search on the NFPROTO_UNSPEC list if nothing
is found on the af-specific lists.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller
08 Oct, 2008
3 commits
-
By passing in the family through which extensions were invoked, a bit
of data space can be reclaimed. The "family" member will be added to
the parameter structures and the check functions adjusted.
Signed-off-by: Jan Engelhardt
Signed-off-by: Patrick McHardy
-
This patch does this for target extensions' checkentry functions.
Signed-off-by: Jan Engelhardt
Signed-off-by: Patrick McHardy
-
This patch does this for match extensions' checkentry functions.
Signed-off-by: Jan Engelhardt
Signed-off-by: Patrick McHardy