01 Nov, 2011

1 commit


15 Sep, 2011

1 commit


12 May, 2011

1 commit


10 May, 2011

1 commit

  • commit 255d0dc34068a976 (netfilter: x_table: speedup compat operations)
    made ebtables stop working:

    1) xt_compat_calc_jump() is not an exact-match lookup
    2) compat_table_info() has a typo in its xt_compat_init_offsets() call
    3) compat_do_replace() is missing an xt_compat_init_offsets() call

    Reported-by: dann frazier
    Signed-off-by: Eric Dumazet
    Signed-off-by: Patrick McHardy

    Eric Dumazet
     

04 Apr, 2011

1 commit

  • We currently use a percpu spinlock to 'protect' the rule bytes/packets
    counters, after various attempts to use RCU instead.

    Lately we added a seqlock so that get_counters() can run without
    blocking BH or 'writers'. But we really only need the seqcount in it.

    The spinlock itself is only ever taken by the current/owner cpu, so we
    can remove it completely. (A small user-space model of the resulting
    scheme follows this entry.)

    This cleans up the API, using correct 'writer' vs 'reader' semantics.

    At replace time, the get_counters() call makes sure all cpus are done
    using the old table.

    Signed-off-by: Eric Dumazet
    Cc: Jan Engelhardt
    Signed-off-by: Patrick McHardy

    Eric Dumazet
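
    A minimal user-space model of the resulting scheme (struct pcpu_counter,
    update_counters() and snapshot_counters() are invented for this demo, not
    the kernel API): because the writer is always the owning cpu, it only
    brackets its update with a seqcount-style odd/even sequence, and a reader
    retries until it observes an even, unchanged sequence.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical per-cpu counter slot: a sequence number plus counts. */
    struct pcpu_counter {
        volatile uint32_t seq;        /* odd while the owner cpu updates */
        uint64_t bytes;
        uint64_t packets;
    };

    /* Writer side: only the owning cpu calls this, so no lock is needed. */
    static void update_counters(struct pcpu_counter *c, uint64_t len)
    {
        c->seq++;                     /* begin: sequence becomes odd */
        __sync_synchronize();
        c->bytes += len;
        c->packets += 1;
        __sync_synchronize();
        c->seq++;                     /* end: sequence is even again */
    }

    /* Reader side: retry until a consistent (even, unchanged) snapshot. */
    static void snapshot_counters(const struct pcpu_counter *c,
                                  uint64_t *bytes, uint64_t *packets)
    {
        uint32_t start;

        do {
            start = c->seq;
            __sync_synchronize();
            *bytes = c->bytes;
            *packets = c->packets;
            __sync_synchronize();
        } while ((start & 1) || c->seq != start);
    }

    int main(void)
    {
        struct pcpu_counter c = { 0 };
        uint64_t b, p;

        update_counters(&c, 1500);
        snapshot_counters(&c, &b, &p);
        printf("%llu bytes, %llu packets\n",
               (unsigned long long)b, (unsigned long long)p);
        return 0;
    }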
     

15 Mar, 2011

1 commit


09 Mar, 2011

1 commit

  • Since xt_find_match() returns ERR_PTR(xx) on error rather than NULL,
    the try_then_request_module macro won't work correctly here.
    The macro expects its first argument to evaluate to zero when the
    condition fails, but ERR_PTR(-ENOENT) is not zero. (A small
    demonstration follows this entry.)

    The correct solution is to propagate the error value back.

    Found by inspection, and compile tested only.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: Patrick McHardy

    Stephen Hemminger
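
    A small user-space demonstration of why the macro misfires (ERR_PTR,
    PTR_ERR and IS_ERR are re-implemented here only for the demo; find_match()
    is a stand-in for xt_find_match()): an ERR_PTR-encoded error is a non-NULL
    pointer, so a plain NULL test treats it as success, whereas an IS_ERR()
    check catches it and lets the error be propagated.

    #include <stdio.h>
    #include <errno.h>

    /* Simplified user-space copies of the kernel's error-pointer helpers. */
    static inline void *ERR_PTR(long error)      { return (void *)error; }
    static inline long  PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int   IS_ERR(const void *ptr)
    {
        return (unsigned long)ptr >= (unsigned long)-4095;
    }

    /* Stand-in for xt_find_match(): fails with an encoded error, never NULL. */
    static void *find_match(void)
    {
        return ERR_PTR(-ENOENT);
    }

    int main(void)
    {
        void *m = find_match();

        if (!m)                 /* the test the macro effectively relies on */
            printf("NULL check: would request the module and retry\n");
        else
            printf("NULL check: looks like success, failure goes unnoticed\n");

        if (IS_ERR(m))          /* the check that actually works */
            printf("IS_ERR check: propagate error %ld\n", PTR_ERR(m));
        return 0;
    }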
     

20 Jan, 2011

1 commit


17 Jan, 2011

1 commit


13 Jan, 2011

1 commit

  • One iptables invocation with 135000 rules takes 35 seconds of cpu time
    on a recent server, using a 32bit distro and a 64bit kernel.

    We eventually trigger the NMI/RCU watchdog:

    INFO: rcu_sched_state detected stall on CPU 3 (t=6000 jiffies)

    COMPAT mode has quadratic behavior and consumes 16 bytes of memory per
    rule.

    Switch the xt_compat algorithms to use an array instead of a list, and
    use a binary search to locate an offset in the sorted array. (A small
    sketch of the lookup follows this entry.)

    This halves the memory requirement (8 bytes per rule) and removes the
    quadratic behavior [ O(N*N) -> O(N*log2(N)) ].

    The iptables run time drops from 35 s to 150 ms.

    Signed-off-by: Eric Dumazet
    Signed-off-by: Pablo Neira Ayuso

    Eric Dumazet
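
    A compact user-space sketch of the new lookup (struct delta_entry and
    find_delta() are illustrative names, not the kernel's): the per-rule
    offset/delta pairs live in a sorted array, so locating an offset is a
    bsearch() instead of walking a linked list.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative entry: rule offset in the blob and its accumulated
     * 32-bit/64-bit size delta. */
    struct delta_entry {
        unsigned int offset;
        int delta;
    };

    static int cmp_offset(const void *a, const void *b)
    {
        const struct delta_entry *x = a, *y = b;

        if (x->offset < y->offset)
            return -1;
        return x->offset > y->offset;
    }

    /* O(log N) per lookup instead of O(N), so the whole conversion goes
     * from O(N*N) to O(N*log2(N)). */
    static const struct delta_entry *find_delta(const struct delta_entry *tbl,
                                                size_t n, unsigned int offset)
    {
        struct delta_entry key = { .offset = offset };

        return bsearch(&key, tbl, n, sizeof(*tbl), cmp_offset);
    }

    int main(void)
    {
        struct delta_entry tbl[] = {      /* kept sorted by offset */
            { .offset = 0,   .delta = 0  },
            { .offset = 112, .delta = 8  },
            { .offset = 224, .delta = 16 },
        };
        const struct delta_entry *e = find_delta(tbl, 3, 224);

        printf("delta at offset 224: %d\n", e ? e->delta : -1);
        return 0;
    }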
     

11 Jan, 2011

1 commit

  • Using "iptables -L" with a lot of rules have a too big BH latency.
    Jesper mentioned ~6 ms and worried of frame drops.

    Switch to a per_cpu seqlock scheme, so that taking a snapshot of
    counters doesnt need to block BH (for this cpu, but also other cpus).

    This adds two increments on seqlock sequence per ipt_do_table() call,
    its a reasonable cost for allowing "iptables -L" not block BH
    processing.

    Reported-by: Jesper Dangaard Brouer
    Signed-off-by: Eric Dumazet
    CC: Patrick McHardy
    Acked-by: Stephen Hemminger
    Acked-by: Jesper Dangaard Brouer
    Signed-off-by: Pablo Neira Ayuso

    Eric Dumazet
     

05 Oct, 2010

1 commit


31 May, 2010

2 commits

  • commit f3c5c1bfd4 (netfilter: xtables: make ip_tables reentrant)
    introduced a performance regression, because the stackptr array is
    shared by all cpus, causing cache line ping-pong (16 cpus share a
    64-byte cache line).

    Fix this using alloc_percpu(). (A small layout sketch follows this
    entry.)

    Signed-off-by: Eric Dumazet
    Acked-By: Jan Engelhardt
    Signed-off-by: Patrick McHardy

    Eric Dumazet
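
    A small user-space model of the layout difference (the real fix uses
    alloc_percpu(), which places each cpu's copy in that cpu's own per-cpu
    area; the padded array below only mimics the effect for the demo):

    #include <stdio.h>

    #define NCPUS     16
    #define CACHELINE 64

    /* Old layout: 16 unsigned ints packed into one 64-byte cache line,
     * so every cpu's hot stackptr shares a line that ping-pongs around. */
    static unsigned int stackptr_shared[NCPUS];

    /* Per-cpu-like layout: each cpu's value padded out to its own line,
     * so updates from different cpus never touch the same cache line. */
    struct percpu_slot {
        unsigned int stackptr;
        char pad[CACHELINE - sizeof(unsigned int)];
    };
    static struct percpu_slot stackptr_percpu[NCPUS]
            __attribute__((aligned(CACHELINE)));

    int main(void)
    {
        printf("shared array:  %zu bytes for %d cpus (one cache line)\n",
               sizeof(stackptr_shared), NCPUS);
        printf("per-cpu slots: %zu bytes for %d cpus (one line each)\n",
               sizeof(stackptr_percpu), NCPUS);
        return 0;
    }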
     
  • In xt_register_table(), xt_jumpstack_alloc() is called first and
    xt_replace_table() later. But xt_replace_table() calls
    xt_jumpstack_alloc() again, so the memory allocated by the earlier
    call is leaked. We can simply remove the earlier xt_jumpstack_alloc()
    call, because there are no users of newinfo between xt_jumpstack_alloc()
    and xt_replace_table().

    Signed-off-by: Xiaotian Feng
    Cc: Patrick McHardy
    Cc: "David S. Miller"
    Cc: Jan Engelhardt
    Cc: Andrew Morton
    Cc: Rusty Russell
    Cc: Alexey Dobriyan
    Acked-By: Jan Engelhardt
    Signed-off-by: Patrick McHardy

    Xiaotian Feng
     

21 Apr, 2010

1 commit

  • The jumpstack allocation needs to be moved out of the critical region.
    This corrects the following splat (a sketch of the pattern follows
    this entry):

    BUG: sleeping function called from invalid context at mm/slub.c:1705
    [ 428.295762] in_atomic(): 1, irqs_disabled(): 0, pid: 9111, name: iptables
    [ 428.295771] Pid: 9111, comm: iptables Not tainted 2.6.34-rc1 #2
    [ 428.295776] Call Trace:
    [ 428.295791] [] __might_sleep+0xe5/0xed
    [ 428.295801] [] __kmalloc+0x92/0xfc
    [ 428.295825] [] ? xt_jumpstack_alloc+0x36/0xff [x_tables]

    Signed-off-by: Jan Engelhardt
    Signed-off-by: Patrick McHardy

    Jan Engelhardt
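
    A kernel-style sketch of the general pattern (the function and lock names
    here are made up for illustration; this is not the actual x_tables patch):
    kmalloc(GFP_KERNEL) may sleep, so the allocation has to happen before the
    critical region, not inside it.

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(table_lock);   /* hypothetical lock guarding the table */

    static int replace_table(size_t stack_size)
    {
            void *jumpstack;

            /* Allocate outside the critical region: this may sleep. */
            jumpstack = kmalloc(stack_size, GFP_KERNEL);
            if (!jumpstack)
                    return -ENOMEM;

            spin_lock_bh(&table_lock);
            /* ... swap in the new table and publish the preallocated
             * jumpstack; no sleeping allocations in here ... */
            spin_unlock_bh(&table_lock);

            return 0;
    }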
     

20 Apr, 2010

1 commit


19 Apr, 2010

1 commit

  • Currently, the table traverser stores return addresses in the ruleset
    itself (struct ip6t_entry->comefrom). This has a well-known drawback:
    the jumpstack is overwritten on reentry, making it necessary for
    targets to return absolute verdicts. Also, the ruleset (which might
    be heavy memory-wise) needs to be replicated for each CPU that can
    possibly invoke ip6t_do_table.

    This patch decouples the jumpstack from struct ip6t_entry and instead
    puts it into xt_table_info. Not being restricted by 'comefrom'
    anymore, we can set up a stack as needed. By default, there is room
    allocated for two entries into the traverser. (A rough data-layout
    sketch follows this entry.)

    arp_tables is not touched, though, because there are just one or two
    modules and further patches seek to collapse the table traverser
    anyhow.

    Signed-off-by: Jan Engelhardt
    Signed-off-by: Patrick McHardy

    Jan Engelhardt
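
    A rough user-space model of the new layout (struct names and fields are
    illustrative, not the exact kernel definitions): instead of every rule
    carrying a 'comefrom' return slot, the table keeps one small per-cpu stack
    of return positions, sized for a couple of nested user-chain jumps.

    #include <stdio.h>
    #include <stdlib.h>

    struct rule {
        unsigned int verdict;       /* no per-rule 'comefrom' needed anymore */
    };

    struct table_info {
        unsigned int stacksize;     /* room for a couple of nested jumps */
        struct rule ***jumpstack;   /* [cpu][depth] -> rule to return to */
        unsigned int *stackptr;     /* per-cpu stack depth */
    };

    static int table_info_init(struct table_info *ti, int ncpus,
                               unsigned int stacksize)
    {
        int cpu;

        ti->stacksize = stacksize;
        ti->jumpstack = calloc(ncpus, sizeof(*ti->jumpstack));
        ti->stackptr  = calloc(ncpus, sizeof(*ti->stackptr));
        if (!ti->jumpstack || !ti->stackptr)
            return -1;
        for (cpu = 0; cpu < ncpus; cpu++) {
            ti->jumpstack[cpu] = calloc(stacksize, sizeof(**ti->jumpstack));
            if (!ti->jumpstack[cpu])
                return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct table_info ti;

        if (table_info_init(&ti, 4, 2))   /* 4 cpus, depth 2 as in the text */
            return 1;
        printf("per-cpu jumpstacks allocated, depth %u\n", ti.stacksize);
        return 0;
    }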
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming their availability. As this
    conversion needs to touch a large number of source files, the following
    script is used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This
    step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as a bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of
    the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

25 Mar, 2010

4 commits


18 Mar, 2010

1 commit


16 Feb, 2010

1 commit

  • With 32-bit userland and a 64-bit kernel, it is unlikely but possible
    that insertion of new rules fails even though there are only about 2000
    iptables rules.

    This happens because the compat delta is stored in a short int.
    It is easily reproducible via "iptables -m limit"; after about 2050
    rules, inserting new ones fails with -ELOOP. (A small arithmetic
    sketch follows this entry.)

    Note that compat_delta included 2 bytes of padding on x86_64, so the
    structure size remains the same.

    Signed-off-by: Florian Westphal
    Signed-off-by: Patrick McHardy

    Florian Westphal
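
    A small arithmetic sketch of the failure mode (the 16-byte per-rule delta
    is an assumption for illustration; the real delta depends on the matches
    and targets used): accumulating the 32-bit/64-bit size difference in a
    signed short overflows after roughly SHRT_MAX / 16, i.e. around 2048
    rules, which matches the ~2050-rule threshold above.

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        short delta = 0;            /* old type of the accumulated compat delta */
        int rules = 0;
        const int per_rule = 16;    /* assumed per-rule 64-bit/32-bit difference */

        while (delta <= SHRT_MAX - per_rule) {
            delta += per_rule;      /* one more rule inserted */
            rules++;
        }
        printf("a signed short overflows on rule %d (SHRT_MAX = %d)\n",
               rules + 1, SHRT_MAX);
        return 0;
    }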
     

15 Feb, 2010

2 commits


11 Feb, 2010

3 commits


10 Feb, 2010

1 commit


22 Sep, 2009

1 commit

  • Sizing of memory allocations shouldn't depend on the number of physical
    pages found in a system, as that generally includes (perhaps a huge
    amount of) non-RAM pages. The amount of memory actually usable as
    storage should be used as the basis here instead.

    Some of the calculations (i.e. those not intending to use high memory)
    should likely even use (totalram_pages - totalhigh_pages).

    Signed-off-by: Jan Beulich
    Acked-by: Rusty Russell
    Acked-by: Ingo Molnar
    Cc: Dave Airlie
    Cc: Kyle McMartin
    Cc: Jeremy Fitzhardinge
    Cc: Pekka Enberg
    Cc: Hugh Dickins
    Cc: "David S. Miller"
    Cc: Patrick McHardy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     

24 Aug, 2009

1 commit


13 Jun, 2009

1 commit


08 May, 2009

1 commit


29 Apr, 2009

1 commit

  • The x_tables are organized with a table structure and per-cpu copies
    of the counters and rules. On older kernels there was a reader/writer
    lock per table, which was a performance bottleneck. In 2.6.30-rc this
    was converted to use RCU for the counters/rules, which solved the
    performance problems for do_table but made replacing rules much slower
    because of the necessary RCU grace period.

    This version uses a per-cpu set of spinlocks and counters to allow
    table processing to proceed without the cache thrashing of a global
    reader lock, and keeps the same performance for table updates. (A
    user-space model of the locking scheme follows this entry.)

    Signed-off-by: Stephen Hemminger
    Acked-by: Linus Torvalds
    Signed-off-by: David S. Miller

    Stephen Hemminger
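
    A user-space model of the locking scheme described above (simplified:
    pthread spinlocks stand in for kernel spinlocks, and recursion handling
    is omitted): the packet path only ever takes its own cpu's lock, so the
    fast path never bounces a shared cache line, while a counter reader or
    table replacer takes each cpu's lock in turn.

    #include <pthread.h>
    #include <stdio.h>

    #define NCPUS 4

    struct pcpu_state {
        pthread_spinlock_t lock;    /* taken by the owning "cpu" on the fast path */
        unsigned long packets;
    };

    static struct pcpu_state cpus[NCPUS];

    /* Fast path: a cpu only locks its own state, there is no global lock. */
    static void count_packet(int cpu)
    {
        pthread_spin_lock(&cpus[cpu].lock);
        cpus[cpu].packets++;
        pthread_spin_unlock(&cpus[cpu].lock);
    }

    /* Slow path (counter dump / table replace): visit every cpu's lock. */
    static unsigned long sum_packets(void)
    {
        unsigned long total = 0;

        for (int cpu = 0; cpu < NCPUS; cpu++) {
            pthread_spin_lock(&cpus[cpu].lock);
            total += cpus[cpu].packets;
            pthread_spin_unlock(&cpus[cpu].lock);
        }
        return total;
    }

    int main(void)
    {
        for (int cpu = 0; cpu < NCPUS; cpu++)
            pthread_spin_init(&cpus[cpu].lock, PTHREAD_PROCESS_PRIVATE);
        count_packet(0);
        count_packet(1);
        printf("total packets: %lu\n", sum_packets());
        return 0;
    }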
     

25 Mar, 2009

1 commit


20 Feb, 2009

1 commit

  • The reader/writer lock in ip_tables is acquired in the critical path of
    processing packets and is one of the reasons just loading iptables can
    cause a 20% performance loss. The rwlock serves two functions:

    1) It prevents changes to table state (xt_replace) while the table is
    in use. This is now handled by doing RCU on the xt_table. When the
    table is replaced, the new table(s) are put in place and the old
    table(s) are freed after an RCU grace period. (A kernel-style sketch
    of this part follows this entry.)

    2) It provides synchronization when accessing the counter values.
    This is now handled by swapping in new table_info entries for each cpu,
    then summing the old values and putting the result back onto one
    cpu. On a busy system this may cause sampling to occur at different
    times on each cpu, but no packet/byte counts are lost in the process.

    Signed-off-by: Stephen Hemminger

    Successfully tested on my dual quad core machine too, but iptables only
    (no ipv6 here).
    BTW, my new "tbench 8" result is 2450 MB/s (it was 2150 MB/s not so
    long ago).

    Acked-by: Eric Dumazet
    Signed-off-by: Patrick McHardy

    Stephen Hemminger
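
    A kernel-style sketch of function 1) above (struct and function names are
    illustrative, not the x_tables symbols): readers dereference the table
    under rcu_read_lock(), and the replace path publishes the new table with
    rcu_assign_pointer() and frees the old one only after synchronize_rcu(),
    which is the grace-period wait that later made rule replacement slow.

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct my_table {
            int dummy;              /* rules, counters, ... */
    };

    static struct my_table __rcu *active_table;

    /* Packet path: read-side critical section, never blocks the replacer. */
    static void process_packet(void)
    {
            struct my_table *t;

            rcu_read_lock();
            t = rcu_dereference(active_table);
            /* ... traverse t ... */
            rcu_read_unlock();
    }

    /* Replace path: publish the new table, wait a grace period, free the old. */
    static void replace_table(struct my_table *newt)
    {
            struct my_table *old;

            old = rcu_dereference_protected(active_table, 1);
            rcu_assign_pointer(active_table, newt);
            synchronize_rcu();
            kfree(old);
    }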
     

18 Feb, 2009

1 commit


13 Jan, 2009

1 commit

  • Commit 55b69e91 (netfilter: implement NFPROTO_UNSPEC as a wildcard
    for extensions) broke revision probing for matches and targets that
    are registered with NFPROTO_UNSPEC.

    Fix by continuing the search on the NFPROTO_UNSPEC list if nothing
    is found on the af-specific lists. (A small sketch of the fallback
    follows this entry.)

    Signed-off-by: Patrick McHardy
    Signed-off-by: David S. Miller

    Patrick McHardy
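
    A small user-space sketch of that fallback (the registry, family constants
    and find_match() here are invented for the demo): search the family-specific
    entries first and, only if nothing matches, continue on the wildcard
    (NFPROTO_UNSPEC-like) entries.

    #include <stdio.h>
    #include <stddef.h>
    #include <string.h>

    enum { FAM_INET, FAM_UNSPEC };          /* demo families */

    struct match { const char *name; int family; };

    static const struct match registry[] = {
        { "limit", FAM_INET   },
        { "mark",  FAM_UNSPEC },            /* registered family-independent */
    };

    #define NMATCHES (sizeof(registry) / sizeof(registry[0]))

    static const struct match *find_match(const char *name, int family)
    {
        size_t i;

        /* First pass: the family-specific entries... */
        for (i = 0; i < NMATCHES; i++)
            if (registry[i].family == family && !strcmp(registry[i].name, name))
                return &registry[i];
        /* ...then fall back to the wildcard list. */
        for (i = 0; i < NMATCHES; i++)
            if (registry[i].family == FAM_UNSPEC && !strcmp(registry[i].name, name))
                return &registry[i];
        return NULL;
    }

    int main(void)
    {
        const struct match *m = find_match("mark", FAM_INET);

        printf("mark %s\n", m ? "found via the UNSPEC fallback" : "not found");
        return 0;
    }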
     

08 Oct, 2008

1 commit