15 Mar, 2011

4 commits


14 Feb, 2011

1 commit


12 Feb, 2011

1 commit

  • The patch below introduces an early termination of the loop that is
    counting matches. It terminates once the counter has exceeded the
    threshold provided by the user. There's no point in continuing the loop
    afterwards and looking at other entries.

    It plays together with the following code further below:

    return (connections > info->limit) ^ info->inverse;

    where connections is the resulting connection count, which is the
    matches variable from the loop. So once

    matches = info->limit + 1
    i.e.  matches > info->limit
    i.e.  matches > threshold

    we can terminate the loop (a small sketch of the early exit follows
    this entry).

    Signed-off-by: Stefan Berger
    Signed-off-by: Patrick McHardy

    Stefan Berger
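
    A minimal sketch of the early exit described above, using made-up
    names (the real code walks the conntrack hash list; only the shape of
    the check is meant to carry over):

        /* Hedged sketch, not the actual xt_connlimit loop: once the count
         * exceeds the limit, the later "connections > limit" comparison
         * can no longer change, so scanning further entries is pointless. */
        static unsigned int count_matches(const unsigned int *entries,
                                          unsigned int n, unsigned int key,
                                          unsigned int limit)
        {
                unsigned int i, matches = 0;

                for (i = 0; i < n; i++) {
                        if (entries[i] != key)
                                continue;
                        if (++matches > limit)  /* matches == limit + 1 */
                                break;          /* early termination */
                }
                return matches;
        }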
     

26 Jan, 2011

1 commit

  • xt_connlimit normally records the "original" tuples in a hashlist
    (such as "1.2.3.4 -> 5.6.7.8"), and looks in this list for iph->daddr
    when counting.

    When the user however uses DNAT in PREROUTING, looking for
    iph->daddr -- which is now 192.168.9.10 -- will not match. Thus in
    daddr mode, we need to record the reverse direction tuple
    ("192.168.9.10 -> 1.2.3.4") instead. In the reverse tuple, the dst
    addr is on the src side, which is convenient, as count_them still uses
    &conn->tuple.src.u3.

    Signed-off-by: Jan Engelhardt

    Jan Engelhardt
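
    A hedged illustration of the tuple choice described in the entry above
    (made-up struct and helper names, not the kernel's): a tracked
    connection keeps one tuple per direction, and daddr mode records the
    reply-direction one, so the real (possibly DNATed) destination address
    ends up on the src side of the stored tuple, where the counting code
    already looks.

        enum dir { DIR_ORIGINAL, DIR_REPLY };

        struct tuple {
                unsigned int src;
                unsigned int dst;
        };

        struct tracked_conn {
                struct tuple tuple[2];          /* indexed by enum dir */
        };

        /* pick the direction whose tuple gets stored in the hashlist */
        static const struct tuple *
        tuple_to_record(const struct tracked_conn *ct, int daddr_mode)
        {
                return &ct->tuple[daddr_mode ? DIR_REPLY : DIR_ORIGINAL];
        }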
     

20 Jan, 2011

1 commit

  • This adds destination address-based selection. The old "inverse"
    member is overloaded (memory-wise) with a new "flags" variable,
    similar to how J.Park did it with xt_string rev 1. Since revision 0
    userspace only sets flag 0x1, no great changes are made to explicitly
    test for different revisions.

    Signed-off-by: Jan Engelhardt

    Jan Engelhardt
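
    A hedged sketch of the layout trick described above (illustrative
    field and flag names, not the exact uapi structures): revision 1
    reuses the storage of the old boolean as a flags word, with bit 0x1
    keeping the revision-0 "inverse" meaning and a new bit selecting
    daddr mode.

        #define FLAG_INVERT 0x1        /* same meaning as the old "inverse" */
        #define FLAG_DADDR  0x2        /* new: match on destination address */

        struct info_v0 {               /* revision 0 */
                unsigned int limit;
                unsigned int inverse;  /* 0 or 1 */
        };

        struct info_v1 {               /* revision 1: same size, same offsets */
                unsigned int limit;
                unsigned int flags;    /* occupies the former "inverse" slot */
        };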
     

18 Jan, 2011

1 commit


12 May, 2010

3 commits


20 Apr, 2010

1 commit


30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h, which
    in turn includes gfp.h, making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming their availability (a small
    illustration of this change follows this entry). As this conversion
    needs to touch a large number of source files, the following script
    was used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. gfp.h if only gfp is
    used, slab.h if slab is used.

    * When the script inserts a new include, it looks at the include
    blocks and tries to place the new include so that its order conforms
    to its surroundings. It is put in the include block which contains
    core kernel includes, in the same order as the rest are ordered -
    alphabetical, Christmas tree, reverse Christmas tree, or at the end
    if there doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints an
    error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some files didn't need the
    inclusion, some needed a manual addition, and for others adding it to
    an implementation .h or the embedding .c file was more appropriate.
    This step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed,
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs, requiring slab.h to be added manually.

    5. The script was run on all .h files, but without automatically
    editing them, as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored, as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on the arch to make
    things build (like ipr on powerpc/64, which failed due to missing
    writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that they could be applied
    as a separate patch and serve as a bisection point.

    Given that I had only a couple of failures from the tests on step 6,
    I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the
    arch headers, which should be easily discoverable on most builds of
    the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
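
    A generic before/after illustration of the include change this
    conversion makes (not a file from the patch): a .c file that uses the
    slab API now names slab.h itself instead of picking it up transitively
    through percpu.h.

        /* Illustrative example only: a slab user includes slab.h directly. */
        #include <linux/slab.h>         /* kmalloc, kfree */

        static void *make_buffer(size_t len)
        {
                return kmalloc(len, GFP_KERNEL);  /* no longer via percpu.h */
        }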
     

25 Mar, 2010

3 commits

  • When extended status codes are available, such as ENOMEM on failed
    allocations, or from subsequent functions (e.g. nf_ct_get_l3proto),
    passing them up to userspace seems like a better idea than always
    returning EINVAL (a small sketch follows this entry).

    Signed-off-by: Jan Engelhardt

    Jan Engelhardt
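
    A hedged sketch of the kind of checkentry this enables (the "foo"
    match and its private struct are made up, not taken from the patch):

        #include <linux/errno.h>
        #include <linux/netfilter/x_tables.h>
        #include <linux/slab.h>

        struct foo_priv { unsigned int limit; };   /* made-up private data */

        /* report the specific failure rather than a blanket EINVAL */
        static int foo_mt_check(const struct xt_mtchk_param *par)
        {
                struct foo_priv *priv;

                priv = kmalloc(sizeof(*priv), GFP_KERNEL);
                if (priv == NULL)
                        return -ENOMEM;       /* userspace now sees ENOMEM */

                /* ... stash priv in the match's private data ... */
                return 0;
        }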
     
  • The following semantic patch does part of the transformation:
    //
    @ rule1 @
    struct xt_match ops;
    identifier check;
    @@
    ops.checkentry = check;

    @@
    identifier rule1.check;
    @@
    check(...) { <...
    -return true;
    +return 0;
    ...> }

    @@
    identifier rule1.check;
    @@
    check(...) { <...
    -return false;
    +return -EINVAL;
    ...> }
    //

    Signed-off-by: Jan Engelhardt

    Jan Engelhardt
     
  • Restore function signatures from bool to int so that we can report
    memory allocation failures or similar using -ENOMEM rather than
    always having to pass -EINVAL back.

    This semantic patch may not be too precise (checking for functions
    that use xt_mtchk_param rather than functions referenced by
    xt_match.checkentry), but reviewed, it produced the intended result.

    //
    @@
    type bool;
    identifier check, par;
    @@
    -bool check
    +int check
    (struct xt_mtchk_param *par) { ... }
    //

    Signed-off-by: Jan Engelhardt

    Jan Engelhardt
     

18 Mar, 2010

1 commit


17 Mar, 2010

1 commit


16 Feb, 2010

1 commit


18 Jan, 2010

1 commit


04 Jan, 2010

1 commit


07 Nov, 2009

1 commit


26 Mar, 2009

1 commit

  • Use the "hlist_nulls" infrastructure we added in 2.6.29 for the
    RCUification of UDP & TCP.

    This permits an easy conversion from call_rcu()-based hash lists to
    SLAB_DESTROY_BY_RCU ones.

    Avoiding the call_rcu() delay at nf_conn freeing time has numerous
    gains:

    - It doesn't fill the RCU queues (up to 10000 elements per cpu). This
    reduces the possibility of OOM (if queued elements are not taken into
    account) and reduces latency problems when the RCU queue size hits its
    high limit and triggers emergency mode.

    - It allows fast reuse of just-freed elements, permitting better use
    of the CPU cache.

    - We delete rcu_head from "struct nf_conn", shrinking the size of this
    structure by 8 or 16 bytes.

    This patch only takes care of "struct nf_conn". call_rcu() is still
    used for less critical conntrack parts that may be converted later if
    necessary. (A sketch of the lookup pattern this enables follows this
    entry.)

    Signed-off-by: Eric Dumazet
    Signed-off-by: Patrick McHardy

    Eric Dumazet
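
    A hedged sketch of the lookup discipline SLAB_DESTROY_BY_RCU imposes
    (made-up object type and helpers; the hlist_nulls iteration macro and
    get_nulls_value() are the real kernel ones): because a freed object
    may be reused immediately instead of waiting for a grace period, the
    reader takes a reference, re-checks the key, and restarts if the nulls
    end marker shows it drifted onto another chain.

        #include <linux/rculist_nulls.h>
        #include <linux/atomic.h>

        struct obj {
                struct hlist_nulls_node hnode;
                atomic_t refcnt;
                unsigned int key;
        };

        static void obj_put(struct obj *obj);  /* made-up: drop the reference */

        /* caller holds rcu_read_lock() */
        static struct obj *lookup(struct hlist_nulls_head *hash,
                                  unsigned int slot, unsigned int key)
        {
                struct obj *obj;
                struct hlist_nulls_node *pos;

        begin:
                hlist_nulls_for_each_entry_rcu(obj, pos, &hash[slot], hnode) {
                        if (obj->key != key)
                                continue;
                        if (!atomic_inc_not_zero(&obj->refcnt))
                                continue;             /* being freed, skip */
                        if (obj->key != key) {        /* reused meanwhile? */
                                obj_put(obj);
                                goto begin;
                        }
                        return obj;
                }
                /* chains can be walked across under SLAB_DESTROY_BY_RCU;
                 * the nulls end marker says whether a restart is needed */
                if (get_nulls_value(pos) != slot)
                        goto begin;
                return NULL;
        }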
     

08 Oct, 2008

7 commits


05 Jun, 2008

1 commit

  • In the xt_connlimit match module, the per-IP counter is decreased when
    a TCP packet goes through the chain with conntrack state TW
    (TIME_WAIT). That covers the usual case where server and client close
    the socket with FIN packets. But when the client/server closes the
    socket with an RST packet (e.g. using SO_LINGER), the counter for this
    connection still exists. The following patch, based on linux-2.6.25.4,
    fixes this (a sketch of the state check follows this entry).

    Signed-off-by: Dong Wei
    Acked-by: Jan Engelhardt
    Signed-off-by: Patrick McHardy
    Signed-off-by: David S. Miller

    Dong Wei
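
    A hedged sketch of the state test involved (the helper name is
    illustrative; the constants are the kernel's conntrack TCP states): a
    TCP conntrack entry is treated as finished both in TIME_WAIT (FIN
    teardown) and in CLOSE (RST teardown), so RST-closed connections stop
    being counted as well.

        #include <linux/in.h>
        #include <net/netfilter/nf_conntrack.h>

        static bool conn_already_closed(const struct nf_conn *conn)
        {
                if (nf_ct_protonum(conn) != IPPROTO_TCP)
                        return false;
                return conn->proto.tcp.state == TCP_CONNTRACK_TIME_WAIT ||
                       conn->proto.tcp.state == TCP_CONNTRACK_CLOSE;
        }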
     

14 Apr, 2008

2 commits


01 Feb, 2008

2 commits

  • Ignoring specific entries in __nf_conntrack_find() is only needed by NAT
    for nf_conntrack_tuple_taken(). Remove it from __nf_conntrack_find()
    and make nf_conntrack_tuple_taken() search the hash itself.

    Saves 54 bytes of text in the hotpath on x86_64:

    __nf_conntrack_find | -54 # 321 -> 267, # inlines: 3 -> 2, size inlines: 181 -> 127
    nf_conntrack_tuple_taken | +305 # 15 -> 320, lexblocks: 0 -> 3, # inlines: 0 -> 3, size inlines: 0 -> 181
    nf_conntrack_find_get | -2 # 90 -> 88
    3 functions changed, 305 bytes added, 56 bytes removed, diff: +249

    Signed-off-by: Patrick McHardy
    Signed-off-by: David S. Miller

    Patrick McHardy
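
    A generic, hedged sketch of the shape of this refactoring (made-up
    types and names, not the conntrack code): the "ignore this entry" test
    moves out of the shared lookup, which every caller pays for, and into
    the single caller that actually needs it.

        #include <stddef.h>

        struct entry { unsigned int key; struct entry *next; };

        /* shared lookup: no longer carries an "ignore" parameter */
        static struct entry *find(struct entry *head, unsigned int key)
        {
                struct entry *e;

                for (e = head; e != NULL; e = e->next)
                        if (e->key == key)
                                return e;
                return NULL;
        }

        /* the one caller that needs the exclusion walks the chain itself */
        static int tuple_taken(struct entry *head, unsigned int key,
                               const struct entry *ignore)
        {
                struct entry *e;

                for (e = head; e != NULL; e = e->next)
                        if (e->key == key && e != ignore)
                                return 1;
                return 0;
        }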
     
  • Signed-off-by: Patrick McHardy
    Signed-off-by: David S. Miller

    Patrick McHardy
     

29 Jan, 2008

4 commits