02 Feb, 2009

1 commit


22 Jan, 2009

1 commit

  • With a simple extension to the binding mechanism that allows binding more
    than 64k sockets (or a smaller amount, depending on sysctl parameters),
    we have to traverse the whole bind hash table to find an empty bucket.
    While that is not a problem for, say, 32k connections, bind() completion
    time grows exponentially (since after each successful binding we have to
    traverse one more bucket to find an empty one), even if we start each
    time from a random offset inside the hash table.

    So, when the hash table is full and we want to add another socket, we
    have to traverse the whole table no matter what, so effectively this is
    the worst-case performance and it is constant.

    The attached picture shows bind() time as a function of the number of
    already bound sockets.

    The green area corresponds to the usual bind-to-port-zero process, which
    turns on kernel port selection as described above. The red area is the
    bind process when the number of reuse-bound sockets is not limited to
    64k (or by the sysctl parameters); it shows the same exponential growth
    (hidden by the green area) until the number of ports reaches the sysctl
    limit.

    At this point the bind hash table has exactly one reuse-enabled socket
    per bucket, but they may have different addresses. The kernel selects
    the first port to try at random, so at the beginning bind takes roughly
    constant time, but over time the number of ports to check after the
    random start grows. That growth is exponential, but because of the
    random start not every port selection necessarily takes longer than the
    previous one. So we have to consider the whole area below the curve in
    the graph (zooming in would show many different times plotted there);
    one area can hide another.

    The blue area corresponds to the port selection optimization.

    The design approach is rather simple: the hash table now maintains an
    (imprecise and racily updated) count of currently bound sockets, and
    when that count becomes greater than a predefined value (I use the
    maximum port range defined by the sysctls), we stop traversing the whole
    bind hash table and just stop at the first matching bucket after the
    random start. The above limit roughly corresponds to the case when the
    bind hash table is full and we have turned on the mechanism that allows
    binding more reuse-enabled sockets, so it does not change the behaviour
    of other sockets.
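
    A minimal sketch of the selection logic described above, with invented
    names (bind_hash, pick_bucket, is_empty and is_reusable are illustrative,
    not the kernel's identifiers):

    struct bind_hash {
            unsigned int nbuckets;
            unsigned int bound;       /* imprecise, racily updated count    */
            unsigned int port_limit;  /* roughly the sysctl port range size */
    };

    static unsigned int pick_bucket(struct bind_hash *h, unsigned int start,
                                    int (*is_empty)(struct bind_hash *, unsigned int),
                                    int (*is_reusable)(struct bind_hash *, unsigned int))
    {
            /* When the table is effectively full, a full scan is the constant
             * worst case anyway: accept the first reusable bucket instead of
             * insisting on an empty one. */
            int relaxed = h->bound > h->port_limit;
            unsigned int tries;

            for (tries = 0; tries < h->nbuckets; tries++) {
                    unsigned int i = (start + tries) % h->nbuckets;

                    if (is_empty(h, i) || (relaxed && is_reusable(h, i)))
                            return i;
            }
            return h->nbuckets;       /* no usable bucket found */
    }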

    Signed-off-by: Evgeniy Polyakov
    Tested-by: Denys Fedoryschenko
    Signed-off-by: David S. Miller

    Evgeniy Polyakov
     

24 Nov, 2008

2 commits

  • The rule for calling sock_prot_inuse_add() is that BHs must
    be disabled. Some new calls were added where this was not
    true, and this triggers warnings as reported by Ilpo.

    Fix this by adding explicit BH disabling around those call sites,
    or by moving the sock_prot_inuse_add() call inside an existing
    BH-disabled section.
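
    For illustration, the fix amounts to wrapping such a call site in
    local_bh_disable()/local_bh_enable(); the surrounding function and the
    exact sock_prot_inuse_add() arguments below are hypothetical:

    /* Hypothetical call site; the BH-disable pattern is the point. */
    static void example_unhash(struct sock *sk)
    {
            local_bh_disable();
            /* ... remove sk from its hash table ... */
            sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
            local_bh_enable();
    }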

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • This is the last step towards full RCU lookups
    in __inet_lookup(): after the established/timewait tables, we
    add RCU lookups to the listening hash table.

    The only trick here is that a socket of a given type (TCP ipv4,
    TCP ipv6, ...) can now move between two different tables
    (established and listening) during an RCU grace period, so we
    must use different 'nulls' end-of-chain values for the two tables.

    We define a large value:

    #define LISTENING_NULLS_BASE (1U << 29)

    so that slots in the listening table are guaranteed to have different
    end-of-chain values than slots in the established table. A reader can
    still detect that it finished its lookup in the right chain.
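
    A sketch of how a lockless reader can use the nulls value to confirm it
    finished in the chain it started in (illustrative only; match() stands
    for the usual address/port comparison and 'slot' for the bucket index,
    while sk_nulls_for_each_rcu() and get_nulls_value() are the existing
    nulls-list helpers):

    static struct sock *lookup_listener_sketch(struct inet_listen_hashbucket *ilb,
                                               unsigned int slot, __be32 daddr,
                                               unsigned short hnum)
    {
            struct hlist_nulls_node *node;
            struct sock *sk;

    begin:
            sk_nulls_for_each_rcu(sk, node, &ilb->head) {
                    if (match(sk, daddr, hnum))
                            return sk;
            }
            /* Each chain ends in a per-table, per-slot nulls value.  If the
             * value we reached is not the one this listening slot was
             * initialised with, the socket we followed moved to another
             * chain under us: restart the walk. */
            if (get_nulls_value(node) != slot + LISTENING_NULLS_BASE)
                    goto begin;
            return NULL;
    }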

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

21 Nov, 2008

1 commit

  • Now that TCP & DCCP use RCU lookups, we can convert the ehash rwlocks to spinlocks.

    /proc/net/tcp and other seq_file 'readers' can safely be converted to 'writers'.

    This should speed up writers, since spin_lock()/spin_unlock()
    use only one atomic operation instead of the two needed for
    write_lock()/write_unlock().

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

20 Nov, 2008

1 commit

  • This patch prepares the RCU migration of the listening_hash table for
    the TCP/DCCP protocols.

    The listening_hash table being small (32 slots per protocol), we add
    a spinlock for each slot instead of a single rwlock for the whole table.

    This should reduce hold times for readers and improve concurrency for
    writers.
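
    As a sketch, the per-slot layout is roughly the following (the
    authoritative definition lives in include/net/inet_hashtables.h):

    struct inet_listen_hashbucket {
            spinlock_t        lock;   /* protects only this slot        */
            struct hlist_head head;   /* listening sockets in this slot */
    };

    /* In struct inet_hashinfo: one small per-protocol array instead of a
     * single rwlock for the whole table. */
    struct inet_listen_hashbucket listening_hash[INET_LHTABLE_SIZE];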

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

17 Nov, 2008

1 commit

  • RCU was added to UDP lookups, using a fast infrastructure:
    - the sockets kmem_cache uses SLAB_DESTROY_BY_RCU and doesn't pay the
      price of call_rcu() at freeing time.
    - hlist_nulls allows the lookup to use only a few memory barriers.

    This patch uses the same infrastructure for TCP/DCCP established
    and timewait sockets.

    Thanks to SLAB_DESTROY_BY_RCU, there is no slowdown for applications
    using short-lived TCP connections. A follow-up patch, converting
    rwlocks to spinlocks, will even speed up this case.

    __inet_lookup_established() is pretty fast now that we don't have to
    dirty a contended cache line (read_lock/read_unlock).

    Only the established and timewait hash tables are converted to RCU
    (the bind table and listen table still use traditional locking).
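
    With SLAB_DESTROY_BY_RCU a freed socket's memory can be reused for a new
    socket within the same grace period, so a reader must take a reference
    and then re-check the keys. A simplified sketch of that discipline
    (find_candidate() and keys_match() are placeholders for the real chain
    walk and comparison):

    rcu_read_lock();
    sk = find_candidate(head, saddr, sport, daddr, hnum);
    if (sk) {
            if (!atomic_inc_not_zero(&sk->sk_refcnt)) {
                    sk = NULL;              /* object was being freed       */
            } else if (!keys_match(sk, saddr, sport, daddr, hnum)) {
                    sock_put(sk);           /* slot recycled for another sk */
                    sk = NULL;
            }
    }
    rcu_read_unlock();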

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

12 Nov, 2008

1 commit


26 Jul, 2008

1 commit

  • Removes a legacy reinvent-the-wheel type thing. The generic
    machinery integrates much better with automated debugging aids
    such as kerneloops.org (and others), and is unambiguous thanks to
    better naming. Non-intuitively, BUG_TRAP() is actually equivalent to
    WARN_ON() rather than BUG_ON(), though some call sites might later be
    promoted to BUG_ON(); I left that for the future.

    I could also make at least one BUILD_BUG_ON conversion.
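
    For reference, the conversion inverts the condition, since BUG_TRAP()
    warned when its argument was false; an illustrative (not verbatim)
    example:

    /* Before: warn if the assumption does not hold. */
    BUG_TRAP(req->sk == NULL);

    /* After: WARN_ON() fires when its argument is true, so the test flips. */
    WARN_ON(req->sk != NULL);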

    Signed-off-by: Ilpo Järvinen
    Signed-off-by: David S. Miller

    Ilpo Järvinen
     

17 Jul, 2008

1 commit


17 Jun, 2008

3 commits


18 Apr, 2008

1 commit


16 Apr, 2008

1 commit


01 Apr, 2008

1 commit


26 Mar, 2008

2 commits


23 Mar, 2008

1 commit

  • Inspired by commit ab1e0a13 ([SOCK] proto: Add hashinfo member to
    struct proto) from Arnaldo, I made a similar change for the UDP/UDP-Lite
    IPv4 and IPv6 protocols.

    The result is not that exciting, but it removes some levels of
    indirection in udpxxx_get_port and saves some space in code and text.

    The first step is to put the existing hashinfo and the new udp_hash into
    a union on struct proto and give this union a name, since the future
    initialization of tcpxxx_prot, dccp_vx_protinfo and udpxxx_protinfo
    would otherwise make gcc warn about the inability to initialize an
    anonymous member this way.
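
    As a sketch, the change to struct proto looks roughly like this (member
    names hedged; see the actual patch for the authoritative definition):

    struct proto {
            /* ... */
            union {
                    struct inet_hashinfo *hashinfo;   /* TCP, DCCP     */
                    struct hlist_head    *udp_hash;   /* UDP, UDP-Lite */
            } h;    /* named, so designated initializers keep working  */
            /* ... */
    };

    /* e.g. in udp_prot:   .h.udp_hash = udp_hash,  */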

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Pavel Emelyanov
     

14 Feb, 2008

2 commits


05 Feb, 2008

1 commit


03 Feb, 2008

1 commit

  • This way we can remove the TCP and DCCP specific versions of

    sk->sk_prot->get_port: both v4 and v6 use inet_csk_get_port
    sk->sk_prot->hash: inet_hash is used directly; only v6 needs
    a specific version to deal with mapped sockets
    sk->sk_prot->unhash: both v4 and v6 use inet_unhash directly

    struct inet_connection_sock_af_ops also gets a new member, bind_conflict, so
    that inet_csk_get_port can find the per-family routine.

    Now only the lookup routines receive a struct inet_hashinfo as a parameter.

    With this we further reuse code, reducing the differences among the INET
    transport protocols.

    Eventually work has to be done on UDP and SCTP to make them share this
    infrastructure and, as a bonus, get inet_diag interfaces so that iproute
    can be used with these protocols.
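
    Illustratively, the per-protocol proto structures then point at the
    shared routines instead of protocol-specific wrappers (a sketch, not the
    complete initializer):

    struct proto tcp_prot = {
            /* ... */
            .get_port = inet_csk_get_port,
            .hash     = inet_hash,
            .unhash   = inet_unhash,
            /* ... */
    };

    /* The per-family conflict check is reached through the new
     * inet_connection_sock_af_ops member, e.g. bind_conflict pointing to
     * inet_csk_bind_conflict for IPv4. */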

    net-2.6/net/ipv4/inet_hashtables.c:
    struct proto | +8
    struct inet_connection_sock_af_ops | +8
    2 structs changed
    __inet_hash_nolisten | +18
    __inet_hash | -210
    inet_put_port | +8
    inet_bind_bucket_create | +1
    __inet_hash_connect | -8
    5 functions changed, 27 bytes added, 218 bytes removed, diff: -191

    net-2.6/net/core/sock.c:
    proto_seq_show | +3
    1 function changed, 3 bytes added, diff: +3

    net-2.6/net/ipv4/inet_connection_sock.c:
    inet_csk_get_port | +15
    1 function changed, 15 bytes added, diff: +15

    net-2.6/net/ipv4/tcp.c:
    tcp_set_state | -7
    1 function changed, 7 bytes removed, diff: -7

    net-2.6/net/ipv4/tcp_ipv4.c:
    tcp_v4_get_port | -31
    tcp_v4_hash | -48
    tcp_v4_destroy_sock | -7
    tcp_v4_syn_recv_sock | -2
    tcp_unhash | -179
    5 functions changed, 267 bytes removed, diff: -267

    net-2.6/net/ipv6/inet6_hashtables.c:
    __inet6_hash | +8
    1 function changed, 8 bytes added, diff: +8

    net-2.6/net/ipv4/inet_hashtables.c:
    inet_unhash | +190
    inet_hash | +242
    2 functions changed, 432 bytes added, diff: +432

    vmlinux:
    16 functions changed, 485 bytes added, 492 bytes removed, diff: -7

    /home/acme/git/net-2.6/net/ipv6/tcp_ipv6.c:
    tcp_v6_get_port | -31
    tcp_v6_hash | -7
    tcp_v6_syn_recv_sock | -9
    3 functions changed, 47 bytes removed, diff: -47

    /home/acme/git/net-2.6/net/dccp/proto.c:
    dccp_destroy_sock | -7
    dccp_unhash | -179
    dccp_hash | -49
    dccp_set_state | -7
    dccp_done | +1
    5 functions changed, 1 bytes added, 242 bytes removed, diff: -241

    /home/acme/git/net-2.6/net/dccp/ipv4.c:
    dccp_v4_get_port | -31
    dccp_v4_request_recv_sock | -2
    2 functions changed, 33 bytes removed, diff: -33

    /home/acme/git/net-2.6/net/dccp/ipv6.c:
    dccp_v6_get_port | -31
    dccp_v6_hash | -7
    dccp_v6_request_recv_sock | +5
    3 functions changed, 5 bytes added, 38 bytes removed, diff: -33

    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: David S. Miller

    Arnaldo Carvalho de Melo
     

01 Feb, 2008

3 commits

  • Add a net argument to inet_lookup and propagate it further
    into the lookup calls. Plus tune __inet_check_established.

    The dccp and inet_diag code, which uses these lookup functions,
    passes init_net into them.

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Pavel Emelyanov
     
  • This tags the inet_bind_bucket struct with a net pointer,
    initializes it during creation and filters on it
    during lookup.

    A better hashfn that takes the net into account is left
    for the future; currently all bind buckets with the same
    port land in one hash chain.
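
    The lookup filter boils down to comparing both the namespace and the
    port while walking a chain, roughly as below (ib_net() is the accessor
    for the stored net pointer; the surrounding variables come from the real
    function):

    struct inet_bind_bucket *tb;

    inet_bind_bucket_for_each(tb, node, &head->chain) {
            if (ib_net(tb) == net && tb->port == snum)
                    goto tb_found;      /* same namespace, same port */
    }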

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Pavel Emelyanov
     
  • These two functions are the same except for what they call as
    "check_established" and "hash" for a socket; a sketch of the shared
    helper follows the size figures below.

    Consolidating them saves half a kilobyte for ipv4 and ipv6:

    add/remove: 1/0 grow/shrink: 1/4 up/down: 582/-1128 (-546)
    function old new delta
    __inet_hash_connect - 577 +577
    arp_ignore 108 113 +5
    static.hint 8 4 -4
    rt_worker_func 376 372 -4
    inet6_hash_connect 584 25 -559
    inet_hash_connect 586 25 -561
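
    A sketch of the shared helper's shape (the exact parameter list is
    reconstructed from memory and may differ from the patch):

    int __inet_hash_connect(struct inet_timewait_death_row *death_row,
                            struct sock *sk, u32 port_offset,
                            int (*check_established)(struct inet_timewait_death_row *,
                                                     struct sock *, __u16,
                                                     struct inet_timewait_sock **),
                            void (*hash)(struct sock *sk));

    /* The family-specific entry points then collapse to thin wrappers: */
    int inet_hash_connect(struct inet_timewait_death_row *death_row,
                          struct sock *sk)
    {
            return __inet_hash_connect(death_row, sk, inet_sk_port_offset(sk),
                                       __inet_check_established,
                                       __inet_hash_nolisten);
    }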

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Pavel Emelyanov
     

29 Jan, 2008

4 commits

  • 1) Cleanups (all functions are prefixed by sock_prot_inuse)

    sock_prot_inc_use(prot) -> sock_prot_inuse_add(prot,1)
    sock_prot_dec_use(prot) -> sock_prot_inuse_add(prot,-1)
    sock_prot_inuse() -> sock_prot_inuse_get()

    New functions:

    sock_prot_inuse_init() and sock_prot_inuse_free(), to abstract pcounter use.

    2) If CONFIG_PROC_FS=n, we can drop the 'inuse' member from "struct proto",
    since nobody wants to read the inuse value.

    This saves 1372 bytes on i386/SMP and some CPU cycles.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Add __acquires() and __releases() annotations to suppress some sparse
    warnings.

    Example warnings:

    net/ipv4/udp.c:1555:14: warning: context imbalance in 'udp_seq_start' -
    wrong count at exit
    net/ipv4/udp.c:1571:13: warning: context imbalance in 'udp_seq_stop' -
    unexpected unlock
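
    A representative sketch of the annotation style (not the exact hunk from
    the patch):

    static void *udp_seq_start(struct seq_file *seq, loff_t *pos)
            __acquires(udp_hash_lock)
    {
            read_lock(&udp_hash_lock);      /* released in udp_seq_stop()  */
            return *pos ? udp_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
    }

    static void udp_seq_stop(struct seq_file *seq, void *v)
            __releases(udp_hash_lock)
    {
            read_unlock(&udp_hash_lock);    /* balances udp_seq_start()    */
    }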

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • This is -700 bytes from the net/ipv4/built-in.o

    add/remove: 1/0 grow/shrink: 1/3 up/down: 340/-1040 (-700)
    function old new delta
    __inet_lookup_established - 339 +339
    tcp_sacktag_write_queue 2254 2255 +1
    tcp_v4_err 1304 973 -331
    tcp_v4_rcv 2089 1744 -345
    tcp_v4_do_rcv 826 462 -364

    The export is for the dccp module (the function is used via e.g. inet_lookup).

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Pavel Emelyanov
     
  • This one is used in quite a few places in the networking code and
    seems too big to be inlined.

    After the patch, net/ipv4/built-in.o loses ~650 bytes:
    add/remove: 2/0 grow/shrink: 0/5 up/down: 461/-1114 (-653)
    function old new delta
    __inet_hash_nolisten - 282 +282
    __inet_hash - 179 +179
    tcp_sacktag_write_queue 2255 2254 -1
    __inet_lookup_listener 284 274 -10
    tcp_v4_syn_recv_sock 755 493 -262
    tcp_v4_hash 389 35 -354
    inet_hash_connect 1086 599 -487

    This version addresses the issue pointed out by Eric: while inline,
    this function was being optimized by gcc with respect to the
    'listen_possible' argument.

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: David S. Miller

    Pavel Emelyanov
     

07 Nov, 2007

1 commit

  • As done two years ago for the IP route cache table (commit
    22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one
    lock per hash bucket for the huge TCP/DCCP hash tables.

    On a typical x86_64 platform, this saves about 2MB or 4MB of RAM, with
    little performance difference (we hit a different cache line for the
    rwlock, but the bucket cache line then has a better sharing factor
    among cpus, since we dirty it less often). For netstat or ss commands
    that want a full scan of the hash table, we perform fewer memory accesses.

    Using a 'small' table of hashed rwlocks should be more than enough to
    provide correct SMP concurrency between different buckets, without
    using too much memory. The sizing of this table depends on
    num_possible_cpus() and various CONFIG settings.

    This patch provides some locking abstraction that may ease future
    work using a different model for the TCP/DCCP tables.
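
    The idea, sketched below with invented names, is to keep a small
    power-of-two array of rwlocks and map a bucket to a lock by masking its
    hash, so many buckets share one lock:

    #define EHASH_LOCK_SZ 256       /* in practice sized from
                                     * num_possible_cpus() and CONFIG options */
    static rwlock_t ehash_locks[EHASH_LOCK_SZ];

    static inline rwlock_t *ehash_lockp(unsigned int hash)
    {
            return &ehash_locks[hash & (EHASH_LOCK_SZ - 1)];
    }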

    Signed-off-by: Eric Dumazet
    Acked-by: Arnaldo Carvalho de Melo
    Signed-off-by: David S. Miller

    Eric Dumazet
     

19 Oct, 2007

1 commit

  • This is a justifying patch for Stephen's patches. Stephen's patches
    disallow using a port range of one single port and break the meaning
    of the 'remaining' variable, which in some places means something
    different. My patch restores the sense of the 'remaining' variable: it
    should mean how many ports remain, and nothing else. My patch also
    allows using a single port (see the sketch after the summary below).

    I am sure we must be able to use such a port range; it is not
    restricted by the documentation and does not break current behaviour.

    Useful links:
    Patches posted by Stephen Hemminger
    http://marc.info/?l=linux-netdev&m=119206106218187&w=2
    http://marc.info/?l=linux-netdev&m=119206109918235&w=2

    Andrew Morton's comment
    http://marc.info/?l=linux-kernel&m=119248225007737&w=2

    1. Allows using a port range of one single port.
    2. Gives back sense of 'remaining' variable.
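
    Concretely, with an inclusive [low, high] range the port count should be
    computed so that a single-port range still works; a sketch under that
    reading of 'remaining':

    int low, high, remaining;

    inet_get_local_port_range(&low, &high);
    remaining = (high - low) + 1;   /* low == high  =>  remaining == 1 */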

    Signed-off-by: Anton Arapov
    Acked-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Anton Arapov
     

11 Oct, 2007

1 commit

  • Expansion of an original idea from Denis V. Lunev.

    Add robustness and locking to the local_port_range sysctl.
    1. Enforce that low < high when setting.
    2. Use a seqlock to ensure atomic update.

    The locking might seem like overkill, but there are
    cases where a sysadmin might want to change the value in the
    middle of a DoS attack.
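
    A sketch of the seqlock pattern this introduces (lock and variable names
    are illustrative, and the initializer macro differs between kernel
    versions):

    static seqlock_t port_range_lock = __SEQLOCK_UNLOCKED(port_range_lock);
    static int port_range[2] = { 32768, 61000 };

    static void get_port_range(int *low, int *high)
    {
            unsigned int seq;

            do {
                    seq = read_seqbegin(&port_range_lock);
                    *low  = port_range[0];
                    *high = port_range[1];
            } while (read_seqretry(&port_range_lock, seq));
    }

    static void set_port_range(int low, int high)   /* caller checks low < high */
    {
            write_seqlock(&port_range_lock);
            port_range[0] = low;
            port_range[1] = high;
            write_sequnlock(&port_range_lock);
    }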

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Stephen Hemminger
     

11 Feb, 2007

1 commit


09 Feb, 2007

1 commit

  • The ehash table layout is currently this one:

    the first half of the table is used by sockets not in TIME_WAIT state,
    the second half of it is used by sockets in TIME_WAIT state.

    This is not optimal because, for a given hash or socket, the two chain
    heads are located in separate cache lines.
    Moreover, the locks of the second half are never used.

    If, instead of this halving, we use two list heads in inet_ehash_bucket
    rather than only one, we can probably avoid one cache miss and reduce RAM
    usage, particularly if sizeof(rwlock_t) is big (various CONFIG_DEBUG_SPINLOCK,
    CONFIG_DEBUG_LOCK_ALLOC settings). So we still halve the table, but we
    keep related chains together to speed up lookups and socket state changes.

    In this patch I did not try to align struct inet_ehash_bucket, but a future
    patch could try to give this structure a convenient size (a power of two
    or a multiple of L1_CACHE_SIZE).
    I guess the rwlock will just vanish as soon as RCU is plugged into the
    ehash :) , so maybe we don't need to scratch our heads over aligning the
    bucket...

    Note: in case struct inet_ehash_bucket is not a power of two, we could
    probably change alloc_large_system_hash() (in case it uses __get_free_pages())
    to free the unused space. It currently allocates a big zone, but the last
    quarter of it could be freed. Again, this should be a temporary 'problem'.

    The patch is tested on ipv4 tcp only, but should be OK for IPv6 and DCCP.
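
    A sketch of the resulting bucket layout (member names as I recall them;
    see the patch for the authoritative definition):

    struct inet_ehash_bucket {
            rwlock_t          lock;
            struct hlist_head chain;     /* established sockets           */
            struct hlist_head twchain;   /* TIME_WAIT sockets, same hash  */
    };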

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

08 Dec, 2006

2 commits

  • Replace all uses of kmem_cache_t with struct kmem_cache.

    The patch was generated using the following script:

    #!/bin/sh
    #
    # Replace one string by another in all the kernel sources.
    #

    set -e

    for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
    quilt add $file
    sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
    mv /tmp/$$ $file
    quilt refresh
    done

    The script was run like this

    sh replace kmem_cache_t "struct kmem_cache"

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • SLAB_ATOMIC is an alias of GFP_ATOMIC

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

29 Sep, 2006

2 commits

  • inet_lookup() annotated along with helper functions (__inet_lookup(),
    __inet_lookup_established(), inet_lookup_established(),
    inet_lookup_listener(), __inet_lookup_listener() and inet_ehashfn())

    Signed-off-by: Al Viro
    Signed-off-by: David S. Miller

    Al Viro
     
  • INET_MATCH() and friends depend on an interesting set of kludges:
    * there is a pair of adjacent fields in struct inet_sock - __be16 dport
    followed by __u16 num. We want to search by the pair, so we combine the
    keys into a single 32bit value and compare it with the 32bit value read
    from &...->dport.
    * on 64bit targets we combine comparisons with a pair of adjacent __be32
    fields in the same way.

    Make sure that we don't mix those values with anything else and that the
    pairs we form them from have the correct types.
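
    A conceptual illustration of the 32bit case (not the kernel macros
    themselves, which are INET_COMBINED_PORTS/INET_MATCH; the helper below
    is invented): build a 32bit key whose in-memory layout matches the two
    adjacent fields, then compare them at once.

    struct ports {
            __be16 dport;   /* peer port, network order */
            __u16  num;     /* local port, host order   */
    };

    static inline int ports_match(const struct ports *p, __be16 dport, __u16 hnum)
    {
            __u32 key;

            memcpy(&key, &dport, 2);              /* bytes 0-1: peer port   */
            memcpy((char *)&key + 2, &hnum, 2);   /* bytes 2-3: local port  */
            return !memcmp(p, &key, 4);           /* compare both at once   */
    }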

    Signed-off-by: Al Viro
    Signed-off-by: David S. Miller

    Al Viro
     

23 Sep, 2006

1 commit

  • Right now most inet_lookup_* functions take a host-order hnum instead
    of a network-order dport because that's how it is represented
    internally.

    This means that users of these functions have to be careful about
    using the right byte order. To add to the confusion, inet_lookup takes
    a network-order dport, unlike all the other functions.

    So this patch changes all visible inet_lookup functions to take a
    dport and moves all dport->hnum conversion inside them.
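
    So the conversion happens once, inside the helper; an illustrative
    wrapper (only the established-table path is shown, and the real
    functions' argument lists differ slightly):

    static inline struct sock *inet_lookup_sketch(struct inet_hashinfo *hashinfo,
                                                  __be32 saddr, __be16 sport,
                                                  __be32 daddr, __be16 dport,
                                                  int dif)
    {
            /* callers pass the wire-format port; convert once, here */
            const unsigned short hnum = ntohs(dport);

            return __inet_lookup_established(hashinfo, saddr, sport,
                                             daddr, hnum, dif);
    }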

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu