07 Dec, 2011

1 commit

  • Since commit c5ed63d66f24 (tcp: fix three tcp sysctls tuning),
    sysctl_max_syn_backlog is determined by tcp_hashinfo->ehash_mask;
    its minimum value is 128, and it increases in proportion to the
    machine's memory.
    The original descriptions of tcp_max_syn_backlog and sysctl_max_syn_backlog
    are out of date.

    Changelog:
    V2: update description for sysctl_max_syn_backlog

    Signed-off-by: Weiping Pan
    Reviewed-by: Shan Wei
    Acked-by: Neil Horman
    Signed-off-by: David S. Miller

    Peter Pan (潘卫平)
     

09 Dec, 2010

1 commit


03 Dec, 2010

1 commit

  • This will also improve handling of the ipv6 tcp socket request
    backlog when syncookies are not enabled. When the backlog
    becomes very deep, the last quarter of it is reserved for
    validated destinations. Previously only ipv4 implemented
    this logic; now ipv6 does too.

    Now we are only one step away from enabling timewait
    recycling for ipv6, and that step is simply filling in
    the implementation of tcp_v6_get_peer() and
    tcp_v6_tw_get_peer().

    Signed-off-by: David S. Miller

    David S. Miller
     

22 Nov, 2010

1 commit

  • We forgot to use __GFP_HIGHMEM in several __vmalloc() calls.

    In ceph, add the missing flag.

    In fib_trie.c, xfrm_hash.c and request_sock.c, using vzalloc() is
    cleaner and allows using HIGHMEM pages as well.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
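The pattern the commit replaces can be sketched as follows (kernel-style fragment, not standalone-runnable; `lopt` and `lopt_size` are illustrative names, not necessarily the identifiers the patch touches):

```c
/* Before: the zeroing and highmem flags are easy to forget */
lopt = __vmalloc(lopt_size, GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
		 PAGE_KERNEL);

/* After: vzalloc() returns zeroed memory and may use highmem pages */
lopt = vzalloc(lopt_size);
```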
     

26 Jul, 2008

1 commit

  • Removes a legacy reinvent-the-wheel mechanism. The generic
    machinery integrates much better with automated debugging aids
    such as kerneloops.org (and others), and is unambiguous due to
    better naming. Non-intuitively, BUG_TRAP() is actually equivalent to
    WARN_ON() rather than BUG_ON(), though some call sites could be
    promoted to BUG_ON(); I left that for the future.

    I was also able to convert at least one site to BUILD_BUG_ON().

    Signed-off-by: Ilpo Järvinen
    Signed-off-by: David S. Miller

    Ilpo Järvinen
     

29 Jan, 2008

1 commit


15 Nov, 2007

1 commit

  • The request_sock_queue's listen_opt is either vmalloc-ed or
    kmalloc-ed depending on the number of table entries. Thus it
    is expected to be handled properly on free, which is done in
    the reqsk_queue_destroy().

    However, the error path in inet_csk_listen_start() calls
    the lite version, __reqsk_queue_destroy(), which calls
    kfree() unconditionally.

    Fix this, and move __reqsk_queue_destroy() into a .c file, as
    it looks too big to be inlined.

    As David also noticed, this is an error-recovery path only,
    so no locking is required and lopt is known to be non-NULL.

    reqsk_queue_yank_listen_sk() is also now only used in
    net/core/request_sock.c, so move it there too.

    Signed-off-by: Pavel Emelyanov
    Acked-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Pavel Emelyanov
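The shape of the fix can be sketched like this (kernel-style fragment, not standalone-runnable; the helper name and the size threshold are illustrative, not the exact patch):

```c
/* Free listen_opt with the allocator that created it: large tables come
 * from vmalloc(), small ones from kmalloc(), so an unconditional kfree()
 * is wrong for the vmalloc-ed case. */
if (lopt_size > PAGE_SIZE)
	vfree(lopt);
else
	kfree(lopt);
```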
     

03 Dec, 2006

1 commit

  • We currently allocate a fixed-size hash table (TCP_SYNQ_HSIZE = 512 slots)
    for each LISTEN socket, regardless of various parameters (the listen
    backlog, for example).

    On x86_64, this means order-1 allocations (which might fail), even for
    'small' sockets expecting few connections. Conversely, a huge server
    wanting a backlog of 50000 is slowed down a bit by this fixed limit.

    This patch makes the size of the listen hash table a dynamic parameter,
    depending on:
    - the net.core.somaxconn tunable (default: 128)
    - the net.ipv4.tcp_max_syn_backlog tunable (default: 256, 1024 or 128)
    - the backlog value given by the user application (2nd parameter of listen())

    For large allocations (bigger than PAGE_SIZE), we use vmalloc() instead of
    kmalloc().

    We still limit memory allocation with the two existing tunables (somaxconn &
    tcp_max_syn_backlog), so for standard setups this patch actually reduces RAM
    usage.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

10 Apr, 2006

1 commit


27 Mar, 2006

1 commit

  • Just noticed that request_sock.[ch] contain a useless assignment of
    rskq_accept_head to itself. I assume this is a typo and the second one
    was supposed to be _tail. However, setting _tail to NULL is not
    needed, so the patch below just drops the second assignment.

    Signed-off-By: Norbert Kiesel
    Signed-off-by: Adrian Bunk
    Signed-off-by: David S. Miller

    Norbert Kiesel
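For illustration, the no-op looked roughly like this (a reconstruction from the commit message, not the verbatim source):

```c
/* Before: the inner assignment is redundant (likely meant rskq_accept_tail) */
queue->rskq_accept_head = queue->rskq_accept_head = NULL;

/* After: the redundant second assignment is dropped */
queue->rskq_accept_head = NULL;
```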
     

28 Feb, 2006

1 commit

  • In 295f7324ff8d9ea58b4d3ec93b1aaa1d80e048a9 I moved defer_accept from
    tcp_sock to request_queue and mistakenly reset it in reqsk_queue_alloc(),
    causing calls to setsockopt(TCP_DEFER_ACCEPT) to be lost after bind. The
    fix is to remove the zeroing of rskq_defer_accept from reqsk_queue_alloc().

    Thanks to Alexandra N. Kossovsky for
    reporting and testing the suggested fix.

    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: David S. Miller

    Arnaldo Carvalho de Melo
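The mechanism can be illustrated by the call ordering involved (kernel-style sketch; the line shown below is the one the fix removes):

```c
/* Application order:
 *   setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, ...);  value stored
 *   listen(fd, backlog);            -> reqsk_queue_alloc() runs here
 * ...so this zeroing inside reqsk_queue_alloc() wiped the stored value: */
queue->rskq_defer_accept = 0;   /* removed by this fix */
```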
     

30 Aug, 2005

3 commits


19 Jun, 2005

3 commits