25 Jun, 2020

1 commit


02 Oct, 2019

1 commit

  • BH must be disabled when invoking nf_conncount_gc_list() to perform
    garbage collection, otherwise a deadlock might happen.

    nf_conncount_add+0x1f/0x50 [nf_conncount]
    nft_connlimit_eval+0x4c/0xe0 [nft_connlimit]
    nft_dynset_eval+0xb5/0x100 [nf_tables]
    nft_do_chain+0xea/0x420 [nf_tables]
    ? sch_direct_xmit+0x111/0x360
    ? noqueue_init+0x10/0x10
    ? __qdisc_run+0x84/0x510
    ? tcp_packet+0x655/0x1610 [nf_conntrack]
    ? ip_finish_output2+0x1a7/0x430
    ? tcp_error+0x130/0x150 [nf_conntrack]
    ? nf_conntrack_in+0x1fc/0x4c0 [nf_conntrack]
    nft_do_chain_ipv4+0x66/0x80 [nf_tables]
    nf_hook_slow+0x44/0xc0
    ip_rcv+0xb5/0xd0
    ? ip_rcv_finish_core.isra.19+0x360/0x360
    __netif_receive_skb_one_core+0x52/0x70
    netif_receive_skb_internal+0x34/0xe0
    napi_gro_receive+0xba/0xe0
    e1000_clean_rx_irq+0x1e9/0x420 [e1000e]
    e1000e_poll+0xbe/0x290 [e1000e]
    net_rx_action+0x149/0x3b0
    __do_softirq+0xde/0x2d8
    irq_exit+0xba/0xc0
    do_IRQ+0x85/0xd0
    common_interrupt+0xf/0xf

    RIP: 0010:nf_conncount_gc_list+0x3b/0x130 [nf_conncount]
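
    The fix, as the trace suggests, is to take the per-list lock with
    bottom halves disabled. A minimal sketch (not the exact diff; the
    surrounding function body is elided):

    ```c
    /* Sketch: nf_conncount_gc_list() runs from process context, while the
     * softirq path in the trace above (nft_do_chain -> nf_conncount_add)
     * takes the same per-list lock. The _bh variants keep the softirq from
     * interrupting the critical section on the same CPU and deadlocking. */
    spin_lock_bh(&list->list_lock);    /* was: spin_lock(&list->list_lock) */
    /* ... walk the list and release dead connections ... */
    spin_unlock_bh(&list->list_lock);
    ```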

    Fixes: 2f971a8f4255 ("netfilter: nf_conncount: move all list iterations under spinlock")
    Reported-by: Laura Garcia Liebana
    Signed-off-by: Pablo Neira Ayuso

    Pablo Neira Ayuso
     

29 Dec, 2018

1 commit

  • 'lookup' is always followed by 'add'.
    Merge both and make the list-walk part of nf_conncount_add().

    This also avoids one unneeded unlock/re-lock pair.

    Extra care needs to be taken in count_tree, as we only hold the RCU
    read lock, i.e. we can only insert into an existing tree node after
    acquiring its lock and making sure it has a nonzero count.

    As a zero count should be rare, just fall back to insert_tree()
    (which acquires tree lock).

    This issue and its solution were pointed out by Shawn Bohrer
    during patch review.
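
    A hedged sketch of the merged flow (the helper names here are
    hypothetical, for illustration only):

    ```c
    /* After the merge, the former 'lookup' list walk and the 'add' both
     * run under a single acquisition of the per-list lock, saving the
     * unlock/re-lock pair mentioned above. */
    spin_lock(&list->list_lock);
    walk_list_dropping_dead(list, tuple);  /* hypothetical: former 'lookup' */
    insert_new_entry(list, tuple);         /* former 'add', same lock held */
    spin_unlock(&list->list_lock);
    ```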

    Reviewed-by: Shawn Bohrer
    Signed-off-by: Florian Westphal
    Signed-off-by: Pablo Neira Ayuso

    Florian Westphal
     

18 Jul, 2018

3 commits

  • This patch is originally from Florian Westphal.

    This patch does the following three main tasks.

    1) Add a list lock to 'struct nf_conncount_list' so that we can
    alter the lists containing the individual connections without holding the
    main tree lock. This is useful when we only need to add to or remove from
    a list without allocating or removing a node in the tree. With this change,
    we update nft_connlimit accordingly, since it no longer needs to maintain
    a list lock of its own.

    2) Use RCU for the initial tree search to improve tree look up performance.

    3) Add a garbage collection worker. This worker is scheduled when there
    are excessive tree nodes that need to be recycled.

    Moreover, the rbnode reclaim logic is moved from the tree search to the
    tree insert path to avoid a race condition.
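
    A sketch of the resulting structure (field layout inferred from the
    description above, not a verbatim copy of the kernel header):

    ```c
    struct nf_conncount_list {
        spinlock_t list_lock;   /* new: protects this list only, so the
                                 * main tree lock need not be held */
        struct list_head head;  /* connections sharing one filtering key */
        unsigned int count;     /* number of entries on the list */
    };
    ```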

    Signed-off-by: Yi-Hung Wei
    Signed-off-by: Florian Westphal
    Signed-off-by: Pablo Neira Ayuso

    Yi-Hung Wei
     
  • This patch is originally from Florian Westphal.

    This patch does the following three tasks.

    First, it applies the same early-exit technique to nf_conncount_lookup().

    Second, since we now keep the number of connections in 'struct
    nf_conncount_list', we no longer need to return the count from
    nf_conncount_lookup().

    Moreover, we expose the garbage collection function nf_conncount_gc_list()
    for nft_connlimit.

    Signed-off-by: Yi-Hung Wei
    Signed-off-by: Florian Westphal
    Signed-off-by: Pablo Neira Ayuso

    Yi-Hung Wei
     
  • Original patch is from Florian Westphal.

    This patch switches from an hlist to a plain list to store the list of
    connections with the same filtering key in nf_conncount. With the
    plain list, we can insert new connections at the tail, so over time
    the beginning of the list holds long-running connections, which are
    expired first, while the newly created ones are at the end.

    Later on, we could probably move checked ones to the end of the list,
    so the next run has higher chance to reclaim stale entries in the front.
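
    The ordering property described above can be illustrated with a minimal
    userspace sketch (plain C, not kernel code): inserting at the tail keeps
    the oldest entries at the front, so a pass that walks from the head meets
    the most likely expired entries first.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy singly linked list with tail insertion, mimicking the ordering
     * property described above: oldest entries stay at the front. */
    struct conn {
        int id;              /* stands in for a connection tuple */
        struct conn *next;
    };

    static void add_tail(struct conn **head, struct conn **tail, int id)
    {
        struct conn *c = malloc(sizeof(*c));
        c->id = id;
        c->next = NULL;
        if (*tail)
            (*tail)->next = c;
        else
            *head = c;
        *tail = c;
    }

    int main(void)
    {
        struct conn *head = NULL, *tail = NULL;

        /* Insert connections in creation order: 1, 2, 3. */
        for (int id = 1; id <= 3; id++)
            add_tail(&head, &tail, id);

        /* Walking from the head visits the oldest connection first. */
        for (struct conn *c = head; c; c = c->next)
            printf("%d\n", c->id);
        return 0;
    }
    ```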

    Signed-off-by: Yi-Hung Wei
    Signed-off-by: Florian Westphal
    Signed-off-by: Pablo Neira Ayuso

    Yi-Hung Wei
     

13 Jun, 2018

1 commit

  • Currently, we use check_hlist() for garbage collection. However, we
    use the 'zone' from the counted entry to query the existence of
    existing entries in the hlist. This could be wrong when the entries are
    in different zones, and this patch fixes that issue.

    Fixes: e59ea3df3fc2 ("netfilter: xt_connlimit: honor conntrack zone if available")
    Signed-off-by: Yi-Hung Wei
    Signed-off-by: Pablo Neira Ayuso

    Yi-Hung Wei
     

03 Jun, 2018

1 commit

  • This adds a feature that allows you to limit the maximum number of
    connections per arbitrary key. The connlimit expression is stateful,
    so it can be used from meters to dynamically populate a set; this
    provides a mapping to iptables' connlimit match. This patch also
    allows you to define static connlimit policies.

    This extension depends on the nf_conncount infrastructure.
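
    A usage sketch (ruleset syntax hedged from the nftables documentation;
    names like 'http-conns' and the table/chain layout are placeholders):

    ```
    # Stateful use from a meter, dynamically populating a set:
    # reject a source address once it holds more than 10 connections.
    nft add rule inet filter input meter http-conns \
        { ip saddr ct count over 10 } counter reject

    # Static connlimit policy on a single rule.
    nft add rule inet filter input tcp dport 22 ct count over 20 counter reject
    ```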

    Signed-off-by: Pablo Neira Ayuso

    Pablo Neira Ayuso