27 Jun, 2017

1 commit


08 Jun, 2017

1 commit

  • Network devices can allocate resources and private memory using
    netdev_ops->ndo_init(). However, the release of these resources
    can occur in one of two different places.

    Either netdev_ops->ndo_uninit() or netdev->destructor().

    The decision of which operation frees the resources depends upon
    whether it is necessary for all netdev refs to be released before it
    is safe to perform the freeing.

    netdev_ops->ndo_uninit() presumably can occur right after the
    NETDEV_UNREGISTER notifier completes and the unicast and multicast
    address lists are flushed.

    netdev->destructor(), on the other hand, does not run until the
    netdev references all go away.

    Further complicating the situation is that netdev->destructor()
    almost universally also performs a free_netdev().

    This creates a problem for the logic in register_netdevice(),
    because all callers of register_netdevice() manage the freeing
    of the netdev themselves, invoking free_netdev(dev) if
    register_netdevice() fails.

    If netdev_ops->ndo_init() succeeds, but something else fails inside
    of register_netdevice(), it does call netdev_ops->ndo_uninit(). But
    it is not able to invoke netdev->destructor().

    This is because netdev->destructor() will do a free_netdev() and
    then the caller of register_netdevice() will do the same.

    However, this means that the resources that would normally be released
    by netdev->destructor() will not be.

    Over the years drivers have added local hacks to deal with this, by
    invoking their destructor parts by hand when register_netdevice()
    fails.

    Many drivers do not try to deal with this, and instead we have leaks.

    Let's close this hole by formalizing the distinction between what
    private things need to be freed up by netdev->destructor() and whether
    the driver needs unregister_netdevice() to perform the free_netdev().

    netdev->priv_destructor() performs all actions to free up the private
    resources that used to be freed by netdev->destructor(), except for
    free_netdev().

    netdev->needs_free_netdev is a boolean that indicates whether
    free_netdev() should be done at the end of unregister_netdevice().

    Now, register_netdevice() can sanely release all resources when a
    later step fails after netdev_ops->ndo_init() has succeeded, by
    invoking both netdev_ops->ndo_uninit() and netdev->priv_destructor().

    And at the end of unregister_netdevice(), we invoke
    netdev->priv_destructor() and optionally call free_netdev().

    Signed-off-by: David S. Miller


09 Jan, 2017

4 commits

  • The tc_from field fulfills two roles. It encodes whether a packet was
    redirected by an act_mirred device and, if so, whether act_mirred was
    called on ingress or egress. Split it into separate fields.

    The information is needed by the special IFB loop, where packets are
    taken out of the normal path by act_mirred, forwarded to IFB, then
    reinjected at their original location (ingress or egress) by IFB.

    The IFB device cannot use skb->tc_at_ingress, because that may have
    been overwritten as the packet travels from act_mirred to ifb_xmit,
    when it passes through tc_classify on the IFB egress path. Cache this
    value in skb->tc_from_ingress.

    That field is valid only if a packet arriving at ifb_xmit came from
    act_mirred. Other packets can be crafted to reach ifb_xmit. These
    must be dropped. Set tc_redirected on redirection and drop all packets
    that do not have this bit set.

    Both fields are set only on cloned skbs in tc actions, so original
    packet sources do not have to clear the bit when reusing packets
    (notably, pktgen and octeon).

    Signed-off-by: Willem de Bruijn
    Signed-off-by: David S. Miller

  • Extract the remaining two fields from tc_verd and remove the __u16
    completely. TC_AT and TC_FROM are converted to equivalent two-bit
    integer fields tc_at and tc_from. Where possible, use existing
    helper skb_at_tc_ingress when reading tc_at. Introduce helper
    skb_reset_tc to clear fields.

    Not documenting tc_from and tc_at, because they will be replaced
    with single bit fields in follow-on patches.

    Signed-off-by: Willem de Bruijn
    Signed-off-by: David S. Miller

  • Packets sent by the IFB device skip subsequent tc classification.
    A single bit governs this state. Move it out of tc_verd in
    anticipation of removing that __u16 completely.

    The new bitfield tc_skip_classify temporarily uses one bit of a
    hole, until tc_verd is removed completely in a follow-up patch.

    Remove the bit hole comment. It could be 2, 3, 4 or 5 bits long;
    with that many options, there is little value in documenting it.

    Introduce a helper function to deduplicate the logic in the two
    sites that check this bit.

    The field tc_skip_classify is set only in IFB on skbs cloned in
    act_mirred, so original packet sources do not have to clear the
    bit when reusing packets (notably, pktgen and octeon).

    Signed-off-by: Willem de Bruijn
    Signed-off-by: David S. Miller

  • The network device operation for reading statistics is only called
    in one place, and it ignores the return value. Having a structure
    return value is potentially confusing because some future driver could
    incorrectly assume that the return value was used.

    Fix all drivers with ndo_get_stats64 to have a void function.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller


09 May, 2016

1 commit

  • When using ifb+netem on ingress on SIT/IPIP/GRE traffic,
    GRO packets are not properly processed.

    Segmentation should not be forced, since ifb is already adding
    quite a performance hit.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller


09 Jul, 2015

1 commit

  • Add multiqueue capabilities to ifb netdevice.

    This removes the last bottleneck for ingress, since the mq qdisc can
    be used to shard load from multiple RX queues on the physical device.

    Tested:

    # netem based setup, installed at receiver side
    ETH=eth0
    IFB=ifb10
    EST="est 1sec 4sec" # Optional rate estimator
    RTT_HALF=2ms
    #REORDER=20us
    #LOSS="loss 1"
    TXQ=8

    ip link add ifb10 numtxqueues $TXQ type ifb
    ip link set dev $IFB up

    tc qdisc add dev $ETH ingress 2>/dev/null

    tc filter add dev $ETH parent ffff: \
      protocol ip u32 match u32 0 0 flowid 1:1 \
      action mirred egress redirect dev $IFB

    tc qdisc del dev $IFB root 2>/dev/null

    tc qdisc add dev $IFB root handle 1: mq
    for i in `seq 1 $TXQ`
    do
      slot=$( printf %x $(( i )) )
      tc qd add dev $IFB parent 1:$slot $EST netem \
        limit 100000 delay $RTT_HALF $REORDER $LOSS
    done

    lpaa24:~# tc -s -d qd sh dev ifb10
    qdisc mq 1: root
    Sent 316544766 bytes 5265927 pkt (dropped 0, overlimits 0 requeues 0)
    backlog 98880b 1648p requeues 0
    qdisc netem 8002: parent 1:1 limit 100000 delay 2.0ms
    Sent 39601416 bytes 658721 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38235Kbit 79657pps backlog 12240b 204p requeues 0
    qdisc netem 8003: parent 1:2 limit 100000 delay 2.0ms
    Sent 39472866 bytes 657227 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38234Kbit 79655pps backlog 10620b 176p requeues 0
    qdisc netem 8004: parent 1:3 limit 100000 delay 2.0ms
    Sent 39703417 bytes 659699 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38320Kbit 79831pps backlog 12780b 213p requeues 0
    qdisc netem 8005: parent 1:4 limit 100000 delay 2.0ms
    Sent 39565149 bytes 658011 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38174Kbit 79530pps backlog 11880b 198p requeues 0
    qdisc netem 8006: parent 1:5 limit 100000 delay 2.0ms
    Sent 39506078 bytes 657354 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38195Kbit 79571pps backlog 12480b 208p requeues 0
    qdisc netem 8007: parent 1:6 limit 100000 delay 2.0ms
    Sent 39675994 bytes 658849 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38323Kbit 79838pps backlog 12600b 210p requeues 0
    qdisc netem 8008: parent 1:7 limit 100000 delay 2.0ms
    Sent 39532042 bytes 658367 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38177Kbit 79536pps backlog 13140b 219p requeues 0
    qdisc netem 8009: parent 1:8 limit 100000 delay 2.0ms
    Sent 39488164 bytes 657705 pkt (dropped 0, overlimits 0 requeues 0)
    rate 38192Kbit 79568pps backlog 13Kb 222p requeues 0

    Signed-off-by: Eric Dumazet
    Cc: Alexei Starovoitov
    Cc: Jamal Hadi Salim
    Cc: John Fastabend
    Acked-by: Alexei Starovoitov
    Acked-by: Jamal Hadi Salim
    Signed-off-by: David S. Miller


18 Apr, 2015

1 commit

  • When you redirect a VLAN device to any device, you end up with
    crap in af_packet on the xmit path because hard_header_len is
    not equal to skb->mac_len. So the redirected packet contains
    four extra bytes at the start which then gets interpreted as
    part of the MAC address.

    This patch fixes this by only pushing skb->mac_len. We also
    need to fix ifb because it tries to undo the pushing done by
    act_mirred.

    Signed-off-by: Herbert Xu
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller


08 Oct, 2014

1 commit

  • Testing xmit_more support with netperf and connected UDP sockets,
    I found strange dst refcount false sharing.

    Current handling of IFF_XMIT_DST_RELEASE is not optimal.

    Dropping dst in validate_xmit_skb() is certainly too late in case
    the packet was queued by CPU X but dequeued by CPU Y.

    The logical point to take care of drop/force is in __dev_queue_xmit()
    before even taking qdisc lock.

    As Julian Anastasov pointed out, the need for skb_dst() might come
    from some packet schedulers or classifiers.

    This patch adds a new helper to cleanly express the needs of various
    drivers and qdiscs/classifiers.

    Drivers that need skb_dst() in their ndo_start_xmit() should call the
    following helper in their setup routine instead of the prior:

    dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
    ->
    netif_keep_dst(dev);

    Instead of using a single bit, we use two bits, one being
    eventually rebuilt in bonding/team drivers.

    The other one is permanent and blocks IFF_XMIT_DST_RELEASE from being
    rebuilt in bonding/team. Eventually, we could add something
    smarter later.

    Signed-off-by: Eric Dumazet
    Cc: Julian Anastasov
    Signed-off-by: David S. Miller


16 Jul, 2014

1 commit

  • Extend alloc_netdev{,_mq{,s}}() to take name_assign_type as an
    argument, and convert all users to pass NET_NAME_UNKNOWN.

    Coccinelle patch:

    @@
    expression sizeof_priv, name, setup, txqs, rxqs, count;
    @@

    (
    -alloc_netdev_mqs(sizeof_priv, name, setup, txqs, rxqs)
    +alloc_netdev_mqs(sizeof_priv, name, NET_NAME_UNKNOWN, setup, txqs, rxqs)
    |
    -alloc_netdev_mq(sizeof_priv, name, setup, count)
    +alloc_netdev_mq(sizeof_priv, name, NET_NAME_UNKNOWN, setup, count)
    |
    -alloc_netdev(sizeof_priv, name, setup)
    +alloc_netdev(sizeof_priv, name, NET_NAME_UNKNOWN, setup)
    )

    v9: move comments here from the wrong commit

    Signed-off-by: Tom Gundersen
    Reviewed-by: David Herrmann
    Signed-off-by: David S. Miller


30 Mar, 2014

1 commit


29 Mar, 2014

1 commit


15 Mar, 2014

1 commit

  • Replace the bh safe variant with the hard irq safe variant.

    We need a hard irq safe variant to deal with netpoll transmitting
    packets from hard irq context, and we need it in most if not all of
    the places using the bh safe variant.

    Except on 32-bit uni-processor systems the code is exactly the same,
    so don't bother with a bh variant; just have a hard irq safe variant
    that everyone can use.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller


06 Nov, 2013

1 commit

  • In order to enable lockdep on seqcount/seqlock structures, we
    must explicitly initialize any locks.

    The u64_stats_sync structure, uses a seqcount, and thus we need
    to introduce a u64_stats_init() function and use it to initialize
    the structure.

    This unfortunately adds a lot of fairly trivial initialization code
    to a number of drivers. But the benefit of ensuring correctness makes
    this worthwhile.

    Because these changes are required for lockdep to be enabled, and the
    changes are quite trivial, I've not yet split this patch out into 30-some
    separate patches, as I figured it would be better to get the various
    maintainers thoughts on how to best merge this change along with
    the seqcount lockdep enablement.

    Feedback would be appreciated!

    Signed-off-by: John Stultz
    Acked-by: Julian Anastasov
    Signed-off-by: Peter Zijlstra
    Cc: Alexey Kuznetsov
    Cc: "David S. Miller"
    Cc: Eric Dumazet
    Cc: Hideaki YOSHIFUJI
    Cc: James Morris
    Cc: Jesse Gross
    Cc: Mathieu Desnoyers
    Cc: "Michael S. Tsirkin"
    Cc: Mirko Lindner
    Cc: Patrick McHardy
    Cc: Roger Luethi
    Cc: Rusty Russell
    Cc: Simon Horman
    Cc: Stephen Hemminger
    Cc: Steven Rostedt
    Cc: Thomas Petazzoni
    Cc: Wensong Zhang
    Cc: netdev@vger.kernel.org
    Link: http://lkml.kernel.org/r/1381186321-4906-2-git-send-email-john.stultz@linaro.org
    Signed-off-by: Ingo Molnar


12 Jul, 2013

1 commit


11 Jul, 2013

1 commit

  • In commit 16b0dc29c1af9df341428f4c49ada4f626258082
    ("dummy: fix rcu_sched self-detected stalls"), Eric Dumazet fixed
    this problem for the dummy driver, but ifb suffers from the same
    problem.

    Trying to "modprobe ifb numifbs=30000" triggers:

    INFO: rcu_sched self-detected stall on CPU

    After this splat, RTNL is locked and reboot is needed.

    We must call cond_resched() to avoid this, even while holding RTNL.

    Signed-off-by: Ding Tianhong
    Signed-off-by: David S. Miller


20 Apr, 2013

2 commits


15 Jan, 2013

1 commit


16 Feb, 2012

1 commit


17 Nov, 2011

1 commit

  • The only distinct use is checking whether NETIF_F_NOCACHE_COPY should
    be enabled by default. The check heuristic is altered a bit here, so
    it affects different drivers than before. The default shouldn't be
    trusted for performance-critical cases anyway.

    For all other uses NETIF_F_NO_CSUM is equivalent to NETIF_F_HW_CSUM.

    Signed-off-by: Michał Mirosław
    Signed-off-by: David S. Miller


28 Jul, 2011

1 commit

  • After the last patch, we are left in a state in which only drivers
    calling ether_setup have IFF_TX_SKB_SHARING set (we assume that
    drivers touching real hardware call ether_setup for their net_devices
    and don't hold any state in their skbs). There are a handful of
    drivers that violate this assumption, of course, and they need to be
    fixed up. This patch identifies those drivers and marks them as not
    being able to support the safe transmission of skbs by clearing the
    IFF_TX_SKB_SHARING flag in priv_flags.

    Signed-off-by: Neil Horman
    CC: Karsten Keil
    CC: "David S. Miller"
    CC: Jay Vosburgh
    CC: Andy Gospodarek
    CC: Patrick McHardy
    CC: Krzysztof Halasa
    CC: "John W. Linville"
    CC: Greg Kroah-Hartman
    CC: Marcel Holtmann
    CC: Johannes Berg
    Signed-off-by: David S. Miller


22 Jun, 2011

1 commit


07 Jun, 2011

1 commit


06 May, 2011

1 commit

  • Force dev_alloc_name() to be called from register_netdevice() via
    dev_get_valid_name(). That allows removing multiple explicit
    dev_alloc_name() calls.

    The possibility to call dev_alloc_name() in advance remains.

    This also fixes a veth creation regression caused by
    84c49d8c3e4abefb0a41a77b25aa37ebe8d6b743

    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller


04 Jan, 2011

1 commit

  • On Monday, 03 January 2011 at 11:40 -0800, David Miller wrote:
    > From: Jarek Poplawski
    > Date: Mon, 3 Jan 2011 20:37:03 +0100
    >
    > > On Sun, Jan 02, 2011 at 09:24:36PM +0100, Eric Dumazet wrote:
    > >> On Wednesday, 29 December 2010 at 00:07 +0100, Jarek Poplawski wrote:
    > >>
    > >> > Ingress is before vlans handler so these features and the
    > >> > NETIF_F_HW_VLAN_TX flag seem useful for ifb considering
    > >> > dev_hard_start_xmit() checks.
    > >>
    > >> OK, here is v2 of the patch then, thanks everybody.
    > >>
    > >>
    > >> [PATCH v2 net-next-2.6] ifb: add performance flags
    > >>
    > >> IFB can use the full set of features flags (NETIF_F_SG |
    > >> NETIF_F_FRAGLIST | NETIF_F_TSO | NETIF_F_NO_CSUM | NETIF_F_HIGHDMA) to
    > >> avoid unnecessary split of some packets (GRO for example)
    > >>
    > >> Changli suggested to also set vlan_features,
    > >
    > > He also suggested more GSO flags of which especially NETIF_F_TSO6
    > > seems interesting (wrt GRO)?
    >
    > I think at least TSO6 would very much be appropriate here.

    Yes, why not, I am only wondering why loopback / dummy (and others ?)
    only set NETIF_F_TSO :)

    Since I want to play with ECN, I might also add NETIF_F_TSO_ECN ;)

    For other flags, I really doubt it can matter on ifb ?

    [PATCH v3 net-next-2.6] ifb: add performance flags

    IFB can use the full set of features flags (NETIF_F_SG |
    NETIF_F_FRAGLIST | NETIF_F_TSO | NETIF_F_NO_CSUM | NETIF_F_HIGHDMA) to
    avoid unnecessary split of some packets (GRO for example)

    Changli suggested to also set vlan_features, NETIF_F_TSO6,
    NETIF_F_TSO_ECN.

    Jarek suggested to add NETIF_F_HW_VLAN_TX as well.

    Signed-off-by: Eric Dumazet
    Cc: Changli Gao
    Cc: Jarek Poplawski
    Cc: Pawel Staszewski
    Signed-off-by: David S. Miller


20 Dec, 2010

1 commit


17 Dec, 2010

1 commit

  • After recent ifb changes, we must use the lockless __skb_dequeue(),
    since the queue lock is no longer initialized.

    Signed-off-by: Eric Dumazet
    Cc: Jamal Hadi Salim
    Cc: Changli Gao
    Acked-by: Changli Gao
    Signed-off-by: David S. Miller


11 Dec, 2010

3 commits


09 Dec, 2010

1 commit

  • If we break the loop when there are still skbs in tq and no skb in
    rq, the skbs will be left in tq until new skbs are enqueued into rq.
    In rare cases, no new skb is queued, and these skbs will stay in tq
    forever.

    After this patch, if tq isn't empty when we break the loop, we goto
    resched directly.

    Signed-off-by: Changli Gao
    Signed-off-by: Jamal Hadi Salim
    Signed-off-by: David S. Miller


10 May, 2010

1 commit

  • Now that the core network stack takes care of trans_start updates,
    don't do it in drivers themselves, if possible. Drivers can avoid one
    cache miss (on dev->trans_start) in their start_xmit() handler.

    The exceptions are NETIF_F_LLTX drivers.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller


21 Nov, 2009

1 commit


02 Nov, 2009

1 commit


23 Oct, 2009

1 commit


01 Sep, 2009

1 commit


06 Jul, 2009

1 commit


19 May, 2009

1 commit

  • One point of contention under high network load is the dst_release()
    performed when a transmitted skb is freed. This is because NIC tx
    completion calls dev_kfree_skb() long after the original call to
    dev_queue_xmit(skb).

    The CPU cache is cold and the atomic op in dst_release() stalls. On
    SMP, this is quite visible if one CPU is 100% handling softirqs for a
    network device, since dst_clone() is done by other cpus, involving
    cache line ping pongs.

    It seems the right place to release the dst is in
    dev_hard_start_xmit(), for most devices except virtual ones and some
    exceptions.

    David Miller suggested defining a new device flag, set in
    alloc_netdev_mq() (so that most devices set it at init time), and
    carefully unset in devices which don't want a NULL skb->dst in their
    ndo_start_xmit().

    The list of devices that must clear this flag is:

    - the loopback device, because it calls netif_rx() and, quoting
      Patrick: "ip_route_input() doesn't accept loopback addresses, so
      loopback packets already need to have a dst_entry attached."

    - appletalk/ipddp.c : needs skb->dst in its xmit function

    - all devices that call dev_queue_xmit() again from their xmit
      function (as some classifiers need skb->dst): bonding, vlan,
      macvlan, eql, ifb, hdlc_fr

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller
