19 May, 2018

1 commit

  • [ Upstream commit 7df40c2673a1307c3260aab6f9d4b9bf97ca8fd7 ]

    Normally, a socket cannot be freed/reused unless all its TX packets
    left the qdisc and were TX-completed. However, connect(AF_UNSPEC) allows
    this to happen.

    With commit fc59d5bdf1e3 ("pkt_sched: fq: clear time_next_packet for
    reused flows") we cleared f->time_next_packet but took no special
    action if the flow was still in the throttled rb-tree.

    Since f->time_next_packet is the key used in the rb-tree searches,
    blindly clearing it might break rb-tree integrity. We need to make
    sure the flow is no longer in the rb-tree to avoid this problem.
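
    For illustration, a minimal sketch of the idea using the kernel rbtree
    API. The helper and field names below mirror the sch_fq style but should
    be treated as illustrative rather than a verbatim copy of the patch:

        /* Before a flow struct is reused for a new socket, unlink it from the
         * throttled (delayed) rb-tree; only then is it safe to clear the key.
         */
        static void fq_flow_unset_throttled(struct fq_sched_data *q,
                                            struct fq_flow *f)
        {
                rb_erase(&f->rate_node, &q->delayed);
                q->throttled_flows--;
                fq_flow_add_tail(&q->old_flows, f);
        }

        /* at flow-reuse time: */
        if (fq_flow_is_throttled(f))
                fq_flow_unset_throttled(q, f);
        f->time_next_packet = 0ULL;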

    Fixes: fc59d5bdf1e3 ("pkt_sched: fq: clear time_next_packet for reused flows")
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
     

13 Jul, 2017

1 commit

  • __GFP_REPEAT was designed to allow a retry-but-eventually-fail semantic in
    the page allocator. This has been true, but only for allocation
    requests larger than PAGE_ALLOC_COSTLY_ORDER. It has always been
    ignored for smaller sizes. This is a bit unfortunate because there is
    no way to express the same semantic for those requests, and they are
    considered too important to fail, so they might end up looping in the
    page allocator forever, similarly to GFP_NOFAIL requests.

    Now that the whole tree has been cleaned up and accidental or misled
    usage of the __GFP_REPEAT flag has been removed for !costly requests, we
    can give the original flag a better name and, more importantly, a more
    useful semantic. Let's rename it to __GFP_RETRY_MAYFAIL, which tells the
    user that the allocator will try really hard but there is no promise of
    success. This works independently of the order and overrides the
    default allocator behavior. Page allocator users have several levels of
    guarantee vs. cost options (take GFP_KERNEL as an example):

    - GFP_KERNEL & ~__GFP_RECLAIM - optimistic allocation without _any_
    attempt to free memory at all. The most lightweight mode, which doesn't
    even kick the background reclaim. Should be used carefully because it
    might deplete the memory and the next user might hit the more
    aggressive reclaim.

    - GFP_KERNEL & ~__GFP_DIRECT_RECLAIM (or GFP_NOWAIT) - optimistic
    allocation without any attempt to free memory from the current
    context, but it can wake kswapd to reclaim memory if the zone is below
    the low watermark. Can be used from either atomic contexts or when
    the request is a performance optimization and there is another
    fallback for a slow path.

    - (GFP_KERNEL|__GFP_HIGH) & ~__GFP_DIRECT_RECLAIM (aka GFP_ATOMIC) -
    non sleeping allocation with an expensive fallback so it can access
    some portion of memory reserves. Usually used from interrupt/bh
    context with an expensive slow path fallback.

    - GFP_KERNEL - both background and direct reclaim are allowed and the
    _default_ page allocator behavior is used. That means that !costly
    allocation requests are basically nofail but there is no guarantee of
    that behavior so failures have to be checked properly by callers
    (e.g. OOM killer victim is allowed to fail currently).

    - GFP_KERNEL | __GFP_NORETRY - overrides the default allocator behavior
    and all allocation requests fail early rather than cause disruptive
    reclaim (one round of reclaim in this implementation). The OOM killer
    is not invoked.

    - GFP_KERNEL | __GFP_RETRY_MAYFAIL - overrides the default allocator
    behavior and all allocation requests try really hard. The request
    will fail if the reclaim cannot make any progress. The OOM killer
    won't be triggered.

    - GFP_KERNEL | __GFP_NOFAIL - overrides the default allocator behavior
    and all allocation requests will loop endlessly until they succeed.
    This might be really dangerous especially for larger orders.

    Existing users of __GFP_REPEAT are changed to __GFP_RETRY_MAYFAIL
    because this is the semantic they already relied on. No new users are
    added. __alloc_pages_slowpath is changed to bail out for
    __GFP_RETRY_MAYFAIL if there is no progress and we have already passed
    the OOM point.

    This means that all the reclaim opportunities have been exhausted except
    the most disruptive one (the OOM killer) and a user-defined fallback
    behavior is more sensible than continuing to retry in the page allocator.
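
    For illustration only, a hypothetical caller that wants the new semantic
    (try hard for contiguous memory, accept failure, use its own fallback):

        static void *try_hard_alloc(size_t size)
        {
                /* Try hard, but fail rather than trigger the OOM killer,
                 * then fall back to vmalloc as the caller-defined plan B.
                 */
                void *buf = kmalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);

                return buf ? buf : vmalloc(size);
        }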

    [akpm@linux-foundation.org: fix arch/sparc/kernel/mdesc.c]
    [mhocko@suse.com: semantic fix]
    Link: http://lkml.kernel.org/r/20170626123847.GM11534@dhcp22.suse.cz
    [mhocko@kernel.org: address other thing spotted by Vlastimil]
    Link: http://lkml.kernel.org/r/20170626124233.GN11534@dhcp22.suse.cz
    Link: http://lkml.kernel.org/r/20170623085345.11304-3-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Acked-by: Vlastimil Babka
    Cc: Alex Belits
    Cc: Chris Wilson
    Cc: Christoph Hellwig
    Cc: Darrick J. Wong
    Cc: David Daney
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Cc: NeilBrown
    Cc: Ralf Baechle
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

17 May, 2017

1 commit

  • BBR congestion control depends on pacing, and pacing is
    currently handled by the sch_fq packet scheduler for performance reasons,
    and also because implementing pacing with FQ was convenient to truly
    avoid bursts.

    However there are many cases where this packet scheduler constraint
    is not practical.
    - Many Linux hosts are not focused on handling thousands of TCP
    flows in the most efficient way.
    - Some routers use fq_codel or another AQM, but would still like
    to use BBR for the few TCP flows they initiate/terminate.

    This patch implements an automatic fallback to internal pacing.

    Pacing is requested either by BBR or by use of the SO_MAX_PACING_RATE option.

    If sch_fq happens to be in the egress path, pacing is delegated to
    the qdisc, otherwise pacing is done by TCP itself.

    One advantage of pacing from the TCP stack is getting more precise rtt
    estimations, and less work done at TX completion time, since TCP Small
    Queues limits are not generally hit. Setups with a single TX queue but
    many cpus might even benefit from this.

    Note that unlike sch_fq, we do not take into account header sizes.
    Taking care of these headers would add additional complexity for
    no practical differences in behavior.
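
    For reference, pacing can be requested from user space with the
    pre-existing SO_MAX_PACING_RATE socket option; a minimal sketch
    (the rate is expressed in bytes per second):

        #include <sys/socket.h>

        /* Ask the kernel to pace an existing TCP socket; with this patch the
         * pacing is done by sch_fq if present, otherwise by TCP itself.
         */
        static int set_pacing_rate(int fd, unsigned int bytes_per_sec)
        {
                return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                                  &bytes_per_sec, sizeof(bytes_per_sec));
        }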

    Some performance numbers using 800 TCP_STREAM flows rate limited to
    ~48 Mbit per second on a 40Gbit NIC.

    If MQ+pfifo_fast is used on the NIC :

    $ sar -n DEV 1 5 | grep eth
    14:48:44 eth0 725743.00 2932134.00 46776.76 4335184.68 0.00 0.00 1.00
    14:48:45 eth0 725349.00 2932112.00 46751.86 4335158.90 0.00 0.00 0.00
    14:48:46 eth0 725101.00 2931153.00 46735.07 4333748.63 0.00 0.00 0.00
    14:48:47 eth0 725099.00 2931161.00 46735.11 4333760.44 0.00 0.00 1.00
    14:48:48 eth0 725160.00 2931731.00 46738.88 4334606.07 0.00 0.00 0.00
    Average: eth0 725290.40 2931658.20 46747.54 4334491.74 0.00 0.00 0.40
    $ vmstat 1 5
    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    4 0 0 259825920 45644 2708324 0 0 21 2 247 98 0 0 100 0 0
    4 0 0 259823744 45644 2708356 0 0 0 0 2400825 159843 0 19 81 0 0
    0 0 0 259824208 45644 2708072 0 0 0 0 2407351 159929 0 19 81 0 0
    1 0 0 259824592 45644 2708128 0 0 0 0 2405183 160386 0 19 80 0 0
    1 0 0 259824272 45644 2707868 0 0 0 32 2396361 158037 0 19 81 0 0

    Now use MQ+FQ :

    lpaa23:~# echo fq >/proc/sys/net/core/default_qdisc
    lpaa23:~# tc qdisc replace dev eth0 root mq

    $ sar -n DEV 1 5 | grep eth
    14:49:57 eth0 678614.00 2727930.00 43739.13 4033279.14 0.00 0.00 0.00
    14:49:58 eth0 677620.00 2723971.00 43674.69 4027429.62 0.00 0.00 1.00
    14:49:59 eth0 676396.00 2719050.00 43596.83 4020125.02 0.00 0.00 0.00
    14:50:00 eth0 675197.00 2714173.00 43518.62 4012938.90 0.00 0.00 1.00
    14:50:01 eth0 676388.00 2719063.00 43595.47 4020171.64 0.00 0.00 0.00
    Average: eth0 676843.00 2720837.40 43624.95 4022788.86 0.00 0.00 0.40
    $ vmstat 1 5
    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 259832240 46008 2710912 0 0 21 2 223 192 0 1 99 0 0
    1 0 0 259832896 46008 2710744 0 0 0 0 1702206 198078 0 17 82 0 0
    0 0 0 259830272 46008 2710596 0 0 0 0 1696340 197756 1 17 83 0 0
    4 0 0 259829168 46024 2710584 0 0 16 0 1688472 197158 1 17 82 0 0
    3 0 0 259830224 46024 2710408 0 0 0 0 1692450 197212 0 18 82 0 0

    As expected, the number of interrupts per second is very different.

    Signed-off-by: Eric Dumazet
    Acked-by: Soheil Hassas Yeganeh
    Cc: Neal Cardwell
    Cc: Yuchung Cheng
    Cc: Van Jacobson
    Cc: Jerry Chu
    Signed-off-by: David S. Miller

    Eric Dumazet
     

09 May, 2017

1 commit

  • fq_alloc_node, alloc_netdev_mqs and netif_alloc* open code kmalloc with
    vmalloc fallback. Use the kvmalloc variant instead. Keep the
    __GFP_REPEAT flag based on explanation from Eric:

    "At the time, tests on the hardware I had in my labs showed that
    vmalloc() could deliver pages spread all over the memory and that was
    a small penalty (once memory is fragmented enough, not at boot time)"

    The way the code is constructed means, however, that we prefer to go
    and hit the OOM killer before we fall back to the vmalloc for requests
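
    The shape of the conversion, sketched (not the exact fq_alloc_node diff):

        /* Before: open-coded kmalloc with a vmalloc fallback */
        ptr = kmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN, node);
        if (!ptr)
                ptr = vmalloc_node(sz, node);

        /* After: let kvmalloc_node() pick the strategy; free with kvfree() */
        ptr = kvmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT, node);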

    Acked-by: Vlastimil Babka
    Cc: Eric Dumazet
    Cc: David Miller
    Cc: Shakeel Butt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

14 Apr, 2017

1 commit


21 Dec, 2016

1 commit


18 Nov, 2016

1 commit

  • When I wrote sch_fq.c, hash_ptr() on 64bit arches was awful,
    and I chose hash_32().

    Linus Torvalds and George Spelvin fixed this issue, so we can
    use hash_ptr() to get more entropy on 64bit arches with Terabytes
    of memory, and avoid the cast games.
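
    The "cast games" in question, sketched (field names follow sch_fq but are
    shown for illustration):

        /* Before: the pointer had to be truncated to 32 bits first */
        idx = hash_32((u32)(unsigned long)sk, q->fq_trees_log);

        /* After: hash_ptr() consumes the whole pointer value */
        idx = hash_ptr(sk, q->fq_trees_log);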

    Signed-off-by: Eric Dumazet
    Cc: Hugh Dickins
    Signed-off-by: David S. Miller

    Eric Dumazet
     

23 Sep, 2016

1 commit

  • It looks like the following patch can make FQ very precise, even in VMs
    or on stressed hosts. It matters at high pacing rates.

    We take into account the difference between the time that was programmed
    when the last packet was sent and the current time (a drift of tens of
    usecs is often observed).

    Add an EWMA of the unthrottle latency to help diagnostics.

    This latency is the difference between the current time and the oldest
    packet in the delayed RB-tree. This accounts for the high resolution
    timer latency, but can be different under stress, as fq_check_throttled()
    can opportunistically be called from a dequeue() called after an
    enqueue() for a different flow.
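
    A shift-based EWMA of that latency is cheap to maintain; a sketch, with
    field names following sch_fq (the exact weight is illustrative):

        /* sample = now minus the time of the oldest delayed packet */
        u64 sample = now - q->time_next_delayed_flow;

        /* 1/8-weight EWMA, i.e. avg += (sample - avg) / 8, without divisions */
        q->unthrottle_latency_ns -= q->unthrottle_latency_ns >> 3;
        q->unthrottle_latency_ns += sample >> 3;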

    Tested:
    // Start a 10Gbit flow
    $ netperf --google-pacing-rate 1250000000 -H lpaa24 -l 10000 -- -K bbr &

    Before patch :
    $ sar -n DEV 10 5 | grep eth0 | grep Average
    Average: eth0 17106.04 756876.84 1102.75 1119049.02 0.00 0.00 0.52

    After patch :
    $ sar -n DEV 10 5 | grep eth0 | grep Average
    Average: eth0 17867.00 800245.90 1151.77 1183172.12 0.00 0.00 0.52

    A new iproute2 tc can output the 'unthrottle latency' :

    $ tc -s qd sh dev eth0 | grep latency
    0 gc, 0 highprio, 32490767 throttled, 2382 ns latency

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

21 Sep, 2016

1 commit

  • This commit adds to the fq module a low_rate_threshold parameter to
    insert a delay after all packets if the socket requests a pacing rate
    below the threshold.

    This helps achieve more precise control of the sending rate with
    low-rate paths, especially policers. The basic issue is that if a
    congestion control module detects a policer at a certain rate, it may
    want fq to be able to shape to that policed rate. That way the sender
    can avoid policer drops by having the packets arrive at the policer at
    or just under the policed rate.

    The default threshold of 550Kbps was chosen analytically so that for
    policers or links at 500Kbps or 512Kbps fq would very likely invoke
    this mechanism, even if the pacing rate was briefly slightly above the
    available bandwidth. This value was then empirically validated with
    two years of production testing on YouTube video servers.
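
    A sketch of the mechanism (illustrative, not the exact fq_dequeue() code):
    below the threshold, the inter-packet delay is applied after every packet
    instead of only when a flow exhausts its quantum.

        u64 len_ns = div64_ul((u64)qdisc_pkt_len(skb) * NSEC_PER_SEC, rate);

        if (rate <= q->low_rate_threshold)
                f->time_next_packet = now + len_ns;  /* pace every packet */
        else if (f->credit <= 0)
                f->time_next_packet = now + len_ns;  /* pace at quantum boundary */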

    Signed-off-by: Van Jacobson
    Signed-off-by: Neal Cardwell
    Signed-off-by: Yuchung Cheng
    Signed-off-by: Nandita Dukkipati
    Signed-off-by: Eric Dumazet
    Signed-off-by: Soheil Hassas Yeganeh
    Signed-off-by: David S. Miller

    Eric Dumazet
     

19 Sep, 2016

1 commit


26 Jun, 2016

1 commit

  • Qdisc performance suffers when packets are dropped at enqueue()
    time because drops (kfree_skb()) are done while the qdisc lock is held,
    delaying a dequeue() draining the queue.

    Nominal throughput can be reduced by 50 % when this happens,
    at a time we would like the dequeue() to proceed as fast as possible.

    Even FQ is vulnerable to this problem, even though one of FQ's goals
    was to provide some flow isolation.

    This patch adds a 'struct sk_buff **to_free' parameter to all
    qdisc->enqueue() implementations and to the qdisc_drop() helper.

    I measured a performance increase of up to 12 %, but this patch
    is a prereq so that future batches in enqueue() can fly.
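
    The resulting pattern, sketched: a drop now just chains the skb onto the
    to_free list, and the caller frees the whole list after releasing the
    qdisc lock (toy_enqueue below is illustrative, not a real qdisc):

        static int toy_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                               struct sk_buff **to_free)
        {
                if (unlikely(sch->q.qlen >= sch->limit))
                        return qdisc_drop(skb, sch, to_free);
                return qdisc_enqueue_tail(skb, sch);
        }

        /* in the caller, once the qdisc lock has been released: */
        if (unlikely(to_free))
                kfree_skb_list(to_free);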

    Signed-off-by: Eric Dumazet
    Acked-by: Jesper Dangaard Brouer
    Signed-off-by: David S. Miller

    Eric Dumazet
     

16 Jun, 2016

1 commit


11 Jun, 2016

1 commit

  • __QDISC_STATE_THROTTLED bit manipulation is rather expensive
    for HTB and a few others.

    I already removed it for sch_fq in commit f2600cf02b5b
    ("net: sched: avoid costly atomic operation in fq_dequeue()")
    and so far nobody complained.

    When one or more packets are stuck in one or more throttled
    HTB classes, an HTB dequeue() performs two atomic operations
    to clear/set the __QDISC_STATE_THROTTLED bit, while the root qdisc
    lock is held.

    Removing this pair of atomic operations brings an 8 % performance
    increase on 200 TCP_RR tests, in presence of throttled classes.

    This patch has no side effect, since nothing actually uses
    qdisc_is_throttled() anymore.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

01 Mar, 2016

1 commit

  • When the bottom qdisc decides to, for example, drop some packet,
    it calls qdisc_tree_decrease_qlen() to update the queue length
    for all its ancestors; we need to update the backlog too to
    keep the stats on the root qdisc accurate.

    Cc: Jamal Hadi Salim
    Acked-by: Jamal Hadi Salim
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    WANG Cong
     

11 Oct, 2015

1 commit

  • selinux needs a few changes to accommodate the fact that SYNACK messages
    can be attached to a request socket, which lacks an sk_security pointer.

    (Only syncookies are still attached to a TCP_LISTEN socket)

    Add a new sk_listener() helper, and use it in selinux and sch_fq.
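
    A sketch of what such a helper looks like (the exact definition lives in
    include/net/sock.h; treat this as illustrative):

        /* True for TCP_LISTEN sockets and for the lightweight
         * TCP_NEW_SYN_RECV request sockets that now carry SYNACKs.
         */
        static inline bool sk_listener(const struct sock *sk)
        {
                return (1 << sk->sk_state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV);
        }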

    Fixes: ca6fb0651883 ("tcp: attach SYNACK messages to request sockets instead of listener")
    Signed-off-by: Eric Dumazet
    Reported by: kernel test robot
    Cc: Paul Moore
    Cc: Stephen Smalley
    Cc: Eric Paris
    Acked-by: Paul Moore
    Signed-off-by: David S. Miller

    Eric Dumazet
     

03 Oct, 2015

1 commit

  • If a listen backlog is very big (to avoid syncookies), then
    the listener sk->sk_wmem_alloc is the main source of false
    sharing, as we need to touch it twice per SYNACK re-transmit
    and TX completion.

    (One SYN packet takes the listener lock once, but up to 6 SYNACKs
    are generated)

    By attaching the skb to the request socket, we remove this
    source of contention.

    Tested:

    listen(fd, 10485760); // single listener (no SO_REUSEPORT)
    16 RX/TX queue NIC
    Sustain a SYNFLOOD attack of ~320,000 SYN per second,
    Sending ~1,400,000 SYNACK per second.
    Perf profiles now show listener spinlock being next bottleneck.

    20.29% [kernel] [k] queued_spin_lock_slowpath
    10.06% [kernel] [k] __inet_lookup_established
    5.12% [kernel] [k] reqsk_timer_handler
    3.22% [kernel] [k] get_next_timer_interrupt
    3.00% [kernel] [k] tcp_make_synack
    2.77% [kernel] [k] ipt_do_table
    2.70% [kernel] [k] run_timer_softirq
    2.50% [kernel] [k] ip_finish_output
    2.04% [kernel] [k] cascade

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

02 Apr, 2015

1 commit


06 Feb, 2015

1 commit

  • Conflicts:
    drivers/net/vxlan.c
    drivers/vhost/net.c
    include/linux/if_vlan.h
    net/core/dev.c

    The net/core/dev.c conflict was the overlap of one commit marking an
    existing function static whilst another was adding a new function.

    In the include/linux/if_vlan.h case, the type used for a local
    variable was changed in 'net', whereas the function got rewritten
    to fix a stacked vlan bug in 'net-next'.

    In drivers/vhost/net.c, Al Viro's iov_iter conversions in 'net-next'
    overlapped with an endianness fix for VHOST 1.0 in 'net'.

    In drivers/net/vxlan.c, vxlan_find_vni() added a 'flags' parameter
    in 'net-next' whereas in 'net' there was a bug fix to pass in the
    correct network namespace pointer in calls to this function.

    Signed-off-by: David S. Miller

    David S. Miller
     

05 Feb, 2015

3 commits

  • FQ has a fast path for skbs attached to a socket, as it does not
    have to compute a flow hash. But for other packets, FQ being non
    stochastic means that hosts exposed to random Internet traffic
    can allocate millions of flow structures (104 bytes each) pretty
    easily. Not only can the host OOM, but lookups in RB trees can take
    too much cpu and memory.

    This patch adds a new attribute, orphan_mask, that adds the
    possibility of using a stochastic hash for orphaned skbs.

    Its default value is 1024 slots, to mimic SFQ behavior.

    Note: This does not apply to locally generated TCP traffic,
    and no locally generated traffic will share a flow structure
    with another perfect or stochastic flow.

    This patch also handles the specific case of SYNACK messages:

    They are attached to the listener socket, and therefore all map
    to a single hash bucket. If the listener has set SO_MAX_PACING_RATE,
    hoping to have newly accepted sockets inherit this rate, SYNACKs
    might be paced and even dropped.

    This is very similar to an internal patch Google has used for more
    than one year.
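
    A sketch of the classification idea (illustrative, not the exact sch_fq
    code): orphaned skbs are folded into a bounded set of buckets instead of
    getting a perfect per-socket key.

        unsigned long key;

        if (!skb->sk)           /* orphaned: stochastic, SFQ-like bucketing */
                key = skb_get_hash(skb) & q->orphan_mask;
        else                    /* socket-backed: perfect per-socket flow */
                key = (unsigned long)skb->sk;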

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • When we added pacing to TCP, we decided to let sch_fq take care
    of actual pacing.

    All TCP had to do was compute sk->pacing_rate using a simple formula:

    sk->pacing_rate = 2 * cwnd * mss / rtt

    It works well for senders (bulk flows), but not very well for receivers
    or even RPC :

    cwnd on the receiver can be less than 10, rtt can be around 100ms, so we
    can end up pacing ACK packets, slowing down the sender.

    Really, only the sender should pace, according to its own logic.

    Instead of adding a new bit in the skb, or running yet another flow
    dissection, we tweak skb->truesize to a small value (2), and we
    instruct sch_fq to use a new helper and not pace pure ACKs.

    Note this also helps TCP Small Queues, as ACK packets present
    in the qdisc/NIC do not prevent sending a data packet (RPC workload).

    This helps to reduce tx completion overhead; ACK packets can use regular
    sock_wfree() instead of tcp_wfree(), which is a bit more expensive.

    This has no impact when packets are sent to the loopback interface,
    as we do not coalesce ACK packets (where we would detect the
    skb->truesize lie).

    In case netem (with a delay) is used, skb_orphan_partial() also sets
    skb->truesize to 1.

    This patch is a combination of two patches we used for about one year at
    Google.
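
    The helper in question is tiny; a sketch of its shape, with the token
    truesize value 2 acting as the "pure ACK" marker:

        /* Pure ACKs carry no payload; TCP marks them with truesize == 2 so
         * sch_fq can recognize them and skip pacing.
         */
        static inline bool skb_is_tcp_pure_ack(const struct sk_buff *skb)
        {
                return skb->truesize == 2;
        }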

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Configuring fq with quantum 0 hangs the system, presumably because of a
    non-interruptible infinite loop. Either way quantum 0 does not make sense.

    Reproduce with:
    sudo tc qdisc add dev lo root fq quantum 0 initial_quantum 0
    ping 127.0.0.1
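
    The fix amounts to validating the netlink attribute in fq_change(); a
    sketch of the kind of check needed (illustrative):

        if (tb[TCA_FQ_QUANTUM]) {
                u32 quantum = nla_get_u32(tb[TCA_FQ_QUANTUM]);

                if (quantum > 0)
                        q->quantum = quantum;       /* accept sane values only */
                else
                        err = -EINVAL;              /* reject quantum 0 */
        }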

    Signed-off-by: Kenneth Klette Jonassen
    Acked-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Kenneth Klette Jonassen
     

29 Jan, 2015

1 commit

  • TIME_WAIT sockets do not own any skb.

    ip_send_unicast_reply() and tcp_v6_send_response() both use
    regular sockets.

    We can safely remove a test in sch_fq and save one cache line miss,
    as sk_state is far away from sk_pacing_rate.

    Tested at Google for about one year.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

27 Nov, 2014

1 commit

  • FQ/pacing has a clamp on delay of 125 ms, to avoid some possible harm.

    It turns out this delay is too small to allow pacing low rates:
    some ISPs set up very aggressive policers, as low as 16kbit.

    Now that the TCP stack has spurious-retransmit prevention, it seems
    safe to increase this fixed parameter, without adding a qdisc attribute.

    Signed-off-by: Eric Dumazet
    Cc: Yang Yingliang
    Signed-off-by: David S. Miller

    Eric Dumazet
     

06 Oct, 2014

1 commit

  • The standard qdisc API to set up a timer implies an atomic operation on
    every packet dequeue: qdisc_unthrottled()

    It turns out this is not really needed for FQ, as FQ has no concept of
    global qdisc throttling: being a qdisc handling many different flows,
    some of them can be throttled while others are not.

    Fix is straightforward : add a 'bool throttle' to
    qdisc_watchdog_schedule_ns(), and remove calls to qdisc_unthrottled()
    in sch_fq.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

30 Sep, 2014

1 commit


23 Aug, 2014

1 commit


05 Jun, 2014

1 commit


14 Mar, 2014

1 commit


09 Mar, 2014

1 commit

  • Resizing the fq hash table allocates memory while holding the qdisc
    spinlock, with BH disabled.

    This is definitely not good, as the allocation might sleep.

    We can drop the lock and reacquire it when needed; we hold RTNL, so no
    other changes can happen at the same time.
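
    The usual pattern applies: allocate outside the lock, swap under the lock,
    free afterwards. A sketch (not the verbatim fq_resize() code):

        new_root = fq_alloc_node(new_size, node);   /* may sleep: no lock held */
        if (!new_root)
                return -ENOMEM;

        sch_tree_lock(sch);                 /* short critical section */
        old_root = q->fq_root;
        q->fq_root = new_root;              /* the real code also rehashes flows here */
        sch_tree_unlock(sch);

        fq_free(old_root);                  /* free the old table outside the lock */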

    Signed-off-by: Eric Dumazet
    Fixes: afe4fd062416 ("pkt_sched: fq: Fair Queue packet scheduler")
    Signed-off-by: David S. Miller

    Eric Dumazet
     

18 Dec, 2013

2 commits


16 Nov, 2013

2 commits

  • For performance reasons, sch_fq tried hard not to set up timers for every
    sent packet, using a quantum-based heuristic: a delay is set up only if
    the flow exhausted its credit.

    The problem is that application-limited flows can refill their credit
    for every queued packet, and so they can evade pacing.

    This problem can also be triggered when TCP flows use small MSS values,
    as TSO auto sizing builds packets that are smaller than the default fq
    quantum (3028 bytes)

    This patch adds a 40 ms delay to guard flow credit refill.

    Fixes: afe4fd062416 ("pkt_sched: fq: Fair Queue packet scheduler")
    Signed-off-by: Eric Dumazet
    Cc: Maciej Żenczykowski
    Cc: Willem de Bruijn
    Cc: Yuchung Cheng
    Cc: Neal Cardwell
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Commit 7eec4174ff29 ("pkt_sched: fq: fix non TCP flows pacing")
    obsoleted TCA_FQ_FLOW_DEFAULT_RATE without notice to its users.

    Suggested by David Miller

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

15 Nov, 2013

1 commit

  • The initial sch_fq implementation copied code from pfifo_fast to classify
    a packet as a high prio packet.

    This clashes with setups using PRIO with, say, 7 bands, as one of the
    bands could be incorrectly (mis)classified by FQ.

    Packets would be queued in the 'internal' queue, and no pacing would
    ever happen for this special queue.

    Fixes: afe4fd062416 ("pkt_sched: fq: Fair Queue packet scheduler")
    Signed-off-by: Maciej Żenczykowski
    Signed-off-by: Eric Dumazet
    Cc: Stephen Hemminger
    Cc: Willem de Bruijn
    Cc: Yuchung Cheng
    Signed-off-by: David S. Miller

    Maciej Żenczykowski
     

28 Oct, 2013

1 commit


09 Oct, 2013

2 commits

  • Steinar reported FQ pacing was not working for UDP flows.

    It looks like the initial sk->sk_pacing_rate value of 0 was
    a wrong choice. We should init it to ~0U (unlimited).

    Then, TCA_FQ_FLOW_DEFAULT_RATE should be removed because it makes
    no real sense. The default rate is really unlimited, and we
    need to avoid a division by zero.

    Reported-by: Steinar H. Gunderson
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • TCA_FQ_INITIAL_QUANTUM should set q->initial_quantum

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

02 Oct, 2013

1 commit

  • FQ rate limiting suffers from two problems, reported
    by Steinar:

    1) FQ enforces a delay when the flow quantum is exhausted in order
    to reduce cpu overhead. But if packets are small, the current
    delay computation is slightly wrong, and observed rates can
    be too high.

    Steinar had this problem because he disabled TSO and GSO,
    and the default FQ quantum is 2*1514.

    (Of course, I hope recent TSO auto-sizing changes will help avoid
    having to disable TSO in the first place.)

    2) maxrate was not used for forwarded flows (skbs not attached
    to a socket)

    Tested:

    tc qdisc add dev eth0 root est 1sec 4sec fq maxrate 8Mbit
    netperf -H lpq84 -l 1000 &
    sleep 10 ; tc -s qdisc show dev eth0
    qdisc fq 8003: root refcnt 32 limit 10000p flow_limit 100p buckets 1024
    quantum 3028 initial_quantum 15140 maxrate 8000Kbit
    Sent 16819357 bytes 11258 pkt (dropped 0, overlimits 0 requeues 0)
    rate 7831Kbit 653pps backlog 7570b 5p requeues 0
    44 flows (43 inactive, 1 throttled), next packet delay 2977352 ns
    0 gc, 0 highprio, 5545 throttled

    lpq83:~# tcpdump -p -i eth0 host lpq84 -c 12
    09:02:52.079484 IP lpq83 > lpq84: . 1389536928:1389538376(1448) ack 3808678021 win 457
    09:02:52.079499 IP lpq83 > lpq84: . 1448:2896(1448) ack 1 win 457
    09:02:52.079906 IP lpq84 > lpq83: . ack 2896 win 16384
    09:02:52.082568 IP lpq83 > lpq84: . 2896:4344(1448) ack 1 win 457
    09:02:52.082581 IP lpq83 > lpq84: . 4344:5792(1448) ack 1 win 457
    09:02:52.083017 IP lpq84 > lpq83: . ack 5792 win 16384
    09:02:52.085678 IP lpq83 > lpq84: . 5792:7240(1448) ack 1 win 457
    09:02:52.085693 IP lpq83 > lpq84: . 7240:8688(1448) ack 1 win 457
    09:02:52.086117 IP lpq84 > lpq83: . ack 8688 win 16384
    09:02:52.088792 IP lpq83 > lpq84: . 8688:10136(1448) ack 1 win 457
    09:02:52.088806 IP lpq83 > lpq84: . 10136:11584(1448) ack 1 win 457
    09:02:52.089217 IP lpq84 > lpq83: . ack 11584 win 16384

    Reported-by: Steinar H. Gunderson
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

01 Oct, 2013

1 commit

  • fq_reset() should drop all packets in the queue, including
    those from throttled flows.

    This patch moves code from fq_destroy() to fq_reset()
    to do the cleaning.

    fq_change() must stop calling fq_dequeue() if all remaining
    packets are from throttled flows.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

31 Aug, 2013

1 commit

  • kbuild bot reported the following m68k build error:

    net/sched/sch_fq.c: In function 'fq_dequeue':
    >> net/sched/sch_fq.c:491:2: error: implicit declaration of function
    'prefetch' [-Werror=implicit-function-declaration]
    cc1: some warnings being treated as errors

    While we are fixing this, move this prefetch() call a bit earlier.
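
    The fix is to include the header that declares prefetch() instead of
    relying on an indirect include; the prefetched field below is shown for
    illustration:

        #include <linux/prefetch.h>     /* declares prefetch()/prefetchw() */

        /* ... and in fq_dequeue(), issue the prefetch a bit earlier: */
        prefetch(&skb->end);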

    Reported-by: Wu Fengguang
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet