31 Aug, 2017

1 commit

  • Depending on where ->init fails, we can get a null pointer dereference due
    to an uninitialized hires timer (watchdog) or a double free of the qdisc
    hash because it has already been freed by ->destroy().

    Fixes: 8d5537387505 ("net/sched/hfsc: allocate tcf block for hfsc root class")
    Fixes: 87b60cfacf9f ("net_sched: fix error recovery at qdisc creation")
    Signed-off-by: Nikolay Aleksandrov
    Signed-off-by: David S. Miller

    Nikolay Aleksandrov
     

26 Aug, 2017

1 commit

  • For TC classes, their ->get() and ->put() are always paired, and the
    reference counting is completely useless, because:

    1) For class modification and dumping paths, we already hold the RTNL
    lock, so all of these ->get(), ->change(), ->put() calls are atomic.

    2) For filter binding/unbinding, we use a reference counter other than
    this one, and those paths hold the RTNL lock too.

    3) For ->qlen_notify(), it is special because it is called on the
    ->enqueue() path, but we already hold the qdisc tree lock there, and we
    hold this tree lock when grafting or deleting the class too, so it
    should not be gone or changed until we release the tree lock.

    Therefore, this patch removes ->get() and ->put(), but:

    1) Adds a new ->find() to find the pointer to a class by classid, with
    no refcnt.

    2) Moves the original class destruction on the last refcnt into
    ->delete(), right after releasing the tree lock. This is fine because
    the class was already removed from the hash while holding the lock.

    For qdiscs that also used ->put() as ->unbind(), those are simply
    renamed to reflect this change.

    Cc: Jamal Hadi Salim
    Signed-off-by: Cong Wang
    Acked-by: Jiri Pirko
    Acked-by: Jamal Hadi Salim
    Signed-off-by: David S. Miller

    WANG Cong
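
    A minimal sketch of what the ops change amounts to for a classful qdisc.
    The foo_* names are hypothetical; only the Qdisc_class_ops shape follows
    the kernel:

    /* Before: lookup bumped a refcount that a paired ->put() later dropped. */
    static unsigned long foo_get(struct Qdisc *sch, u32 classid)
    {
            struct foo_class *cl = foo_class_lookup(sch, classid);

            if (cl)
                    cl->refcnt++;
            return (unsigned long)cl;
    }

    /* After: ->find() just returns the pointer; RTNL and the qdisc tree
     * lock keep the class alive for as long as the caller needs it. */
    static unsigned long foo_find(struct Qdisc *sch, u32 classid)
    {
            return (unsigned long)foo_class_lookup(sch, classid);
    }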
     

16 Aug, 2017

1 commit

  • Traffic filters could keep direct pointers to classes in a classful
    qdisc, thus qdisc destruction first removes all filters before freeing
    classes. Class destruction methods also try to free attached filters,
    but now this isn't safe because tcf_block_put(), unlike tcf_destroy_chain(),
    cannot be called a second time.

    This patch sets class->block to NULL after the first tcf_block_put() and
    turns the second call into a no-op.

    Fixes: 6529eaba33f0 ("net: sched: introduce tcf block infractructure")
    Signed-off-by: Konstantin Khlebnikov
    Acked-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Konstantin Khlebnikov
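
    A hedged sketch of the guard described above; the destroy helper and its
    surroundings are illustrative, only tcf_block_put() is the real API:

    static void foo_destroy_class(struct Qdisc *sch, struct foo_class *cl)
    {
            /* Reached both from filter cleanup during qdisc ->destroy() and
             * from class destruction itself. */
            if (cl->block) {
                    tcf_block_put(cl->block);
                    cl->block = NULL;       /* makes a repeated call a no-op */
            }
            qdisc_destroy(cl->qdisc);
            kfree(cl);
    }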
     

07 Jun, 2017

1 commit

  • There is a need to instruct the HW offloaded path to push certain matched
    packets to the CPU/kernel for further analysis. So this patch introduces
    a new TRAP control action to TC.

    For the kernel datapath, this action does not make much sense. So with
    the same logic as in HW, the new TRAP behaves similarly to STOLEN. The
    skb is just dropped in the datapath (and virtually ejected to an upper
    level, which does not exist in the case of the kernel).

    Signed-off-by: Jiri Pirko
    Reviewed-by: Yotam Gigi
    Reviewed-by: Andrew Lunn
    Signed-off-by: David S. Miller

    Jiri Pirko
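
    A hedged sketch of how a classful qdisc's classify helper treats the new
    action, grouped with the existing stolen/queued cases as described above:

    /* inside a qdisc's classify helper, after the filter chain returned */
    switch (result) {
    case TC_ACT_QUEUED:
    case TC_ACT_STOLEN:
    case TC_ACT_TRAP:
            /* software datapath: the skb is consumed here; hardware offload
             * interprets TRAP as "send this packet to the CPU" */
            *qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
            /* fall through */
    case TC_ACT_SHOT:
            return NULL;
    }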
     

18 May, 2017

2 commits

  • Currently, the filter chains are directly put into the private structures
    of qdiscs. In order to be able to have multiple chains per qdisc and to
    allow filter chain sharing among qdiscs, there is a need for a common
    object that would hold the chains. This introduces such an object and
    calls it "tcf_block".

    Helpers to get and put the blocks are provided to be called from
    individual qdisc code. Also, the original filter_list pointers are left
    in the qdisc privs to allow entry into tcf_block processing without the
    added overhead of a possible multiple pointer dereference on the fast
    path.

    Signed-off-by: Jiri Pirko
    Acked-by: Jamal Hadi Salim
    Signed-off-by: David S. Miller

    Jiri Pirko
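
    A minimal sketch of the helper usage from a qdisc's ->init()/->destroy()
    as introduced here (struct foo_sched_data and its members are
    hypothetical; the two-argument tcf_block_get() is the form added by this
    change):

    static int foo_init(struct Qdisc *sch, struct nlattr *opt)
    {
            struct foo_sched_data *q = qdisc_priv(sch);

            /* the block now owns the filter chain; the filter_list pointer
             * stays in the qdisc priv for the fast path */
            return tcf_block_get(&q->block, &q->filter_list);
    }

    static void foo_destroy(struct Qdisc *sch)
    {
            struct foo_sched_data *q = qdisc_priv(sch);

            tcf_block_put(q->block);
    }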
     
  • Move the tc_classify function to cls_api.c where it belongs and rename
    it to fit the namespace.

    Signed-off-by: Jiri Pirko
    Acked-by: Jamal Hadi Salim
    Signed-off-by: David S. Miller

    Jiri Pirko
     

13 Mar, 2017

1 commit

  • The original reason [1] for having hidden qdiscs (potential scalability
    issues in qdisc_match_from_root() with a single linked list in case of a
    large number of qdiscs) has been invalidated by 59cc1f61f0 ("net: sched:
    convert qdisc linked list to hashtable").

    This allows us to bring more clarity and determinism into the dump by
    making the default pfifo qdiscs visible.

    We're not turning this on by default though, as it was deemed [2] too
    intrusive / an unnecessary change of default behavior towards userspace.
    Instead, a TCA_DUMP_INVISIBLE netlink attribute is introduced, which
    allows applications to request a complete qdisc hierarchy dump, including
    the qdiscs that have always been implicit/invisible.

    The singleton noop_qdisc stays invisible, as teaching the whole
    infrastructure about singletons would require quite some surgery with
    very little gain (seeing no qdisc or seeing the noop qdisc in the dump
    probably sets the same user expectation).

    [1] http://lkml.kernel.org/r/1460732328.10638.74.camel@edumazet-glaptop3.roam.corp.google.com
    [2] http://lkml.kernel.org/r/20161021.105935.1907696543877061916.davem@davemloft.net

    Signed-off-by: Jiri Kosina
    Signed-off-by: David S. Miller

    Jiri Kosina
     

06 Dec, 2016

1 commit

  • 1) The old code was hard to maintain, due to complex lock chains.
    (We probably will be able to remove some kfree_rcu() in callers.)

    2) Using a single timer to update all estimators does not scale.

    3) The code was buggy on 32bit kernels (WRITE_ONCE() on a 64bit quantity
    is not supposed to work well).

    In this rewrite:

    - I removed the RB tree that had to be scanned in
    gen_estimator_active(). qdisc dumps should be much faster.

    - Each estimator has its own timer.

    - Estimations are maintained in a net_rate_estimator structure,
    instead of dirtying the qdisc. Minor, but part of the simplification.

    - Reading the estimator uses RCU and a seqcount to provide proper
    support for 32bit kernels.

    - We reduce the memory needed when estimators are not used, since
    we store a pointer instead of the bytes/packets counters.

    - xt_rateest_mt() no longer has to grab a spinlock.
    (In the future, xt_rateest_tg() could be switched to per cpu counters.)

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
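
    A hedged sketch of the reader pattern described above (RCU to fetch the
    estimator, a seqcount for a consistent snapshot on 32bit kernels); the
    structure and field names here are illustrative, not the kernel's:

    struct foo_rate_est {
            seqcount_t      seq;
            u64             bps;
            u64             pps;
            struct rcu_head rcu;    /* freed via kfree_rcu() when replaced */
    };

    static void foo_sample_rate(struct foo_rate_est __rcu **ptr,
                                u64 *bps, u64 *pps)
    {
            struct foo_rate_est *e;
            unsigned int seq;

            *bps = *pps = 0;
            rcu_read_lock();
            e = rcu_dereference(*ptr);      /* NULL when no estimator is used */
            if (e) {
                    do {    /* retry loop instead of any qdisc lock */
                            seq = read_seqcount_begin(&e->seq);
                            *bps = e->bps;
                            *pps = e->pps;
                    } while (read_seqcount_retry(&e->seq, seq));
            }
            rcu_read_unlock();
    }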
     

09 Aug, 2016

2 commits

  • The code using this variable has been commented out in the past as it
    was causing issues in upperlimited link-sharing scenarios.

    Signed-off-by: Michal Soltys
    Signed-off-by: David S. Miller

    Michal Soltys
     
  • This patch simplifies how we update fsc and calculate vt from it - while
    keeping the expected functionality identical with how hfsc behaves
    currently. It also fixes a certain issue introduced with
    a very old patch.

    The idea is that instead of correcting cl_vt before the fsc curve update
    (rtsc_min) and correcting cl_vt after the calculation (rtsc_y2x) to keep
    cl_vt local to the current period - we can simply rely on virtual times
    and curve values always being in sync - analogously to how rsc and usc
    function, except that we use virtual time here.

    Why hasn't it been done this way from the beginning? The likely scenario
    (based on the code trying to correct curves whenever possible) was to
    keep the virtual times as small as possible - as they have a tendency to
    "gallop" forward whenever their siblings and other fair sharing
    subtrees are idling. On top of that, the current code is subtly bugged,
    so the cumulative time (without any corrections) is always kept and used
    in init_vf() when a new backlog period begins (using cl_cvtoff).

    Is the cumulative value safe? Generally yes, though corner cases are
    easy to create. For example consider:

    1gbit interface
    some 100kbit leaf, everything else idle

    With the current tick (64ns), 1s is 15625000 ticks, but the leaf is alone
    and it's virtual time, so in reality it's 10000 times more. In other
    words, 38 bits are needed to hold 1 second. 54 - 1 day, 59 - 1 month,
    63 - 1 year (all logarithms rounded up). It's getting somewhat dangerous,
    but it also requires a setup excusing this kind of values, not to mention
    a permanently backlogged class for a year. In the near-most-extreme case
    (10gbit, 10kbit leaf), we have "enough" to hold ~13.6 days in 64 bits.

    Well, the issue remains mostly theoretical and cl_cvtoff has been
    working fine for all those years. Sensible configurations are de facto
    immune to this issue, and not-so-sensible ones can solve it with a
    cronjob whose period is inversely proportional to the insanity of such a
    setup =)

    Now let's explain the subtle bug mentioned earlier.

    The issue is related to how offsets are kept and how we calculate
    virtual times and update the fair service curve(s). The issue itself is
    subtle, but easy to observe with long m1 segments. It was introduced in
    a rather old patch:

    Commit 99296150c7: "[NET_SCHED]: O(1) children vtoff adjustment
    in HFSC scheduler"

    (available in git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git)

    Originally when a new backlog period was started, cl_vtoff of each
    sibling was updated with cl_cvtmax from the past period - naturally
    moving all cl_vt to the proper starting point. That patch adjusted it so
    the cumulative offset is kept in the parent, and there is no need for
    traversing the list (as any subsequent child activation derives its new
    vt from an already active sibling).

    But with this change, cl_vtoff (of each sibling) is no longer persistent
    across the inactivity periods, as it's calculated from the parent's
    cl_cvtoff on a new backlog period, conflicting with the following curve
    correction from the previous period:

    if (cl->cl_virtual.x == vt) {
            cl->cl_virtual.x -= cl->cl_vtoff;
            cl->cl_vtoff = 0;
    }

    This essentially tries to keep the curve as if it were local to the
    period and resets cl_vtoff (the cumulative vt offset of the class) to 0
    when possible (read: when we have an intersection or if the new curve is
    below the old one). But then it's recalculated from cl_cvtoff on the
    next active period. The rtsc_min() call preceding the above if() then
    doesn't really do what we expect it to do in such a scenario - as it
    calculates the minimum of the corrected curve (from the previous backlog
    period) and the new uncorrected curve (with an offset derived from
    cl_cvtoff).

    Example (with A = class 1:10 and B = class 1:11 below):

    tc class add dev $ife parent 1:0 classid 1:1 hfsc ls m2 100mbit ul m2 100mbit
    tc class add dev $ife parent 1:1 classid 1:10 hfsc ls m1 80mbit d 10s m2 20mbit
    tc class add dev $ife parent 1:1 classid 1:11 hfsc ls m2 20mbit

    start B, keep it backlogged, let it run 6s (30s worth of vt as A is idle)
    pause B briefly to force a cl_cvtoff update in the parent (whole 1:1 going idle)
    start A, let it run 10s
    pause A briefly to force rtsc_min()

    At this point we would expect A to continue at 20mbit after a brief
    moment of 80mbit. But instead A will use 80mbit for the full 10s again.
    It's the effect of first correcting A (during 'start A'), and then -
    after unpausing - calculating rtsc_min() from the old corrected and the
    new uncorrected curve.

    The patch fixes this bug and keeps vt and fsc in sync (virtual times
    are cumulative, not local to the backlog period).

    Signed-off-by: Michal Soltys
    Signed-off-by: David S. Miller

    Michal Soltys
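
    For reference, the bit-width figures quoted in the corner-case discussion
    above can be checked directly (64ns tick, 1gbit link, 100kbit leaf):

    1 s / 64 ns                        = 15625000 ticks
    * (1 gbit / 100 kbit)              = 15625000 * 10000 ~= 1.56e11 vt/s
    log2(1.56e11)  ~= 37.2             -> 38 bits for 1 second
    log2(86400 * 1.56e11)  ~= 53.6     -> 54 bits for 1 day
    log2(3.15e7 * 1.56e11) ~= 62.1     -> 63 bits for 1 year
    10gbit link, 10kbit leaf: 2^64 / (15625000 * 1e6) ~= 1.18e6 s ~= 13.6 days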
     

09 Jul, 2016

1 commit

  • hfsc_sched is huge (size: 920, cachelines: 15), but we can get it down to
    14 cachelines by placing level after filter_cnt (covering a 4 byte hole)
    and reducing period/nactive/flags to u32 (period is just a counter,
    incremented when a class becomes active -- 2**32 is plenty for this
    purpose; also, long is only 32bit wide on 32bit platforms anyway).

    cl_vtperiod is exported to userspace via tc_hfsc_stats, but its period
    member is already u32, so no precision is lost there either.

    Cc: Michal Soltys
    Signed-off-by: Florian Westphal
    Signed-off-by: David S. Miller

    Florian Westphal
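
    A hedged sketch of the kind of layout change described (an illustrative
    subset of hfsc_class members only, comments approximate):

    struct hfsc_class {
            ...
            unsigned int    filter_cnt;     /* filter count */
            unsigned int    level;          /* moved here: fills the 4 byte
                                               hole after filter_cnt */
            ...
            u32             cl_vtperiod;    /* vt period sequence number */
            u32             cl_parentperiod;/* parent's vt period seq. no. */
            u32             cl_nactive;     /* number of active children */
            u32             cl_flags;       /* which curves are valid */
            ...
    };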
     

01 Jul, 2016

5 commits

  • cl->cl_vt alone is relative only to the current backlog period, while
    the curve operates on cumulative virtual time. This patch adds the
    missing cl->cl_vtoff.

    Signed-off-by: Michal Soltys
    Signed-off-by: David S. Miller

    Michal Soltys
     
  • When a class is going passive, it should update its cl_vt first,
    to be consistent with the last dequeue operation.

    Otherwise its cl_vt will be one packet behind and the parent's cvtmax
    might not be updated either.

    One possible side effect: if some class goes passive and subsequently
    goes active /without/ its parent going passive - with cl_vt lagging one
    packet behind - the comparison made in init_vf() will be affected (same
    period).

    Signed-off-by: Michal Soltys
    Signed-off-by: David S. Miller

    Michal Soltys
     
  • This is an update to:
    commit a09ceb0e08140a ("sched: remove qdisc->drop")

    That commit removed qdisc->drop, but left behind dlist and droplist,
    which no longer serve any meaningful purpose.

    Signed-off-by: Michal Soltys
    Signed-off-by: David S. Miller

    Michal Soltys
     
  • The condition can only succeed on wrong configurations.

    Signed-off-by: Michal Soltys
    Signed-off-by: David S. Miller

    Michal Soltys
     
  • Realtime scheduling implemented in HFSC uses the head of the queue to
    make the decision about which packet to schedule next. But in case of
    any head drop, the deadline calculated for the previous head is not
    necessarily correct for the next head (unless both packets have the same
    length).

    Thanks to the peek() function used during dequeue - which internally is
    a dequeue operation - hfsc is almost safe from this issue, as peek()
    dequeues and isolates the head, storing it temporarily until the real
    dequeue happens.

    But there is one exception: if, after the class activation, a drop
    happens before the first dequeue operation, there's never a chance to do
    the peek().

    Adding a peek() call in enqueue - if this is the first packet in a new
    backlog period AND the scheduler has a realtime curve defined - fixes
    that one corner case. The first hfsc_dequeue() will use that peeked
    packet, just as every subsequent hfsc_dequeue() call uses the packet
    peeked by the previous call.

    Signed-off-by: Michal Soltys
    Signed-off-by: David S. Miller

    Michal Soltys
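
    A sketch close to the enqueue-side change described above (treat it as
    approximate; the surrounding function is elided):

    /* in hfsc_enqueue(), after the child qdisc accepted the skb */
    if (cl->qdisc->q.qlen == 1) {
            set_active(cl, qdisc_pkt_len(skb));
            /* with a realtime curve set, peek now so that a head drop
             * before the first dequeue can no longer leave behind a
             * deadline computed for the old head */
            if (cl->cl_flags & HFSC_RSC)
                    cl->qdisc->ops->peek(cl->qdisc);
    }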
     

26 Jun, 2016

1 commit

  • Qdisc performance suffers when packets are dropped at enqueue()
    time because the drops (kfree_skb()) are done while the qdisc lock is
    held, delaying a dequeue() draining the queue.

    Nominal throughput can be reduced by 50 % when this happens,
    at a time we would like the dequeue() to proceed as fast as possible.

    Even FQ is vulnerable to this problem, even though one of FQ's goals was
    to provide some flow isolation.

    This patch adds a 'struct sk_buff **to_free' parameter to all
    qdisc->enqueue() methods and to the qdisc_drop() helper.

    I measured a performance increase of up to 12 %, but this patch
    is a prereq so that future batches in enqueue() can fly.

    Signed-off-by: Eric Dumazet
    Acked-by: Jesper Dangaard Brouer
    Signed-off-by: David S. Miller

    Eric Dumazet
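
    A hedged sketch of the resulting pattern; foo_enqueue(), foo_over_limit()
    and foo_do_enqueue() are illustrative, while qdisc_drop() and
    kfree_skb_list() are the real helpers:

    static int foo_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                           struct sk_buff **to_free)
    {
            if (unlikely(foo_over_limit(sch)))
                    /* chains the skb on *to_free instead of calling
                     * kfree_skb() under the qdisc lock */
                    return qdisc_drop(skb, sch, to_free);
            return foo_do_enqueue(skb, sch);
    }

    /* caller side, after releasing the root qdisc lock */
    if (unlikely(to_free))
            kfree_skb_list(to_free);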
     

11 Jun, 2016

2 commits

  • __QDISC_STATE_THROTTLED bit manipulation is rather expensive
    for HTB and a few others.

    I already removed it for sch_fq in commit f2600cf02b5b
    ("net: sched: avoid costly atomic operation in fq_dequeue()")
    and so far nobody complained.

    When one or more packets are stuck in one or more throttled
    HTB classes, an htb dequeue() performs two atomic operations
    to clear/set the __QDISC_STATE_THROTTLED bit, while the root qdisc
    lock is held.

    Removing this pair of atomic operations brings me an 8 % performance
    increase on 200 TCP_RR tests, in the presence of throttled classes.

    This patch has no side effect, since nothing actually uses
    qdisc_is_throttled() anymore.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Conflicts:
    net/sched/act_police.c
    net/sched/sch_drr.c
    net/sched/sch_hfsc.c
    net/sched/sch_prio.c
    net/sched/sch_red.c
    net/sched/sch_tbf.c

    In net-next the drop methods of the packet schedulers got removed, so
    the bug fixes to them in 'net' are irrelevant.

    A packet action unload crash fix conflicts with the addition of the
    new firstuse timestamp.

    Signed-off-by: David S. Miller

    David S. Miller
     

09 Jun, 2016

1 commit

  • After the removal of TCA_CBQ_OVL_STRATEGY from the cbq scheduler, there
    are no more callers of ->drop() outside of other ->drop() functions,
    i.e. nothing calls them.

    Signed-off-by: Florian Westphal
    Signed-off-by: David S. Miller

    Florian Westphal
     

08 Jun, 2016

1 commit

  • Large tc dumps (tc -s {qdisc|class} sh dev ethX) done by the Google BwE
    host agent [1] are problematic at scale:

    For each qdisc/class found in the dump, we currently lock the root qdisc
    spinlock in order to get stats. Sampling stats every 5 seconds from
    thousands of HTB classes is a challenge when the root qdisc spinlock is
    under high pressure. Not only do the dumps take time, they also slow
    down the fast path (queue/dequeue packets) by 10 % to 20 % in some cases.

    An audit of existing qdiscs showed that sch_fq_codel is the only qdisc
    that might need the qdisc lock in fq_codel_dump_stats() and
    fq_codel_dump_class_stats().

    In v2 of this patch, I now use the Qdisc running seqcount to provide
    consistent reads of packets/bytes counters, regardless of 32/64 bit arches.

    I also changed the rate estimators to use the same infrastructure
    so that they no longer need to take the root qdisc lock.

    [1]
    http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43838.pdf

    Signed-off-by: Eric Dumazet
    Cc: Cong Wang
    Cc: Jamal Hadi Salim
    Cc: John Fastabend
    Cc: Kevin Athey
    Cc: Xiaotian Pei
    Signed-off-by: David S. Miller

    Eric Dumazet
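
    A hedged sketch of the lock-free read this enables: enqueue/dequeue
    already bracket their updates with the Qdisc running seqcount
    (qdisc_run_begin()/qdisc_run_end()), so a dumper can retry instead of
    taking the root lock. The helper name is hypothetical:

    static void foo_snapshot_basic(const struct Qdisc *sch,
                                   u64 *bytes, u64 *packets)
    {
            unsigned int seq;

            do {    /* retry if the fast path updated the counters meanwhile */
                    seq = read_seqcount_begin(&sch->running);
                    *bytes   = sch->bstats.bytes;
                    *packets = sch->bstats.packets;
            } while (read_seqcount_retry(&sch->running, seq));
    }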
     

04 Jun, 2016

1 commit

  • hfsc updates its backlog lazily, that is, only when we
    dump the stats. This is problematic now that we update the
    backlog in qdisc_tree_reduce_backlog().

    Reported-by: Stas Nichiporovich
    Tested-by: Stas Nichiporovich
    Fixes: 2ccccf5fb43f ("net_sched: update hierarchical backlog too")
    Cc: Jamal Hadi Salim
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    WANG Cong
     

01 Mar, 2016

2 commits

  • When the bottom qdisc decides to, for example, drop some packet,
    it calls qdisc_tree_decrease_qlen() to update the queue length
    for all its ancestors; we need to update the backlog too to
    keep the stats on the root qdisc accurate.

    Cc: Jamal Hadi Salim
    Acked-by: Jamal Hadi Salim
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    WANG Cong
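
    A hedged sketch of the caller-side pattern this enables; the foo_*
    helpers are illustrative (and assumed to fix up the qdisc's own
    counters), while qdisc_tree_reduce_backlog() is the real function:

    unsigned int dropped = 0, dropped_len = 0;

    while (foo_over_limit(sch)) {
            struct sk_buff *skb = foo_dequeue_victim(sch);

            dropped++;
            dropped_len += qdisc_pkt_len(skb);
            kfree_skb(skb);
    }
    /* keep qlen *and* backlog accurate on every ancestor up to the root */
    qdisc_tree_reduce_backlog(sch, dropped, dropped_len);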
     
  • Remove nearly duplicated code and prepare for the following patch.

    Cc: Jamal Hadi Salim
    Acked-by: Jamal Hadi Salim
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    WANG Cong
     

28 Aug, 2015

1 commit

  • For classifiers getting invoked via tc_classify(), we always need an
    extra function call into tc_classify_compat(), as both are being
    exported as symbols and tc_classify() itself doesn't do much except
    handling reclassifications when tp->classify() returns
    TC_ACT_RECLASSIFY.

    CBQ and ATM are the only qdiscs that directly call into
    tc_classify_compat(); all others use tc_classify(). When tc actions are
    configured out of the kernel, tc_classify() effectively does nothing
    besides delegating.

    We can spare this layer and consolidate both functions. pktgen on a
    single CPU constantly pushing skbs directly into the netif_receive_skb()
    path, with a dummy classifier attached to the ingress qdisc, improves
    slightly from 22.3Mpps to 23.1Mpps.

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

30 Sep, 2014

4 commits

  • After the previous patches to simplify qstats, the qstats can be
    made per cpu with a packed union in the Qdisc struct.

    Signed-off-by: John Fastabend
    Signed-off-by: David S. Miller

    John Fastabend
     
  • This removes the use of the qstats->qlen variable from the classifiers
    and makes it an explicit argument to gnet_stats_copy_queue().

    The qlen represents the qdisc queue length and is packed into
    the qstats at the last moment before passing to user space. By
    handling it explicitly we avoid, in the percpu stats case, having
    to figure out which per_cpu variable to put it in.

    It would probably be best to remove it from qstats completely,
    but qstats is a user space ABI and can't be broken. A future
    patch could make an internal-only qstats structure that would
    avoid having to allocate an additional u32 variable on the
    Qdisc struct. This would make the qstats struct 128 bits instead
    of 128+32.

    Signed-off-by: John Fastabend
    Signed-off-by: David S. Miller

    John Fastabend
     
  • This adds helpers to manipulate qstats logic and replaces locations
    that touch the counters directly. This simplifies future patches
    to push qstats onto per cpu counters.

    Signed-off-by: John Fastabend
    Signed-off-by: David S. Miller

    John Fastabend
     
  • In order to run qdiscs without locking, statistics and estimators
    need to be handled correctly.

    To resolve bstats, make the statistics per cpu. And because this is
    only needed for qdiscs that run without locks, which will not be
    the case for most qdiscs in the near future, only create percpu
    stats when qdiscs set the TCQ_F_CPUSTATS flag.

    Next, because estimators use the bstats to calculate packets per
    second and bytes per second, the estimator code paths are updated
    to use the per cpu statistics.

    Signed-off-by: John Fastabend
    Signed-off-by: David S. Miller

    John Fastabend
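
    A hedged sketch of the resulting split on the stats update path; the
    helpers named here exist in the kernel, but treat the placement as
    illustrative:

    if (qdisc_is_percpu_stats(sch)) {
            /* lockless qdisc (TCQ_F_CPUSTATS): each cpu owns its counters */
            qdisc_bstats_cpu_update(sch, skb);
    } else {
            /* classic qdisc: counters protected by the qdisc lock */
            qdisc_bstats_update(sch, skb);
    }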
     

14 Sep, 2014

1 commit

  • RCU-ify tcf_proto; this allows calling tc_classify() without holding
    any locks. Updaters are protected by RTNL.

    This patch prepares the core net_sched infrastructure for running
    the classifier/action chains without holding the qdisc lock; however,
    it does nothing to ensure cls_xxx and act_xxx types also work without
    locking. Additional patches are required to address the fallout.

    Signed-off-by: John Fastabend
    Acked-by: Eric Dumazet
    Signed-off-by: David S. Miller

    John Fastabend
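
    A hedged sketch of the access pattern after the conversion; q->filter_list
    stands for a qdisc's filter chain head and tp_new for a freshly prepared
    filter, both illustrative:

    /* fast path (classification): RCU/BH read side, no qdisc lock */
    struct tcf_proto *tp;

    for (tp = rcu_dereference_bh(q->filter_list); tp;
         tp = rcu_dereference_bh(tp->next)) {
            /* tp->classify(skb, tp, &res) ... */
    }

    /* control path: updaters still serialize on RTNL */
    ASSERT_RTNL();
    rcu_assign_pointer(q->filter_list, tp_new);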
     

11 Jun, 2013

1 commit

  • struct gnet_stats_rate_est contains u32 fields, so the bytes per second
    field can wrap at 34360Mbit.

    Add a new gnet_stats_rate_est64 structure to get 64bit bps/pps fields,
    and switch the kernel to use this structure natively.

    This structure is dumped to user space as a new attribute:

    TCA_STATS_RATE_EST64

    The old tc command will now display the capped bps (at 34360Mbit)
    instead of wrapped values, and an updated tc command will display the
    correct information.

    Old tc command output, after the patch:

    eric:~# tc -s -d qd sh dev lo
    qdisc pfifo 8001: root refcnt 2 limit 1000p
    Sent 80868245400 bytes 1978837 pkt (dropped 0, overlimits 0 requeues 0)
    rate 34360Mbit 189696pps backlog 0b 0p requeues 0

    This patch carefully reorganizes "struct Qdisc" layout to get optimal
    performance on SMP.

    Signed-off-by: Eric Dumazet
    Cc: Ben Hutchings
    Signed-off-by: David S. Miller

    Eric Dumazet
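
    The wrap point quoted above follows directly from the u32 field:
    2^32 bytes/s * 8 = 34359738368 bit/s ~= 34360 Mbit/s. The new structure
    simply widens both rate fields to 64 bits:

    struct gnet_stats_rate_est64 {
            __u64   bps;    /* current byte rate, bytes per second */
            __u64   pps;    /* current packet rate, packets per second */
    };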
     

28 Feb, 2013

1 commit

  • I'm not sure why, but the hlist for each entry iterators were conceived
    differently from the list ones. While the list ones are formed as:

    list_for_each_entry(pos, head, member)

    The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

    Why did they need an extra pos parameter? I'm not quite sure. Not only
    do they not really need it, it also prevents the iterator from looking
    exactly like the list iterator, which is unfortunate.

    Besides the semantic patch, there was some manual work required:

    - Fix up the actual hlist iterators in linux/list.h
    - Fix up the declaration of other iterators based on the hlist ones.
    - A very small number of places were using the 'node' parameter; these
    were modified to use 'obj->member' instead.
    - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
    properly, so those had to be fixed up manually.

    The semantic patch which is mostly the work of Peter Senna Tschudin is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;

    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@

    -T b;

    [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
    [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
    [akpm@linux-foundation.org: checkpatch fixes]
    [akpm@linux-foundation.org: fix warnings]
    [akpm@linux-foudnation.org: redo intrusive kvm changes]
    Tested-by: Peter Senna Tschudin
    Acked-by: Paul E. McKenney
    Signed-off-by: Sasha Levin
    Cc: Wu Fengguang
    Cc: Marcelo Tosatti
    Cc: Gleb Natapov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
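
    In net/sched code the practical effect is simply dropping the scratch
    node argument from the hash walks, e.g. (illustrative, loosely based on
    the class hash iteration in the classful schedulers):

    /* before */
    struct hlist_node *n;
    hlist_for_each_entry(cl, n, &q->clhash.hash[i], common.hnode)
            foo(cl);

    /* after */
    hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode)
            foo(cl);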
     

11 May, 2012

1 commit

  • Class bytes/packets stats can be misleading because they are updated in
    enqueue() while the packet might be dropped later.

    We already fixed all qdiscs but sch_atm.

    This patch makes the final cleanup.

    Class rate estimators can now match the qdisc ones.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet