19 Jun, 2005

10 commits


14 Jun, 2005

1 commit


09 Jun, 2005

4 commits


01 Jun, 2005

3 commits


27 May, 2005

3 commits

  • Here is a fixed-up version of the reorder feature of netem.
    It is the same as the earlier patch, plus the bugfix from Julio merged in.
    It has the expected backwards-compatibility behaviour.

    Go ahead and merge this one; the TCP strangeness I was seeing was due
    to the reordering bug and the previous version of the TSO patch.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Stephen Hemminger
     
  • Netem works better if packets are just queued in the inner discipline
    rather than kept in a separate delayed queue. Change netem to use the
    dequeue/requeue sequence to peek, like TBF does.

    Doing this avoids the potential qlen problems of the old method. Those
    problems happened when the netem_run that moved packets from the delayed
    queue to the nested discipline failed (because the inner queue was full).
    This happened in dequeue, so the effective qlen of netem would be decreased
    (because of the drop), but there was no way to keep the outer qdisc (the
    caller of netem's dequeue) in sync.

    The problem window is still there, since this patch doesn't address the
    issue of requeue failing in netem_dequeue, but that shouldn't happen because
    the dequeue/requeue sequence should always work. The long-term correct fix
    is to implement qdisc->peek in all the qdiscs to allow for this (it is
    needed by several other qdiscs as well). A minimal sketch of the peek
    pattern follows this entry.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Stephen Hemminger
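
    A minimal userspace sketch of the dequeue/requeue peek pattern described
    above. The queue, packet, and function names here are illustrative
    stand-ins, not the kernel's netem, TBF, or Qdisc interfaces:

        /*
         * "Peek" at the head of a FIFO by dequeueing it and requeueing it if
         * it is not yet due.  The packet either leaves or goes straight back,
         * so the caller's view of the queue length never drifts out of sync.
         */
        #include <stdio.h>

        struct pkt {
            int id;
            long time_to_send;      /* when this packet may leave */
            struct pkt *next;
        };

        struct fifo {
            struct pkt *head, *tail;
        };

        static void enqueue(struct fifo *q, struct pkt *p)
        {
            p->next = NULL;
            if (q->tail)
                q->tail->next = p;
            else
                q->head = p;
            q->tail = p;
        }

        static struct pkt *dequeue(struct fifo *q)
        {
            struct pkt *p = q->head;

            if (p) {
                q->head = p->next;
                if (!q->head)
                    q->tail = NULL;
            }
            return p;
        }

        /* Put a packet back at the head, undoing the dequeue. */
        static void requeue(struct fifo *q, struct pkt *p)
        {
            p->next = q->head;
            q->head = p;
            if (!q->tail)
                q->tail = p;
        }

        /* Dequeue the head only if it is due; otherwise requeue it ("peek"). */
        static struct pkt *dequeue_if_due(struct fifo *q, long now)
        {
            struct pkt *p = dequeue(q);

            if (p && p->time_to_send > now) {
                requeue(q, p);
                return NULL;
            }
            return p;
        }

        int main(void)
        {
            struct fifo q = { NULL, NULL };
            struct pkt a = { 1, 5, NULL }, b = { 2, 20, NULL };
            struct pkt *p;

            enqueue(&q, &a);
            enqueue(&q, &b);

            for (long now = 0; now <= 25; now += 5)
                if ((p = dequeue_if_due(&q, now)))
                    printf("t=%ld: sent packet %d\n", now, p->id);
            return 0;
        }

    Running this prints packets leaving at t=5 and t=20; a not-yet-due packet
    goes straight back to the head each time, so the length seen by the caller
    never changes behind its back.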
     
  • Handle duplication of packets in netem by re-inserting the duplicate at the
    top of the qdisc tree. This avoids qlen accounting problems with nested
    qdiscs. The recursion requires no additional locking but will potentially
    increase stack depth. A sketch of the idea follows this entry.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Stephen Hemminger
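
    A minimal userspace sketch of the duplicate-at-the-top idea described
    above, using a toy qdisc structure rather than the kernel's real Qdisc API;
    clearing the duplicate flag around the recursive enqueue is this sketch's
    way of keeping the copy from being duplicated again:

        #include <stdio.h>

        struct qdisc {
            const char *name;
            struct qdisc *inner;    /* nested discipline, if any */
            int qlen;               /* packets accounted to this qdisc */
            int duplicate;          /* netem only: duplicate each packet? */
            void (*enqueue)(struct qdisc *, int pkt);
        };

        static struct qdisc root_q; /* forward declaration of the tree root */

        /* netem: optionally duplicate, then hand the packet to the inner qdisc. */
        static void netem_enqueue(struct qdisc *q, int pkt)
        {
            if (q->duplicate) {
                /*
                 * Re-insert the copy at the top of the tree so every level,
                 * root included, accounts for it in its own qlen.  Clear the
                 * flag first so the duplicate is not duplicated again.
                 */
                q->duplicate = 0;
                root_q.enqueue(&root_q, pkt);
                q->duplicate = 1;
            }
            q->inner->enqueue(q->inner, pkt);
            q->qlen++;
        }

        /* A trivial discipline that forwards to its inner qdisc and counts. */
        static void fifo_enqueue(struct qdisc *q, int pkt)
        {
            if (q->inner)
                q->inner->enqueue(q->inner, pkt);
            q->qlen++;
        }

        static struct qdisc fifo_q  = { "fifo",  NULL,     0, 0, fifo_enqueue };
        static struct qdisc netem_q = { "netem", &fifo_q,  0, 1, netem_enqueue };
        static struct qdisc root_q  = { "root",  &netem_q, 0, 0, fifo_enqueue };

        int main(void)
        {
            root_q.enqueue(&root_q, 1); /* send one packet through the tree */
            printf("root=%d netem=%d fifo=%d\n",
                   root_q.qlen, netem_q.qlen, fifo_q.qlen);
            return 0;
        }

    Sending one packet prints root=2 netem=2 fifo=2: the original and the
    duplicate are both accounted for at every level, which is what keeps
    nested qlen accounting consistent.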
     

04 May, 2005

7 commits


26 Apr, 2005

1 commit


25 Apr, 2005

2 commits

  • Calculate the hashtable size to fit into a page instead of using a
    hardcoded 256-bucket hash table. This results in a 1024-bucket hashtable
    on most systems.

    Replace the old naive extract-8-lsb-bits algorithm with a better one that
    xors 3 or 4 bit fields, each the width of the hashtable array index, in
    order to improve distribution when the majority of the lower bits are
    unused, while keeping the zero-collision behaviour for the most common use
    case. A sketch of the sizing and hashing scheme follows this entry.

    Thanks to Wang Jian for bringing this issue to attention and to Eran Mann
    for the initial idea for this new algorithm.

    Signed-off-by: Thomas Graf
    Signed-off-by: David S. Miller

    Thomas Graf
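
    A minimal userspace sketch of the sizing and hashing scheme described
    above. The names (PAGE_BYTES, HTSIZE, fold_hash) and the fixed 4096-byte
    page are assumptions of this sketch, not the classifier's actual
    identifiers:

        #include <stdio.h>
        #include <stdint.h>

        /* Stand-in for the kernel's PAGE_SIZE. */
        #define PAGE_BYTES  4096UL
        /* As many bucket pointers as fit into one page: 1024 with 4-byte pointers. */
        #define HTSIZE      (PAGE_BYTES / sizeof(void *))

        /*
         * Fold a 32-bit key by xor'ing index-sized bit fields together, so
         * entropy from the upper bits reaches the index even when the low bits
         * are unused, while keys that already fit in one field keep hashing
         * collision-free.
         */
        static uint32_t fold_hash(uint32_t key)
        {
            unsigned int bits = 0;
            uint32_t h = 0;

            /* number of bits in a table index (e.g. 10 bits for 1024 buckets) */
            for (unsigned long s = HTSIZE; s > 1; s >>= 1)
                bits++;

            /* xor the key together in index-sized chunks (3 or 4 of them) */
            for (unsigned int shift = 0; shift < 32; shift += bits)
                h ^= (key >> shift) & (HTSIZE - 1);

            return h & (HTSIZE - 1);
        }

        int main(void)
        {
            printf("buckets: %lu\n", (unsigned long)HTSIZE);
            printf("hash(0x00000001) = %u\n", fold_hash(0x00000001));
            printf("hash(0x00010000) = %u\n", fold_hash(0x00010000));
            return 0;
        }

    With 4-byte pointers this reports 1024 buckets, matching the figure quoted
    above; keys that already fit within one index-sized field hash to
    themselves, preserving the zero-collision behaviour for the common case.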
     
  • And provide an example simple action in order to
    demonstrate usage.

    Signed-off-by: Jamal Hadi Salim
    Signed-off-by: David S. Miller

    Jamal Hadi Salim
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds