07 Jul, 2020

1 commit

  • Rationale:
    Reduces the attack surface for MITM against kernel developers who
    open these links, since HTTPS traffic is much harder to manipulate.

    Deterministic algorithm:
    For each file:
      If not .svg:
        For each line:
          If doesn't contain `\bxmlns\b`:
            For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
              If both the HTTP and HTTPS versions
              return 200 OK and serve the same content:
                Replace HTTP with HTTPS.

    Signed-off-by: Alexander A. Klimov
    Signed-off-by: David S. Miller

    Alexander A. Klimov
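    As a rough illustration, the per-line filtering step of the algorithm
    above could look like the C sketch below. The helper names are
    hypothetical, POSIX ERE lacks `\b` so the word boundary is
    approximated, and the 200-OK content comparison is assumed to happen
    in a separate fetch step that is not shown.

```c
#include <regex.h>
#include <string.h>

/* Hypothetical helper: should this line of this file be scanned? */
static int line_eligible(const char *filename, const char *line)
{
	const char *dot = strrchr(filename, '.');

	if (dot && strcmp(dot, ".svg") == 0)
		return 0;		/* skip .svg files entirely */
	if (strstr(line, "xmlns"))
		return 0;		/* skip XML namespace declarations */
	return 1;
}

/* Hypothetical helper: does the line contain an http:// link matching
 * (approximately) the commit's pattern? */
static int has_http_link(const char *line)
{
	regex_t re;
	int hit;

	if (regcomp(&re, "http://[^# \t\r\n]*[A-Za-z0-9_/]", REG_EXTENDED))
		return 0;
	hit = regexec(&re, line, 0, NULL, 0) == 0;
	regfree(&re);
	return hit;
}
```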
     

21 May, 2019

1 commit

  • Add SPDX license identifiers to all files which:

    - Have no license information of any form

    - Have MODULE_LICENSE("GPL*") inside, which was used in the initial
    scan/conversion to ignore the file

    These files fall under the project license, GPL v2 only. The resulting SPDX
    license identifier is:

    GPL-2.0-only

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

07 Aug, 2017

1 commit

  • Most TCP congestion controls are using identical logic to undo
    cwnd except BBR. This patch consolidates these similar functions
    to the one used currently by Reno and others.

    Suggested-by: Neal Cardwell
    Signed-off-by: Yuchung Cheng
    Signed-off-by: Neal Cardwell
    Signed-off-by: David S. Miller

    Yuchung Cheng
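    The consolidated undo logic is essentially "restore cwnd to at least
    its pre-loss value". A minimal sketch, with field names borrowed from
    struct tcp_sock but otherwise simplified:

```c
typedef unsigned int u32;

/* Simplified stand-in for the two tcp_sock fields involved in undo. */
struct tp_sketch {
	u32 snd_cwnd;	/* current congestion window */
	u32 prior_cwnd;	/* cwnd saved when loss was detected */
};

/* Reno-style undo: never leave cwnd below its pre-loss value. */
static u32 reno_undo_cwnd(const struct tp_sketch *tp)
{
	return tp->snd_cwnd > tp->prior_cwnd ? tp->snd_cwnd : tp->prior_cwnd;
}
```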
     

22 Nov, 2016

1 commit

  • Congestion control algorithms that do not halve cwnd in their .ssthresh
    should provide a .undo_cwnd callback rather than rely on the current
    fallback, which assumes Reno halving (and thus doubles the cwnd).

    All of these do 'something else' in their .ssthresh implementation, thus
    store the cwnd on loss and provide .undo_cwnd to restore it again.

    A followup patch will remove the fallback, after which all algorithms
    will need to provide a .undo_cwnd function.

    Signed-off-by: Florian Westphal
    Signed-off-by: David S. Miller

    Florian Westphal
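    The pattern the commit asks for can be sketched as follows. The names
    and the reduction factor are illustrative, not taken from any
    particular algorithm: the .ssthresh hook records cwnd at loss time,
    and .undo_cwnd restores that stored value instead of letting the
    generic fallback double a halved one.

```c
typedef unsigned int u32;

/* Per-connection private state of a hypothetical algorithm. */
struct ca_sketch {
	u32 loss_cwnd;	/* cwnd remembered at loss time */
};

/* .ssthresh hook of a non-halving algorithm: store cwnd on loss. */
static u32 example_ssthresh(struct ca_sketch *ca, u32 snd_cwnd)
{
	ca->loss_cwnd = snd_cwnd;
	return (snd_cwnd * 717) / 1024;	/* reduce by ~30%, not 50% */
}

/* Matching .undo_cwnd: restore the stored value, don't double. */
static u32 example_undo_cwnd(const struct ca_sketch *ca, u32 snd_cwnd)
{
	return snd_cwnd > ca->loss_cwnd ? snd_cwnd : ca->loss_cwnd;
}
```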
     

10 Jul, 2015

1 commit

  • Add a helper to test the slow start condition in various congestion
    control modules and other places. This is to prepare a slight improvement
    in policy as to exactly when to slow start.

    Signed-off-by: Yuchung Cheng
    Signed-off-by: Neal Cardwell
    Signed-off-by: Eric Dumazet
    Signed-off-by: Nandita Dukkipati
    Signed-off-by: David S. Miller

    Yuchung Cheng
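    The condition itself is a single comparison; a sketch of the helper
    (in the kernel it became tcp_in_slow_start()):

```c
typedef unsigned int u32;

/* TCP is in slow start while cwnd has not yet reached ssthresh. */
static int in_slow_start(u32 snd_cwnd, u32 snd_ssthresh)
{
	return snd_cwnd < snd_ssthresh;
}
```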
     

02 Sep, 2014

1 commit

  • Fix places where there is a space before a tab, long lines,
    awkward `if(){`, double spacing, etc. Add a blank line after
    declarations/initializations.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    stephen hemminger
     

05 Nov, 2013

1 commit

  • Slow start now increases cwnd by 1 if an ACK acknowledges some packets,
    regardless of the number of packets. Consequently slow start performance
    is highly dependent on the degree of the stretch ACKs caused by
    receiver or network ACK compression mechanisms (e.g., delayed-ACK,
    GRO, etc). But the slow start algorithm is meant to send twice the
    amount of packets left, so it should process a stretch ACK of degree
    N as if it were N ACKs of degree 1, then exit when cwnd exceeds
    ssthresh. A follow-up patch will use the remainder of the N (if
    greater than 1) to adjust cwnd in the congestion avoidance phase.

    In addition this patch retires the experimental limited slow start
    (LSS) feature. LSS has multiple drawbacks and questionable benefit. The
    fractional cwnd increase in LSS requires a loop in slow start even
    though it's rarely used. Configuring such an increase step via a global
    sysctl on different BDPs seems hard. Finally, and most importantly, the
    slow start overshoot concern is now better covered by hybrid slow
    start (hystart), enabled by default.

    Signed-off-by: Yuchung Cheng
    Signed-off-by: Neal Cardwell
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Yuchung Cheng
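    A sketch of the stretch-ACK-aware increase described above: grow cwnd
    by the number of newly acked packets, cap the growth at ssthresh, and
    hand any remainder back for congestion avoidance. The field names
    follow struct tcp_sock; the exact exit point past ssthresh is an
    assumption of this sketch.

```c
typedef unsigned int u32;

/* Process an ACK covering 'acked' packets during slow start; returns
 * the leftover count to be consumed by congestion avoidance. */
static u32 slow_start_sketch(u32 *snd_cwnd, u32 snd_ssthresh, u32 acked)
{
	u32 cwnd = *snd_cwnd + acked;	/* one cwnd unit per acked packet */

	if (cwnd > snd_ssthresh)
		cwnd = snd_ssthresh + 1;	/* exit slow start here */
	acked -= cwnd - *snd_cwnd;		/* remainder, if any */
	*snd_cwnd = cwnd;
	return acked;
}
```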
     

13 Jul, 2006

1 commit

  • I think there is still a problem with the AIMD parameter update in
    HighSpeed TCP code.

    Lines 125~138 of the code (net/ipv4/tcp_highspeed.c):

        /* Update AIMD parameters */
        if (tp->snd_cwnd > hstcp_aimd_vals[ca->ai].cwnd) {
                while (tp->snd_cwnd > hstcp_aimd_vals[ca->ai].cwnd &&
                       ca->ai < HSTCP_AIMD_MAX - 1)
                        ca->ai++;
        } else if (tp->snd_cwnd < hstcp_aimd_vals[ca->ai].cwnd) {
                while (tp->snd_cwnd > hstcp_aimd_vals[ca->ai].cwnd &&
                       ca->ai > 0)
                        ca->ai--;
        }
    In fact, the second part never decreases ca->ai, since the while
    loop's inequality is in the reverse direction. This leads to
    unfairness with multiple flows: once a flow happens to enjoy a
    higher ca->ai, it keeps enjoying it even when its cwnd decreases.

    Here is a tentative fix (I also added a comment, trying to keep the
    change clear):

    Acked-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Xiaoliang (David) Wei
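    The tentative patch itself is not reproduced above, but the shape of
    a fix follows from the analysis: the decrease branch must test
    against the next-lower table entry so that ca->ai ends up satisfying
    hstcp_aimd_vals[ca->ai-1].cwnd < snd_cwnd <= hstcp_aimd_vals[ca->ai].cwnd.
    Below is a self-contained sketch of such a fix with a toy table; it
    is illustrative, not necessarily the patch that was merged.

```c
#define HSTCP_AIMD_MAX 5

/* Toy stand-in for the kernel's hstcp_aimd_vals[].cwnd column. */
static const unsigned int aimd_cwnd[HSTCP_AIMD_MAX] = {
	38, 118, 221, 347, 495
};

/* Keep ai such that aimd_cwnd[ai - 1] < snd_cwnd <= aimd_cwnd[ai]. */
static void update_ai(unsigned int snd_cwnd, unsigned int *ai)
{
	if (snd_cwnd > aimd_cwnd[*ai]) {
		while (snd_cwnd > aimd_cwnd[*ai] && *ai < HSTCP_AIMD_MAX - 1)
			(*ai)++;
	} else if (*ai && snd_cwnd <= aimd_cwnd[*ai - 1]) {
		while (*ai && snd_cwnd <= aimd_cwnd[*ai - 1])
			(*ai)--;	/* now genuinely decreases */
	}
}
```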
     

06 May, 2006

1 commit

  • Xiaoliang (David) Wei wrote:
    > Hi gurus,
    >
    > I am reading the code of tcp_highspeed.c in the kernel and have a
    > question on the hstcp_cong_avoid function, specifically the following
    > AI part (line 136~143 in net/ipv4/tcp_highspeed.c ):
    >
    > /* Do additive increase */
    > if (tp->snd_cwnd < tp->snd_cwnd_clamp) {
    >         tp->snd_cwnd_cnt += ca->ai;
    >         if (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
    >                 tp->snd_cwnd++;
    >                 tp->snd_cwnd_cnt -= tp->snd_cwnd;
    >         }
    > }
    >
    > In this part, when (tp->snd_cwnd_cnt == tp->snd_cwnd), the update
    > leaves snd_cwnd_cnt at -1... snd_cwnd_cnt is defined as u16; will
    > this small chance of getting -1 become a problem?
    > Shall we change it by reversing the order of the cwnd++ and
    > cwnd_cnt -= cwnd?

    Absolutely correct. Thanks.

    Signed-off-by: John Heffner
    Signed-off-by: David S. Miller

    John Heffner
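    The acked fix is the reordering proposed in the question: subtract
    against the pre-increment cwnd, then grow cwnd, so snd_cwnd_cnt (a
    u16 in struct tcp_sock) can never wrap below zero. A sketch using
    plain unsigned ints and pointer parameters in place of the tcp_sock
    fields:

```c
typedef unsigned int u32;

/* Additive increase with the corrected statement order. */
static void hstcp_ai_sketch(u32 *snd_cwnd, u32 *snd_cwnd_cnt,
			    u32 ai, u32 snd_cwnd_clamp)
{
	if (*snd_cwnd < snd_cwnd_clamp) {
		*snd_cwnd_cnt += ai;
		if (*snd_cwnd_cnt >= *snd_cwnd) {
			*snd_cwnd_cnt -= *snd_cwnd;	/* subtract first */
			(*snd_cwnd)++;			/* then grow cwnd */
		}
	}
}
```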
     

11 Nov, 2005

2 commits

  • Move all the code that does linear TCP slowstart to one
    inline function to ease later patch to add ABC support.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Stephen Hemminger
     
  • TCP performance with TSO over networks with delay is awful.
    On a 100Mbit link with 150ms delay, we get 4Mbits/sec with TSO and
    50Mbits/sec without TSO.

    The problem is that with TSO, we intentionally do not keep the maximum
    number of packets in flight to fill the window; we hold out until
    we can send a full MSS chunk. But we also don't update the congestion
    window unless we have filled it, as per RFC 2861.

    This patch replaces the check for the congestion window being full
    with something smarter that accounts for TSO.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller

    Stephen Hemminger
     

30 Aug, 2005

1 commit

  • This changeset basically moves tcp_sk()->{ca_ops,ca_state,etc} to inet_csk(),
    with minimal renaming/moving done in this changeset to ease review.

    Most of it is just changes of struct tcp_sock * to struct sock * parameters.

    With this we move closer to two interesting goals:

    1. Generalisation of net/ipv4/tcp_diag.c into inet_diag.c, usable by
    any INET transport protocol that has a struct inet_hashinfo and is
    derived from struct inet_connection_sock. This keeps the userspace
    API; older tools will simply not display DCCP sockets, while newer
    versions can support DCCP.

    2. A generic INET pluggable congestion avoidance infrastructure, reusing
    the current TCP CA infrastructure with DCCP.

    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: David S. Miller

    Arnaldo Carvalho de Melo
     
