12 May, 2020

1 commit

  • The current codebase makes use of the zero-length array language
    extension to the C90 standard, but the preferred mechanism to declare
    variable-length types such as these is a flexible array member[1][2],
    introduced in C99:

    struct foo {
            int stuff;
            struct boo array[];
    };

    By making use of the mechanism above, we will get a compiler warning
    in case the flexible array does not occur last in the structure, which
    will help us prevent some kinds of undefined behavior bugs from being
    inadvertently introduced[3] into the codebase from now on.

    Also, notice that dynamic memory allocations won't be affected by
    this change:

    "Flexible array members have incomplete type, and so the sizeof operator
    may not be applied. As a quirk of the original implementation of
    zero-length arrays, sizeof evaluates to zero."[1]

    sizeof(flexible-array-member) triggers a warning because flexible array
    members have incomplete type[1]. There are some instances of code in
    which the sizeof operator is being erroneously applied to zero-length
    arrays, where the result is zero. Such instances may be hiding bugs,
    so this work (flexible-array member conversions) will also help to get
    rid of those issues entirely.
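
    As a purely illustrative sketch of that quirk (the struct and member
    names below are made up, not taken from the codebase): with a
    zero-length array, sizeof quietly evaluates to 0, while the equivalent
    expression on a flexible array member is rejected at compile time:

    #include <stdio.h>

    struct old_way {
            int stuff;
            int array[0];   /* GNU zero-length array extension */
    };

    struct new_way {
            int stuff;
            int array[];    /* C99 flexible array member */
    };

    int main(void)
    {
            /* Compiles and prints 0; easy to misuse in size math. */
            printf("%zu\n", sizeof(((struct old_way *)0)->array));

            /* The same expression for struct new_way does not compile:
             * "invalid application of 'sizeof' to incomplete type".
             */
            printf("%zu\n", sizeof(struct new_way)); /* array[] adds no size */
            return 0;
    }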

    This issue was found with the help of Coccinelle.

    [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
    [2] https://github.com/KSPP/linux/issues/21
    [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

    Signed-off-by: Gustavo A. R. Silva
    Signed-off-by: David S. Miller

    Gustavo A. R. Silva
     

08 May, 2020

1 commit


25 Oct, 2019

1 commit

  • A team interface can be nested, and its lock variable can be nested too.
    But this lock uses a static lockdep key, and there is no nested-locking
    handling code such as mutex_lock_nested() and so on, so lockdep warns
    about a circular locking scenario that couldn't actually happen.
    To fix this, this patch makes the team module use a dynamic lock key
    instead of a static key.
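
    A rough sketch of what the dynamic-key pattern looks like, assuming the
    usual lockdep_register_key()/lockdep_set_class() interface (the field
    and helper names below are illustrative, not necessarily those used by
    the actual patch):

    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    struct team {
            struct mutex lock;
            struct lock_class_key lock_key;  /* one key per team instance */
            /* ... */
    };

    static void team_lock_init(struct team *team)
    {
            /* Register a dynamic key and attach it to this instance's
             * mutex, so nested team devices get distinct lock classes and
             * lockdep no longer reports a false self-deadlock.
             */
            lockdep_register_key(&team->lock_key);
            mutex_init(&team->lock);
            lockdep_set_class(&team->lock, &team->lock_key);
    }

    static void team_lock_destroy(struct team *team)
    {
            mutex_destroy(&team->lock);
            lockdep_unregister_key(&team->lock_key);
    }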

    Test commands:
    ip link add team0 type team
    ip link add team1 type team
    ip link set team0 master team1
    ip link set team0 nomaster
    ip link set team1 master team0
    ip link set team1 nomaster

    Splat that looks like:
    [ 40.364352] WARNING: possible recursive locking detected
    [ 40.364964] 5.4.0-rc3+ #96 Not tainted
    [ 40.365405] --------------------------------------------
    [ 40.365973] ip/750 is trying to acquire lock:
    [ 40.366542] ffff888060b34c40 (&team->lock){+.+.}, at: team_set_mac_address+0x151/0x290 [team]
    [ 40.367689]
    but task is already holding lock:
    [ 40.368729] ffff888051201c40 (&team->lock){+.+.}, at: team_del_slave+0x29/0x60 [team]
    [ 40.370280]
    other info that might help us debug this:
    [ 40.371159] Possible unsafe locking scenario:

    [ 40.371942] CPU0
    [ 40.372338] ----
    [ 40.372673] lock(&team->lock);
    [ 40.373115] lock(&team->lock);
    [ 40.373549]
    *** DEADLOCK ***

    [ 40.374432] May be due to missing lock nesting notation

    [ 40.375338] 2 locks held by ip/750:
    [ 40.375851] #0: ffffffffabcc42b0 (rtnl_mutex){+.+.}, at: rtnetlink_rcv_msg+0x466/0x8a0
    [ 40.376927] #1: ffff888051201c40 (&team->lock){+.+.}, at: team_del_slave+0x29/0x60 [team]
    [ 40.377989]
    stack backtrace:
    [ 40.378650] CPU: 0 PID: 750 Comm: ip Not tainted 5.4.0-rc3+ #96
    [ 40.379368] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
    [ 40.380574] Call Trace:
    [ 40.381208] dump_stack+0x7c/0xbb
    [ 40.381959] __lock_acquire+0x269d/0x3de0
    [ 40.382817] ? register_lock_class+0x14d0/0x14d0
    [ 40.383784] ? check_chain_key+0x236/0x5d0
    [ 40.384518] lock_acquire+0x164/0x3b0
    [ 40.385074] ? team_set_mac_address+0x151/0x290 [team]
    [ 40.385805] __mutex_lock+0x14d/0x14c0
    [ 40.386371] ? team_set_mac_address+0x151/0x290 [team]
    [ 40.387038] ? team_set_mac_address+0x151/0x290 [team]
    [ 40.387632] ? mutex_lock_io_nested+0x1380/0x1380
    [ 40.388245] ? team_del_slave+0x60/0x60 [team]
    [ 40.388752] ? rcu_read_lock_sched_held+0x90/0xc0
    [ 40.389304] ? rcu_read_lock_bh_held+0xa0/0xa0
    [ 40.389819] ? lock_acquire+0x164/0x3b0
    [ 40.390285] ? lockdep_rtnl_is_held+0x16/0x20
    [ 40.390797] ? team_port_get_rtnl+0x90/0xe0 [team]
    [ 40.391353] ? __module_text_address+0x13/0x140
    [ 40.391886] ? team_set_mac_address+0x151/0x290 [team]
    [ 40.392547] team_set_mac_address+0x151/0x290 [team]
    [ 40.393111] dev_set_mac_address+0x1f0/0x3f0
    [ ... ]

    Fixes: 3d249d4ca7d0 ("net: introduce ethernet teaming device")
    Signed-off-by: Taehee Yoo
    Signed-off-by: David S. Miller

    Taehee Yoo
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

12 Jul, 2018

2 commits

  • LAG devices (team or bond) recognize for each one of their slave devices
    whether LAG traffic is going to be sent through that device. Bond calls
    such devices "active", team calls them "txable". When this state
    changes, a NETDEV_CHANGELOWERSTATE notification is distributed, together
    with a netdev_notifier_changelowerstate_info structure that for LAG
    devices includes a tx_enabled flag that refers to the new state. The
    notification thus makes it possible to react to the changes in txability
    in drivers.

    However there's no way to query txability from the outside on demand.
    That is problematic notably for mlxsw, which, when resolving the ERSPAN
    packet path, may encounter a LAG device and needs to determine which of
    the slaves it should choose.

    To that end, introduce a new function, net_lag_port_dev_txable(), which
    determines whether a given slave device is "active" or
    "txable" (depending on the flavor of the LAG device). That function then
    dispatches to per-LAG-flavor helpers, bond_is_active_slave_dev() resp.
    team_port_dev_txable().

    Because there currently is no good place where net_lag_port_dev_txable()
    should be added, introduce a new header file, lag.h, which should from
    now on hold any logic common to both team and bond. (But keep
    netif_is_lag_master() together with the rest of netif_is_*_master()
    functions).
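
    A minimal sketch of the kind of dispatcher described above (treat it as
    an illustration, not necessarily the exact upstream code; it assumes
    the existing netif_is_team_port() helper):

    /* include/net/lag.h (new) */
    #include <linux/netdevice.h>
    #include <linux/if_team.h>
    #include <net/bonding.h>

    static inline bool net_lag_port_dev_txable(const struct net_device *port_dev)
    {
            /* Dispatch to the flavor-specific helper for this slave. */
            if (netif_is_team_port(port_dev))
                    return team_port_dev_txable(port_dev);
            else
                    return bond_is_active_slave_dev(port_dev);
    }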

    Signed-off-by: Petr Machata
    Reviewed-by: Jiri Pirko
    Signed-off-by: Ido Schimmel
    Signed-off-by: David S. Miller

    Petr Machata
     
  • A follow-up patch adds a new entry point, team_port_dev_txable(). Making
    it an ordinary exported function would mean that any module that may
    need the service in one of the supported configurations also
    unconditionally needs to pull in the team module, whether or not the
    user actually intends to create team interfaces.

    To prevent that, team_port_dev_txable() is defined in if_team.h, and
    therefore all dependencies of that function also need to be
    publicly-visible.

    Therefore move team_port_get_rcu() from team.c to if_team.h.
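
    For reference, a sketch of what the moved helper looks like as a static
    inline (team stores its per-port state in the port device's
    rx_handler_data; details simplified):

    /* include/linux/if_team.h */
    static inline struct team_port *team_port_get_rcu(const struct net_device *dev)
    {
            /* The team port context is registered as the rx handler data. */
            return rcu_dereference(dev->rx_handler_data);
    }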

    Signed-off-by: Petr Machata
    Reviewed-by: Jiri Pirko
    Signed-off-by: Ido Schimmel
    Signed-off-by: David S. Miller

    Petr Machata
     

25 Oct, 2017

1 commit

  • …READ_ONCE()/WRITE_ONCE()

    Please do not apply this to mainline directly, instead please re-run the
    coccinelle script shown below and apply its output.

    For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
    preference to ACCESS_ONCE(), and new code is expected to use one of the
    former. So far, there's been no reason to change most existing uses of
    ACCESS_ONCE(), as these aren't harmful, and changing them results in
    churn.

    However, for some features, the read/write distinction is critical to
    correct operation. To distinguish these cases, separate read/write
    accessors must be used. This patch migrates (most) remaining
    ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
    coccinelle script:

    ----
    // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
    // WRITE_ONCE()

    // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

    virtual patch

    @ depends on patch @
    expression E1, E2;
    @@

    - ACCESS_ONCE(E1) = E2
    + WRITE_ONCE(E1, E2)

    @ depends on patch @
    expression E;
    @@

    - ACCESS_ONCE(E)
    + READ_ONCE(E)
    ----
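
    For reference, the shape of the conversion the script performs (the
    names p, counter and val are made up for illustration):

    /* Before: both loads and stores go through ACCESS_ONCE(). */
    val = ACCESS_ONCE(p->counter);
    ACCESS_ONCE(p->counter) = val + 1;

    /* After: the read/write intent is explicit. */
    val = READ_ONCE(p->counter);
    WRITE_ONCE(p->counter, val + 1);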

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: davem@davemloft.net
    Cc: linux-arch@vger.kernel.org
    Cc: mpe@ellerman.id.au
    Cc: shuah@kernel.org
    Cc: snitzer@redhat.com
    Cc: thor.thayer@linux.intel.com
    Cc: tj@kernel.org
    Cc: viro@zeniv.linux.org.uk
    Cc: will.deacon@arm.com
    Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

    Mark Rutland
     

02 Jun, 2017

1 commit


08 Oct, 2016

1 commit

  • The round-robin runner of the team driver uses an 'unsigned int' variable
    to count the number of sent_packets. Later it is passed to the subroutine
    team_num_to_port_index(struct team *team, int num) as 'num', and when
    we reach MAXINT (2**31-1), 'num' becomes negative.

    This leads to using an incorrect hash bucket for the port lookup,
    and as a result packets are dropped. The fix consists of changing
    'int num' to 'unsigned int num'. Testing of a fixed kernel shows that
    there is no packet drop anymore.
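
    A simplified sketch of the change (the modulo-based helper is shown in
    reduced form; the en_port_count field name is taken on the assumption
    that the index is computed modulo the number of enabled ports):

    /* Before: a signed 'num' misbehaves once the unsigned packet counter
     * passed in exceeds INT_MAX, so the wrong bucket gets picked:
     *
     *   static int team_num_to_port_index(struct team *team, int num)
     */
    static int team_num_to_port_index(struct team *team, unsigned int num)
    {
            return num % team->en_port_count;
    }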

    Signed-off-by: Alex Sidorenko
    Signed-off-by: David S. Miller

    Alex Sidorenko
     

06 Feb, 2016

1 commit


04 Dec, 2015

1 commit


03 Jun, 2014

1 commit

  • Currently it is not possible to set the mtu of a team device which has a
    port enslaved to it. The reason is that when team_change_mtu() calls
    dev_set_mtu() for the port device, the notifier for the NETDEV_PRECHANGEMTU
    event is called and team_device_event() returns NOTIFY_BAD, forbidding
    the change. So fix this by returning NOTIFY_DONE here in case team is
    changing the mtu in team_change_mtu().
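
    A sketch of one way to implement that (the port_mtu_change_allowed flag
    name and the surrounding details are assumptions for illustration):

    static int team_change_mtu(struct net_device *dev, int new_mtu)
    {
            struct team *team = netdev_priv(dev);
            struct team_port *port;
            int err = 0;

            /* Tell our own notifier that this MTU change is ours. */
            team->port_mtu_change_allowed = true;
            list_for_each_entry(port, &team->port_list, list) {
                    err = dev_set_mtu(port->dev, new_mtu);
                    if (err)
                            break;
            }
            team->port_mtu_change_allowed = false;

            if (!err)
                    dev->mtu = new_mtu;
            return err;
    }

    /* ... and in team_device_event()'s switch statement: */
    case NETDEV_PRECHANGEMTU:
            /* Allow the change only when it was initiated by
             * team_change_mtu() above; otherwise keep forbidding it.
             */
            if (!port->team->port_mtu_change_allowed)
                    return NOTIFY_BAD;
            return NOTIFY_DONE;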

    Introduced-by: 3d249d4c "net: introduce ethernet teaming device"
    Signed-off-by: Jiri Pirko
    Acked-by: Flavio Leitner
    Signed-off-by: David S. Miller

    Jiri Pirko
     

24 Jul, 2013

2 commits


20 Jun, 2013

1 commit

  • Conflicts:
    drivers/net/wireless/ath/ath9k/Kconfig
    drivers/net/xen-netback/netback.c
    net/batman-adv/bat_iv_ogm.c
    net/wireless/nl80211.c

    The ath9k Kconfig conflict was a change of a Kconfig option name right
    next to the deletion of another option.

    The xen-netback conflict was overlapping changes involving the
    handling of the notify list in xen_netbk_rx_action().

    Batman conflict resolution provided by Antonio Quartulli, basically
    keep everything in both conflict hunks.

    The nl80211 conflict is a little more involved. In 'net' we added a
    dynamic memory allocation to nl80211_dump_wiphy() to fix a race that
    Linus reported. Meanwhile in 'net-next' the handlers were converted
    to use pre and post doit handlers which use a flag to determine
    whether to hold the RTNL mutex around the operation.

    However, the dump handlers do not use this logic. Instead they have
    to explicitly do the locking. There were apparent bugs in the
    conversion of nl80211_dump_wiphy() in that we were not dropping the
    RTNL mutex in all the return paths, and it seems we very much should
    be doing so. So I fixed that whilst handling the overlapping changes.

    To simplify the initial returns, I take the RTNL mutex after we try
    to allocate 'tb'.

    Signed-off-by: David S. Miller

    David S. Miller
     

12 Jun, 2013

3 commits


07 Mar, 2013

2 commits


28 Feb, 2013

1 commit

  • I'm not sure why, but the hlist for-each-entry iterators were conceived
    differently from the list ones, which are based on the object:

    list_for_each_entry(pos, head, member)

    The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

    Why did they need an extra pos parameter? I'm not quite sure. Not only
    do they not really need it, it also prevents the iterator from looking
    exactly like the list iterator, which is unfortunate.
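
    To illustrate the change in usage (struct foo, its 'member' field and
    use() are made up for the example):

    /* Old, node-based form: needs a spare struct hlist_node cursor. */
    struct foo *f;
    struct hlist_node *pos;

    hlist_for_each_entry(f, pos, head, member)
            use(f);

    /* New, object-based form, matching list_for_each_entry(): */
    struct foo *f;

    hlist_for_each_entry(f, head, member)
            use(f);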

    Besides the semantic patch, there was some manual work required:

    - Fix up the actual hlist iterators in linux/list.h
    - Fix up the declaration of other iterators based on the hlist ones.
    - A very small number of places were using the 'node' parameter; this
      was modified to use 'obj->member' instead.
    - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
      properly, so those had to be fixed up manually.

    The semantic patch which is mostly the work of Peter Senna Tschudin is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;

    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@

    -T b;

    [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
    [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
    [akpm@linux-foundation.org: checkpatch fixes]
    [akpm@linux-foundation.org: fix warnings]
    [akpm@linux-foundation.org: redo intrusive kvm changes]
    Tested-by: Peter Senna Tschudin
    Acked-by: Paul E. McKenney
    Signed-off-by: Sasha Levin
    Cc: Wu Fengguang
    Cc: Marcelo Tosatti
    Cc: Gleb Natapov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
     

07 Feb, 2013

1 commit

  • Some modes don't require any special carrier handling, so
    in these cases the kernel can control the carrier as for
    any other interface. However, some other modes, e.g. lacp,
    require more than just that, so userspace needs to control
    the carrier itself.

    The daemon today is ready to control it, but the kernel
    can still change it based on events.

    This fixes it so that either the kernel or userspace is controlling
    the carrier.
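
    One way this kind of split can look (the user_carrier_enabled flag used
    below is an assumed name for illustration): when userspace has taken
    over, the kernel's link-watch path simply stops touching the carrier.

    static void __team_carrier_check(struct team *team)
    {
            struct team_port *port;
            bool team_linkup = false;

            /* Userspace (e.g. a daemon running lacp) owns the carrier. */
            if (team->user_carrier_enabled)
                    return;

            /* Otherwise the kernel derives it from the port link states. */
            list_for_each_entry(port, &team->port_list, list) {
                    if (port->linkup) {
                            team_linkup = true;
                            break;
                    }
            }

            if (team_linkup)
                    netif_carrier_on(team->dev);
            else
                    netif_carrier_off(team->dev);
    }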

    Signed-off-by: Flavio Leitner
    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Flavio Leitner
     

13 Oct, 2012

1 commit


23 Aug, 2012

1 commit


20 Aug, 2012

1 commit


15 Aug, 2012

1 commit

  • Although this doesn't actually matter, because netpoll_tx_running()
    doesn't use the parameter, the code will be more readable.

    For team_dev_queue_xmit() we have to move it down to avoid
    compile errors.

    Cc: David Miller
    Signed-off-by: Jiri Pirko
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    Amerigo Wang
     

04 Aug, 2012

3 commits


21 Jul, 2012

1 commit


18 Jul, 2012

1 commit


12 Jul, 2012

2 commits


27 Jun, 2012

2 commits


20 Jun, 2012

5 commits