02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.
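
    For illustration (the exact format follows the kernel's license
    rules), in a C source file the identifier is the first line:

        // SPDX-License-Identifier: GPL-2.0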

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be
    applied to a file was done in a spreadsheet of side-by-side results
    from the output of two independent scanners (ScanCode & Windriver)
    producing SPDX tag:value files created by Philippe Ombredanne.
    Philippe prepared the base worksheet, and did an initial spot review
    of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if <5
    lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

01 Jul, 2017

1 commit

  • refcount_t type and corresponding API should be
    used instead of atomic_t when the variable is used as
    a reference counter. This avoids accidental
    refcounter overflows that might lead to use-after-free
    situations.
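
    As a minimal sketch of the conversion (the structure and field names
    are hypothetical; only the refcount_t calls are real API):

        #include <linux/refcount.h>
        #include <linux/slab.h>

        struct my_obj {
                refcount_t refcnt;              /* was: atomic_t refcnt; */
        };

        refcount_set(&obj->refcnt, 1);          /* was: atomic_set(&obj->refcnt, 1); */
        refcount_inc(&obj->refcnt);             /* was: atomic_inc(&obj->refcnt);    */

        /* refcount_dec_and_test() saturates instead of wrapping on overflow */
        if (refcount_dec_and_test(&obj->refcnt))
                kfree(obj);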

    Signed-off-by: Elena Reshetova
    Signed-off-by: Hans Liljestrand
    Signed-off-by: Kees Cook
    Signed-off-by: David Windsor
    Signed-off-by: David S. Miller

    Reshetova, Elena
     

17 Nov, 2016

1 commit

  • Callers of netpoll_poll_lock() own NAPI_STATE_SCHED

    Callers of netpoll_poll_unlock() have BH disabled between the clearing
    of NAPI_STATE_SCHED and the release of the poll lock.

    We can therefore avoid the spinlock, which is never contended, and use
    cmpxchg() on poll_owner, which we need to set anyway.

    This removes a possible lockdep violation after the cited commit,
    since sk_busy_loop() re-enables BH before calling busy_poll_stop()
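
    A sketch of the resulting scheme (simplified; poll_owner is the
    per-NAPI owner field the text refers to):

        static inline void *netpoll_poll_lock(struct napi_struct *napi)
        {
                struct net_device *dev = napi->dev;

                if (dev && dev->npinfo) {
                        int owner = smp_processor_id();

                        /* spin until we own the NAPI context; no spinlock needed */
                        while (cmpxchg(&napi->poll_owner, -1, owner) != -1)
                                cpu_relax();

                        return napi;
                }
                return NULL;
        }

        static inline void netpoll_poll_unlock(void *have)
        {
                struct napi_struct *napi = have;

                if (napi)
                        smp_store_release(&napi->poll_owner, -1);
        }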

    Fixes: 217f69743681 ("net: busy-poll: allow preemption in sk_busy_loop()")
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

30 Mar, 2014

2 commits

  • The netpoll_rx_enable and netpoll_rx_disable functions have always
    controlled polling the network driver's transmit and receive queues.

    Rename them to netpoll_poll_enable and netpoll_poll_disable to make
    their functionality clear.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • The gfp parameter was added in:
    commit 47be03a28cc6c80e3aa2b3e8ed6d960ff0c5c0af
    Author: Amerigo Wang
    Date: Fri Aug 10 01:24:37 2012 +0000

    netpoll: use GFP_ATOMIC in slave_enable_netpoll() and __netpoll_setup()

    slave_enable_netpoll() and __netpoll_setup() may be called
    with read_lock() held, so should use GFP_ATOMIC to allocate
    memory. Eric suggested to pass gfp flags to __netpoll_setup().

    Cc: Eric Dumazet
    Cc: "David S. Miller"
    Reported-by: Dan Carpenter
    Signed-off-by: Eric Dumazet
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    The reason for the gfp parameter was removed in:
    commit c4cdef9b7183159c23c7302aaf270d64c549f557
    Author: dingtianhong
    Date: Tue Jul 23 15:25:27 2013 +0800

    bonding: don't call slave_xxx_netpoll under spinlocks

    The slave_xxx_netpoll will call synchronize_rcu_bh(),
    so the function may schedule and sleep, it shouldn't be
    called under spinlocks.

    bond_netpoll_setup() and bond_netpoll_cleanup() are always
    protected by rtnl lock, there is no need to take the read lock,
    as the slave list couldn't be changed outside rtnl lock.

    Signed-off-by: Ding Tianhong
    Cc: Jay Vosburgh
    Cc: Andy Gospodarek
    Signed-off-by: David S. Miller

    Nothing else that calls __netpoll_setup or ndo_netpoll_setup
    requires a gfp parameter, so remove the gfp parameter from both
    of these functions making the code clearer.
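
    The visible effect on the API is just the dropped argument, roughly:

        /* before */
        int __netpoll_setup(struct netpoll *np, struct net_device *ndev, gfp_t gfp);

        /* after: GFP_KERNEL can be assumed since RTNL is held */
        int __netpoll_setup(struct netpoll *np, struct net_device *ndev);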

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     

18 Mar, 2014

5 commits

  • The netpoll packet receive code only becomes active if the netpoll
    rx_skb_hook is implemented, and there is not a single implementation
    of the netpoll rx_skb_hook in the kernel.

    All of the out-of-tree implementations I have found call
    netpoll_poll, which was removed from the kernel in 2011, so this
    change should not add any additional breakage.

    There are problems with the netpoll packet receive code. __netpoll_rx
    does not call dev_kfree_skb_irq or dev_kfree_skb_any in hard irq
    context. netpoll_neigh_reply leaks every skb it receives. Reception
    of packets does not work successfully on stacked devices (aka bonding,
    team, bridge, and vlans).

    Given that the netpoll packet receive code is buggy, there are no
    out of tree users that will be merged soon, and the code has
    not been used in tree for a decade, let's just remove it.

    Reverting this commit can serve as a starting point for anyone
    who wants to resurrect netpoll packet reception support.

    Acked-by: Eric Dumazet
    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • Make rx_skb_hook and rx in struct netpoll depend on
    CONFIG_NETPOLL_TRAP. Make rx_lock, rx_np, and neigh_tx in struct
    netpoll_info depend on CONFIG_NETPOLL_TRAP.

    Make the functions netpoll_rx_on, netpoll_rx, and netpoll_receive_skb
    no-ops when CONFIG_NETPOLL_TRAP is not set.

    Only build netpoll_neigh_reply, checksum_udp, service_neigh_queue,
    pkt_is_ns, and __netpoll_rx when CONFIG_NETPOLL_TRAP is defined.

    Add helper functions netpoll_trap_setup, netpoll_trap_setup_info,
    netpoll_trap_cleanup, and netpoll_trap_cleanup_info that initialize
    and cleanup the struct netpoll and struct netpoll_info receive
    specific fields when CONFIG_NETPOLL_TRAP is enabled and do nothing
    otherwise.
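
    The shape of the change is the usual config-dependent stub pattern,
    along these lines (simplified):

        #ifdef CONFIG_NETPOLL_TRAP
        /* real receive-side implementations are built */
        bool netpoll_rx(struct sk_buff *skb);
        int netpoll_rx_on(struct sk_buff *skb);
        #else
        /* receive path compiles away entirely when the trap is not configured */
        static inline bool netpoll_rx(struct sk_buff *skb) { return false; }
        static inline int netpoll_rx_on(struct sk_buff *skb) { return 0; }
        #endif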

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • Now that we no longer need to receive packets to safely drain the
    network driver's receive queue, move netpoll_trap and netpoll_set_trap
    under CONFIG_NETPOLL_TRAP.

    Make netpoll_trap and netpoll_set_trap no-op inline functions
    when CONFIG_NETPOLL_TRAP is not set.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • Change the strategy of netpoll from dropping all packets received
    during netpoll_poll_dev to calling napi poll with a budget of 0
    (to avoid processing the driver's rx queue), and to ignore packets
    received with netif_rx (those will safely be placed on the backlog queue).

    All of the netpoll supporting drivers have been reviewed to ensure
    either they use netif_rx or that a budget of 0 is supported by their
    napi poll routine and that a budget of 0 will not process the driver's
    rx queues.

    Not dropping packets makes NETPOLL_RX_DROP unnecessary so it is removed.

    npinfo->rx_flags is removed as rx_flags with just the NETPOLL_RX_ENABLED
    flag becomes just a redundant mirror of list_empty(&npinfo->rx_np).
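
    A sketch of the zero-budget call, assuming each reviewed driver's napi
    poll honours a budget of 0 by only cleaning its tx ring:

        static void poll_one_napi(struct napi_struct *napi)
        {
                int work;

                /* budget of 0: the driver may clean its tx ring but must
                 * not process received packets
                 */
                work = napi->poll(napi, 0);
                WARN_ONCE(work, "%pF exceeded budget in poll\n", napi->poll);
        }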

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • Add a helper netpoll_rx_processing that reports when netpoll has
    receive side processing to perform.
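
    Presumably a one-liner along these lines, since receive work exists
    only while a receiving netpoll is registered:

        static inline bool netpoll_rx_processing(struct netpoll_info *npinfo)
        {
                return !list_empty(&npinfo->rx_np);
        }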

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     

26 Oct, 2013

1 commit

  • Right now skb->data is passed to rx_hook() even if the skb
    has not been linearised and without giving rx_hook() a way
    to linearise it.

    Change the rx_hook() interface and make it accept the skb
    and the offset to the UDP payload as arguments. rx_hook() is
    also renamed to rx_skb_hook() to ensure that out-of-tree
    users notice the API change.

    In this way any rx_skb_hook() implementation can perform all
    the needed operations to properly (and safely) access the
    skb data.
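
    A hook implementation would now look something like this sketch
    (my_rx_skb_hook and process_payload are hypothetical):

        static void my_rx_skb_hook(struct netpoll *np, int source,
                                   struct sk_buff *skb, int offset, int len)
        {
                /* make sure the UDP payload is in the linear area */
                if (!pskb_may_pull(skb, offset + len))
                        return;

                process_payload(skb->data + offset, len);
        }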

    Signed-off-by: Antonio Quartulli
    Signed-off-by: David S. Miller

    Antonio Quartulli
     

02 May, 2013

1 commit

  • Bart Van Assche recently reported a warning to me:

    [] warn_slowpath_common+0x7f/0xc0
    [] warn_slowpath_null+0x1a/0x20
    [] mutex_trylock+0x16d/0x180
    [] netpoll_poll_dev+0x49/0xc30
    [] ? __alloc_skb+0x82/0x2a0
    [] netpoll_send_skb_on_dev+0x265/0x410
    [] netpoll_send_udp+0x28a/0x3a0
    [] ? write_msg+0x53/0x110 [netconsole]
    [] write_msg+0xcf/0x110 [netconsole]
    [] call_console_drivers.constprop.17+0xa1/0x1c0
    [] console_unlock+0x2d6/0x450
    [] vprintk_emit+0x1ee/0x510
    [] printk+0x4d/0x4f
    [] scsi_print_command+0x7d/0xe0 [scsi_mod]

    This resulted from my commit ca99ca14c which introduced a mutex_trylock
    operation in a path that could execute in interrupt context. When mutex
    debugging is enabled, the above warns the user when we are in fact
    executing in interrupt context.

    After some discussion, it seems that a semaphore is the proper mechanism to use
    here. While mutexes are defined to be unusable in interrupt context, no such
    condition exists for semaphores (save for the fact that the non-blocking API
    calls, like up() and down_trylock(), must be used when in irq context).
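
    A sketch of the resulting pattern (the dev_lock field name is an
    assumption):

        /* process context (ndo_open/ndo_close paths): sleeping is fine */
        down(&ni->dev_lock);
        /* ... bring the device up or down ... */
        up(&ni->dev_lock);

        /* netpoll tx path: may run in irq context, so never block */
        if (down_trylock(&ni->dev_lock))
                return;         /* open/close in progress; skip this poll */
        poll_napi(dev);
        up(&ni->dev_lock);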

    Signed-off-by: Neil Horman
    Reported-by: Bart Van Assche
    CC: Bart Van Assche
    CC: David Miller
    CC: netdev@vger.kernel.org
    Signed-off-by: David S. Miller

    Neil Horman
     

12 Feb, 2013

1 commit

  • __netpoll_rcu_free is used to free netpoll structures when the rtnl_lock is
    already held. The mechanism is used to asynchronously call __netpoll_cleanup
    outside of the holding of the rtnl_lock, so as to avoid deadlock.
    Unfortunately, __netpoll_cleanup modifies pointers (dev->np), which means the
    rtnl_lock must be held while calling it. Further, it cannot be held, because
    rcu callbacks may be issued in softirq contexts, which cannot sleep.

    Fix this by converting the rcu callback to a work queue that is guaranteed to
    get scheduled in process context, so that we can hold the rtnl properly while
    calling __netpoll_cleanup.
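
    A sketch of the conversion, assuming a cleanup_work member is added to
    struct netpoll:

        static void netpoll_async_cleanup(struct work_struct *work)
        {
                struct netpoll *np = container_of(work, struct netpoll,
                                                  cleanup_work);

                rtnl_lock();            /* legal: work items run in process context */
                __netpoll_cleanup(np);
                rtnl_unlock();
                kfree(np);
        }

        /* instead of call_rcu(...), defer the cleanup to process context */
        INIT_WORK(&np->cleanup_work, netpoll_async_cleanup);
        schedule_work(&np->cleanup_work);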

    Tested successfully by myself.

    Signed-off-by: Neil Horman
    CC: "David S. Miller"
    CC: Cong Wang
    CC: Eric Dumazet
    Signed-off-by: David S. Miller

    Neil Horman
     

07 Feb, 2013

1 commit

  • Ivan Vecera was recently backporting commit
    9c13cb8bb477a83b9a3c9e5a5478a4e21294a760 to a RHEL kernel, and I noticed that,
    while this patch protects the tg3 driver from having its ndo_poll_controller
    routine called during device initialization, it does nothing for the driver
    during shutdown. I.e. it would be entirely possible to have the
    ndo_poll_controller method (or subsequently the ndo_poll routine) called for a
    driver in the netpoll path on CPU A while in parallel on CPU B, the ndo_close or
    ndo_open routine could be called. Given that the two latter routines tend to
    initialize and free many data structures that the former two rely on, the result
    can easily be data corruption or various other crashes. Furthermore, it seems
    that this is potentially a problem with all net drivers that support netpoll,
    and so this should ideally be fixed in a common path.

    As Ben H pointed out to me, we can't perform dev_open/dev_close in atomic
    context, so I've come up with this solution. We can use a mutex to sleep in
    open/close paths and just do a mutex_trylock in the napi poll path and abandon
    the poll attempt if we're locked, as we'll just retry the poll on the next send
    anyway.

    I've tested this here by flooding netconsole with messages on a system whose
    nic driver I modified to periodically return NETDEV_TX_BUSY, so that the netpoll tx
    workqueue would be forced to send frames and poll the device. While this was
    going on I rapidly ifdown/up'ed the interface and watched for any problems.
    I've not found any.

    Signed-off-by: Neil Horman
    CC: Ivan Vecera
    CC: "David S. Miller"
    CC: Ben Hutchings
    CC: Francois Romieu
    CC: Eric Dumazet
    Signed-off-by: David S. Miller

    Neil Horman
     

18 Oct, 2010

1 commit

  • The bonding driver currently modifies the netpoll structure in its xmit path
    while sending frames from netpoll. This is racy, as other cpus can access the
    netpoll structure in parallel. Since the bonding driver points np->dev to a
    slave device, other cpus can inadvertently attempt to send data directly to
    slave devices, leading to improper locking with the bonding master, lost frames,
    and deadlocks. This patch fixes that up.

    This patch also removes the real_dev pointer from the netpoll structure as that
    data is really only used by bonding in the poll_controller, and we can emulate
    its behavior by checking each slave for IS_UP.

    Signed-off-by: Neil Horman
    Signed-off-by: David S. Miller

    Neil Horman
     

18 Sep, 2010

1 commit

  • We cannot use rcu_dereference_bh safely in netpoll_rx as we may
    be called with IRQs disabled. We could however simply disable
    IRQs as that too causes BH to be disabled and is safe in either
    case.
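
    So the fix amounts to something like:

        unsigned long flags;
        struct netpoll_info *npinfo;

        local_irq_save(flags);          /* disabling IRQs also disables BH, */
        npinfo = rcu_dereference_bh(skb->dev->npinfo);  /* so this is safe  */
        /* ... receive-side processing ... */
        local_irq_restore(flags);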

    Thanks to John Linville for discovering this bug and providing
    a patch.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

16 Jun, 2010

4 commits

  • Now that RCU debugging checks for matching rcu_dereference calls
    and rcu_read_lock, we need to use the correct primitives or face
    nasty warnings.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • This patch adds the helper netpoll_tx_running for use within
    ndo_start_xmit. It returns non-zero if ndo_start_xmit is being
    invoked by netpoll, and zero otherwise.

    This is currently implemented by simply looking at the hardirq
    count. This is because for all non-netpoll uses of ndo_start_xmit,
    IRQs must be enabled while netpoll always disables IRQs before
    calling ndo_start_xmit.
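
    Given that reasoning, the helper can presumably be as small as:

        static inline int netpoll_tx_running(struct net_device *dev)
        {
                /* netpoll is the only caller of ndo_start_xmit with IRQs off */
                return irqs_disabled();
        }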

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • This patch adds the functions __netpoll_setup/__netpoll_cleanup
    which are designed to be called recursively through ndo_netpoll_setup.

    They must be called with RTNL held, and the caller must initialise
    np->dev and ensure that it has a valid reference count.
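
    A sketch of the caller-side contract for a stacked device (slave_dev
    and the np allocation are assumptions):

        ASSERT_RTNL();                  /* must be called with RTNL held */

        np->dev = slave_dev;            /* caller initialises np->dev ... */
        dev_hold(slave_dev);            /* ... and owns a valid reference */

        err = __netpoll_setup(np);
        if (err) {
                dev_put(slave_dev);
                kfree(np);
        }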

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • The use of RCU in netpoll is incorrect in a number of places:

    1) The initial setting is lacking a write barrier.
    2) The synchronize_rcu is in the wrong place.
    3) Read barriers are missing.
    4) Some places are even missing rcu_read_lock.
    5) npinfo is zeroed after freeing.

    This patch fixes those issues. As most users are in BH context,
    this also converts the RCU usage to the BH variant.
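
    The BH-variant pairing the patch converges on looks like:

        /* reader (most netpoll users already run in BH context) */
        rcu_read_lock_bh();
        npinfo = rcu_dereference_bh(dev->npinfo);
        /* ... use npinfo ... */
        rcu_read_unlock_bh();

        /* writer: unpublish with a barrier, wait, then free */
        rcu_assign_pointer(dev->npinfo, NULL);
        synchronize_rcu_bh();
        kfree(npinfo);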

    Signed-off-by: Herbert Xu
    Acked-by: Paul E. McKenney
    Signed-off-by: David S. Miller

    Herbert Xu
     

06 May, 2010

2 commits

  • Signed-off-by: David S. Miller

    David S. Miller
     
  • This whole patchset is for adding netpoll support to bridge and bonding
    devices. I already tested it for bridge, bonding, bridge over bonding,
    and bonding over bridge. It looks fine now.

    To make bridge and bonding support netpoll, we need to adjust
    some netpoll generic code. This patch does the following things:

    1) introduce two new priv_flags for struct net_device (see the sketch
    after this list):
    IFF_IN_NETPOLL, which identifies that we are processing a netpoll;
    IFF_DISABLE_NETPOLL, which is used to disable netpoll support for a
    device at run-time;

    2) introduce one new method for netdev_ops:
    ->ndo_netpoll_cleanup() is used to clean up netpoll when a device is
    removed.

    3) introduce netpoll_poll_dev() which takes a struct net_device * parameter;
    export netpoll_send_skb() and netpoll_poll_dev() which will be used later;

    4) hide a pointer to struct netpoll in struct netpoll_info, ditto.

    5) introduce ->real_dev for struct netpoll.

    6) introduce a new status NETDEV_BONDING_DESLAVE, which is used to disable
    netconsole before releasing a slave, to avoid deadlocks.
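
    A sketch of how generic code might consult the new flags (the
    surrounding logic is hypothetical):

        /* refuse to attach netpoll to a device that opted out at run time */
        if (dev->priv_flags & IFF_DISABLE_NETPOLL)
                return -EBUSY;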

    Cc: David Miller
    Cc: Neil Horman
    Signed-off-by: WANG Cong
    Signed-off-by: David S. Miller

    WANG Cong
     

17 Mar, 2009

1 commit

  • As my netpoll fix for net doesn't really work for net-next, we
    need this update to move the checks into the right place. As it
    stands we may pass freed skbs to netpoll_receive_skb.

    This patch also introduces a netpoll_rx_on function to avoid GRO
    completely if we're invoked through netpoll. This might seem
    paranoid but as netpoll may have an external receive hook it's
    better to be safe than sorry. I don't think we need this for
    2.6.29 though since there's nothing immediately broken by it.

    This patch also moves the GRO_* return values to netdevice.h since
    VLAN needs them too (I tried to avoid this originally but alas
    this seems to be the easiest way out). This fixes a bug in VLAN
    where it continued to use the old return value 2 instead of the
    correct GRO_DROP.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

16 Dec, 2008

1 commit

  • This patch adds the top-level GRO (Generic Receive Offload) infrastructure.
    This is pretty similar to LRO except that this is protocol-independent.
    Instead of holding packets in an lro_mgr structure, they're now held in
    napi_struct.

    For drivers that intend to use this, they can set the NETIF_F_GRO bit and
    call napi_gro_receive instead of netif_receive_skb or just call netif_rx.
    The latter will call napi_receive_skb automatically. When napi_gro_receive
    is used, the driver must either call napi_complete/napi_rx_complete, or
    call napi_gro_flush in softirq context if the driver uses the primitives
    __napi_complete/__napi_rx_complete.

    Protocols will set the gro_receive and gro_complete function pointers in
    order to participate in this scheme.

    In addition to the packet, gro_receive will get a list of currently held
    packets. Each packet in the list has a same_flow field which is non-zero
    if it is a potential match for the new packet. For each packet that may
    match, they also have a flush field which is non-zero if the held packet
    must not be merged with the new packet.

    Once gro_receive has determined that the new skb matches a held packet,
    the held packet may be processed immediately if the new skb cannot be
    merged with it. In this case gro_receive should return the pointer to
    the existing skb in gro_list. Otherwise the new skb should be merged into
    the existing packet and NULL should be returned, unless the new skb makes
    it impossible for any further merges to be made (e.g., FIN packet), in
    which case the merged skb should be returned.

    Whenever the skb is merged into an existing entry, the gro_receive
    function should set NAPI_GRO_CB(skb)->same_flow. Note that if an skb
    merely matches an existing entry but can't be merged with it, then
    this shouldn't be set.

    If gro_receive finds it pointless to hold the new skb for future merging,
    it should set NAPI_GRO_CB(skb)->flush.

    Held packets will be flushed by napi_gro_flush which is called by
    napi_complete and napi_rx_complete.

    Currently held packets are stored in a singly linked list just like LRO.
    The list is limited to a maximum of 8 entries. In future, this may be
    expanded to use a hash table to allow more flows to be held for merging.
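
    On the driver side, participation reduces to something like the
    following NAPI poll loop (the mydrv_* helpers are hypothetical; the
    driver also sets NETIF_F_GRO in dev->features):

        static int mydrv_poll(struct napi_struct *napi, int budget)
        {
                int work = 0;
                struct sk_buff *skb;

                while (work < budget && (skb = mydrv_rx_next(napi))) {
                        /* hand the skb to GRO instead of netif_receive_skb */
                        napi_gro_receive(napi, skb);
                        work++;
                }

                if (work < budget) {
                        napi_complete(napi);    /* also flushes held GRO packets */
                        mydrv_enable_rx_irq(napi);
                }
                return work;
        }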

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

05 Mar, 2008

1 commit

  • Based upon a report by Andrew Morton and code analysis done
    by Jarek Poplawski.

    This reverts 33f807ba0d9259e7c75c7a2ce8bd2787e5b540c7 ("[NETPOLL]:
    Kill NETPOLL_RX_DROP, set but never tested.") and
    c7b6ea24b43afb5749cb704e143df19d70e23dea ("[NETPOLL]: Don't need
    rx_flags.").

    The rx_flags did get tested for zero vs. non-zero and therefore we do
    need those tests and that code which sets NETPOLL_RX_DROP et al.

    Signed-off-by: David S. Miller

    David S. Miller
     
