08 Oct, 2014

1 commit

  • Testing xmit_more support with netperf and connected UDP sockets,
    I found strange dst refcount false sharing.

    Current handling of IFF_XMIT_DST_RELEASE is not optimal.

    Dropping the dst in validate_xmit_skb() is certainly too late in case
    the packet was queued by CPU X but dequeued by CPU Y.

    The logical point to take care of drop/force is in __dev_queue_xmit()
    before even taking qdisc lock.

    As Julian Anastasov pointed out, the need for skb_dst() might come from
    some packet schedulers or classifiers.

    This patch adds new helper to cleanly express needs of various drivers
    or qdiscs/classifiers.

    Drivers that need skb_dst() in their ndo_start_xmit() should call the
    following helper in their setup routine instead of clearing the flag
    directly:

    dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
    ->
    netif_keep_dst(dev);

    Instead of using a single bit, we use two bits, one of which is
    eventually rebuilt in the bonding/team drivers.

    The other one is permanent and blocks IFF_XMIT_DST_RELEASE from being
    rebuilt in bonding/team. We could add something smarter later.
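
    For reference, a minimal sketch of what such a helper can look like
    (the second flag name, IFF_XMIT_DST_RELEASE_PERM, is an assumption
    based on the two-bit scheme described above):

    static inline void netif_keep_dst(struct net_device *dev)
    {
            /* clear both the rebuildable and the permanent release bits */
            dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE |
                                 IFF_XMIT_DST_RELEASE_PERM);
    }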

    Signed-off-by: Eric Dumazet
    Cc: Julian Anastasov
    Signed-off-by: David S. Miller

    Eric Dumazet
     

05 Oct, 2014

1 commit

  • When team_notify_peers and team_mcast_rejoin are called, they both reset
    their respective .count_pending atomic variable. Then when the actual
    worker function is executed, the variable is atomically decremented.
    This pattern introduces a potential race condition where the
    .count_pending rolls over and the worker function keeps rescheduling
    until .count_pending decrements to zero again:

    THREAD 1                                THREAD 2

    ========                                ========
    team_notify_peers(teamX)
      atomic_set count_pending = 1
      schedule_delayed_work
                                            team_notify_peers(teamX)
                                            atomic_set count_pending = 1
    team_notify_peers_work
      atomic_dec_and_test
        count_pending = 0
      (return)
                                            schedule_delayed_work
                                            team_notify_peers_work
                                            atomic_dec_and_test
                                              count_pending = -1
                                            schedule_delayed_work
                                            (repeat until count_pending = 0)

    Instead of assigning a new value to .count_pending, use atomic_add to
    tack on the additional desired worker function invocations.
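
    A minimal sketch of the scheduling side after the fix (shapes follow
    the team driver as described above; details may differ):

    static void team_notify_peers(struct team *team)
    {
            if (!team->notify_peers.count || !netif_running(team->dev))
                    return;
            /* add, don't overwrite: pending invocations accumulate */
            atomic_add(team->notify_peers.count,
                       &team->notify_peers.count_pending);
            schedule_delayed_work(&team->notify_peers.dw, 0);
    }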

    Signed-off-by: Joe Lawrence
    Acked-by: Jiri Pirko
    Fixes: fc423ff00df3a19554414ee ("team: add peer notification")
    Fixes: 492b200efdd20b8fcfdac87 ("team: add support for sending multicast rejoins")
    Signed-off-by: David S. Miller

    Joe Lawrence
     

26 Aug, 2014

1 commit

  • When one tries to add an eth device as a port into a team, and that eth
    device is already in use by another rx_handler device (macvlan, bond,
    bridge, ...), a bug in team_port_add() causes the IFF_TEAM_PORT flag to
    be set before the rx_handler is registered. In between, the netdev
    notifier is called and team_device_event() sees IFF_TEAM_PORT and thinks
    that the rx_handler_data pointer is set to a team_port. But it isn't.

    Fix this by reordering the rx_handler registration and the setting of
    the IFF_TEAM_PORT priv flag, making it very similar to how bonding does
    this.
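
    A sketch of the corrected ordering in team_port_add() (simplified;
    the error-path label is illustrative):

    err = netdev_rx_handler_register(port_dev, team_handle_frame, port);
    if (err) {
            netdev_err(dev, "Device %s failed to register rx_handler\n",
                       portname);
            goto err_handler_register;
    }

    /* only now is it safe to advertise the port to notifiers */
    port_dev->priv_flags |= IFF_TEAM_PORT;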

    Reported-by: Erik Hugne
    Fixes: 3d249d4ca7 ("net: introduce ethernet teaming device")
    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     

03 Aug, 2014

1 commit

  • clean up names related to socket filtering and bpf in the following way:
    - everything that deals with sockets keeps 'sk_*' prefix
    - everything that is pure BPF is changed to 'bpf_*' prefix

    split 'struct sk_filter' into

    struct sk_filter {
            atomic_t        refcnt;
            struct rcu_head rcu;
            struct bpf_prog *prog;
    };

    and

    struct bpf_prog {
            u32                     jited:1,
                                    len:31;
            struct sock_fprog_kern  *orig_prog;
            unsigned int            (*bpf_func)(const struct sk_buff *skb,
                                                const struct bpf_insn *filter);
            union {
                    struct sock_filter      insns[0];
                    struct bpf_insn         insnsi[0];
                    struct work_struct      work;
            };
    };
    so that 'struct bpf_prog' can be used independent of sockets and cleans up
    'unattached' bpf use cases

    split SK_RUN_FILTER macro into:
    SK_RUN_FILTER to be used with 'struct sk_filter *' and
    BPF_PROG_RUN to be used with 'struct bpf_prog *'

    __sk_filter_release(struct sk_filter *) gains
    __bpf_prog_release(struct bpf_prog *) helper function

    also perform related renames for the functions that work
    with 'struct bpf_prog *', since they're along the same lines:

    sk_filter_size -> bpf_prog_size
    sk_filter_select_runtime -> bpf_prog_select_runtime
    sk_filter_free -> bpf_prog_free
    sk_unattached_filter_create -> bpf_prog_create
    sk_unattached_filter_destroy -> bpf_prog_destroy
    sk_store_orig_filter -> bpf_prog_store_orig_filter
    sk_release_orig_filter -> bpf_release_orig_filter
    __sk_migrate_filter -> bpf_migrate_filter
    __sk_prepare_filter -> bpf_prepare_filter

    API for attaching classic BPF to a socket stays the same:
    sk_attach_filter(prog, struct sock *)/sk_detach_filter(struct sock *)
    and SK_RUN_FILTER(struct sk_filter *, ctx) to execute a program
    which is used by sockets, tun, af_packet

    API for 'unattached' BPF programs becomes:
    bpf_prog_create(struct bpf_prog **)/bpf_prog_destroy(struct bpf_prog *)
    and BPF_PROG_RUN(struct bpf_prog *, ctx) to execute a program
    which is used by isdn, ppp, team, seccomp, ptp, xt_bpf, cls_bpf, test_bpf
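
    A minimal usage sketch of the 'unattached' API described above,
    assuming an skb is in scope (the program is a trivial accept-all):

    struct sock_filter insns[] = {
            BPF_STMT(BPF_RET | BPF_K, 0xffffffff),  /* accept packet */
    };
    struct sock_fprog_kern fprog = {
            .len    = ARRAY_SIZE(insns),
            .filter = insns,
    };
    struct bpf_prog *prog;
    unsigned int res;
    int err;

    err = bpf_prog_create(&prog, &fprog);
    if (err)
            return err;
    res = BPF_PROG_RUN(prog, skb);  /* run the program against the skb */
    bpf_prog_destroy(prog);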

    Signed-off-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Alexei Starovoitov
     

01 Aug, 2014

1 commit

  • Commit 34c5bd66e5ed introduced the possibility that an
    uninitialized pointer on the stack (orig_fp) can be passed to
    sk_unattached_filter_destroy() when its value is non-NULL.

    Before that commit, orig_fp was only destroyed in the same
    block where it had been assigned a valid BPF prog. Fix it
    up by initializing it to NULL.
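
    A sketch of the bug pattern and the one-line fix (control flow is
    illustrative and the helper names are hypothetical, not the exact
    kernel code):

    struct sk_filter *fp, *orig_fp = NULL; /* the fix: init to NULL */

    fp = get_new_filter();
    if (needs_migration(fp))
            orig_fp = fp;           /* assigned only on this path */

    /* without the NULL initialization, orig_fp is stack garbage here
     * whenever the branch above was not taken */
    if (orig_fp)
            sk_unattached_filter_destroy(orig_fp);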

    Fixes: 34c5bd66e5ed ("net: filter: don't release unattached filter through call_rcu()")
    Signed-off-by: Daniel Borkmann
    Cc: Pablo Neira
    Cc: Alexei Starovoitov
    Cc: Jiri Pirko
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

31 Jul, 2014

1 commit

  • sk_unattached_filter_destroy() does not always need to release the
    filter object via rcu. Since this filter is never attached to the
    socket, the caller should be responsible for releasing the filter
    in a safe way, which may not necessarily imply rcu.

    This is a short summary of clients of this function:

    1) xt_bpf.c and cls_bpf.c use the bpf matchers from rules; these rules
    are removed from the packet path before the filter is released. Thus,
    the framework makes sure the filter is safely removed.

    2) In the ppp driver, the ppp_lock ensures serialization between the
    xmit and filter attachment/detachment path. This doesn't use rcu
    so deferred release via rcu makes no sense.

    3) In the isdn/ppp driver, it is called from isdn_ppp_release() and
    isdn_ppp_ioctl(). This driver uses mutexes and spinlocks, no rcu.
    Thus, deferred rcu release makes no sense to me either; the deferred
    releases may just be masking the effects of a wrong locking strategy,
    which should be fixed in the driver itself.

    4) In the team driver, this is the only place where the rcu
    synchronization with unattached filter is used. Therefore, this
    patch introduces synchronize_rcu() which is called from the
    genetlink path to make sure the filter doesn't go away while packets
    are still walking over it. I think we can revisit this once struct
    bpf_prog (that only wraps specific bpf code bits) is in place, then
    add some specific struct rcu_head in the scope of the team driver if
    Jiri thinks this is needed.

    Deferred rcu release for unattached filters was originally introduced
    in 302d663 ("filter: Allow to create sk-unattached filters").
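
    A sketch of the pattern described in (4), with simplified names (the
    exact fields in the team driver differ):

    old_fp = rcu_dereference_protected(lb_priv->fp,
                                       lockdep_is_held(&team->lock));
    rcu_assign_pointer(lb_priv->fp, new_fp);
    synchronize_rcu();      /* wait until no packet walks over old_fp */
    sk_unattached_filter_destroy(old_fp);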

    Signed-off-by: Pablo Neira Ayuso
    Signed-off-by: David S. Miller

    Pablo Neira
     

03 Jun, 2014

1 commit

  • Currently it is not possible to set the mtu of a team device which has
    a port enslaved to it. The reason is that when team_change_mtu() calls
    dev_set_mtu() for the port device, the notifier for the
    NETDEV_PRECHANGEMTU event is called and team_device_event() returns
    NOTIFY_BAD, forbidding the change. So fix this by returning NOTIFY_DONE
    here in case team itself is changing the mtu in team_change_mtu().
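
    A sketch of the notifier-side check (the flag name follows the fix's
    intent and is an assumption):

    case NETDEV_PRECHANGEMTU:
            /* forbid changing mtu of the underlying device, unless the
             * change was initiated by team_change_mtu() itself */
            if (!port->team->port_mtu_change_allowed)
                    return NOTIFY_BAD;
            break;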

    Introduced-by: 3d249d4c ("net: introduce ethernet teaming device")
    Signed-off-by: Jiri Pirko
    Acked-by: Flavio Leitner
    Signed-off-by: David S. Miller

    Jiri Pirko
     

24 May, 2014

1 commit

  • The sk_unattached_filter_create() API is used by BPF filters that
    are not directly attached or related to sockets, and are used in
    team, ptp, xt_bpf, cls_bpf, etc. As such all users do their own
    internal management of obtaining filter blocks and thus already
    have them in kernel memory and set up before calling into
    sk_unattached_filter_create(). As a result, due to __user annotation
    in sock_fprog, sparse triggers false positives (incorrect type in
    assignment [different address space]) when filters are set up before
    passing them to sk_unattached_filter_create(). Therefore, let
    sk_unattached_filter_create() API use sock_fprog_kern to overcome
    this issue.
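
    For reference, a sketch of the kernel-side counterpart of sock_fprog,
    without the __user annotation on the filter pointer:

    struct sock_fprog_kern {
            u16                     len;
            struct sock_filter      *filter;        /* kernel memory */
    };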

    Signed-off-by: Daniel Borkmann
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

23 May, 2014

1 commit

  • __team_compute_features() uses netdev_increment_features() to
    combine vlan_features of slaves into vlan_features of the team.
    As netdev_increment_features() only adds most features and we
    start with TEAM_VLAN_FEATURES, we can end up with features none
    of the slaves provided.

    Initialize vlan_features only with the flags which are both in
    TEAM_VLAN_FEATURES and NETIF_F_ALL_FOR_ALL. Right now there is
    no such feature, so we actually initialize vlan_features
    with zero, but stating it explicitly will make the code more
    future proof.
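
    A sketch of the resulting initialization in __team_compute_features()
    (simplified):

    netdev_features_t vlan_features = TEAM_VLAN_FEATURES &
                                      NETIF_F_ALL_FOR_ALL;

    list_for_each_entry(port, &team->port_list, list)
            vlan_features = netdev_increment_features(vlan_features,
                                    port->dev->vlan_features,
                                    TEAM_VLAN_FEATURES);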

    Signed-off-by: Michal Kubecek
    Signed-off-by: David S. Miller

    Michal Kubeček
     

30 Mar, 2014

1 commit

  • The gfp parameter was added in:
    commit 47be03a28cc6c80e3aa2b3e8ed6d960ff0c5c0af
    Author: Amerigo Wang
    Date: Fri Aug 10 01:24:37 2012 +0000

    netpoll: use GFP_ATOMIC in slave_enable_netpoll() and __netpoll_setup()

    slave_enable_netpoll() and __netpoll_setup() may be called
    with read_lock() held, so should use GFP_ATOMIC to allocate
    memory. Eric suggested to pass gfp flags to __netpoll_setup().

    Cc: Eric Dumazet
    Cc: "David S. Miller"
    Reported-by: Dan Carpenter
    Signed-off-by: Eric Dumazet
    Signed-off-by: Cong Wang
    Signed-off-by: David S. Miller

    The reason for the gfp parameter was removed in:
    commit c4cdef9b7183159c23c7302aaf270d64c549f557
    Author: dingtianhong
    Date: Tue Jul 23 15:25:27 2013 +0800

    bonding: don't call slave_xxx_netpoll under spinlocks

    The slave_xxx_netpoll will call synchronize_rcu_bh(),
    so the function may schedule and sleep, it shouldn't be
    called under spinlocks.

    bond_netpoll_setup() and bond_netpoll_cleanup() are always
    protected by the rtnl lock, so there is no need to take the read
    lock, as the slave list can't be changed outside the rtnl lock.

    Signed-off-by: Ding Tianhong
    Cc: Jay Vosburgh
    Cc: Andy Gospodarek
    Signed-off-by: David S. Miller

    Nothing else that calls __netpoll_setup() or ndo_netpoll_setup()
    requires a gfp parameter, so remove the gfp parameter from both
    of these functions, making the code clearer.
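
    The resulting signature change, sketched:

    /* before */
    int __netpoll_setup(struct netpoll *np, struct net_device *ndev,
                        gfp_t gfp);
    /* after */
    int __netpoll_setup(struct netpoll *np, struct net_device *ndev);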

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     

15 Mar, 2014

1 commit

  • Replace the bh safe variant with the hard irq safe variant.

    We need a hard irq safe variant to deal with netpoll transmitting
    packets from hard irq context, and we need it in most if not all of
    the places using the bh safe variant.

    Except on 32bit uni-processor the code is exactly the same so don't
    bother with a bh variant, just have a hard irq safe variant that
    everyone can use.
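
    A sketch of a stats reader using the hard irq safe variant (field
    names illustrative):

    unsigned int start;

    do {
            start = u64_stats_fetch_begin_irq(&p->syncp);
            rx_packets = p->rx_packets;
            rx_bytes   = p->rx_bytes;
    } while (u64_stats_fetch_retry_irq(&p->syncp, start));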

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: David S. Miller

    Eric W. Biederman
     

17 Feb, 2014

1 commit

  • Add a new argument for ndo_select_queue() callback that passes a
    fallback handler. This gets invoked through netdev_pick_tx();
    fallback handler is currently __netdev_pick_tx() as most drivers
    invoke this function within their customized implementation for
    skbs that don't need any special handling. This fallback
    handler can then be replaced on other call-sites with different
    queue selection methods (e.g. in packet sockets, pktgen etc).

    This also has the nice side-effect that __netdev_pick_tx() is
    then only invoked from netdev_pick_tx() and export of that
    function to modules can be undone.
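
    The extended callback, sketched from the description above:

    typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
                                           struct sk_buff *skb);

    u16 (*ndo_select_queue)(struct net_device *dev, struct sk_buff *skb,
                            void *accel_priv,
                            select_queue_fallback_t fallback);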

    Suggested-by: David S. Miller
    Signed-off-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

22 Jan, 2014

1 commit

  • Many functions have open coded a function that returns a random
    number in range [0,N-1]. Under the assumption that we have a PRNG
    such as taus113 that is well distributed in [0, ~0U] space,
    we can implement such a function as uword t = (n*m')>>32, where
    m' is a random number obtained from the PRNG, n the right open interval
    border and t our resulting random number, with n,m',t in the u32 universe.
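
    In code, the helper boils down to something like this sketch:

    static inline u32 prandom_u32_max(u32 ep_ro)
    {
            /* map a full-range u32 into [0, ep_ro) via a 64-bit
             * multiply and shift instead of a modulo */
            return (u32)(((u64) prandom_u32() * ep_ro) >> 32);
    }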

    Let's go with Joe's suggestion and simply call it prandom_u32_max();
    technically we have a right open interval endpoint, but that is
    documented. Other users can be migrated to the new
    prandom_u32_max() function later on; for now, we need to make sure
    to migrate the reciprocal_divide() users for the reciprocal_divide()
    follow-up fixup, since their function signatures are going to change.

    Joint work with Hannes Frederic Sowa.

    Cc: Jakub Zawadzki
    Cc: Eric Dumazet
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

17 Jan, 2014

1 commit

  • Currently team catches the NETDEV_CHANGEMTU notification, which is
    signaled after the actual change has happened on the device, and returns
    NOTIFY_BAD, so that the change on the device is reverted.

    This might be quite costly and messy, so use the new NETDEV_PRECHANGEMTU to
    catch the MTU change before the actual change happens and signal that it's
    forbidden to do it.
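
    A sketch of the resulting notifier handling in team_device_event()
    (the NETDEV_CHANGEMTU revert logic is replaced by this early veto):

    case NETDEV_PRECHANGEMTU:
            /* forbid changing mtu of the underlying device */
            return NOTIFY_BAD;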

    CC: Jiri Pirko
    Signed-off-by: Veaceslav Falico
    Acked-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Veaceslav Falico
     

11 Jan, 2014

1 commit

  • Currently, the tx queue is selected implicitly in ndo_dfwd_start_xmit().
    This causes several issues:

    - NETIF_F_LLTX was removed for macvlan, so the txq lock was taken for
    macvlan instead of the lower device, which misses the necessary txq
    synchronization for the lower device, such as txq stopping or freezing
    required by the dev watchdog or control path.
    - dev_hard_start_xmit() was called with a NULL txq, which bypasses the
    net device watchdog.
    - dev_hard_start_xmit() does not check txq everywhere, which will lead
    to a crash when tso is disabled for the lower device.

    Fix this by explicitly introducing a new param for .ndo_select_queue() for just
    selecting queues in the case of l2 forwarding offload. netdev_pick_tx() was also
    extended to accept this parameter and dev_queue_xmit_accel() was used to do l2
    forwarding transmission.
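
    The new entry points, sketched as declarations:

    struct netdev_queue *netdev_pick_tx(struct net_device *dev,
                                        struct sk_buff *skb,
                                        void *accel_priv);
    int dev_queue_xmit_accel(struct sk_buff *skb, void *accel_priv);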

    With these fixes, NETIF_F_LLTX can be preserved for macvlan and there's
    no need to check txq against NULL in dev_hard_start_xmit(). Also there's
    no need to keep a dedicated ndo_dfwd_start_xmit(), and we can just reuse
    the code of dev_queue_xmit() to do the transmission.

    In the future, this will also be required for macvtap l2 forwarding
    support, since it provides a necessary synchronization method.

    Cc: John Fastabend
    Cc: Neil Horman
    Cc: e1000-devel@lists.sourceforge.net
    Signed-off-by: Jason Wang
    Acked-by: Neil Horman
    Acked-by: John Fastabend
    Signed-off-by: David S. Miller

    Jason Wang
     

30 Nov, 2013

1 commit

  • When user linkup is enabled and the user sets linkup of an individual
    port, we need to recompute the linkup (carrier) state of the master
    interface so the change is reflected. Fix this by calling
    __team_carrier_check(), which does the needed work.

    Please apply to all stable kernels as well. Thanks.
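
    A sketch of the fix in the user linkup option setter (simplified):

    static int team_user_linkup_option_set(struct team *team,
                                           struct team_gsetter_ctx *ctx)
    {
            struct team_port *port = ctx->info->port;

            port->user.linkup = ctx->data.bool_val;
            team_refresh_port_linkup(port);
            __team_carrier_check(port->team);       /* added by the fix */
            return 0;
    }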

    Reported-by: Jan Tluka
    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     

20 Nov, 2013

4 commits

  • Pull networking fixes from David Miller:
    "Mostly these are fixes for fallout due to merge window changes, as
    well as cures for problems that have been with us for a much longer
    period of time"

    1) Johannes Berg noticed two major deficiencies in our genetlink
    registration. Some genetlink protocols were passing in constant
    counts for their ops array rather than something like
    ARRAY_SIZE(ops) or similar. Also, some genetlink protocols were
    using fixed IDs for their multicast groups.

    We have to retain these fixed IDs to keep existing userland tools
    working, but reserve them so that other multicast groups used by
    other protocols can not possibly conflict.

    In dealing with these two problems, we actually now use less state
    management for genetlink operations and multicast groups.

    2) When configuring interface hardware timestamping, fix several
    drivers that simply do not validate that the hwtstamp_config value
    is one the driver actually supports. From Ben Hutchings.

    3) Invalid memory references in mwifiex driver, from Amitkumar Karwar.

    4) In dev_forward_skb(), set the skb->protocol in the right order
    relative to skb_scrub_packet(). From Alexei Starovoitov.

    5) Bridge erroneously fails to use the proper wrapper functions to make
    calls to netdev_ops->ndo_vlan_rx_{add,kill}_vid. Fix from Toshiaki
    Makita.

    6) When detaching a bridge port, make sure to flush all VLAN IDs to
    prevent them from leaking, also from Toshiaki Makita.

    7) Put in a compromise for TCP Small Queues so that deep queued devices
    that delay TX reclaim non-trivially don't have such a performance
    decrease. One particularly problematic area is 802.11 AMPDU in
    wireless. From Eric Dumazet.

    8) Fix crashes in tcp_fastopen_cache_get(), we can see NULL socket dsts
    here. Fix from Eric Dumazet, reported by Dave Jones.

    9) Fix use after free in ipv6 SIT driver, from Willem de Bruijn.

    10) When computing mergeable buffer sizes, virtio-net fails to take the
    virtio-net header into account. From Michael Dalton.

    11) Fix seqlock deadlock in ip4_datagram_connect() wrt. statistic
    bumping, this one has been with us for a while. From Eric Dumazet.

    12) Fix NULL deref in the new TIPC fragmentation handling, from Erik
    Hugne.

    13) 6lowpan bit used for traffic classification was wrong, from Jukka
    Rissanen.

    14) macvlan has the same issue as normal vlans did wrt. propagating LRO
    disabling down to the real device, fix it the same way. From Michal
    Kubecek.

    15) CPSW driver needs to soft reset all slaves during suspend, from
    Daniel Mack.

    16) Fix small frame pacing in FQ packet scheduler, from Eric Dumazet.

    17) The xen-netfront RX buffer refill timer isn't properly scheduled on
    partial RX allocation success, from Ma JieYue.

    18) When ipv6 ping protocol support was added, the AF_INET6 protocol
    initialization cleanup path on failure was borked a little. Fix
    from Vlad Yasevich.

    19) If a socket disconnects during a read/recvmsg/recvfrom/etc that
    blocks we can do the wrong thing with the msg_name we write back to
    userspace. From Hannes Frederic Sowa. There is another fix in the
    works from Hannes which will prevent future problems of this nature.

    20) Fix route leak in VTI tunnel transmit, from Fan Du.

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (106 commits)
    genetlink: make multicast groups const, prevent abuse
    genetlink: pass family to functions using groups
    genetlink: add and use genl_set_err()
    genetlink: remove family pointer from genl_multicast_group
    genetlink: remove genl_unregister_mc_group()
    hsr: don't call genl_unregister_mc_group()
    quota/genetlink: use proper genetlink multicast APIs
    drop_monitor/genetlink: use proper genetlink multicast APIs
    genetlink: only pass array to genl_register_family_with_ops()
    tcp: don't update snd_nxt, when a socket is switched from repair mode
    atm: idt77252: fix dev refcnt leak
    xfrm: Release dst if this dst is improper for vti tunnel
    netlink: fix documentation typo in netlink_set_err()
    be2net: Delete secondary unicast MAC addresses during be_close
    be2net: Fix unconditional enabling of Rx interface options
    net, virtio_net: replace the magic value
    ping: prevent NULL pointer dereference on write to msg_name
    bnx2x: Prevent "timeout waiting for state X"
    bnx2x: prevent CFC attention
    bnx2x: Prevent panic during DMAE timeout
    ...

    Linus Torvalds
     
  • Register generic netlink multicast groups as an array with
    the family and give them contiguous group IDs. Then instead
    of passing the global group ID to the various functions that
    send messages, pass the ID relative to the family - for most
    families that's just 0, because they only have one group.

    This avoids the list_head and ID in each group, adding a new
    field for the mcast group ID offset to the family.

    At the same time, this allows us to prevent abusing groups
    again like the quota and dropmon code did, since we can now
    check that a family only uses a group it owns.
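
    A sketch of the resulting registration pattern, using the team
    driver's names as an example:

    static const struct genl_multicast_group team_nl_mcgrps[] = {
            { .name = TEAM_GENL_CHANGE_EVENT_MC_GRP_NAME, },
    };

    err = genl_register_family_with_ops_groups(&team_nl_family,
                                               team_nl_ops,
                                               team_nl_mcgrps);

    /* when sending: group 0 is relative to the family, not global */
    err = genlmsg_multicast_netns(&team_nl_family, net, skb, 0, 0,
                                  GFP_KERNEL);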

    Signed-off-by: Johannes Berg
    Signed-off-by: David S. Miller

    Johannes Berg
     
  • This doesn't really change anything, but prepares for the
    next patch that will change the APIs to pass the group ID
    within the family, rather than the global group ID.

    Signed-off-by: Johannes Berg
    Signed-off-by: David S. Miller

    Johannes Berg
     
  • As suggested by David Miller, make genl_register_family_with_ops()
    a macro and pass only the array, evaluating ARRAY_SIZE() in the
    macro; this is a little safer.

    The openvswitch code has some indirection, assigning ops/n_ops
    directly in that code. This might ultimately just assign the
    pointers in the
    family initializations, saving the struct genl_family_and_ops and
    code (once mcast groups are handled differently.)
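
    A sketch of the macro shape (the inner helper name is illustrative):

    #define genl_register_family_with_ops(family, ops)             \
            __genl_register_family_with_ops((family), (ops),       \
                                            ARRAY_SIZE(ops))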

    Signed-off-by: Johannes Berg
    Signed-off-by: David S. Miller

    Johannes Berg
     

15 Nov, 2013

1 commit

  • Now that genl_ops are no longer modified in place when
    registering, they can be made const. This patch was done
    mostly with spatch:

    @@
    identifier ops;
    @@
    +const
    struct genl_ops ops[] = {
    ...
    };

    (except the struct thing in net/openvswitch/datapath.c)

    Signed-off-by: Johannes Berg
    Signed-off-by: David S. Miller

    Johannes Berg
     

06 Nov, 2013

1 commit

  • In order to enable lockdep on seqcount/seqlock structures, we
    must explicitly initialize any locks.

    The u64_stats_sync structure uses a seqcount, and thus we need
    to introduce a u64_stats_init() function and use it to initialize
    the structure.

    This unfortunately adds a lot of fairly trivial initialization code
    to a number of drivers. But the benefit of ensuring correctness makes
    this worthwhile.
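
    A sketch of the per-driver boilerplate this adds, here in the shape
    of the team driver's per-cpu stats:

    int i;

    for_each_possible_cpu(i) {
            struct team_pcpu_stats *team_stats;

            team_stats = per_cpu_ptr(team->pcpu_stats, i);
            u64_stats_init(&team_stats->syncp);
    }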

    Because these changes are required for lockdep to be enabled, and the
    changes are quite trivial, I've not yet split this patch out into 30-some
    separate patches, as I figured it would be better to get the various
    maintainers thoughts on how to best merge this change along with
    the seqcount lockdep enablement.

    Feedback would be appreciated!

    Signed-off-by: John Stultz
    Acked-by: Julian Anastasov
    Signed-off-by: Peter Zijlstra
    Cc: Alexey Kuznetsov
    Cc: "David S. Miller"
    Cc: Eric Dumazet
    Cc: Hideaki YOSHIFUJI
    Cc: James Morris
    Cc: Jesse Gross
    Cc: Mathieu Desnoyers
    Cc: "Michael S. Tsirkin"
    Cc: Mirko Lindner
    Cc: Patrick McHardy
    Cc: Roger Luethi
    Cc: Rusty Russell
    Cc: Simon Horman
    Cc: Stephen Hemminger
    Cc: Steven Rostedt
    Cc: Thomas Petazzoni
    Cc: Wensong Zhang
    Cc: netdev@vger.kernel.org
    Link: http://lkml.kernel.org/r/1381186321-4906-2-git-send-email-john.stultz@linaro.org
    Signed-off-by: Ingo Molnar

    John Stultz
     

27 Jul, 2013

1 commit

  • This started out with fixing a sparse warning, then I realized that
    the wrapper function team_netpoll_info could just be collapsed away
    by rolling it into the enable code.

    Signed-off-by: Stephen Hemminger
    Acked-by: Jiri Pirko
    Signed-off-by: David S. Miller

    stephen hemminger
     

24 Jul, 2013

3 commits

  • Similar to what is implemented in bonding: the user is able to ask the
    team driver to send IGMP rejoins when a port is enabled or disabled,
    using the previously introduced netdev notifier.

    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • Until now, bond_resend_igmp_join_requests() manually looks for vlans
    attached to the bonding device, or a bridge where bonding acts as a
    port. It does not handle other scenarios, like stacked bonds or a team
    device above. Make this more generic and use a netdev notifier to
    propagate the event to upper devices and to actually call
    ip_mc_rejoin_groups().

    Signed-off-by: Jiri Pirko
    Acked-by: Veaceslav Falico
    Signed-off-by: David S. Miller

    Jiri Pirko
     
  • When a port is enabled or disabled, allow notifying peers by
    unsolicited NAs or gratuitous ARPs. Disabled by default.

    Signed-off-by: Jiri Pirko
    Signed-off-by: David S. Miller

    Jiri Pirko
     

20 Jun, 2013

1 commit

  • Conflicts:
    drivers/net/wireless/ath/ath9k/Kconfig
    drivers/net/xen-netback/netback.c
    net/batman-adv/bat_iv_ogm.c
    net/wireless/nl80211.c

    The ath9k Kconfig conflict was a change of a Kconfig option name right
    next to the deletion of another option.

    The xen-netback conflict was overlapping changes involving the
    handling of the notify list in xen_netbk_rx_action().

    Batman conflict resolution provided by Antonio Quartulli, basically
    keep everything in both conflict hunks.

    The nl80211 conflict is a little more involved. In 'net' we added a
    dynamic memory allocation to nl80211_dump_wiphy() to fix a race that
    Linus reported. Meanwhile in 'net-next' the handlers were converted
    to use pre and post doit handlers which use a flag to determine
    whether to hold the RTNL mutex around the operation.

    However, the dump handlers do not use this logic. Instead they have
    to explicitly do the locking. There were apparent bugs in the
    conversion of nl80211_dump_wiphy() in that we were not dropping the
    RTNL mutex in all the return paths, and it seems we very much should
    be doing so. So I fixed that whilst handling the overlapping changes.

    To simplify the initial returns, I take the RTNL mutex after we try
    to allocate 'tb'.

    Signed-off-by: David S. Miller

    David S. Miller
     

12 Jun, 2013

4 commits