06 Oct, 2012

1 commit

  • Pull networking changes from David Miller:
    "The most important bit in here is the fix for input route caching from
    Eric Dumazet, it's a shame we couldn't fully analyze this in time for
    3.6 as it's a 3.6 regression introduced by the routing cache removal.

    Anyways, will send quickly to -stable after you pull this in.

    Other changes of note:

    1) Fix lockdep splats in team and bonding, from Eric Dumazet.

    2) IPV6 adds link local route even when there is no link local
    address, from Nicolas Dichtel.

    3) Fix ixgbe PTP implementation, from Jacob Keller.

    4) Fix excessive stack usage in cxgb4 driver, from Vipul Pandya.

    5) MAC length computed improperly in VLAN demux, from Antonio
    Quartulli."

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (26 commits)
    ipv6: release reference of ip6_null_entry's dst entry in __ip6_del_rt
    Remove noisy printks from llcp_sock_connect
    tipc: prevent dropped connections due to rcvbuf overflow
    silence some noisy printks in irda
    team: set qdisc_tx_busylock to avoid LOCKDEP splat
    bonding: set qdisc_tx_busylock to avoid LOCKDEP splat
    sctp: check src addr when processing SACK to update transport state
    sctp: fix a typo in prototype of __sctp_rcv_lookup()
    ipv4: add a fib_type to fib_info
    can: mpc5xxx_can: fix section type conflict
    can: peak_pcmcia: fix error return code
    can: peak_pci: fix error return code
    cxgb4: Fix build error due to missing linux/vmalloc.h include.
    bnx2x: fix ring size for 10G functions
    cxgb4: Dynamically allocate memory in t4_memory_rw() and get_vpd_params()
    ixgbe: add support for X540-AT1
    ixgbe: fix poll loop for FDIRCTRL.INIT_DONE bit
    ixgbe: fix PTP ethtool timestamping function
    ixgbe: (PTP) Fix PPS interrupt code
    ixgbe: Fix PTP X540 SDP alignment code for PPS signal
    ...

    Linus Torvalds
     

05 Oct, 2012

2 commits

  • Suppose we have an SCTP connection with two paths. After the connection is
    established, path1 is not available, thus this path is marked as inactive. Then
    traffic goes through path2, but for some reason packets are delayed (after
    rto.max). Because packets are delayed, the retransmit mechanism will switch
    again to path1. At this time, we receive a delayed SACK from path2. When we
    update the state of the path in sctp_check_transmitted(), we do not take into
    account the source address of the SACK, hence we update the wrong path.
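
    The fix described above amounts to matching the SACK's source address
    against the association's transports before updating path state. A minimal
    sketch of that matching step, with an illustrative path struct rather than
    the kernel's sctp_transport:

```c
#include <stddef.h>

/* Illustrative model of an SCTP path; the kernel's sctp_transport
 * carries much more state than this. */
struct path {
    unsigned int peer_addr;  /* remote address this path goes to */
    int active;              /* current path state */
};

/* Update state only on the path whose peer address matches the SACK's
 * source address, not whichever path retransmission last used. */
struct path *path_for_sack(struct path *paths, size_t n, unsigned int sack_src)
{
    for (size_t i = 0; i < n; i++)
        if (paths[i].peer_addr == sack_src)
            return &paths[i];
    return NULL;  /* SACK from an unknown address: update nothing */
}
```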

    Signed-off-by: Nicolas Dichtel
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Nicolas Dichtel
     
  • Just to avoid confusion when people only read this prototype.

    Signed-off-by: Nicolas Dichtel
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Nicolas Dichtel
     

03 Oct, 2012

1 commit

  • Pull vfs update from Al Viro:

    - big one - consolidation of descriptor-related logics; almost all of
    that is moved to fs/file.c

    (BTW, I'm seriously tempted to rename the result to fd.c. As it is,
    we have a situation when file_table.c is about handling of struct
    file and file.c is about handling of descriptor tables; the reasons
    are historical - file_table.c used to be about a static array of
    struct file we used to have way back).

    A lot of stray ends got cleaned up and converted to saner primitives,
    disgusting mess in android/binder.c is still disgusting, but at least
    doesn't poke so much in descriptor table guts anymore. A bunch of
    relatively minor races got fixed in process, plus an ext4 struct file
    leak.

    - related thing - fget_light() partially unuglified; see fdget() in
    there (and yes, it generates the code as good as we used to have).

    - also related - bits of Cyrill's procfs stuff that got entangled into
    that work; _not_ all of it, just the initial move to fs/proc/fd.c and
    switch of fdinfo to seq_file.

    - Alex's fs/coredump.c split-off - the same story, had been easier to
    take that commit than mess with conflicts. The rest is a separate
    pile, this was just a mechanical code movement.

    - a few misc patches all over the place. Not all for this cycle,
    there'll be more (and quite a few currently sit in akpm's tree)."

    Fix up trivial conflicts in the android binder driver, and some fairly
    simple conflicts due to two different changes to the sock_alloc_file()
    interface ("take descriptor handling from sock_alloc_file() to callers"
    vs "net: Providing protocol type via system.sockprotoname xattr of
    /proc/PID/fd entries" adding a dentry name to the socket)
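
    The fdget()/fdput() pair mentioned above avoids the atomic reference bump
    when the descriptor table is not shared. A minimal model of that idea
    (illustrative types, not the kernel's actual struct fd layout):

```c
/* Model: a borrowed file plus a flag recording whether a reference was
 * actually taken.  Single-threaded lookups skip the refcount entirely. */
struct file_model { int refcount; };
struct fd_model { struct file_model *file; int need_put; };

struct fd_model fdget_model(struct file_model *f, int table_is_shared)
{
    struct fd_model fd = { f, 0 };
    if (table_is_shared) {   /* another thread could close the fd */
        f->refcount++;
        fd.need_put = 1;
    }
    return fd;
}

void fdput_model(struct fd_model fd)
{
    if (fd.need_put)
        fd.file->refcount--;  /* drop only what fdget_model took */
}
```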

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (72 commits)
    MAX_LFS_FILESIZE should be a loff_t
    compat: fs: Generic compat_sys_sendfile implementation
    fs: push rcu_barrier() from deactivate_locked_super() to filesystems
    btrfs: reada_extent doesn't need kref for refcount
    coredump: move core dump functionality into its own file
    coredump: prevent double-free on an error path in core dumper
    usb/gadget: fix misannotations
    fcntl: fix misannotations
    ceph: don't abuse d_delete() on failure exits
    hypfs: ->d_parent is never NULL or negative
    vfs: delete surplus inode NULL check
    switch simple cases of fget_light to fdget
    new helpers: fdget()/fdput()
    switch o2hb_region_dev_write() to fget_light()
    proc_map_files_readdir(): don't bother with grabbing files
    make get_file() return its argument
    vhost_set_vring(): turn pollstart/pollstop into bool
    switch prctl_set_mm_exe_file() to fget_light()
    switch xfs_find_handle() to fget_light()
    switch xfs_swapext() to fget_light()
    ...

    Linus Torvalds
     

27 Sep, 2012

1 commit

  • Both modular callers of sock_map_fd() had been buggy; sctp one leaks
    descriptor and file if copy_to_user() fails, 9p one shouldn't be
    exposing file in the descriptor table at all.

    Switch both to sock_alloc_file(), export it, unexport sock_map_fd() and
    make it static.

    Signed-off-by: Al Viro

    Al Viro
     

15 Sep, 2012

1 commit

  • Conflicts:
    net/netfilter/nfnetlink_log.c
    net/netfilter/xt_LOG.c

    Rather easy conflict resolution, the 'net' tree had bug fixes to make
    sure we checked if a socket is a time-wait one or not and elide the
    logging code if so.

    Whereas on the 'net-next' side we are calculating the UID and GID from
    the creds using different interfaces due to the user namespace changes
    from Eric Biederman.

    Signed-off-by: David S. Miller

    David S. Miller
     

05 Sep, 2012

1 commit


04 Sep, 2012

1 commit

  • SCTP charges wmem_alloc via sctp_set_owner_w() in sctp_sendmsg() and via
    skb_set_owner_w() in sctp_packet_transmit(). If a sender runs out of
    sndbuf it will sleep in sctp_wait_for_sndbuf() and expects to be woken up
    by __sctp_write_space().

    Buffer space charged via sctp_set_owner_w() is released in sctp_wfree()
    which calls __sctp_write_space() directly.

    Buffer space charged via skb_set_owner_w() is released via sock_wfree()
    which calls sk->sk_write_space() _if_ SOCK_USE_WRITE_QUEUE is not set.
    sctp_endpoint_init() sets SOCK_USE_WRITE_QUEUE on all sockets.

    Therefore if sctp_packet_transmit() manages to queue up more than sndbuf
    bytes, sctp_wait_for_sndbuf() will never be woken up again unless it is
    interrupted by a signal.

    This could be fixed by clearing the SOCK_USE_WRITE_QUEUE flag but ...

    Charging for the data twice does not make sense in the first place; it
    leads to overcharging sndbuf by a factor of 2. Therefore this patch only
    charges a single byte in wmem_alloc when transmitting an SCTP packet to
    ensure that the socket stays alive until the packet has been released.

    This means that control chunks are no longer accounted for in wmem_alloc
    which I believe is not a problem as skb->truesize will typically lead
    to overcharging anyway and thus compensates for any control overhead.
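
    The accounting change can be pictured as two counters: sndbuf space is
    charged once at sendmsg time, and transmit adds only a single byte per
    in-flight packet to keep the socket referenced. A toy model with invented
    names, not the kernel's sk fields:

```c
/* Toy model: sndbuf space is charged once at sendmsg time; transmit
 * charges one byte per packet so the socket stays pinned until the
 * packet is released, without double-charging sndbuf. */
struct sock_model {
    int sndbuf_charged;   /* charged via sctp_set_owner_w() at sendmsg */
    int wmem_alloc;       /* kept > 0 while packets are in flight */
};

void model_transmit(struct sock_model *sk)
{
    sk->wmem_alloc += 1;  /* was: += skb->truesize (the double charge) */
}

void model_packet_released(struct sock_model *sk)
{
    sk->wmem_alloc -= 1;  /* the sock_wfree() side of the charge */
}
```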

    Signed-off-by: Thomas Graf
    CC: Vlad Yasevich
    CC: Neil Horman
    CC: David Miller
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Thomas Graf
     

25 Aug, 2012

1 commit


17 Aug, 2012

2 commits


15 Aug, 2012

17 commits


01 Aug, 2012

1 commit

  • This patch series is based on top of "Swap-over-NBD without deadlocking
    v15" as it depends on the same reservation of PF_MEMALLOC reserves logic.

    When a user or administrator requires swap for their application, they
    create a swap partition and file, format it with mkswap and activate it
    with swapon. In diskless systems this is not an option so if swap is
    required then swapping over the network is considered. The two likely
    scenarios are when blade servers are used as part of a cluster where the
    form factor or maintenance costs do not allow the use of disks and thin
    clients.

    The Linux Terminal Server Project recommends the use of the Network Block
    Device (NBD) for swap but this is not always an option. There is no
    guarantee that the network attached storage (NAS) device is running Linux
    or supports NBD. However, it is likely that it supports NFS so there are
    users that want support for swapping over NFS despite any performance
    concern. Some distributions currently carry patches that support swapping
    over NFS but it would be preferable to support it in the mainline kernel.

    Patch 1 avoids a stream-specific deadlock that potentially affects TCP.

    Patch 2 is a small modification to SELinux to avoid using PF_MEMALLOC
    reserves.

    Patch 3 adds three helpers for filesystems to handle swap cache pages.
    For example, page_file_mapping() returns page->mapping for
    file-backed pages and the address_space of the underlying
    swap file for swap cache pages.

    Patch 4 adds two address_space_operations to allow a filesystem
    to pin all metadata relevant to a swapfile in memory. Upon
    successful activation, the swapfile is marked SWP_FILE and
    the address space operation ->direct_IO is used for writing
    and ->readpage for reading in swap pages.

    Patch 5 notes that patch 3 is bolting
    filesystem-specific-swapfile-support onto the side and that
    the default handlers have different information to what
    is available to the filesystem. This patch refactors the
    code so that there are generic handlers for each of the new
    address_space operations.

    Patch 6 adds an API to allow a vector of kernel addresses to be
    translated to struct pages and pinned for IO.

    Patch 7 adds support for using highmem pages for swap by kmapping
    the pages before calling the direct_IO handler.

    Patch 8 updates NFS to use the helpers from patch 3 where necessary.

    Patch 9 avoids setting PG_private on PG_swapcache pages within NFS.

    Patch 10 implements the new swapfile-related address_space operations
    for NFS and teaches the direct IO handler how to manage
    kernel addresses.

    Patch 11 prevents page allocator recursions in NFS by using GFP_NOIO
    where appropriate.

    Patch 12 fixes a NULL pointer dereference that occurs when using
    swap-over-NFS.

    With the patches applied, it is possible to mount a swapfile that is on an
    NFS filesystem. Swap performance is not great with a swap stress test
    taking roughly twice as long to complete than if the swap device was
    backed by NBD.

    This patch: netvm: prevent a stream-specific deadlock

    It could happen that all !SOCK_MEMALLOC sockets have buffered so much data
    that we're over the global rmem limit. This will prevent SOCK_MEMALLOC
    buffers from receiving data, which will prevent userspace from running,
    which is needed to reduce the buffered data.

    Fix this by exempting the SOCK_MEMALLOC sockets from the rmem limit. Once
    this change is applied, it is important that sockets that set
    SOCK_MEMALLOC do not clear the flag until the socket is being torn down.
    If this happens, a warning is generated and the tokens reclaimed to avoid
    accounting errors until the bug is fixed.
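
    The exemption can be sketched as a single receive-path check: a socket
    flagged for memory reclaim is allowed past the rmem limit. Names here are
    illustrative, not the kernel's exact predicates:

```c
/* Return nonzero if an skb may be queued to this socket's receive
 * buffer.  SOCK_MEMALLOC sockets bypass the limit so the traffic that
 * ultimately frees memory (e.g. swap-over-NFS completions) is never
 * starved behind ordinary sockets that filled the global rmem budget. */
int may_queue_rmem(int rmem_alloc, int truesize, int rcvbuf,
                   int sock_memalloc)
{
    if (sock_memalloc)
        return 1;                        /* exempt from the rmem limit */
    return rmem_alloc + truesize <= rcvbuf;
}
```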

    [davem@davemloft.net: Warning about clearing SOCK_MEMALLOC]
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Mel Gorman
    Acked-by: David S. Miller
    Acked-by: Rik van Riel
    Cc: Trond Myklebust
    Cc: Neil Brown
    Cc: Christoph Hellwig
    Cc: Mike Christie
    Cc: Eric B Munson
    Cc: Sebastian Andrzej Siewior
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

24 Jul, 2012

1 commit

  • Use inet_iif() consistently, and for TCP record the input interface of
    cached RX dst in inet sock.

    rt->rt_iif is going to be encoded differently, so that we can
    legitimately cache input routes in the FIB info more aggressively.

    When the input interface is "use SKB device index" the rt->rt_iif will
    be set to zero.

    This forces us to move the TCP RX dst cache installation into the ipv4
    specific code, and as well it should since doing the route caching for
    ipv6 is pointless at the moment since it is not inspected in the ipv6
    input paths yet.

    Also, remove the unlikely on dst->obsolete, all ipv4 dsts have
    obsolete set to a non-zero value to force invocation of the check
    callback.

    Signed-off-by: David S. Miller

    David S. Miller
     

23 Jul, 2012

2 commits

  • The ipv4 routing cache is non-deterministic, performance wise, and is
    subject to reasonably easy to launch denial of service attacks.

    The routing cache works great for well behaved traffic, and the world
    was a much friendlier place when the tradeoffs that led to the routing
    cache's design were considered.

    What it boils down to is that the performance of the routing cache is
    a product of the traffic patterns seen by a system rather than being a
    product of the contents of the routing tables. The former of which is
    controllable by external entities.

    Even for "well behaved" legitimate traffic, high volume sites can see
    hit rates in the routing cache of only ~10%.

    The general flow of this patch series is that first the routing cache
    is removed. We build a completely new rtable entry every lookup
    request.

    Next we make some simplifications due to the fact that removing the
    routing cache causes several members of struct rtable to become no
    longer necessary.

    Then we need to make some amends such that we can legally cache
    pre-constructed routes in the FIB nexthops. Firstly, we need to
    invalidate routes which are hit with nexthop exceptions. Secondly we
    have to change the semantics of rt->rt_gateway such that zero means
    that the destination is on-link and non-zero otherwise.
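
    The new rt_gateway convention reduces next-hop selection to one branch. A
    sketch with addresses as plain integers standing in for __be32 values:

```c
typedef unsigned int be32;  /* stand-in for the kernel's __be32 */

/* Zero rt_gateway now means "destination is on-link": resolve the
 * packet's own destination address.  Nonzero means a real gateway. */
be32 rt_nexthop_model(be32 rt_gateway, be32 daddr)
{
    return rt_gateway ? rt_gateway : daddr;
}
```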

    Now that the preparations are ready, we start caching precomputed
    routes in the FIB nexthops. Output and input routes need different
    kinds of care when determining if we can legally do such caching or
    not. The details are in the commit log messages for those changes.

    The patch series then winds down with some more struct rtable
    simplifications and other tidy ups that remove unnecessary overhead.

    On a SPARC-T3 output route lookups are ~876 cycles. Input route
    lookups are ~1169 cycles with rpfilter disabled, and about ~1468
    cycles with rpfilter enabled.

    These measurements were taken with the kbench_mod test module in the
    net_test_tools GIT tree:

    git://git.kernel.org/pub/scm/linux/kernel/git/davem/net_test_tools.git

    That GIT tree also includes a udpflood tester tool and stresses
    route lookups on packet output.

    For example, on the same SPARC-T3 system we can run:

    time ./udpflood -l 10000000 10.2.2.11

    with routing cache:
    real 1m21.955s user 0m6.530s sys 1m15.390s

    without routing cache:
    real 1m31.678s user 0m6.520s sys 1m25.140s

    Performance undoubtedly can easily be improved further.

    For example fib_table_lookup() performs a lot of excessive
    computations with all the masking and shifting, some of it
    conditionalized to deal with edge cases.

    Also, Eric's no-ref optimization for input route lookups can be
    re-instated for the FIB nexthop caching code path. I would be really
    pleased if someone would work on that.

    In fact anyone suitably motivated can just fire up perf on the loading
    of the test net_test_tools benchmark kernel module. I spend much of
    my time going:

    bash# perf record insmod ./kbench_mod.ko dst=172.30.42.22 src=74.128.0.1 iif=2
    bash# perf report

    Thanks to helpful feedback from Joe Perches, Eric Dumazet, Ben
    Hutchings, and others.

    Signed-off-by: David S. Miller

    David S. Miller
     
  • I've seen several attempts recently made to do quick failover of sctp transports
    by reducing various retransmit timers and counters. While it's possible to
    implement a faster failover on multihomed sctp associations, it's not
    particularly robust, in that it can lead to unneeded retransmits, as well as
    false connection failures due to intermittent latency on a network.

    Instead, lets implement the new ietf quick failover draft found here:
    http://tools.ietf.org/html/draft-nishida-tsvwg-sctp-failover-05

    This will let the sctp stack identify transports that have had a small number of
    errors, and avoid using them quickly until their reliability can be
    re-established. I've tested this out on two virt guests connected via multiple
    isolated virt networks and believe it's in compliance with the above draft and
    works well.
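
    The draft's scheme inserts a "potentially failed" (PF) state between
    active and inactive: past a low error threshold the path is avoided but
    still probed, and only past the full path_max_retrans threshold is it
    declared down. A sketch of that classification; the threshold names mirror
    the SCTP knobs, but the state names and function are illustrative:

```c
enum path_state { PATH_ACTIVE, PATH_PF, PATH_INACTIVE };

/* Classify a transport from its consecutive error count.  pf_retrans
 * must be configured below path_max_retrans for PF to be reachable. */
enum path_state classify_path(int error_count, int pf_retrans,
                              int path_max_retrans)
{
    if (error_count > path_max_retrans)
        return PATH_INACTIVE;   /* traditional failure threshold */
    if (error_count > pf_retrans)
        return PATH_PF;         /* avoid for new data, keep probing */
    return PATH_ACTIVE;
}
```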

    Signed-off-by: Neil Horman
    CC: Vlad Yasevich
    CC: Sridhar Samudrala
    CC: "David S. Miller"
    CC: linux-sctp@vger.kernel.org
    CC: joe@perches.com
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Neil Horman
     

21 Jul, 2012

1 commit


20 Jul, 2012

1 commit


18 Jul, 2012

1 commit

  • These patches implement the final mechanism necessary to really allow
    us to go without the route cache in ipv4.

    We need a place to have long-term storage of PMTU/redirect information
    which is independent of the routes themselves, yet does not get us
    back into a situation where we have to write to metrics or anything
    like that.

    For this we use a "next-hop exception" table in the FIB nexthops.

    The one thing I desperately want to avoid is having to create clone
    routes in the FIB trie for this purpose, because that is very
    expensive. However, I'm willing to entertain such an idea later
    if this current scheme proves to have downsides that the FIB trie
    variant would not have.

    In order to accommodate any such scheme, we need to be able to
    produce a full flow key at PMTU/redirect time. That required an
    adjustment of the interface call-sites used to propagate these events.

    For a PMTU/redirect with a fully specified socket, we pass that socket
    and use it to produce the flow key.

    Otherwise we use a passed in SKB to formulate the key. There are two
    cases that need to be distinguished, ICMP message processing (in which
    case the IP header is at skb->data) and output packet processing
    (mostly tunnels, and in all such cases the IP header is at ip_hdr(skb)).
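
    The two SKB call-site flavors boil down to where the IPv4 header lives in
    the buffer. A minimal model of that distinction; the struct fields are a
    stand-in for the real sk_buff layout:

```c
/* In ICMP error processing the quoted header sits at skb->data; on the
 * output path (mostly tunnels) it is at the network header offset. */
struct skb_model {
    unsigned char *data;
    unsigned char *network_header;
};

unsigned char *iph_for_key(struct skb_model *skb, int from_icmp)
{
    return from_icmp ? skb->data : skb->network_header;
}
```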

    We also have to make the code able to handle the case where the dst
    itself passed into the dst_ops->{update_pmtu,redirect} method is
    invalidated. This matters for calls from sockets that have cached
    that route. We provide an inet{,6} helper function for this purpose,
    and edit SCTP specially since it caches routes at the transport rather
    than socket level.

    Signed-off-by: David S. Miller

    David S. Miller
     

17 Jul, 2012

3 commits

  • This will be used so that we can compose a full flow key.

    Even though we have a route in this context, we need more. In the
    future the routes will be without destination address, source address,
    etc. keying. One ipv4 route will cover entire subnets, etc.

    In this environment we have to have a way to possess persistent storage
    for redirects and PMTU information. This persistent storage will exist
    in the FIB tables, and that's why we'll need to be able to rebuild a
    full lookup flow key here. Using that flow key will do a fib_lookup()
    and create/update the persistent entry.

    Signed-off-by: David S. Miller

    David S. Miller
     
  • Fix the following sparse warning:
    * symbol 'sctp_init_cause_fixed' was not declared. Should it be
    static?
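
    The warning and its fix are generic C linkage hygiene: a helper used only
    inside one translation unit gets internal linkage, which silences sparse
    and keeps the symbol out of the global namespace. A generic illustration,
    not the actual SCTP function body:

```c
/* A function with external linkage but no prototype in any header makes
 * sparse ask whether it should be static; internal linkage documents
 * and enforces file-local use.  Names here are illustrative. */
static int chunk_cause_helper(void)
{
    return 0;
}

int use_helper(void)  /* exported caller, so the static isn't dead code */
{
    return chunk_cause_helper();
}
```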

    Signed-off-by: Ioan Orghici
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller

    Ioan Orghici
     
  • A few days ago Dave Jones reported this oops:

    [22766.294255] general protection fault: 0000 [#1] PREEMPT SMP
    [22766.295376] CPU 0
    [22766.295384] Modules linked in:
    [22766.387137] ffffffffa169f292 6b6b6b6b6b6b6b6b ffff880147c03a90
    ffff880147c03a74
    [22766.387135] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 00000000000
    [22766.387136] Process trinity-watchdo (pid: 10896, threadinfo ffff88013e7d2000,
    [22766.387137] Stack:
    [22766.387140] ffff880147c03a10
    [22766.387140] ffffffffa169f2b6
    [22766.387140] ffff88013ed95728
    [22766.387143] 0000000000000002
    [22766.387143] 0000000000000000
    [22766.387143] ffff880003fad062
    [22766.387144] ffff88013c120000
    [22766.387144]
    [22766.387145] Call Trace:
    [22766.387145]
    [22766.387150] [] ? __sctp_lookup_association+0x62/0xd0
    [sctp]
    [22766.387154] [] __sctp_lookup_association+0x86/0xd0 [sctp]
    [22766.387157] [] sctp_rcv+0x207/0xbb0 [sctp]
    [22766.387161] [] ? trace_hardirqs_off_caller+0x28/0xd0
    [22766.387163] [] ? nf_hook_slow+0x133/0x210
    [22766.387166] [] ? ip_local_deliver_finish+0x4c/0x4c0
    [22766.387168] [] ip_local_deliver_finish+0x18d/0x4c0
    [22766.387169] [] ? ip_local_deliver_finish+0x4c/0x4c0
    [22766.387171] [] ip_local_deliver+0x47/0x80
    [22766.387172] [] ip_rcv_finish+0x150/0x680
    [22766.387174] [] ip_rcv+0x214/0x320
    [22766.387176] [] __netif_receive_skb+0x7b7/0x910
    [22766.387178] [] ? __netif_receive_skb+0x11c/0x910
    [22766.387180] [] ? put_lock_stats.isra.25+0xe/0x40
    [22766.387182] [] netif_receive_skb+0x23/0x1f0
    [22766.387183] [] ? dev_gro_receive+0x139/0x440
    [22766.387185] [] napi_skb_finish+0x70/0xa0
    [22766.387187] [] napi_gro_receive+0xf5/0x130
    [22766.387218] [] e1000_receive_skb+0x59/0x70 [e1000e]
    [22766.387242] [] e1000_clean_rx_irq+0x28b/0x460 [e1000e]
    [22766.387266] [] e1000e_poll+0x78/0x430 [e1000e]
    [22766.387268] [] net_rx_action+0x1aa/0x3d0
    [22766.387270] [] ? account_system_vtime+0x10f/0x130
    [22766.387273] [] __do_softirq+0xe0/0x420
    [22766.387275] [] call_softirq+0x1c/0x30
    [22766.387278] [] do_softirq+0xd5/0x110
    [22766.387279] [] irq_exit+0xd5/0xe0
    [22766.387281] [] do_IRQ+0x63/0xd0
    [22766.387283] [] common_interrupt+0x6f/0x6f
    [22766.387283]
    [22766.387284]
    [22766.387285] [] ? retint_swapgs+0x13/0x1b
    [22766.387285] Code: c0 90 5d c3 66 0f 1f 44 00 00 4c 89 c8 5d c3 0f 1f 00 55 48
    89 e5 48 83
    ec 20 48 89 5d e8 4c 89 65 f0 4c 89 6d f8 66 66 66 66 90 b7 87 98 00 00 00
    48 89 fb
    49 89 f5 66 c1 c0 08 66 39 46 02
    [22766.387307]
    [22766.387307] RIP
    [22766.387311] [] sctp_assoc_is_match+0x19/0x90 [sctp]
    [22766.387311] RSP
    [22766.387142] ffffffffa16ab120
    [22766.599537] ---[ end trace 3f6dae82e37b17f5 ]---
    [22766.601221] Kernel panic - not syncing: Fatal exception in interrupt

    It appears from his analysis and some staring at the code that this is likely
    occurring because an association is getting freed while still on the
    sctp_assoc_hashtable. As a result, we get a gpf when traversing the hashtable
    while a freed node corrupts part of the list.

    Nominally I would think that an imbalanced refcount was responsible for this,
    but I can't seem to find any obvious imbalance. What I did note however was
    that the two places where we create an association using
    sctp_primitive_ASSOCIATE (__sctp_connect and sctp_sendmsg), have failure paths
    which free a newly created association after calling sctp_primitive_ASSOCIATE.
    sctp_primitive_ASSOCIATE brings us into the sctp_sf_do_prm_asoc path, which
    issues a SCTP_CMD_NEW_ASOC side effect, which in turn adds a new association to
    the aforementioned hash table. The sctp command interpreter that processes side
    effects has no way to unwind previously processed commands, so freeing the
    association from the __sctp_connect or sctp_sendmsg error path would lead to a
    freed association remaining on this hash table.

    I've fixed this by modifying sctp_[un]hash_established to use hlist_del_init,
    which allows us to properly use hlist_unhashed to check if the node is on a
    hashlist safely during a delete. That in turn allows us to safely call
    sctp_unhash_established in the __sctp_connect and sctp_sendmsg error paths
    before freeing them, regardless of what the association's state is on the hash
    list.
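
    The distinction can be shown with a two-pointer node like the kernel's
    hlist_node: hlist_unhashed() tests the pprev pointer, so deletion must
    re-initialize the pointers or a removed node still looks hashed. A minimal
    model of the hlist semantics (simplified; model_* names are ours):

```c
#include <stddef.h>

struct hnode { struct hnode *next, **pprev; };

int model_unhashed(const struct hnode *n)
{
    return n->pprev == NULL;    /* what hlist_unhashed() checks */
}

/* Like hlist_del_init(): unlink and clear, so a later check (e.g. from
 * an error path) sees the node as unhashed and a second delete is a
 * harmless no-op.  A plain hlist_del() would leave stale pointers. */
void model_del_init(struct hnode *n)
{
    if (model_unhashed(n))
        return;
    *n->pprev = n->next;
    if (n->next)
        n->next->pprev = n->pprev;
    n->next = NULL;
    n->pprev = NULL;            /* the crucial re-initialization */
}
```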

    I noted, while I was doing this, that __sctp_unhash_endpoint was using
    hlist_unhashed in a similar fashion, but never nullified any removed node's
    pointers to make that function work properly, so I fixed that up in a similar
    fashion.

    I attempted to test this using a virtual guest running the SCTP_RR test from
    netperf in a loop while running the trinity fuzzer, both in a loop. I wasn't
    able to recreate the problem prior to this fix, nor was I able to trigger the
    failure after (neither of which I suppose is surprising). Given the trace above
    however, I think it's likely that this is what we hit.

    Signed-off-by: Neil Horman
    Reported-by: davej@redhat.com
    CC: davej@redhat.com
    CC: "David S. Miller"
    CC: Vlad Yasevich
    CC: Sridhar Samudrala
    CC: linux-sctp@vger.kernel.org
    Signed-off-by: David S. Miller

    Neil Horman
     

16 Jul, 2012

1 commit


12 Jul, 2012

1 commit