16 Jul, 2016

1 commit

  • Use RDS probe-ping to compute how many paths may be used with
    the peer and to synchronously start the multiple paths. When multipath
    RDS (mprds) is supported by the transport, hash outgoing traffic to one
    of the multiple paths in rds_sendmsg().

    CC: Santosh Shilimkar
    Signed-off-by: Sowmini Varadhan
    Acked-by: Santosh Shilimkar
    Signed-off-by: David S. Miller

    Sowmini Varadhan
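    The hashing step described above can be sketched as follows. This is an
    illustrative stand-in, not the kernel code: the mixing constants and the
    idea of keying on the source port are assumptions for the sketch.

```c
#include <stdint.h>

/* Pick one of npaths transmit paths for a flow, keyed on the source
 * port so a given flow always lands on the same path. */
static uint32_t rds_mpath_hash_sketch(uint16_t sport, uint32_t npaths)
{
    uint32_t h = sport;

    /* simple integer mixing; the kernel uses its own hash helpers */
    h ^= h >> 16;
    h *= 0x45d9f3bU;
    h ^= h >> 16;
    return h % npaths;
}
```

    Keying on a stable per-flow value keeps packet ordering within a flow
    while still spreading distinct flows across the available paths.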
     

02 Jul, 2016

1 commit

  • Refactor code to avoid separate indirections for single-path
    and multipath transports. All transports (both single and mp-capable)
    will get a pointer to the rds_conn_path, and can trivially derive
    the rds_connection from the ->cp_conn.

    Acked-by: Santosh Shilimkar
    Signed-off-by: Sowmini Varadhan
    Signed-off-by: David S. Miller

    Sowmini Varadhan
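    The relationship described above can be sketched like this; the field
    and type names follow the description, while the path count and the
    init helper are assumptions for the sketch:

```c
#define RDS_NPATHS_SKETCH 8   /* illustrative path count */

struct rds_connection_sketch;

/* Each path carries a back-pointer to its owning connection, so code
 * handed only a path can trivially derive the connection via ->cp_conn. */
struct rds_conn_path_sketch {
    struct rds_connection_sketch *cp_conn;
};

struct rds_connection_sketch {
    struct rds_conn_path_sketch c_path[RDS_NPATHS_SKETCH];
};

static void conn_init_paths(struct rds_connection_sketch *conn)
{
    for (int i = 0; i < RDS_NPATHS_SKETCH; i++)
        conn->c_path[i].cp_conn = conn;
}
```

    With the back-pointer in place, single-path and multipath transports can
    share one set of indirections that takes the path as its argument.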
     

15 Jun, 2016

10 commits


08 Jun, 2016

1 commit


25 Nov, 2015

1 commit

  • Sasha found a NULL pointer dereference in the RDS connection code when
    sending a message to an apparently unbound socket. The problem is caused
    by the code checking whether the socket is bound in rds_sendmsg(), which
    reads the rs_bound_addr field without taking a lock on the socket. This
    opens a race where rs_bound_addr is temporarily set but the transport has
    not yet been set in rds_bind(), leading to a NULL pointer dereference
    when 'trans' is dereferenced in __rds_conn_create().

    Vegard wrote a reproducer for this issue; kindly ask him to share it if
    you're interested.

    I cannot reproduce the NULL pointer dereference using Vegard's reproducer
    with this patch, whereas I could without it.

    This completes the earlier, incomplete fix for CVE-2015-6937:

    74e98eb08588 ("RDS: verify the underlying transport exists before creating a connection")

    Cc: David S. Miller
    Cc: stable@vger.kernel.org

    Reviewed-by: Vegard Nossum
    Reviewed-by: Sasha Levin
    Acked-by: Santosh Shilimkar
    Signed-off-by: Quentin Casasnovas
    Signed-off-by: David S. Miller

    Quentin Casasnovas
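    The shape of the fix can be sketched in userspace terms: read the bound
    address and the transport under one lock, so a sender never observes a
    half-bound socket. All names are illustrative, and a pthread mutex
    stands in for the kernel's socket locking.

```c
#include <pthread.h>
#include <stddef.h>

struct rds_sock_sketch {
    pthread_mutex_t lock;
    unsigned int rs_bound_addr;   /* nonzero once bound */
    void *rs_transport;           /* set together with the address */
};

/* Sample both fields atomically; returns NULL for an unbound socket,
 * never a bound address with a missing transport. */
static void *bound_transport(struct rds_sock_sketch *rs)
{
    void *trans = NULL;

    pthread_mutex_lock(&rs->lock);
    if (rs->rs_bound_addr)
        trans = rs->rs_transport;
    pthread_mutex_unlock(&rs->lock);
    return trans;
}
```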
     

19 Oct, 2015

1 commit

  • Sowmini found a hang with rds-ping while testing RDS over TCP. It's
    a corner case and doesn't always happen. The issue is not reproducible
    with the IB transport. The dump below makes clear why we see it with
    RDS TCP.

    [] do_tcp_setsockopt+0xb5/0x740
    [] tcp_setsockopt+0x24/0x30
    [] sock_common_setsockopt+0x14/0x20
    [] rds_tcp_xmit_prepare+0x5d/0x70 [rds_tcp]
    [] rds_send_xmit+0xd7/0x740 [rds]
    [] rds_send_pong+0x142/0x180 [rds]
    [] rds_recv_incoming+0x274/0x330 [rds]
    [] ? ttwu_queue+0x11e/0x130
    [] ? skb_copy_bits+0x6d/0x2c0
    [] rds_tcp_data_recv+0x2f0/0x3d0 [rds_tcp]
    [] tcp_read_sock+0x96/0x1c0
    [] ? rds_tcp_recv_init+0x40/0x40 [rds_tcp]
    [] ? sock_def_write_space+0xa0/0xa0
    [] rds_tcp_data_ready+0xa1/0xf0 [rds_tcp]
    [] tcp_data_queue+0x379/0x5b0
    [] ? rds_tcp_write_space+0xbb/0x110 [rds_tcp]
    [] tcp_rcv_established+0x2e2/0x6e0
    [] tcp_v4_do_rcv+0x122/0x220
    [] tcp_v4_rcv+0x867/0x880
    [] ip_local_deliver_finish+0xa3/0x220

    This happens because the rds_send_xmit() chain wants to take the
    sock_lock, which is already taken by tcp_v4_rcv() on its
    way to rds_tcp_data_ready(). This was introduced by commit db6526dcb51b
    ("RDS: use rds_send_xmit() state instead of RDS_LL_SEND_FULL"), which
    was trying to opportunistically finish the send request
    in the same thread context.

    But because of the recursive-lock hang described above with RDS TCP,
    the send work from rds_send_pong() needs to be deferred to the
    worker to avoid the lockup. Given that RDS ping is more of a connectivity
    test than a performance-critical path, this should be fine even
    for transports like IB.

    Reported-by: Sowmini Varadhan
    Acked-by: Sowmini Varadhan
    Signed-off-by: Santosh Shilimkar
    Signed-off-by: David S. Miller

    santosh.shilimkar@oracle.com
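    The fix's shape can be sketched minimally: the receive path only marks
    the pong pending, and the worker transmits it later, outside the receive
    path's locks. This is an illustrative stand-in for queueing work onto
    rds_wq, not the kernel code; all names are assumptions.

```c
#include <stdbool.h>

static bool pong_pending;

/* Called from the receive path: do NOT transmit here (that would
 * retake the socket lock); just hand the work to the worker. */
static void rds_send_pong_sketch(void)
{
    pong_pending = true;   /* queue_delayed_work(rds_wq, ...) in the kernel */
}

/* Worker context: safe to take the socket lock and transmit. */
static int rds_worker_run_sketch(void)
{
    if (!pong_pending)
        return 0;
    pong_pending = false;
    return 1;              /* one pong transmitted */
}
```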
     

06 Oct, 2015

3 commits

  • Similar to what we did with receive CQ completion handling, we split
    the transmit completion handler so that it lets us implement batched
    work completion handling.

    We re-use the cq_poll routine and make use of RDS_IB_SEND_OP to
    distinguish send from receive completion events when invoking the handler.

    Signed-off-by: Santosh Shilimkar

    Santosh Shilimkar
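    The shared poll loop described above can be sketched as a dispatch on
    the operation kind. The enum values here stand in for RDS_IB_SEND_OP
    tagging and are assumptions, not the kernel definitions:

```c
enum cq_op_sketch { CQ_OP_RECV = 0, CQ_OP_SEND = 1 };

static int sends_handled, recvs_handled;

/* One poll routine shared by both CQs; the tagged op selects the
 * send- vs receive-completion handler. */
static void cq_poll_one_sketch(enum cq_op_sketch op)
{
    if (op == CQ_OP_SEND)
        sends_handled++;   /* transmit-completion handling */
    else
        recvs_handled++;   /* receive-completion handling */
}
```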
     
  • The transport-independent rds_sendmsg() shouldn't make decisions based
    on RDS_LL_SEND_FULL, which is used to manage the ring for RDMA-based
    transports. We can safely issue rds_send_xmit() and, using its
    return value, decide whether to defer work. This also fixes
    the scenario where at times we see connections stuck with
    the LL_SEND_FULL bit set and never cleared.

    We kick krdsd whenever we see -ENOMEM or -EAGAIN from the
    ring allocation code.

    Signed-off-by: Santosh Shilimkar

    Santosh Shilimkar
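    The decision described above can be sketched as a pure function of
    rds_send_xmit()'s return value. This is illustrative only; the real
    code has additional restart conditions, and the helper name is an
    assumption.

```c
#include <errno.h>

/* Kick the krdsd worker only when the transmit path ran out of
 * resources, instead of keying off the RDS_LL_SEND_FULL flag. */
static int needs_worker_kick(int xmit_ret)
{
    return xmit_ret == -ENOMEM || xmit_ret == -EAGAIN;
}
```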
     
  • The current process gives up if its send work is over the batch limit.
    The work queue will get kicked to finish off any other requests.
    This fixes the remainder condition from commit 443be0e5affe ("RDS: make
    sure not to loop forever inside rds_send_xmit").

    The restart condition is only for the case where we reached the
    over_batch code for some other reason, so we just retry once more
    before giving up.

    While at it, make sure we use the already available 'send_batch_count'
    parameter instead of a magic value. The batch count threshold value
    of 1024 came via commit 443be0e5affe ("RDS: make sure not to loop
    forever inside rds_send_xmit"). The idea is to process as big a
    batch as we can, but at the same time not hold up other processes
    waiting to send. Hence we back off after the send_batch_count
    limit (1024) to avoid soft lockups.

    Signed-off-by: Santosh Shilimkar

    Santosh Shilimkar
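    The batching policy can be sketched as follows, with 1024 as the
    send_batch_count cited above; the loop body is an illustrative stand-in
    for actually handing a message to the transport.

```c
static const int send_batch_count = 1024;   /* back-off threshold */

/* Transmit up to send_batch_count queued messages, then stop so the
 * remainder can be deferred to the worker instead of looping forever. */
static int xmit_batch_sketch(int queued)
{
    int sent = 0;

    while (queued > 0 && sent < send_batch_count) {
        queued--;          /* one message handed to the transport */
        sent++;
    }
    return sent;
}
```

    Capping one caller's batch bounds the time spent holding the transmit
    path, which is what prevents the soft lockups mentioned above.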
     

26 Aug, 2015

4 commits

  • rds_send_queue_rm() allows the "current datagram" being queued
    to exceed SO_SNDBUF thresholds by checking the bytes queued without
    counting the length of the current datagram. (Since sk_sndbuf is set
    to twice the requested SO_SNDBUF value as a kernel heuristic, this
    is usually fine!)

    If the "current datagram" squeezing past the threshold is itself
    many times the size of the sk_sndbuf threshold, then even
    twice the SO_SNDBUF does not save us: it gets queued but
    cannot be transmitted. Threads block, deadlock, and the device
    becomes unusable. The check that a datagram does not exceed
    SNDBUF thresholds (EMSGSIZE) is not applied to this datagram, because
    that check is only done when a queueing attempt fails.
    (Datagrams that follow this one fail their queueing attempts, go
    through the check, and eventually trip the EMSGSIZE error, but
    zero-length datagrams silently fail!)

    This fix moves the check for datagrams exceeding SNDBUF limits
    before any processing or queueing is attempted and returns EMSGSIZE
    early in the rds_sendmsg() code. It also ensures that all
    datagrams get checked against the SNDBUF/sk_sndbuf size limits,
    and that large datagrams exceeding those limits never reach the
    rds_send_queue_rm() code for processing.

    Signed-off-by: Mukesh Kacker
    Signed-off-by: Santosh Shilimkar
    Signed-off-by: David S. Miller

    Mukesh Kacker
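    The early check the fix describes can be sketched as a pure function.
    The halving reflects the kernel heuristic noted above (sk_sndbuf holds
    twice the requested SO_SNDBUF); the function name and signature are
    illustrative, not the kernel's.

```c
#include <errno.h>
#include <stddef.h>

/* Reject oversized datagrams before any processing or queueing is
 * attempted, so a huge message can never be queued yet untransmittable. */
static int rds_check_msgsize_sketch(size_t payload_len, size_t sk_sndbuf)
{
    if (payload_len > sk_sndbuf / 2)   /* compare against SO_SNDBUF */
        return -EMSGSIZE;
    return 0;
}
```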
     
  • rds_send_drop_to() is used during socket teardown to find all the
    messages on the socket and flush them. It can race with the
    acking code unless it takes the m_rs_lock on each and every message.

    This plugs a hole where we didn't take m_rs_lock on any message that
    didn't have the RDS_MSG_ON_CONN flag set. Taking m_rs_lock avoids
    double frees and other memory corruption, since the ack code trusts
    the message's m_rs pointer even on a socket that has actually been freed.

    We must take m_rs_lock to access m_rs. Because of lock nesting and
    rs access, we also need to acquire rs_lock.

    Reviewed-by: Ajaykumar Hotchandani
    Signed-off-by: Santosh Shilimkar
    Signed-off-by: David S. Miller

    santosh.shilimkar@oracle.com
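    The nesting described above (rs_lock taken around m_rs_lock before m_rs
    is touched) can be sketched with pthread mutexes standing in for the
    kernel spinlocks; all names are illustrative.

```c
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t rs_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m_rs_lock = PTHREAD_MUTEX_INITIALIZER;
static void *m_rs;   /* the message's socket back-pointer */

/* Always take rs_lock before m_rs_lock, and only touch m_rs while
 * m_rs_lock is held, so the ack path cannot race message teardown. */
static void *read_m_rs_sketch(void)
{
    void *rs;

    pthread_mutex_lock(&rs_lock);
    pthread_mutex_lock(&m_rs_lock);
    rs = m_rs;
    pthread_mutex_unlock(&m_rs_lock);
    pthread_mutex_unlock(&rs_lock);
    return rs;
}
```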
     
  • rds_send_xmit() marks the rds message map flag after
    xmit_[rdma/atomic](), which is clearly wrong. We need
    to maintain the ownership handoff between the transport and rds.

    Also take care of the error path.

    Reviewed-by: Ajaykumar Hotchandani
    Signed-off-by: Santosh Shilimkar
    Signed-off-by: David S. Miller

    santosh.shilimkar@oracle.com
     
  • Ensure we don't keep sending the data if the link is congested.

    Reviewed-by: Ajaykumar Hotchandani
    Signed-off-by: Santosh Shilimkar
    Signed-off-by: David S. Miller

    santosh.shilimkar@oracle.com
     

08 Aug, 2015

1 commit


15 Apr, 2015

1 commit


09 Apr, 2015

1 commit

  • If a determined set of concurrent senders keeps the send queue full,
    we can loop forever inside rds_send_xmit(). This fix has two parts.

    First, we drop out of the while (1) loop after we've processed a
    large batch of messages.

    Second, we add a generation number that gets bumped each time the
    xmit bit lock is acquired. If someone else has jumped in and
    made progress in the queue, we skip our goto restart.

    Original patch by Chris Mason.

    Signed-off-by: Sowmini Varadhan
    Signed-off-by: David S. Miller

    Sowmini Varadhan
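    The generation-number idea can be sketched in a few lines: the counter
    is bumped on every acquisition of the xmit lock, and a racing sender
    restarts only if the generation is unchanged. This is a single-threaded,
    illustrative stand-in; the names are assumptions.

```c
/* Bumped each time the transmit bit lock is taken. */
static unsigned int send_gen;

static unsigned int acquire_xmit_sketch(void)
{
    return ++send_gen;
}

/* Restart only when nobody else made progress since we last held
 * the lock; otherwise their batch already worked the queue. */
static int should_restart_sketch(unsigned int my_gen)
{
    return send_gen == my_gen;
}
```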
     

03 Mar, 2015

1 commit

  • Now that TIPC no longer depends on the iocb argument in its internal
    implementations of the sendmsg() and recvmsg() hooks defined in the
    proto structure, no user of the argument remains. We can therefore drop
    the redundant iocb argument completely from all implementations of both
    sendmsg() and recvmsg() in the entire networking stack.

    Cc: Christoph Hellwig
    Suggested-by: Al Viro
    Signed-off-by: Ying Xue
    Signed-off-by: David S. Miller

    Ying Xue
     

11 Dec, 2014

1 commit


10 Dec, 2014

1 commit

  • Note that the code _using_ ->msg_iter at that point will be very
    unhappy with anything other than an unshifted, iovec-backed iov_iter.
    We still need to convert users to proper primitives.

    Signed-off-by: Al Viro

    Al Viro
     

24 Nov, 2014

1 commit


04 Oct, 2014

1 commit

  • I got a report of a double free happening in the RDS slab cache. One
    suspicion was that maybe somewhere we were doing a sock_hold/sock_put
    on an already freed sock. Thus, after providing a kernel with the
    following change:

    static inline void sock_hold(struct sock *sk)
    {
    -	atomic_inc(&sk->sk_refcnt);
    +	if (!atomic_inc_not_zero(&sk->sk_refcnt))
    +		WARN(1, "Trying to hold sock already gone: %p (family: %hd)\n",
    +		     sk, sk->sk_family);
    }

    The warning successfully triggered:

    Trying to hold sock already gone: ffff81f6dda61280 (family: 21)
    WARNING: at include/net/sock.h:350 sock_hold()
    Call Trace:
    [] :rds:rds_send_remove_from_sock+0xf0/0x21b
    [] :rds:rds_send_drop_acked+0xbf/0xcf
    [] :rds_rdma:rds_ib_recv_tasklet_fn+0x256/0x2dc
    [] tasklet_action+0x8f/0x12b
    [] __do_softirq+0x89/0x133
    [] call_softirq+0x1c/0x28
    [] do_softirq+0x2c/0x7d
    [] do_IRQ+0xee/0xf7
    [] ret_from_intr+0x0/0xa

    Looking at the call chain above, the only way I think this would be
    possible is if somewhere we had already released the same socket->sock
    that is assigned to the rds_message in rds_send_remove_from_sock, which
    seems possible only after the teardown done in rds_release.

    rds_release properly calls rds_send_drop_to to drop the socket from any
    rds_message, and proper synchronization is in place to avoid races
    with rds_send_drop_acked/rds_send_remove_from_sock. However, I still see
    a very narrow window where we may touch a sock that was already
    released: when rds_release races with rds_send_drop_acked, we check
    RDS_MSG_ON_CONN to avoid cleanup on the same rds_message, but in this
    specific case we don't clear rm->m_rs. We can then proceed in
    rds_send_drop_to, and after it returns the sock is freed by the
    last sock_put in rds_release while we are concurrently inside
    rds_send_remove_from_sock; at some point in that function's loop we
    process an rds_message whose rm->m_rs was never unset for the freed
    sock, and a sock_hold on a sock already gone in rds_release happens.

    This hopefully addresses the condition described above and avoids a
    double free on the "second last" sock_put. In addition, I removed the
    comment about socket destruction on top of rds_send_drop_acked: we call
    rds_send_drop_to in rds_release and things should be properly serialized
    there, so I can't see how the comment is accurate.

    Signed-off-by: Herton R. Krzesinski
    Signed-off-by: David S. Miller

    Herton R. Krzesinski
     

18 Apr, 2014

1 commit

  • Mostly scripted conversion of the smp_mb__* barriers.

    Signed-off-by: Peter Zijlstra
    Acked-by: Paul E. McKenney
    Link: http://lkml.kernel.org/n/tip-55dhyhocezdw1dg7u19hmh1u@git.kernel.org
    Cc: Linus Torvalds
    Cc: linux-arch@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

19 Jan, 2014

1 commit

  • This is a follow-up patch to f3d3342602f8bc ("net: rework recvmsg
    handler msg_name and msg_namelen logic").

    DECLARE_SOCKADDR validates that the structure we use for writing the
    name information to is not larger than the buffer which is reserved
    for msg->msg_name (which is 128 bytes). Also use DECLARE_SOCKADDR
    consistently in sendmsg code paths.

    Signed-off-by: Steffen Hurrle
    Suggested-by: Hannes Frederic Sowa
    Acked-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Steffen Hurrle
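    The guarantee DECLARE_SOCKADDR enforces can be sketched as a
    compile-time size check against the 128-byte msg_name buffer. The
    struct here is an illustrative stand-in, not the real sockaddr
    definition, and the macro name is an assumption.

```c
#include <stddef.h>

#define MSG_NAME_BUF_SIZE 128   /* size reserved for msg->msg_name */

struct sockaddr_in_sketch {
    unsigned short family;
    unsigned short port;
    unsigned int   addr;
};

/* Fails the build if the name struct could overrun msg_name. */
_Static_assert(sizeof(struct sockaddr_in_sketch) <= MSG_NAME_BUF_SIZE,
               "sockaddr must fit in the msg_name buffer");
```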
     

10 Oct, 2012

1 commit

  • This is the revised patch for fixing rds-ping spinlock recursion
    according to Venkat's suggestions.

    The RDS ping/pong over TCP feature has been broken for years (2.6.39 to
    3.6.0), since we have to set the TCP cork and call kernel_sendmsg()
    between ping and pong, both of which need to lock "struct sock *sk".
    However, this lock is already held before the rds_tcp_data_ready()
    callback is triggered. As a result, we always face spinlock recursion,
    which results in a system panic.

    Given that RDS ping is only used to test the connectivity and not for
    serious performance measurements, we can queue the pong transmit to
    rds_wq as a delayed response.

    Reported-by: Dan Carpenter
    CC: Venkat Venkatsubra
    CC: David S. Miller
    CC: James Morris
    Signed-off-by: Jie Liu
    Signed-off-by: David S. Miller

    jeff.liu
     

21 Mar, 2012

1 commit


01 Nov, 2011

2 commits


17 Jun, 2011

1 commit


31 Mar, 2011

1 commit


31 Oct, 2010

1 commit

  • Even with the previous fix, we are still reading the iovecs once
    to determine the SGs needed, and then again later on. Preallocating
    space for SG lists as part of rds_message seemed like a good idea,
    but it might be better not to do this. While working to redo that
    code, this patch attempts to protect against userspace rewriting
    the rds_iovec array between the first and second accesses.

    The consequence of this would be either a too-small or too-large
    SG list array. Too large is not an issue. This patch changes all
    callers of message_alloc_sgs to handle running out of preallocated
    SGs and fail gracefully.

    Signed-off-by: Andy Grover
    Signed-off-by: David S. Miller

    Andy Grover
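    The protection described above can be sketched as: copy the user's
    iovec array into kernel memory once, and compute every size from that
    snapshot, so a concurrent rewrite by userspace cannot change the counts
    between the first and second accesses. Names are illustrative, and
    memcpy stands in for the kernel's copy_from_user().

```c
#include <string.h>
#include <stddef.h>

struct iovec_sketch {
    void  *iov_base;
    size_t iov_len;
};

/* Snapshot the (untrusted) array, then size the SG list only from
 * the snapshot; later passes must reuse kcopy, never uvec. */
static size_t snapshot_and_sum(struct iovec_sketch *kcopy,
                               const struct iovec_sketch *uvec, int n)
{
    size_t sum = 0;

    memcpy(kcopy, uvec, n * sizeof(*kcopy));
    for (int i = 0; i < n; i++)
        sum += kcopy[i].iov_len;
    return sum;
}
```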
     

21 Oct, 2010

1 commit