18 Sep, 2014

1 commit

  • commit 73c3d4812b4c755efeca0140f606f83772a39ce4 upstream.

    We preallocate a few of the message types we get back from the mon. If we
    get a larger message than we are expecting, fall back to trying to allocate
    a new one instead of blindly using the one we have.
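
    The fallback pattern described above can be sketched in userspace C (all names here are illustrative, not the actual libceph API): reuse the preallocated buffer for the common case, and allocate a fresh one only when the incoming message is larger.

```c
#include <stdlib.h>

/* Hypothetical sketch of the fallback: reuse a preallocated buffer
 * for the common case, but allocate a fresh one (instead of blindly
 * reusing the small one) when the peer sends more data. */
struct msg_buf {
    char  *data;
    size_t cap;     /* usable capacity */
    int    heap;    /* 1 if data was malloc'd for an oversized message */
};

char *get_reply_buf(struct msg_buf *m, char *prealloc, size_t prealloc_cap,
                    size_t need)
{
    if (need <= prealloc_cap) {          /* common case: reuse */
        m->data = prealloc;
        m->cap  = prealloc_cap;
        m->heap = 0;
    } else {                             /* oversized: fall back to malloc */
        m->data = malloc(need);
        m->cap  = m->data ? need : 0;
        m->heap = m->data != NULL;
    }
    return m->data;
}
```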

    Signed-off-by: Sage Weil
    Reviewed-by: Ilya Dryomov
    Signed-off-by: Jiri Slaby

    Sage Weil
     

17 Sep, 2014

6 commits

  • commit c27a3e4d667fdcad3db7b104f75659478e0c68d8 upstream.

    We hard code cephx auth ticket buffer size to 256 bytes. This isn't
    enough for any moderate setups and, in case tickets themselves are not
    encrypted, leads to buffer overflows (ceph_x_decrypt() errors out, but
    ceph_decode_copy() doesn't - it's just a memcpy() wrapper). Since the
    buffer is allocated dynamically anyway, allocate it a bit later, at
    the point where we know how much is going to be needed.
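
    The allocation strategy of the fix can be sketched as a minimal userspace analogue (decode_ticket() and the plain host-order length prefix are assumptions for illustration, not the cephx wire format): decode the length first, then size the buffer to it.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: read the length field first, then size the
 * ticket buffer to it, rather than copying into a fixed 256-byte
 * array that a larger ticket would overflow. */
unsigned char *decode_ticket(const unsigned char *p, size_t avail,
                             uint32_t *len_out)
{
    uint32_t len;
    unsigned char *buf;

    if (avail < 4)
        return NULL;                     /* not even a length prefix */
    memcpy(&len, p, 4);                  /* length prefix, host order here */
    if (len > avail - 4)
        return NULL;                     /* truncated input */
    buf = malloc(len);                   /* sized to the actual ticket */
    if (!buf)
        return NULL;
    memcpy(buf, p + 4, len);
    *len_out = len;
    return buf;
}
```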

    Fixes: http://tracker.ceph.com/issues/8979

    Signed-off-by: Ilya Dryomov
    Reviewed-by: Sage Weil
    Signed-off-by: Jiri Slaby

    Ilya Dryomov
     
  • commit 597cda357716a3cf8d994cb11927af917c8d71fa upstream.

    Add a helper for processing individual cephx auth tickets. Needed for
    the next commit, which deals with allocating ticket buffers. (Most of
    the diff here is whitespace - view with git diff -b).

    Signed-off-by: Ilya Dryomov
    Reviewed-by: Sage Weil
    Signed-off-by: Jiri Slaby

    Ilya Dryomov
     
  • commit 5f740d7e1531099b888410e6bab13f68da9b1a4d upstream.

    Determining ->last_piece based on the value of ->page_offset + length
    is incorrect because length here is the length of the entire message.
    ->last_piece is set to false even if the page array data item length
    is <= PAGE_SIZE, which results in an invalid length passed to
    ceph_tcp_{send,recv}page() and causes various asserts to fire.

    # cat pages-cursor-init.sh
    #!/bin/bash
    rbd create --size 10 --image-format 2 foo
    FOO_DEV=$(rbd map foo)
    dd if=/dev/urandom of=$FOO_DEV bs=1M &>/dev/null
    rbd snap create foo@snap
    rbd snap protect foo@snap
    rbd clone foo@snap bar
    # rbd_resize calls librbd rbd_resize(), size is in bytes
    ./rbd_resize bar $(((4 << 20) + 512))
    rbd resize --size 10 bar
    BAR_DEV=$(rbd map bar)
    # trigger a 512-byte copyup -- 512-byte page array data item
    dd if=/dev/urandom of=$BAR_DEV bs=1M count=1 seek=5

    The problem exists only in ceph_msg_data_pages_cursor_init();
    ceph_msg_data_pages_advance() does the right thing. The size_t cast is
    unnecessary.
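
    The distinction the fix draws can be illustrated with a tiny predicate (cursor_last_piece() is a hypothetical stand-in, not the kernel helper): whether the first piece is also the last depends on the data item's own length, not the whole message's.

```c
#include <stddef.h>

#define PAGE_SIZE 4096

/* Sketch: a cursor's first piece is also its last iff the page array
 * data item itself fits in the remainder of the first page; using the
 * entire message length here instead is exactly the bug. */
int cursor_last_piece(size_t page_offset, size_t item_length)
{
    return page_offset + item_length <= PAGE_SIZE;
}
```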

    Signed-off-by: Ilya Dryomov
    Reviewed-by: Sage Weil
    Reviewed-by: Alex Elder
    Signed-off-by: Jiri Slaby

    Ilya Dryomov
     
  • commit 32333edb82fb2009980eefc5518100068147ab82 upstream.

    The commits 08c30aca9e698faddebd34f81e1196295f9dc063 "Bluetooth: Remove
    RFCOMM session refcnt" and 8ff52f7d04d9cc31f1e81dcf9a2ba6335ed34905
    "Bluetooth: Return RFCOMM session ptrs to avoid freed session"
    allow rfcomm_recv_ua and rfcomm_session_close to delete the session
    (and free the corresponding socket) and propagate NULL session pointer
    to the upper callers.

    Additional fix is required to terminate the loop in rfcomm_process_rx
    function to avoid use of freed 'sk' memory.

    The issue is only reproducible with kernel option CONFIG_PAGE_POISONING
    enabled making freed memory being changed and filled up with fixed char
    value used to unmask use-after-free issues.

    Signed-off-by: Vignesh Raman
    Signed-off-by: Vitaly Kuzmichev
    Acked-by: Dean Jenkins
    Signed-off-by: Marcel Holtmann
    Signed-off-by: Jiri Slaby

    Vignesh Raman
     
  • commit 093facf3634da1b0c2cc7ed106f1983da901bbab upstream.

    If the current process is exiting, lingering on socket close will make
    it unkillable, so we should avoid it.

    Reproducer:

    #include <sys/types.h>
    #include <sys/socket.h>

    #define BTPROTO_L2CAP 0
    #define BTPROTO_SCO 2
    #define BTPROTO_RFCOMM 3

    int main()
    {
            int fd;
            struct linger ling;

            fd = socket(PF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);
            //or: fd = socket(PF_BLUETOOTH, SOCK_DGRAM, BTPROTO_L2CAP);
            //or: fd = socket(PF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_SCO);

            ling.l_onoff = 1;
            ling.l_linger = 1000000000;
            setsockopt(fd, SOL_SOCKET, SO_LINGER, &ling, sizeof(ling));

            return 0;
    }

    Signed-off-by: Vladimir Davydov
    Signed-off-by: Marcel Holtmann
    Signed-off-by: Jiri Slaby

    Vladimir Davydov
     
  • commit 3c45ddf823d679a820adddd53b52c6699c9a05ac upstream.

    The current code always selects XPRT_TRANSPORT_BC_TCP for the back
    channel, even when the forward channel was not TCP (eg, RDMA). When
    a 4.1 mount is attempted with RDMA, the server panics in the TCP BC
    code when trying to send CB_NULL.

    Instead, construct the transport protocol number from the forward
    channel transport or'd with XPRT_TRANSPORT_BC. Transports that do
    not support bi-directional RPC will not have registered a "BC"
    transport, causing create_backchannel_client() to fail immediately.
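
    The composition described above can be sketched like this (the constant values are illustrative; only XPRT_TRANSPORT_BC's role as a high flag bit matters): a forward channel without a registered "BC" variant then fails transport lookup instead of being forced onto TCP.

```c
/* Illustrative transport ids -- not the real kernel values. The point
 * is the composition: backchannel id = forward-channel id | BC flag. */
#define XPRT_TRANSPORT_TCP  0x1u
#define XPRT_TRANSPORT_RDMA 0x2u
#define XPRT_TRANSPORT_BC   0x80000000u

/* Derive the backchannel transport id from the forward channel's id. */
unsigned int bc_transport_id(unsigned int fwd_proto)
{
    return fwd_proto | XPRT_TRANSPORT_BC;
}
```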

    Fixes: https://bugzilla.linux-nfs.org/show_bug.cgi?id=265
    Signed-off-by: Chuck Lever
    Signed-off-by: J. Bruce Fields
    Signed-off-by: Jiri Slaby

    Chuck Lever
     

26 Aug, 2014

2 commits

  • commit 5c0a0fc81f4dc786b42c4fc9c7c72ba635406ab5 upstream.

    tipc_msg_build() calls skb_copy_to_linear_data_offset() to copy data
    from user space to kernel space. However, the latter function does
    in its turn call memcpy() to perform the actual copying. This poses
    an obvious security and robustness risk, since memcpy() never makes
    any validity check on the pointer it is copying from.

    To correct this, we replace the offending function call with
    a call to memcpy_fromiovecend(), which uses copy_from_user() to
    perform the copying.
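
    Shape-wise, what memcpy_fromiovecend() does can be sketched in userspace C (this analogue uses plain memcpy() and assumes the caller has validated the total length; the kernel version validates each piece via copy_from_user()):

```c
#include <string.h>
#include <sys/uio.h>

/* Walk the iovec array from a byte offset, copying piece by piece.
 * In the kernel each piece goes through copy_from_user(), which
 * checks the source pointer; the raw memcpy() here mirrors only the
 * iteration shape, and the caller must guarantee the iovecs cover
 * offset + len bytes. */
int copy_fromiovecend(unsigned char *kdata, const struct iovec *iov,
                      size_t offset, size_t len)
{
    for (; len > 0; iov++) {
        if (offset >= iov->iov_len) {    /* skip fully-consumed entries */
            offset -= iov->iov_len;
            continue;
        }
        size_t copy = iov->iov_len - offset;
        if (copy > len)
            copy = len;
        memcpy(kdata, (unsigned char *)iov->iov_base + offset, copy);
        offset = 0;
        kdata += copy;
        len   -= copy;
    }
    return 0;
}
```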

    Signed-off-by: Ying Xue
    Reviewed-by: Paul Gortmaker
    Signed-off-by: Jon Maloy
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Ying Xue
     
  • commit 448cd2e248732326632957e52ea9c44729affcb2 upstream.

    In case of beacon_loss with IEEE80211_HW_CONNECTION_MONITOR
    device, mac80211 probes the ap (and disconnects on timeout)
    but ignores the ack.

    If we already got an ack, there's no reason to continue
    disconnecting. This can help devices that support
    IEEE80211_HW_CONNECTION_MONITOR only partially (e.g. take
    care of keepalives, but do not probe the AP).

    In case the device wants to disconnect without probing,
    it can just call ieee80211_connection_loss.

    Signed-off-by: Eliad Peller
    Signed-off-by: Johannes Berg
    Signed-off-by: Jiri Slaby

    Eliad Peller
     

19 Aug, 2014

12 commits

  • [ Upstream commit 757efd32d5ce31f67193cc0e6a56e4dffcc42fb1 ]

    Dave reported following splat, caused by improper use of
    IP_INC_STATS_BH() in process context.

    BUG: using __this_cpu_add() in preemptible [00000000] code: trinity-c117/14551
    caller is __this_cpu_preempt_check+0x13/0x20
    CPU: 3 PID: 14551 Comm: trinity-c117 Not tainted 3.16.0+ #33
    ffffffff9ec898f0 0000000047ea7e23 ffff88022d32f7f0 ffffffff9e7ee207
    0000000000000003 ffff88022d32f818 ffffffff9e397eaa ffff88023ee70b40
    ffff88022d32f970 ffff8801c026d580 ffff88022d32f828 ffffffff9e397ee3
    Call Trace:
    [] dump_stack+0x4e/0x7a
    [] check_preemption_disabled+0xfa/0x100
    [] __this_cpu_preempt_check+0x13/0x20
    [] sctp_packet_transmit+0x692/0x710 [sctp]
    [] sctp_outq_flush+0x2a2/0xc30 [sctp]
    [] ? mark_held_locks+0x7c/0xb0
    [] ? _raw_spin_unlock_irqrestore+0x5d/0x80
    [] sctp_outq_uncork+0x1a/0x20 [sctp]
    [] sctp_cmd_interpreter.isra.23+0x1142/0x13f0 [sctp]
    [] sctp_do_sm+0xdb/0x330 [sctp]
    [] ? preempt_count_sub+0xab/0x100
    [] ? sctp_cname+0x70/0x70 [sctp]
    [] sctp_primitive_ASSOCIATE+0x3a/0x50 [sctp]
    [] sctp_sendmsg+0x88f/0xe30 [sctp]
    [] ? lock_release_holdtime.part.28+0x9a/0x160
    [] ? put_lock_stats.isra.27+0xe/0x30
    [] inet_sendmsg+0x104/0x220
    [] ? inet_sendmsg+0x5/0x220
    [] sock_sendmsg+0x9e/0xe0
    [] ? might_fault+0xb9/0xc0
    [] ? might_fault+0x5e/0xc0
    [] SYSC_sendto+0x124/0x1c0
    [] ? syscall_trace_enter+0x250/0x330
    [] SyS_sendto+0xe/0x10
    [] tracesys+0xdd/0xe2

    This is a followup of commits f1d8cba61c3c4b ("inet: fix possible
    seqlock deadlocks") and 7f88c6b23afbd315 ("ipv6: fix possible seqlock
    deadlock in ip6_finish_output2")

    Signed-off-by: Eric Dumazet
    Cc: Hannes Frederic Sowa
    Reported-by: Dave Jones
    Acked-by: Neil Horman
    Acked-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Eric Dumazet
     
  • [ Upstream commit 06ebb06d49486676272a3c030bfeef4bd969a8e6 ]

    Check for cases when the caller requests 0 bytes instead of running off
    and dereferencing potentially invalid iovecs.
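
    The guard can be sketched as follows (read_payload() is a hypothetical caller shape, not the actual function): bail out on a 0-byte request before touching the iovec array at all.

```c
#include <stddef.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Sketch: handle len == 0 first, since for a 0-byte request the iovec
 * array may be empty or hold pointers that must not be dereferenced. */
ssize_t read_payload(const struct iovec *iov, int iovcnt, size_t len)
{
    if (len == 0)
        return 0;                /* nothing requested: don't deref iov */
    if (iovcnt <= 0 || iov == NULL || iov->iov_base == NULL)
        return -1;               /* a real request needs a valid iovec */
    /* ... actual copying would go here ... */
    return (ssize_t)len;
}
```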

    Signed-off-by: Sasha Levin
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Sasha Levin
     
  • [ Upstream commit fcdfe3a7fa4cb74391d42b6a26dc07c20dab1d82 ]

    When performing segmentation, the mac_len value is copied right
    out of the original skb. However, this value is not always set correctly
    (like when the packet is VLAN-tagged) and we'll end up copying a bad
    value.

    One way to demonstrate this is to configure a VM which tags
    packets internally and turn off VLAN acceleration on the forwarding
    bridge port. The packets show up corrupt like this:
    16:18:24.985548 52:54:00:ab:be:25 > 52:54:00:26:ce:a3, ethertype 802.1Q
    (0x8100), length 1518: vlan 100, p 0, ethertype 0x05e0,
    0x0000: 8cdb 1c7c 8cdb 0064 4006 b59d 0a00 6402 ...|...d@.....d.
    0x0010: 0a00 6401 9e0d b441 0a5e 64ec 0330 14fa ..d....A.^d..0..
    0x0020: 29e3 01c9 f871 0000 0101 080a 000a e833)....q.........3
    0x0030: 000f 8c75 6e65 7470 6572 6600 6e65 7470 ...unetperf.netp
    0x0040: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
    0x0050: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
    0x0060: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
    ...

    This also leads to awful throughput as GSO packets are dropped and
    cause retransmissions.

    The solution is to set the mac_len using the values already available
    in the new skb. We've already adjusted all of the header offsets, so we
    might as well correctly figure out the mac_len using skb_reset_mac_len().
    After this change, packets are segmented correctly and performance
    is restored.
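
    What skb_reset_mac_len() computes can be sketched with a toy struct (fake_skb is obviously not the real sk_buff): the MAC header length is the distance between the already-adjusted network and MAC header offsets, rather than a value copied from the original skb, which may be stale for VLAN-tagged frames.

```c
#include <stddef.h>

/* Toy stand-in for the sk_buff header-offset bookkeeping. */
struct fake_skb {
    size_t mac_header;      /* offset of the MAC header */
    size_t network_header;  /* offset of the network header */
    size_t mac_len;
};

/* Recompute mac_len from this skb's own offsets. */
void reset_mac_len(struct fake_skb *skb)
{
    skb->mac_len = skb->network_header - skb->mac_header;
}
```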

    CC: Eric Dumazet
    Signed-off-by: Vlad Yasevich
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Vlad Yasevich
     
  • [ Upstream commit 1be9a950c646c9092fb3618197f7b6bfb50e82aa ]

    Jason reported an oops caused by SCTP on his ARM machine with
    SCTP authentication enabled:

    Internal error: Oops: 17 [#1] ARM
    CPU: 0 PID: 104 Comm: sctp-test Not tainted 3.13.0-68744-g3632f30c9b20-dirty #1
    task: c6eefa40 ti: c6f52000 task.ti: c6f52000
    PC is at sctp_auth_calculate_hmac+0xc4/0x10c
    LR is at sg_init_table+0x20/0x38
    pc : [] lr : [] psr: 40000013
    sp : c6f538e8 ip : 00000000 fp : c6f53924
    r10: c6f50d80 r9 : 00000000 r8 : 00010000
    r7 : 00000000 r6 : c7be4000 r5 : 00000000 r4 : c6f56254
    r3 : c00c8170 r2 : 00000001 r1 : 00000008 r0 : c6f1e660
    Flags: nZcv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
    Control: 0005397f Table: 06f28000 DAC: 00000015
    Process sctp-test (pid: 104, stack limit = 0xc6f521c0)
    Stack: (0xc6f538e8 to 0xc6f54000)
    [...]
    Backtrace:
    [] (sctp_auth_calculate_hmac+0x0/0x10c) from [] (sctp_packet_transmit+0x33c/0x5c8)
    [] (sctp_packet_transmit+0x0/0x5c8) from [] (sctp_outq_flush+0x7fc/0x844)
    [] (sctp_outq_flush+0x0/0x844) from [] (sctp_outq_uncork+0x24/0x28)
    [] (sctp_outq_uncork+0x0/0x28) from [] (sctp_side_effects+0x1134/0x1220)
    [] (sctp_side_effects+0x0/0x1220) from [] (sctp_do_sm+0xac/0xd4)
    [] (sctp_do_sm+0x0/0xd4) from [] (sctp_assoc_bh_rcv+0x118/0x160)
    [] (sctp_assoc_bh_rcv+0x0/0x160) from [] (sctp_inq_push+0x6c/0x74)
    [] (sctp_inq_push+0x0/0x74) from [] (sctp_rcv+0x7d8/0x888)

    While we already had various kind of bugs in that area
    ec0223ec48a9 ("net: sctp: fix sctp_sf_do_5_1D_ce to verify if
    we/peer is AUTH capable") and b14878ccb7fa ("net: sctp: cache
    auth_enable per endpoint"), this one is a bit of a different
    kind.

    Giving a bit more background on why SCTP authentication is
    needed can be found in RFC4895:

    SCTP uses 32-bit verification tags to protect itself against
    blind attackers. These values are not changed during the
    lifetime of an SCTP association.

    Looking at new SCTP extensions, there is the need to have a
    method of proving that an SCTP chunk(s) was really sent by
    the original peer that started the association and not by a
    malicious attacker.

    To cause this bug, we're triggering an INIT collision between
    peers; a normal SCTP handshake where both sides intend to
    authenticate packets contains RANDOM; CHUNKS; HMAC-ALGO
    parameters that are being negotiated among peers:

    ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
    <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
    -------------------- COOKIE-ECHO -------------------->
    <-------------------- COOKIE-ACK ---------------------

    ...

    Since such collisions can also happen with verification tags,
    the RFC4895 for AUTH rather vaguely says under section 6.1:

    In case of INIT collision, the rules governing the handling
    of this Random Number follow the same pattern as those for
    the Verification Tag, as explained in Section 5.2.4 of
    RFC 2960 [5]. Therefore, each endpoint knows its own Random
    Number and the peer's Random Number after the association
    has been established.

    In RFC2960, section 5.2.4, we're eventually hitting Action B:

    B) In this case, both sides may be attempting to start an
    association at about the same time but the peer endpoint
    started its INIT after responding to the local endpoint's
    INIT. Thus it may have picked a new Verification Tag not
    being aware of the previous Tag it had sent this endpoint.
    The endpoint should stay in or enter the ESTABLISHED
    state but it MUST update its peer's Verification Tag from
    the State Cookie, stop any init or cookie timers that may
    be running and send a COOKIE ACK.

    In other words, the handling of the Random parameter is the
    same as behavior for the Verification Tag as described in
    Action B of section 5.2.4.

    Looking at the code, we exactly hit the sctp_sf_do_dupcook_b()
    case which triggers an SCTP_CMD_UPDATE_ASSOC command to the
    side effect interpreter, and in fact it properly copies over
    peer_{random, hmacs, chunks} parameters from the newly created
    association to update the existing one.

    Also, the old asoc_shared_key is released and, based on
    the new params, recomputed via sctp_auth_asoc_init_active_key().
    However, the issue observed in this case is that the previous
    asoc->peer.auth_capable was 0, and has *not* been updated, so
    that instead of creating a new secret, we're doing an early
    return from the function sctp_auth_asoc_init_active_key()
    leaving asoc->asoc_shared_key as NULL. However, we now have to
    authenticate chunks from the updated chunk list (e.g. COOKIE-ACK).

    That in fact causes the server side when responding with ...

    <------------------ AUTH; COOKIE-ACK -----------------

    ... to trigger a NULL pointer dereference, since sctp_packet_transmit()
    discovers that an AUTH chunk is being queued for xmit, and thus calls
    sctp_auth_calculate_hmac(). Since the active_key_id is still inherited
    from the endpoint, and the same as encoded into the chunk, it uses
    asoc->asoc_shared_key, which is still NULL, as an asoc_key
    and dereferences it in ...

    crypto_hash_setkey(desc.tfm, &asoc_key->data[0], asoc_key->len)

    ... causing an oops. All this happens because sctp_make_cookie_ack()
    called with the *new* association has the peer.auth_capable=1
    and therefore marks the chunk with auth=1 after checking
    sctp_auth_send_cid(), but it is *actually* sent later on over
    the then *updated* association's transport that didn't initialize
    its shared key due to peer.auth_capable=0. Since control chunks
    in that case are not sent by the temporary association which
    are scheduled for deletion, they are issued for xmit via
    SCTP_CMD_REPLY in the interpreter with the context of the
    *updated* association. peer.auth_capable was 0 in the updated
    association (which went from COOKIE_WAIT into ESTABLISHED state),
    since all previous processing that performed sctp_process_init()
    was being done on temporary associations, that we eventually
    throw away each time.

    The correct fix is to update to the new peer.auth_capable
    value as well in the collision case via sctp_assoc_update(),
    so that in case the collision migrated from 0 -> 1,
    sctp_auth_asoc_init_active_key() can properly recalculate
    the secret. This therefore fixes the observed server panic.

    Fixes: 730fc3d05cd4 ("[SCTP]: Implete SCTP-AUTH parameter processing")
    Reported-by: Jason Gunthorpe
    Signed-off-by: Daniel Borkmann
    Tested-by: Jason Gunthorpe
    Cc: Vlad Yasevich
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Daniel Borkmann
     
  • [ Upstream commit 1f74e613ded11517db90b2bd57e9464d9e0fb161 ]

    In vegas we do a multiplication of the cwnd and the rtt. This
    may overflow and thus their result is stored in a u64. However, we first
    need to cast the cwnd so that actually 64-bit arithmetic is done.

    Then, we need to do the division using do_div() so that this also
    works on 32-bit arches.
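
    The overflow is easy to demonstrate in plain C (mul_wrong/mul_right are illustrative helpers, not the vegas code): with 32-bit operands the product wraps before being widened, while casting one operand to u64 first makes the whole multiply 64-bit.

```c
#include <stdint.h>

/* 32-bit multiply, result widened afterwards: already wrapped. */
uint64_t mul_wrong(uint32_t cwnd, uint32_t rtt)
{
    return cwnd * rtt;
}

/* Cast first, so the multiplication itself is done in 64 bits. */
uint64_t mul_right(uint32_t cwnd, uint32_t rtt)
{
    return (uint64_t)cwnd * rtt;
}
```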

    Cc: Stephen Hemminger
    Cc: Neal Cardwell
    Cc: Eric Dumazet
    Cc: David Laight
    Cc: Doug Leith
    Fixes: 8d3a564da34e (tcp: tcp_vegas cong avoid fix)
    Signed-off-by: Christoph Paasch
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Christoph Paasch
     
  • [ Upstream commit 45a07695bc64b3ab5d6d2215f9677e5b8c05a7d0 ]

    In veno we do a multiplication of the cwnd and the rtt. This
    may overflow and thus their result is stored in a u64. However, we first
    need to cast the cwnd so that actually 64-bit arithmetic is done.

    A first attempt at fixing 76f1017757aa0 ([TCP]: TCP Veno congestion
    control) was made by 159131149c2 (tcp: Overflow bug in Vegas), but it
    failed to add the required cast in tcp_veno_cong_avoid().

    Fixes: 76f1017757aa0 ([TCP]: TCP Veno congestion control)
    Signed-off-by: Christoph Paasch
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Christoph Paasch
     
  • [ Upstream commit 40eea803c6b2cfaab092f053248cbeab3f368412 ]

    Sasha's report:
    > While fuzzing with trinity inside a KVM tools guest running the latest -next
    > kernel with the KASAN patchset, I've stumbled on the following spew:
    >
    > [ 4448.949424] ==================================================================
    > [ 4448.951737] AddressSanitizer: user-memory-access on address 0
    > [ 4448.952988] Read of size 2 by thread T19638:
    > [ 4448.954510] CPU: 28 PID: 19638 Comm: trinity-c76 Not tainted 3.16.0-rc4-next-20140711-sasha-00046-g07d3099-dirty #813
    > [ 4448.956823] ffff88046d86ca40 0000000000000000 ffff880082f37e78 ffff880082f37a40
    > [ 4448.958233] ffffffffb6e47068 ffff880082f37a68 ffff880082f37a58 ffffffffb242708d
    > [ 4448.959552] 0000000000000000 ffff880082f37a88 ffffffffb24255b1 0000000000000000
    > [ 4448.961266] Call Trace:
    > [ 4448.963158] dump_stack (lib/dump_stack.c:52)
    > [ 4448.964244] kasan_report_user_access (mm/kasan/report.c:184)
    > [ 4448.965507] __asan_load2 (mm/kasan/kasan.c:352)
    > [ 4448.966482] ? netlink_sendmsg (net/netlink/af_netlink.c:2339)
    > [ 4448.967541] netlink_sendmsg (net/netlink/af_netlink.c:2339)
    > [ 4448.968537] ? get_parent_ip (kernel/sched/core.c:2555)
    > [ 4448.970103] sock_sendmsg (net/socket.c:654)
    > [ 4448.971584] ? might_fault (mm/memory.c:3741)
    > [ 4448.972526] ? might_fault (./arch/x86/include/asm/current.h:14 mm/memory.c:3740)
    > [ 4448.973596] ? verify_iovec (net/core/iovec.c:64)
    > [ 4448.974522] ___sys_sendmsg (net/socket.c:2096)
    > [ 4448.975797] ? put_lock_stats.isra.13 (./arch/x86/include/asm/preempt.h:98 kernel/locking/lockdep.c:254)
    > [ 4448.977030] ? lock_release_holdtime (kernel/locking/lockdep.c:273)
    > [ 4448.978197] ? lock_release_non_nested (kernel/locking/lockdep.c:3434 (discriminator 1))
    > [ 4448.979346] ? check_chain_key (kernel/locking/lockdep.c:2188)
    > [ 4448.980535] __sys_sendmmsg (net/socket.c:2181)
    > [ 4448.981592] ? trace_hardirqs_on_caller (kernel/locking/lockdep.c:2600)
    > [ 4448.982773] ? trace_hardirqs_on (kernel/locking/lockdep.c:2607)
    > [ 4448.984458] ? syscall_trace_enter (arch/x86/kernel/ptrace.c:1500 (discriminator 2))
    > [ 4448.985621] ? trace_hardirqs_on_caller (kernel/locking/lockdep.c:2600)
    > [ 4448.986754] SyS_sendmmsg (net/socket.c:2201)
    > [ 4448.987708] tracesys (arch/x86/kernel/entry_64.S:542)
    > [ 4448.988929] ==================================================================

    This report means that we've come to netlink_sendmsg() with msg->msg_name == NULL and msg->msg_namelen > 0.

    After this report there was no usual "Unable to handle kernel NULL pointer dereference"
    and this gave me a clue that address 0 is mapped and contains valid socket address structure in it.

    This bug was introduced in f3d3342602f8bcbf37d7c46641cb9bca7618eb1c
    (net: rework recvmsg handler msg_name and msg_namelen logic).
    Commit message states that:
    "Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
    non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
    affect sendto as it would bail out earlier while trying to copy-in the
    address."
    But in fact this affects sendto when address 0 is mapped and contains
    socket address structure in it. In such case copy-in address will succeed,
    verify_iovec() function will successfully exit with msg->msg_namelen > 0
    and msg->msg_name == NULL.

    This patch fixes it by setting msg_namelen to 0 if msg_name == NULL.
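
    A minimal sketch of the fix (fake_msghdr stands in for struct msghdr): a NULL msg_name must imply a zero msg_namelen, so later code never treats address 0 as a valid sockaddr.

```c
#include <stddef.h>

/* Toy stand-in for the relevant struct msghdr fields. */
struct fake_msghdr {
    void *msg_name;
    int   msg_namelen;
};

/* The fix's invariant: msg_name == NULL implies msg_namelen == 0. */
void sanitize_msg_name(struct fake_msghdr *msg)
{
    if (msg->msg_name == NULL)
        msg->msg_namelen = 0;
}
```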

    Cc: Hannes Frederic Sowa
    Cc: Eric Dumazet
    Cc:
    Reported-by: Sasha Levin
    Signed-off-by: Andrey Ryabinin
    Acked-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Andrey Ryabinin
     
  • [ Upstream commit 04ca6973f7c1a0d8537f2d9906a0cf8e69886d75 ]

    In "Counting Packets Sent Between Arbitrary Internet Hosts", Jeffrey and
    Jedidiah describe ways of exploiting the Linux IP identifier generation to
    infer whether two machines are exchanging packets.

    With commit 73f156a6e8c1 ("inetpeer: get rid of ip_id_count"), we
    changed IP id generation, but this does not really prevent this
    side-channel technique.

    This patch adds a random amount of perturbation so that IP identifiers
    for a given destination [1] are no longer monotonically increasing after
    an idle period.

    Note that prandom_u32_max(1) returns 0, so if the generator is used at
    most once per jiffy, this patch inserts no hole in the ID sequence and
    does not increase collision probability.

    This is jiffies based, so in the worst case (HZ=1000), the id can
    roll over after ~65 seconds of idle time, which should be fine.
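
    A userspace sketch of the perturbation, with rand() standing in for prandom_u32_max(): instead of a strict +1 per packet, the generator advances by 1 plus a random value bounded by the idle time since its last use, so a busy flow still gets consecutive IDs while an idle one jumps unpredictably.

```c
#include <stdint.h>
#include <stdlib.h>

/* Advance the per-destination generator. rand() % idle_jiffies models
 * prandom_u32_max(idle_jiffies); with idle_jiffies == 1 the random
 * part is always 0, i.e. a strictly sequential id stream. */
uint32_t next_ip_id(uint32_t *gen, uint32_t idle_jiffies)
{
    uint32_t delta = idle_jiffies ? (uint32_t)rand() % idle_jiffies : 0;
    *gen += 1 + delta;
    return *gen;
}
```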

    We also change the hash used in __ip_select_ident() to not only hash
    on daddr, but also saddr and protocol, so that ICMP probes can not be
    used to infer information for other protocols.

    For IPv6, adds saddr into the hash as well, but not nexthdr.

    If I ping the patched target, we can see the IDs are now hard to predict.

    21:57:11.008086 IP (...)
    A > target: ICMP echo request, seq 1, length 64
    21:57:11.010752 IP (... id 2081 ...)
    target > A: ICMP echo reply, seq 1, length 64

    21:57:12.013133 IP (...)
    A > target: ICMP echo request, seq 2, length 64
    21:57:12.015737 IP (... id 3039 ...)
    target > A: ICMP echo reply, seq 2, length 64

    21:57:13.016580 IP (...)
    A > target: ICMP echo request, seq 3, length 64
    21:57:13.019251 IP (... id 3437 ...)
    target > A: ICMP echo reply, seq 3, length 64

    [1] TCP sessions use a per-flow ID generator not changed by this patch.

    Signed-off-by: Eric Dumazet
    Reported-by: Jeffrey Knockel
    Reported-by: Jedidiah R. Crandall
    Cc: Willy Tarreau
    Cc: Hannes Frederic Sowa
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Eric Dumazet
     
  • [ Upstream commit 73f156a6e8c1074ac6327e0abd1169e95eb66463 ]

    Ideally, we would need to generate IP ID using a per destination IP
    generator.

    Linux kernels used the inet_peer cache for this purpose, but this had a huge
    cost on servers disabling MTU discovery.

    1) each inet_peer struct consumes 192 bytes

    2) inetpeer cache uses a binary tree of inet_peer structs,
    with a nominal size of ~66000 elements under load.

    3) lookups in this tree are hitting a lot of cache lines, as tree depth
    is about 20.

    4) If the server deals with many TCP flows, we have a high probability of
    not finding the inet_peer, allocating a fresh one, and inserting it in
    the tree with the same initial ip_id_count (cf. secure_ip_id()).

    5) We garbage collect inet_peer aggressively.

    IP ID generation does not have to be 'perfect'; the goal is to avoid
    duplicates in a short period of time, so that reassembly units have a
    chance to complete reassembly of fragments belonging to one message
    before receiving other fragments with a recycled ID.

    We simply use an array of generators, and a Jenkins hash using the dst IP
    as a key.

    ipv6_select_ident() is put back into net/ipv6/ip6_output.c where it
    belongs (it is only used from this file)

    secure_ip_id() and secure_ipv6_id() no longer are needed.

    Rename ip_select_ident_more() to ip_select_ident_segs() to avoid
    unnecessary decrement/increment of the number of segments.
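
    A sketch of the generator array (a trivial multiplicative hash stands in for the Jenkins hash, and the array size is arbitrary): different destinations share a counter only on hash collision, and each call reserves one id per segment.

```c
#include <stdint.h>

#define NR_GEN 2048   /* arbitrary array size for illustration */

/* One counter per hash bucket; indexed by a hash of the dst address. */
static uint32_t id_gen[NR_GEN];

/* Return the last id of a run of 'segs' reserved ids for this dst.
 * The multiplicative hash here merely stands in for jhash(). */
uint32_t select_ident(uint32_t daddr, int segs)
{
    uint32_t *gen = &id_gen[(daddr * 2654435761u) % NR_GEN];
    *gen += (uint32_t)segs;
    return *gen;
}
```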

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Eric Dumazet
     
  • commit 3cf521f7dc87c031617fd47e4b7aa2593c2f3daf upstream.

    The l2tp [get|set]sockopt() code has fallen back to the UDP functions
    for socket option levels != SOL_PPPOL2TP since day one, but that has
    never actually worked, since the l2tp socket isn't an inet socket.

    As David Miller points out:

    "If we wanted this to work, it'd have to look up the tunnel and then
    use tunnel->sk, but I wonder how useful that would be"

    Since this can never have worked, nobody could possibly have depended
    on that functionality, so just remove the broken code and return -EINVAL.

    Reported-by: Sasha Levin
    Acked-by: James Chapman
    Acked-by: David Miller
    Cc: Phil Turnbull
    Cc: Vegard Nossum
    Cc: Willy Tarreau
    Signed-off-by: Linus Torvalds
    Signed-off-by: Jiri Slaby

    Sasha Levin
     
  • commit 08b9939997df30e42a228e1ecb97f99e9c8ea84e upstream.

    This reverts commit 277d916fc2e959c3f106904116bb4f7b1148d47a as it was
    at least breaking iwlwifi by setting the IEEE80211_TX_CTL_NO_PS_BUFFER
    flag in all kinds of interface modes, not only for AP mode where it is
    appropriate.

    To avoid reintroducing the original problem, explicitly check for probe
    request frames in the multicast buffering code.

    Fixes: 277d916fc2e9 ("mac80211: move "bufferable MMPDU" check to fix AP mode scan")
    Signed-off-by: Johannes Berg
    Signed-off-by: Jiri Slaby

    Johannes Berg
     
  • commit 8c26d458394be44e135d1c6bd4557e1c4e1a0535 upstream.

    tsc can be NULL (mac80211 currently always passes NULL),
    resulting in a NULL dereference. Check before copying it.

    Signed-off-by: Eliad Peller
    Signed-off-by: Emmanuel Grumbach
    Signed-off-by: Johannes Berg
    Signed-off-by: Jiri Slaby

    Eliad Peller
     

29 Jul, 2014

19 commits

  • [ Upstream commit 10ec9472f05b45c94db3c854d22581a20b97db41 ]

    There is a benign buffer overflow in ip_options_compile spotted by
    AddressSanitizer[1] :

    It's benign because we can always access one extra byte in skb->head
    (because the header is followed by struct skb_shared_info), and in this
    case this byte is not even used.

    [28504.910798] ==================================================================
    [28504.912046] AddressSanitizer: heap-buffer-overflow in ip_options_compile
    [28504.913170] Read of size 1 by thread T15843:
    [28504.914026] [] ip_options_compile+0x121/0x9c0
    [28504.915394] [] ip_options_get_from_user+0xad/0x120
    [28504.916843] [] do_ip_setsockopt.isra.15+0x8df/0x1630
    [28504.918175] [] ip_setsockopt+0x30/0xa0
    [28504.919490] [] tcp_setsockopt+0x5b/0x90
    [28504.920835] [] sock_common_setsockopt+0x5f/0x70
    [28504.922208] [] SyS_setsockopt+0xa2/0x140
    [28504.923459] [] system_call_fastpath+0x16/0x1b
    [28504.924722]
    [28504.925106] Allocated by thread T15843:
    [28504.925815] [] ip_options_get_from_user+0x35/0x120
    [28504.926884] [] do_ip_setsockopt.isra.15+0x8df/0x1630
    [28504.927975] [] ip_setsockopt+0x30/0xa0
    [28504.929175] [] tcp_setsockopt+0x5b/0x90
    [28504.930400] [] sock_common_setsockopt+0x5f/0x70
    [28504.931677] [] SyS_setsockopt+0xa2/0x140
    [28504.932851] [] system_call_fastpath+0x16/0x1b
    [28504.934018]
    [28504.934377] The buggy address ffff880026382828 is located 0 bytes to the right
    [28504.934377] of 40-byte region [ffff880026382800, ffff880026382828)
    [28504.937144]
    [28504.937474] Memory state around the buggy address:
    [28504.938430] ffff880026382300: ........ rrrrrrrr rrrrrrrr rrrrrrrr
    [28504.939884] ffff880026382400: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28504.941294] ffff880026382500: .....rrr rrrrrrrr rrrrrrrr rrrrrrrr
    [28504.942504] ffff880026382600: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28504.943483] ffff880026382700: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28504.944511] >ffff880026382800: .....rrr rrrrrrrr rrrrrrrr rrrrrrrr
    [28504.945573] ^
    [28504.946277] ffff880026382900: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28505.094949] ffff880026382a00: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28505.096114] ffff880026382b00: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28505.097116] ffff880026382c00: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28505.098472] ffff880026382d00: ffffffff rrrrrrrr rrrrrrrr rrrrrrrr
    [28505.099804] Legend:
    [28505.100269] f - 8 freed bytes
    [28505.100884] r - 8 redzone bytes
    [28505.101649] . - 8 allocated bytes
    [28505.102406] x=1..7 - x allocated bytes + (8-x) redzone bytes
    [28505.103637] ==================================================================

    [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Eric Dumazet
     
  • [ Upstream commit 84a7c0b1db1c17d5ded8d3800228a608e1070b40 ]

    dns_query() credulously assumes that keys are null-terminated and
    returns a copy of a memory block that is off by one.
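
    The correct copy pattern can be sketched as follows (copy_key_data() is illustrative, not the dns_query() code): key payloads are length-counted and not necessarily null-terminated, so a copy handed to callers expecting a C string must allocate len + 1 bytes and terminate explicitly.

```c
#include <stdlib.h>
#include <string.h>

/* Copy a length-counted, possibly unterminated blob into a fresh,
 * properly null-terminated C string. */
char *copy_key_data(const void *data, size_t len)
{
    char *out = malloc(len + 1);         /* +1 for the terminator */
    if (!out)
        return NULL;
    memcpy(out, data, len);
    out[len] = '\0';                     /* terminate explicitly */
    return out;
}
```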

    Signed-off-by: Manuel Schölling
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Manuel Schölling
     
  • [ Upstream commit 8f2e5ae40ec193bc0a0ed99e95315c3eebca84ea ]

    While working on some other SCTP code, I noticed that some
    structures shared with user space leak uninitialized stack
    or heap memory. In particular, struct sctp_sndrcvinfo
    has a 2 bytes hole between .sinfo_flags and .sinfo_ppid that
    remains unfilled by us in sctp_ulpevent_read_sndrcvinfo() when
    putting this into cmsg. But also struct sctp_remote_error
    contains a 2 bytes hole that we don't fill but place into a skb
    through skb_copy_expand() via sctp_ulpevent_make_remote_error().

    Both structures are defined by the IETF in RFC6458:

    * Section 5.3.2. SCTP Header Information Structure:

    The sctp_sndrcvinfo structure is defined below:

    struct sctp_sndrcvinfo {
    uint16_t sinfo_stream;
    uint16_t sinfo_ssn;
    uint16_t sinfo_flags;

    uint32_t sinfo_ppid;
    uint32_t sinfo_context;
    uint32_t sinfo_timetolive;
    uint32_t sinfo_tsn;
    uint32_t sinfo_cumtsn;
    sctp_assoc_t sinfo_assoc_id;
    };

    * 6.1.3. SCTP_REMOTE_ERROR:

    A remote peer may send an Operation Error message to its peer.
    This message indicates a variety of error conditions on an
    association. The entire ERROR chunk as it appears on the wire
    is included in an SCTP_REMOTE_ERROR event. Please refer to the
    SCTP specification [RFC4960] and any extensions for a list of
    possible error formats. An SCTP error notification has the
    following format:

    struct sctp_remote_error {
        uint16_t sre_type;
        uint16_t sre_flags;
        uint32_t sre_length;
        uint16_t sre_error;

        sctp_assoc_t sre_assoc_id;
        uint8_t sre_data[];
    };

    Fix this by setting both to 0 before filling them out. We also
    have other structures shared between user and kernel space in
    SCTP that contain holes (e.g. struct sctp_paddrthlds), but we
    copy those buffers over from user space first and thus don't
    need to care about them in that case.

    While at it, we can also remove the lengthy comments copied from
    the draft; instead, we update the comment with the correct RFC
    number where one can look it up.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Daniel Borkmann
     
  • [ Upstream commit 999417549c16dd0e3a382aa9f6ae61688db03181 ]

    If the 'next' pointer of the last fragment buffer in a message is not
    zeroed before reassembly, we risk ending up with a corrupt message,
    since the reassembly function itself isn't doing this.

    Currently, when a buffer is retrieved from the deferred queue of the
    broadcast link, the next pointer is not cleared, with the result as
    described above.

    This commit corrects this, and thereby fixes a bug that may occur when
    long broadcast messages are transmitted across dual interfaces. The bug
    has been present since 40ba3cdf542a469aaa9083fa041656e59b109b90 ("tipc:
    message reassembly using fragment chain")

    This commit should be applied to both net and net-next.

    Signed-off-by: Jon Maloy
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Jon Paul Maloy
     
  • [ Upstream commit ac30ef832e6af0505b6f0251a6659adcfa74975e ]

    netlink_dump() returns a negative errno value on error. Until now,
    netlink_recvmsg() directly recorded that negative value in sk->sk_err, but
    that's wrong since sk_err takes positive errno values. (This manifests as
    userspace receiving a positive return value from the recv() system call,
    falsely indicating success.) This bug was introduced in the commit that
    started checking the netlink_dump() return value, commit b44d211 (netlink:
    handle errors from netlink_dump()).

    Multithreaded Netlink dumps are one way to trigger this behavior in
    practice, as described in the commit message for the userspace workaround
    posted here:
    http://openvswitch.org/pipermail/dev/2014-June/042339.html

    This commit also fixes the same bug in netlink_poll(), introduced in commit
    cd1df525d (netlink: add flow control for memory mapped I/O).

    Signed-off-by: Ben Pfaff
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Ben Pfaff
     
  • [ Upstream commit 36beddc272c111689f3042bf3d10a64d8a805f93 ]

    Setting just skb->sk without taking its reference and setting a
    destructor is invalid. However, in the places where this was done,
    the skb is used in a way that does not require skb->sk to be set,
    so drop the setting of skb->sk.
    Thanks to Eric Dumazet for the correct solution.

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=79441
    Reported-by: Ed Martin
    Signed-off-by: Andrey Utkin
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Andrey Utkin
     
  • [ Upstream commit 6e08d5e3c8236e7484229e46fdf92006e1dd4c49 ]

    The undo code assumes that, upon entering loss recovery, TCP
    1) always retransmit something
    2) the retransmission never fails locally (e.g., qdisc drop)

    so undo_marker is set in tcp_enter_recovery() and undo_retrans is
    incremented only when tcp_retransmit_skb() is successful.

    When the assumption is broken, because TCP's cwnd is too small to
    retransmit or the retransmit fails locally, the next (DUP)ACK
    would incorrectly revert the cwnd and the congestion state in
    tcp_try_undo_dsack() or tcp_may_undo(). Subsequent (DUP)ACKs
    may re-enter the recovery state. The sender repeatedly enters and
    (incorrectly) exits recovery states if the retransmits continue to
    fail locally while receiving (DUP)ACKs.

    The fix is to initialize undo_retrans to -1 and start counting on
    the first retransmission. Always increment undo_retrans even if the
    retransmissions fail locally because they couldn't cause DSACKs to
    undo the cwnd reduction.

    Signed-off-by: Yuchung Cheng
    Signed-off-by: Neal Cardwell
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Yuchung Cheng
     
  • [ Upstream commit 52ad353a5344f1f700c5b777175bdfa41d3cd65a ]

    The problem was triggered by these steps:

    1) create socket, bind and then setsockopt for add mc group.
    mreq.imr_multiaddr.s_addr = inet_addr("255.0.0.37");
    mreq.imr_interface.s_addr = inet_addr("192.168.1.2");
    setsockopt(sockfd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    2) drop the mc group for this socket.
    mreq.imr_multiaddr.s_addr = inet_addr("255.0.0.37");
    mreq.imr_interface.s_addr = inet_addr("0.0.0.0");
    setsockopt(sockfd, IPPROTO_IP, IP_DROP_MEMBERSHIP, &mreq, sizeof(mreq));

    3) and then drop the socket, I found the mc group was still used by the dev:

    netstat -g

    Interface RefCnt Group
    --------------- ------ ---------------------
    eth2 1 255.0.0.37

    Normally, even though IP_DROP_MEMBERSHIP returns an error, the mc
    group still needs to be released for the netdev when the socket is
    dropped, but this process was broken when the default route is
    NULL. The reason is that:

    ip_mc_leave_group() chooses the in_dev by imr_interface.s_addr; if
    the input address is NULL, the default route's dev is chosen and
    the ifindex is taken from that dev. It then walks inet->mc_list
    and returns -ENODEV. But if the default route's dev is NULL, both
    in_dev and ifindex are NULL, and while walking inet->mc_list the
    mc group is released from the mc_list without the dev decrementing
    its refcount for that group. So when the socket is dropped, the
    mc_list is NULL and the dev still holds the group.

    v1->v2: Following Hideaki's suggestion, we should align with IPv6
    (RFC 3493) and the BSDs, so add a check for the in_dev before
    walking the mc_list, making sure that when we remove the mc group
    we decrement the refcount on the real dev that was using the mc
    address. The problem can then no longer occur.

    Signed-off-by: Ding Tianhong
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    dingtianhong
     
  • [ Upstream commit 54951194656e4853e441266fd095f880bc0398f3 ]

    A bug was introduced in NETDEV_CHANGE notifier sequence causing the
    arp table to be sometimes spuriously cleared (including manual arp
    entries marked permanent), upon network link carrier changes.

    The changed argument for the notifier was applied only to a single
    caller of NETDEV_CHANGE, missing among others netdev_state_change().
    So upon net_carrier events induced by the network, which are
    triggering a call to netdev_state_change(), arp_netdev_event() would
    decide whether to clear or not arp cache based on random/junk stack
    values (a kind of read buffer overflow).

    Fixes: be9efd365328 ("net: pass changed flags along with NETDEV_CHANGE event")
    Fixes: 6c8b4e3ff81b ("arp: flush arp cache on IFF_NOARP change")
    Signed-off-by: Loic Prylli
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Loic Prylli
     
  • [ Upstream commit 68b7107b62983f2cff0948292429d5f5999df096 ]

    Some older router implementations still send Fragmentation Needed
    errors with the Next-Hop MTU field set to zero. This is explicitly
    described by the standard (RFC 1191) as an eventuality that hosts
    must deal with, since older standards specified that those bits
    must be zero.

    Linux had a generic (for all of IPv4) implementation of the algorithm
    described in the RFC for searching a list of MTU plateaus for a good
    value. Commit 46517008e116 ("ipv4: Kill ip_rt_frag_needed().")
    removed this as part of the changes to remove the routing cache.
    Subsequently any Fragmentation Needed packet with a zero Next-Hop
    MTU has been discarded without being passed to the per-protocol
    handlers or notifying userspace for raw sockets.

    When there is a router which does not implement RFC 1191 on an
    MTU limited path then this results in stalled connections since
    large packets are discarded and the local protocols are not
    notified so they never attempt to lower the pMTU.

    One example I have seen is an OpenBSD router terminating IPSec
    tunnels. It's worth pointing out that this case is distinct from
    the BSD 4.2 bug which incorrectly calculated the Next-Hop MTU
    since the commit in question dismissed that as a valid concern.

    All of the per-protocols handlers implement the simple approach from
    RFC 1191 of immediately falling back to the minimum value. Although
    this is sub-optimal it is vastly preferable to connections hanging
    indefinitely.

    Remove the Next-Hop MTU != 0 check and allow such packets
    to follow the normal path.

    Fixes: 46517008e116 ("ipv4: Kill ip_rt_frag_needed().")
    Signed-off-by: Edward Allcutt
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Edward Allcutt
     
  • [ Upstream commit 5924f17a8a30c2ae18d034a86ee7581b34accef6 ]

    When in repair-mode and TCP_RECV_QUEUE is set, we end up calling
    tcp_push with mss_now being 0. If data is in the send-queue and
    tcp_set_skb_tso_segs gets called, we crash because it will divide by
    mss_now:

    [ 347.151939] divide error: 0000 [#1] SMP
    [ 347.152907] Modules linked in:
    [ 347.152907] CPU: 1 PID: 1123 Comm: packetdrill Not tainted 3.16.0-rc2 #4
    [ 347.152907] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
    [ 347.152907] task: f5b88540 ti: f3c82000 task.ti: f3c82000
    [ 347.152907] EIP: 0060:[] EFLAGS: 00210246 CPU: 1
    [ 347.152907] EIP is at tcp_set_skb_tso_segs+0x49/0xa0
    [ 347.152907] EAX: 00000b67 EBX: f5acd080 ECX: 00000000 EDX: 00000000
    [ 347.152907] ESI: f5a28f40 EDI: f3c88f00 EBP: f3c83d10 ESP: f3c83d00
    [ 347.152907] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
    [ 347.152907] CR0: 80050033 CR2: 083158b0 CR3: 35146000 CR4: 000006b0
    [ 347.152907] Stack:
    [ 347.152907] c167f9d9 f5acd080 000005b4 00000002 f3c83d20 c16013e6 f3c88f00 f5acd080
    [ 347.152907] f3c83da0 c1603b5a f3c83d38 c10a0188 00000000 00000000 f3c83d84 c10acc85
    [ 347.152907] c1ad5ec0 00000000 00000000 c1ad679c 010003e0 00000000 00000000 f3c88fc8
    [ 347.152907] Call Trace:
    [ 347.152907] [] ? apic_timer_interrupt+0x2d/0x34
    [ 347.152907] [] tcp_init_tso_segs+0x36/0x50
    [ 347.152907] [] tcp_write_xmit+0x7a/0xbf0
    [ 347.152907] [] ? up+0x28/0x40
    [ 347.152907] [] ? console_unlock+0x295/0x480
    [ 347.152907] [] ? vprintk_emit+0x1ef/0x4b0
    [ 347.152907] [] __tcp_push_pending_frames+0x36/0xd0
    [ 347.152907] [] tcp_push+0xf0/0x120
    [ 347.152907] [] tcp_sendmsg+0xf1/0xbf0
    [ 347.152907] [] ? kmem_cache_free+0xf0/0x120
    [ 347.152907] [] ? __sigqueue_free+0x32/0x40
    [ 347.152907] [] ? __sigqueue_free+0x32/0x40
    [ 347.152907] [] ? do_wp_page+0x3e0/0x850
    [ 347.152907] [] inet_sendmsg+0x4a/0xb0
    [ 347.152907] [] ? handle_mm_fault+0x709/0xfb0
    [ 347.152907] [] sock_aio_write+0xbb/0xd0
    [ 347.152907] [] do_sync_write+0x69/0xa0
    [ 347.152907] [] vfs_write+0x123/0x160
    [ 347.152907] [] SyS_write+0x55/0xb0
    [ 347.152907] [] sysenter_do_call+0x12/0x28

    This can easily be reproduced with the following packetdrill-script (the
    "magic" with netem, sk_pacing and limit_output_bytes is done to prevent
    the kernel from pushing all segments, because hitting the limit without
    doing this is not so easy with packetdrill):

    0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
    +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0

    +0 bind(3, ..., ...) = 0
    +0 listen(3, 1) = 0

    +0 < S 0:0(0) win 32792
    +0 > S. 0:0(0) ack 1
    +0.1 < . 1:1(0) ack 1 win 65000

    +0 accept(3, ..., ...) = 4

    // This ensures that not all segments of the snd-queue will be pushed
    +0 `tc qdisc add dev tun0 root netem delay 10ms`
    +0 `sysctl -w net.ipv4.tcp_limit_output_bytes=2`
    +0 setsockopt(4, SOL_SOCKET, 47, [2], 4) = 0

    +0 write(4,...,10000) = 10000
    +0 write(4,...,10000) = 10000

    // Set tcp-repair stuff, particularly TCP_RECV_QUEUE
    +0 setsockopt(4, SOL_TCP, 19, [1], 4) = 0
    +0 setsockopt(4, SOL_TCP, 20, [1], 4) = 0

    // This now will make the write push the remaining segments
    +0 setsockopt(4, SOL_SOCKET, 47, [20000], 4) = 0
    +0 `sysctl -w net.ipv4.tcp_limit_output_bytes=130000`

    // Now we will crash
    +0 write(4,...,1000) = 1000

    This happens since ec3423257508 (tcp: fix retransmission in repair
    mode). Prior to that, the call to tcp_push was prevented by a check for
    tp->repair.

    The patch fixes this by adding a new goto label, out_nopush. When
    exiting tcp_sendmsg() and a push is not required, which is the case
    for tp->repair, we jump to this label.

    When repairing and calling send() with TCP_RECV_QUEUE, the data is
    actually put in the receive-queue. So, no push is required because no
    data has been added to the send-queue.

    Cc: Andrew Vagin
    Cc: Pavel Emelyanov
    Fixes: ec3423257508 (tcp: fix retransmission in repair mode)
    Signed-off-by: Christoph Paasch
    Acked-by: Andrew Vagin
    Acked-by: Pavel Emelyanov
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Christoph Paasch
     
  • [ Upstream commit 7f502361531e9eecb396cf99bdc9e9a59f7ebd7f ]

    We have two different ways to handle changes to sk->sk_dst

    First way (used by TCP) assumes socket lock is owned by caller, and use
    no extra lock : __sk_dst_set() & __sk_dst_reset()

    Another way (used by UDP) uses sk_dst_lock because socket lock is not
    always taken. Note that sk_dst_lock is not softirq safe.

    These ways are not interchangeable for a given socket type.

    ipv4_sk_update_pmtu(), added in linux-3.8, added a race, as it used
    the socket lock as synchronization, but users might be UDP sockets.

    Instead of converting sk_dst_lock to a softirq safe version, use xchg()
    as we did for sk_rx_dst in commit e47eb5dfb296b ("udp: ipv4: do not use
    sk_dst_lock from softirq context")

    In a follow up patch, we probably can remove sk_dst_lock, as it is
    only used in IPv6.

    Signed-off-by: Eric Dumazet
    Cc: Steffen Klassert
    Fixes: 9cb3a50c5f63e ("ipv4: Invalidate the socket cached route on pmtu events if possible")
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Eric Dumazet
     
  • [ Upstream commit f88649721268999bdff09777847080a52004f691 ]

    When IP route cache had been removed in linux-3.6, we broke assumption
    that dst entries were all freed after rcu grace period. DST_NOCACHE
    dst were supposed to be freed from dst_release(). But it appears
    we want to keep such dst around, either in UDP sockets or tunnels.

    In sk_dst_get() we need to make sure dst refcount is not 0
    before incrementing it, or else we might end up freeing a dst
    twice.

    DST_NOCACHE set on a dst does not mean this dst can not be attached
    to a socket or a tunnel.

    Then, before actual freeing, we need to observe a rcu grace period
    to make sure all other cpus can catch the fact the dst is no longer
    usable.

    Signed-off-by: Eric Dumazet
    Reported-by: Dormando
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Eric Dumazet
     
  • [ Upstream commit 5882a07c72093dc3a18e2d2b129fb200686bb6ee ]

    This patch fixes a kernel BUG_ON in skb_segment. It is hit when
    testing two VMs on openvswitch with one VM acting as VXLAN gateway.

    During VXLAN packet GSO, skb_segment is called with skb->data
    pointing to inner TCP payload. skb_segment calls skb_network_protocol
    to retrieve the inner protocol. skb_network_protocol actually expects
    skb->data to point to MAC and it calls pskb_may_pull with ETH_HLEN.
    This ends up pulling in ETH_HLEN data from header tail. As a result,
    pskb_trim logic is skipped and BUG_ON is hit later.

    Move skb_push in front of skb_network_protocol so that skb->data
    lines up properly.

    kernel BUG at net/core/skbuff.c:2999!
    Call Trace:
    [] tcp_gso_segment+0x122/0x410
    [] inet_gso_segment+0x13c/0x390
    [] skb_mac_gso_segment+0x9b/0x170
    [] skb_udp_tunnel_segment+0xd8/0x390
    [] udp4_ufo_fragment+0x120/0x140
    [] inet_gso_segment+0x13c/0x390
    [] ? default_wake_function+0x12/0x20
    [] skb_mac_gso_segment+0x9b/0x170
    [] __skb_gso_segment+0x60/0xc0
    [] dev_hard_start_xmit+0x183/0x550
    [] sch_direct_xmit+0xfe/0x1d0
    [] __dev_queue_xmit+0x214/0x4f0
    [] dev_queue_xmit+0x10/0x20
    [] ip_finish_output+0x66b/0x890
    [] ip_output+0x58/0x90
    [] ? fib_table_lookup+0x29f/0x350
    [] ip_local_out_sk+0x39/0x50
    [] iptunnel_xmit+0x10d/0x130
    [] vxlan_xmit_skb+0x1d0/0x330 [vxlan]
    [] vxlan_tnl_send+0x129/0x1a0 [openvswitch]
    [] ovs_vport_send+0x26/0xa0 [openvswitch]
    [] do_output+0x2e/0x50 [openvswitch]

    Signed-off-by: Wei-Chun Chao
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Wei-Chun Chao
     
  • [ Upstream commit 916c1689a09bc1ca81f2d7a34876f8d35aadd11b ]

    skb_cow(), called from vlan_reorder_header(), does not free the skb
    when it fails, and vlan_reorder_header() returns NULL to reset the
    original skb when called from vlan_untag(), leading to a memory
    leak.

    Signed-off-by: Li RongQing
    Acked-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Li RongQing
     
  • [ Upstream commit 24599e61b7552673dd85971cf5a35369cd8c119e ]

    When writing to the sysctl field net.sctp.auth_enable, it can well
    be that the user buffer we handed over to proc_dointvec() via
    proc_sctp_do_auth() handler contains something other than integers.

    In that case, we would set an uninitialized 4-byte value from the
    stack to net->sctp.auth_enable that can be leaked back when reading
    the sysctl variable, and it can unintentionally turn auth_enable
    on/off based on the stack content since auth_enable is interpreted
    as a boolean.

    Fix it up by making sure proc_dointvec() returned successfully.

    Fixes: b14878ccb7fa ("net: sctp: cache auth_enable per endpoint")
    Reported-by: Florian Westphal
    Signed-off-by: Daniel Borkmann
    Acked-by: Neil Horman
    Acked-by: Vlad Yasevich
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Daniel Borkmann
     
  • [ Upstream commit 2cd0d743b05e87445c54ca124a9916f22f16742e ]

    If there is an MSS change (or misbehaving receiver) that causes a SACK
    to arrive that covers the end of an skb but is less than one MSS, then
    tcp_match_skb_to_sack() was rounding up pkt_len to the full length of
    the skb ("Round if necessary..."), then chopping all bytes off the skb
    and creating a zero-byte skb in the write queue.

    This was visible now because the recently simplified TLP logic in
    bef1909ee3ed1c ("tcp: fixing TLP's FIN recovery") could find that 0-byte
    skb at the end of the write queue, and now that we do not check that
    skb's length we could send it as a TLP probe.

    Consider the following example scenario:

    mss: 1000
    skb: seq: 0 end_seq: 4000 len: 4000
    SACK: start_seq: 3999 end_seq: 4000

    The tcp_match_skb_to_sack() code will compute:

    in_sack = false
    pkt_len = start_seq - TCP_SKB_CB(skb)->seq = 3999 - 0 = 3999
    new_len = (pkt_len / mss) * mss = (3999/1000)*1000 = 3000
    new_len += mss = 4000

    Previously we would find the new_len > skb->len check failing, so we
    would fall through and set pkt_len = new_len = 4000 and chop off
    pkt_len of 4000 from the 4000-byte skb, leaving a 0-byte segment
    afterward in the write queue.

    With this new commit, we notice that the new new_len >= skb->len check
    succeeds, so that we return without trying to fragment.

    Fixes: adb92db857ee ("tcp: Make SACK code to split only at mss boundaries")
    Reported-by: Eric Dumazet
    Signed-off-by: Neal Cardwell
    Cc: Eric Dumazet
    Cc: Yuchung Cheng
    Cc: Ilpo Jarvinen
    Acked-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Neal Cardwell
     
  • [ Upstream commit ff5e92c1affe7166b3f6e7073e648ed65a6e2e59 ]

    sysctl handler proc_sctp_do_hmac_alg(), proc_sctp_do_rto_min() and
    proc_sctp_do_rto_max() do not properly reflect some error cases
    when writing values via sysctl from internal proc functions such
    as proc_dointvec() and proc_dostring().

    In all these cases we pass the write != 0 test and partially do
    additional work, just to notice that additional sanity checks
    fail and we return a hard-coded -EINVAL, while the proc_do*
    functions might also return different errors. So fix this up by
    simply testing for a successful return of proc_do* right after
    calling it.

    This also allows us to propagate its return value onwards to the
    user. While touching this, also fix up some minor style issues.

    Fixes: 4f3fdf3bc59c ("sctp: add check rto_min and rto_max in sysctl")
    Fixes: 3c68198e7511 ("sctp: Make hmac algorithm selection for cookie generation dynamic")
    Signed-off-by: Daniel Borkmann
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Daniel Borkmann
     
  • [ Upstream commit e0056593b61253f1a8a9941dacda22e73b963cdc ]

    This patch fixes 3 similar bugs where incoming packets might be routed into
    wrong non-wildcard tunnels:

    1) Consider the following setup:
    ip address add 1.1.1.1/24 dev eth0
    ip address add 1.1.1.2/24 dev eth0
    ip tunnel add ipip1 remote 2.2.2.2 local 1.1.1.1 mode ipip dev eth0
    ip link set ipip1 up

    Incoming ipip packets from 2.2.2.2 were routed into ipip1 even if it has dst =
    1.1.1.2. Moreover even if there was wildcard tunnel like
    ip tunnel add ipip0 remote 2.2.2.2 local any mode ipip dev eth0
    but it was created before explicit one (with local 1.1.1.1), incoming ipip
    packets with src = 2.2.2.2 and dst = 1.1.1.2 were still routed into ipip1.

    Same issue existed with all tunnels that use ip_tunnel_lookup (gre, vti)

    2) ip address add 1.1.1.1/24 dev eth0
    ip tunnel add ipip1 remote 2.2.146.85 local 1.1.1.1 mode ipip dev eth0
    ip link set ipip1 up

    Incoming ipip packets with dst = 1.1.1.1 were routed into ipip1, no matter what
    src address is. Any remote ip address which has ip_tunnel_hash = 0 raised this
    issue, 2.2.146.85 is just an example, there are more than 4 million of them.
    And again, wildcard tunnel like
    ip tunnel add ipip0 remote any local 1.1.1.1 mode ipip dev eth0
    wouldn't be ever matched if it was created before explicit tunnel like above.

    Gre & vti tunnels had the same issue.

    3) ip address add 1.1.1.1/24 dev eth0
    ip tunnel add gre1 remote 2.2.146.84 local 1.1.1.1 key 1 mode gre dev eth0
    ip link set gre1 up

    Any incoming gre packet with key = 1 were routed into gre1, no matter what
    src/dst addresses are. Any remote ip address which has ip_tunnel_hash = 0 raised
    the issue, 2.2.146.84 is just an example, there are more than 4 million of them.
    Wildcard tunnel like
    ip tunnel add gre2 remote any local any key 1 mode gre dev eth0
    wouldn't be ever matched if it was created before explicit tunnel like above.

    All this happened because, while looking for a wildcard tunnel, we
    didn't check that the matched tunnel was actually a wildcard one.
    Fixed.

    Signed-off-by: Dmitry Popov
    Signed-off-by: David S. Miller
    Signed-off-by: Jiri Slaby

    Dmitry Popov