27 Aug, 2019

1 commit

  • Use the previously-added transmit-phase skbuff private flag to simplify the
    socket buffer tracing a bit. Which phase the skbuff comes from can now be
    divined from the skb rather than having to be guessed from the call state.

    We can also reduce the number of rxrpc_skb_trace values by eliminating the
    difference between Tx and Rx in the symbols.
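
    A minimal standalone sketch of the idea, with made-up names (skb_info,
    SKB_FLAG_TX_PHASE and skb_phase_name() are stand-ins, not the actual
    rxrpc identifiers): the phase is recorded as a flag in the buffer's
    private state when it is allocated, so the tracing code can read it
    back from the buffer instead of inferring it from the call state.

        #include <stdio.h>

        /* Illustrative stand-in for the sk_buff private state. */
        struct skb_info {
            unsigned int flags;
        };

        /* Set when the buffer was allocated in the transmission phase. */
        #define SKB_FLAG_TX_PHASE 0x1

        /* Pick the trace symbol from the buffer itself, not the call state. */
        static const char *skb_phase_name(const struct skb_info *skb)
        {
            return (skb->flags & SKB_FLAG_TX_PHASE) ? "Tx" : "Rx";
        }

        int main(void)
        {
            struct skb_info tx = { .flags = SKB_FLAG_TX_PHASE };
            struct skb_info rx = { .flags = 0 };

            printf("traced %s skb\n", skb_phase_name(&tx));
            printf("traced %s skb\n", skb_phase_name(&rx));
            return 0;
        }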

    Signed-off-by: David Howells


31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    Extracted by the scancode license scanner, the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman


04 Oct, 2018

1 commit


01 Aug, 2018

1 commit

  • Trace successful packet transmission (kernel_sendmsg() succeeded, that is)
    in AF_RXRPC. We can share the enum that defines the transmission points
    with the trace_rxrpc_tx_fail() tracepoint, so rename its constants to be
    applicable to both.

    Also, save the internal call->debug_id in the rxrpc_channel struct so that
    it can be used in retransmission trace lines.
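
    A minimal standalone sketch of the shared-enum idea (tx_point,
    tx_point_name() and trace_tx() are made-up stand-ins, not the real
    rxrpc enum or tracepoints): the same constant names the transmission
    point whether the send succeeded or failed, and the call's debug_id is
    carried alongside it so retransmissions can be tied back to the call.

        #include <stdio.h>

        /* Illustrative shared set of transmission points; both the success
         * and the failure trace take the same constant. */
        enum tx_point {
            tx_point_conn_abort,
            tx_point_call_data,
            tx_point_call_final_resend,
        };

        static const char *tx_point_name(enum tx_point where)
        {
            switch (where) {
            case tx_point_conn_abort:   return "conn-abort";
            case tx_point_call_data:    return "call-data";
            default:                    return "call-final-resend";
            }
        }

        static void trace_tx(unsigned int debug_id, enum tx_point where, int ret)
        {
            if (ret < 0)
                printf("call=%u tx-fail at %s (err %d)\n",
                       debug_id, tx_point_name(where), ret);
            else
                printf("call=%u tx-ok   at %s\n", debug_id, tx_point_name(where));
        }

        int main(void)
        {
            trace_tx(1, tx_point_call_data, 0);     /* kernel_sendmsg() succeeded */
            trace_tx(1, tx_point_call_data, -11);   /* same constant on failure */
            return 0;
        }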

    Signed-off-by: David Howells


11 May, 2018

1 commit


29 Aug, 2017

1 commit

  • Fix IPv6 support in AF_RXRPC in the following ways:

    (1) When extracting the address from a received IPv4 packet, if the local
    transport socket is open for IPv6 then fill out the sockaddr_rxrpc
    struct for an IPv4-mapped-to-IPv6 AF_INET6 transport address instead
    of an AF_INET one.

    (2) When sending CHALLENGE or RESPONSE packets, the transport length needs
    to be set from the sockaddr_rxrpc::transport_len field rather than
    sizeof() on the IPv4 transport address.

    (3) When processing an IPv4 ICMP packet received by an IPv6 socket, set up
    the address correctly before searching for the affected peer.
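
    Point (1) can be illustrated with a minimal standalone sketch
    (fill_mapped() and the address and port values are made up for the
    example): an IPv4 peer is presented through an AF_INET6 sockaddr as an
    IPv4-mapped address, ::ffff:a.b.c.d, when the local transport socket
    is open for IPv6.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Build an AF_INET6 transport address carrying an IPv4-mapped
         * address rather than an AF_INET one. */
        static void fill_mapped(struct sockaddr_in6 *sin6,
                                const struct in_addr *v4, uint16_t port)
        {
            memset(sin6, 0, sizeof(*sin6));
            sin6->sin6_family = AF_INET6;
            sin6->sin6_port = htons(port);
            sin6->sin6_addr.s6_addr[10] = 0xff;
            sin6->sin6_addr.s6_addr[11] = 0xff;
            memcpy(&sin6->sin6_addr.s6_addr[12], &v4->s_addr, 4);
        }

        int main(void)
        {
            struct in_addr v4;
            struct sockaddr_in6 sin6;
            char buf[INET6_ADDRSTRLEN];

            inet_pton(AF_INET, "192.0.2.1", &v4);
            fill_mapped(&sin6, &v4, 7001);
            inet_ntop(AF_INET6, &sin6.sin6_addr, buf, sizeof(buf));
            printf("transport address: [%s]:%u\n", buf, ntohs(sin6.sin6_port));
            return 0;
        }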

    Signed-off-by: David Howells


30 Sep, 2016

1 commit


17 Sep, 2016

1 commit

  • Improve sk_buff tracing within AF_RXRPC by the following means:

    (1) Use an enum to note the event type rather than plain integers and use
    an array of event names rather than a long chain of ?: expressions.

    (2) Distinguish Rx from Tx packets and account them separately. This
    requires the call phase to be tracked so that we know what we might
    find in rxtx_buffer[].

    (3) Add a parameter to rxrpc_{new,see,get,free}_skb() to indicate the
    event type.

    (4) A pair of 'rotate' events are added to indicate packets that are about
    to be rotated out of the Rx and Tx windows.

    (5) A pair of 'lost' events are added, along with rxrpc_lose_skb() for
    packet loss injection recording.
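
    Point (1) can be sketched in a few lines of standalone C (the event
    names and the trace_skb() helper are illustrative, not the real
    rxrpc_skb_trace set): each event gets an enum value, and a parallel
    array maps it to a short name in place of a long chain of ?:
    expressions.

        #include <stdio.h>

        /* Illustrative trace events; a designated-initializer array keeps
         * the names in step with the enum. */
        enum skb_trace {
            skb_trace_new,
            skb_trace_seen,
            skb_trace_rotated,
            skb_trace_lost,
            skb_trace_freed,
            nr__skb_trace
        };

        static const char *const skb_trace_names[nr__skb_trace] = {
            [skb_trace_new]     = "NEW",
            [skb_trace_seen]    = "SEE",
            [skb_trace_rotated] = "ROT",
            [skb_trace_lost]    = "LOS",
            [skb_trace_freed]   = "FRE",
        };

        static void trace_skb(void *skb, enum skb_trace why, int usage)
        {
            printf("skb=%p %s usage=%d\n", skb, skb_trace_names[why], usage);
        }

        int main(void)
        {
            int dummy;

            trace_skb(&dummy, skb_trace_new, 1);
            trace_skb(&dummy, skb_trace_freed, 0);
            return 0;
        }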

    Signed-off-by: David Howells


14 Sep, 2016

1 commit


08 Sep, 2016

1 commit

  • Rewrite the data and ack handling code such that:

    (1) Parsing of received ACK and ABORT packets and the distribution and the
    filing of DATA packets happens entirely within the data_ready context
    called from the UDP socket. This allows us to process and discard ACK
    and ABORT packets much more quickly (they're no longer stashed on a
    queue for a background thread to process).

    (2) We avoid calling skb_clone(), pskb_pull() and pskb_trim(). We instead
    keep track of the offset and length of the content of each packet in
    the sk_buff metadata. This means we don't do any allocation in the
    receive path.

    (3) Jumbo DATA packet parsing is now done in data_ready context. Rather
    than cloning the packet once for each subpacket and pulling/trimming
    it, we file the packet multiple times with an annotation for each
    indicating which subpacket is there. From that we can directly
    calculate the offset and length.

    (4) A call's receive queue can be accessed without taking locks (memory
    barriers do have to be used, though).

    (5) Incoming calls are set up from preallocated resources and immediately
    made live. They can then have packets queued upon them and ACKs
    generated. If insufficient resources exist, DATA packet #1 is given a
    BUSY reply and other DATA packets are discarded.

    (6) sk_buffs no longer take a ref on their parent call.
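
    Points (2) and (3) above can be sketched as standalone C (the
    structure and the jumbo sizes are illustrative, not the actual rxrpc
    metadata layout): the offset and length of the payload live in
    per-packet metadata, and a jumbo subpacket's offset and length are
    computed from its index rather than by cloning and trimming the
    buffer.

        #include <stdio.h>

        #define JUMBO_SUBPKT_LEN 1412   /* illustrative subpacket payload size */
        #define JUMBO_HDR_LEN    4      /* illustrative per-subpacket header size */

        /* Stand-in for the offset/length kept in sk_buff metadata. */
        struct pkt_meta {
            unsigned int offset;        /* where the payload starts */
            unsigned int len;           /* how much payload there is */
        };

        /* Derive the offset and length of subpacket 'ix' of a jumbo packet
         * from the annotation alone: no cloning, pulling or trimming. */
        static void locate_subpacket(const struct pkt_meta *whole, unsigned int ix,
                                     unsigned int nr_subpackets, struct pkt_meta *sub)
        {
            sub->offset = whole->offset + ix * (JUMBO_SUBPKT_LEN + JUMBO_HDR_LEN);
            if (ix == nr_subpackets - 1)
                sub->len = whole->offset + whole->len - sub->offset;
            else
                sub->len = JUMBO_SUBPKT_LEN;
        }

        int main(void)
        {
            struct pkt_meta whole = { .offset = 28, .len = 4244 }, sub;
            unsigned int ix;

            for (ix = 0; ix < 3; ix++) {
                locate_subpacket(&whole, ix, 3, &sub);
                printf("subpacket %u: offset=%u len=%u\n", ix, sub.offset, sub.len);
            }
            return 0;
        }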

    To make this work, the following changes are made:

    (1) Each call's receive buffer is now a circular buffer of sk_buff
    pointers (rxtx_buffer) rather than a number of sk_buff_heads spread
    between the call and the socket. This permits each sk_buff to be in
    the buffer multiple times. The receive buffer is reused for the
    transmit buffer.

    (2) A circular buffer of annotations (rxtx_annotations) is kept parallel
    to the data buffer. Transmission phase annotations indicate whether a
    buffered packet has been ACK'd or not and whether it needs
    retransmission.

    Receive phase annotations indicate whether a slot holds a whole packet
    or a jumbo subpacket and, if the latter, which subpacket. They also
    note whether the packet has been decrypted in place.

    (3) DATA packet window tracking is much simplified. Each phase has just
    two numbers representing the window (rx_hard_ack/rx_top and
    tx_hard_ack/tx_top).

    The hard_ack number is the sequence number before the base of the window,
    representing the last packet the other side says it has consumed.
    hard_ack starts from 0 and the first packet is sequence number 1.

    The top number is the sequence number of the highest-numbered packet
    residing in the buffer. Packets between hard_ack+1 and top are
    soft-ACK'd to indicate they've been received, but not yet consumed.

    Four macros, before(), before_eq(), after() and after_eq(), are added
    to compare sequence numbers within the window. This allows for the
    top of the window to wrap when the hard-ack sequence number gets close
    to the limit.

    Two flags, RXRPC_CALL_RX_LAST and RXRPC_CALL_TX_LAST, are added also
    to indicate when rx_top and tx_top point at the packets with the
    LAST_PACKET bit set, indicating the end of the phase.

    (4) Calls are queued on the socket 'receive queue' rather than packets.
    This means that we don't have to invent dummy packets to queue to
    indicate abnormal/terminal states and we don't have to keep metadata
    packets (such as ABORTs) around.

    (5) The offset and length of a (sub)packet's content are now passed to
    the verify_packet security op. This is currently expected to decrypt
    the packet in place and validate it.

    However, there's now nowhere to store the revised offset and length of
    the actual data within the decrypted blob (there may be a header and
    padding to skip) because an sk_buff may represent multiple packets, so
    a locate_data security op is added to retrieve these details from the
    sk_buff content when needed.

    (6) recvmsg() now has to handle jumbo subpackets, where each subpacket is
    individually secured and needs to be individually decrypted. The code
    to do this is broken out into rxrpc_recvmsg_data() and shared with the
    kernel API. It now iterates over the call's receive buffer rather
    than walking the socket receive queue.
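
    Points (1) to (3) of the list above can be sketched as standalone C
    (the ring size, types and helper names are illustrative): packets and
    their annotations live in parallel rings indexed by the sequence
    number masked to the ring size, the window is described by hard_ack
    and top alone, and wrap-safe comparisons in the style of before() and
    after() use signed subtraction.

        #include <stdint.h>
        #include <stdio.h>

        #define RING_SIZE 64U           /* illustrative ring size (power of two) */
        #define RING_MASK (RING_SIZE - 1)

        typedef uint32_t seq_t;

        /* Wrap-safe sequence comparisons in the style of before()/after(). */
        static int seq_before(seq_t a, seq_t b) { return (int32_t)(a - b) < 0; }
        static int seq_after(seq_t a, seq_t b)  { return (int32_t)(a - b) > 0; }

        /* Parallel rings of packet pointers and annotations, plus the two
         * numbers that describe the window. */
        struct call_window {
            void    *rxtx_buffer[RING_SIZE];
            uint8_t  rxtx_annotations[RING_SIZE];
            seq_t    hard_ack;          /* last packet the peer has consumed */
            seq_t    top;               /* highest-numbered packet in the ring */
        };

        /* File a packet into the slot its sequence number maps to. */
        static void file_packet(struct call_window *w, seq_t seq,
                                void *pkt, uint8_t annotation)
        {
            unsigned int ix = seq & RING_MASK;

            w->rxtx_buffer[ix] = pkt;
            w->rxtx_annotations[ix] = annotation;
            if (seq_after(seq, w->top))
                w->top = seq;
        }

        int main(void)
        {
            struct call_window w = { .hard_ack = 0, .top = 0 };
            int pkt1, pkt2;

            file_packet(&w, 1, &pkt1, 0);  /* hard_ack starts at 0; first seq is 1 */
            file_packet(&w, 2, &pkt2, 0);
            if (seq_before(w.hard_ack, w.top))
                printf("seqs %u..%u received but not yet consumed\n",
                       (unsigned int)w.hard_ack + 1, (unsigned int)w.top);
            return 0;
        }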

    Additional changes:

    (1) The timers are condensed to a single timer that is set for the soonest
    of three timeouts (delayed ACK generation, DATA retransmission and
    call lifespan).

    (2) Transmission of ACK and ABORT packets is effected immediately from
    process-context socket ops/kernel API calls that cause them instead of
    them being punted off to a background work item. The data_ready
    handler still has to defer to the background, though.

    (3) A shutdown op is added to the AF_RXRPC socket so that the AFS
    filesystem can shut down the socket and flush its own work items
    before closing the socket to deal with any in-progress service calls.
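
    Additional change (1) can be sketched as standalone C (the field names
    and tick values are illustrative, and tick wrap is ignored for
    brevity): one expiry time is kept per source and the single call timer
    is armed for whichever is soonest.

        #include <stdio.h>

        /* Illustrative expiry times, in tick units, for the three things
         * the single call timer has to cover. */
        struct call_timeouts {
            unsigned long ack_at;       /* delayed ACK generation */
            unsigned long resend_at;    /* DATA retransmission */
            unsigned long expire_at;    /* call lifespan */
        };

        /* Pick the soonest of the three timeouts to arm the timer with. */
        static unsigned long next_timer_expiry(const struct call_timeouts *t)
        {
            unsigned long soonest = t->ack_at;

            if (t->resend_at < soonest)
                soonest = t->resend_at;
            if (t->expire_at < soonest)
                soonest = t->expire_at;
            return soonest;
        }

        int main(void)
        {
            struct call_timeouts t = {
                .ack_at = 250, .resend_at = 120, .expire_at = 30000,
            };

            printf("arm the call timer for tick %lu\n", next_timer_expiry(&t));
            return 0;
        }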

    Future additional changes that will need to be considered:

    (1) Make sure that a call doesn't hog the front of the queue by receiving
    data from the network as fast as userspace is consuming it to the
    exclusion of other calls.

    (2) Transmit delayed ACKs from within recvmsg() when we've consumed
    sufficiently more packets to avoid the background work item needing to
    run.

    Signed-off-by: David Howells


23 Aug, 2016

1 commit


15 Jun, 2016

2 commits

  • Rework the local RxRPC endpoint management.

    Local endpoint objects are maintained in a flat list as before. This
    should be okay as there shouldn't be more than one per open AF_RXRPC socket
    (there can be fewer as local endpoints can be shared if their local service
    ID is 0 and they share the same local transport parameters).

    Changes:

    (1) Local endpoints may now only be shared if they have local service ID 0
    (ie. they're not being used for listening).

    This prevents a scenario where process A is listening on the Cache
    Manager port and process B contacts a fileserver - which may then
    attempt to send CM requests back to B. But if A and B are sharing a
    local endpoint, A will get the CM requests meant for B.

    (2) We use a mutex to handle lookups and don't provide RCU-only lookups
    since we only expect to access the list when opening a socket or
    destroying an endpoint.

    The local endpoint object is pointed to by the transport socket's
    sk_user_data for the life of the transport socket - allowing us to
    refer to it directly from the sk_data_ready and sk_error_report
    callbacks.

    (3) atomic_inc_not_zero() now exists and can be used to only share a local
    endpoint if the last reference hasn't yet gone.

    (4) We can remove rxrpc_local_lock - a spinlock that had to be taken with
    BH processing disabled given that we assume sk_user_data won't change
    under us.

    (5) The transport socket is shut down before we clear the sk_user_data
    pointer so that we can be sure that the transport socket's callbacks
    won't be invoked once the RCU destruction is scheduled.

    (6) Local endpoints have a work item that handles both destruction and
    event processing. This means that destruction doesn't then need to
    wait for event processing. The event queues can then be cleared after
    the transport socket is shut down.

    (7) Local endpoints are no longer available for resurrection beyond the
    life of the sockets that had them open. As soon as their last ref
    goes, they are scheduled for destruction and may not have their usage
    count moved from 0.
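
    Points (2) and (3) can be sketched as standalone C using POSIX threads
    and C11 atomics (the structure and helpers are illustrative, not the
    rxrpc ones): lookups walk the list under a mutex, and an endpoint is
    reused only if an "increment unless zero" on its usage count succeeds,
    so an endpoint whose last reference has gone cannot be resurrected.

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Illustrative local endpoint whose usage count may already have
         * dropped to zero (the object is on its way to destruction). */
        struct local_endpoint {
            atomic_int             usage;
            unsigned short         port;
            struct local_endpoint *next;
        };

        static pthread_mutex_t local_mutex = PTHREAD_MUTEX_INITIALIZER;
        static struct local_endpoint *local_endpoints;

        /* Take a reference only if the count hasn't already reached zero,
         * in the style of atomic_inc_not_zero(). */
        static bool get_not_zero(atomic_int *usage)
        {
            int old = atomic_load(usage);

            while (old != 0)
                if (atomic_compare_exchange_weak(usage, &old, old + 1))
                    return true;
            return false;
        }

        /* Look up a shareable endpoint under the mutex, skipping any whose
         * last reference has already gone. */
        static struct local_endpoint *lookup_local(unsigned short port)
        {
            struct local_endpoint *local;

            pthread_mutex_lock(&local_mutex);
            for (local = local_endpoints; local; local = local->next)
                if (local->port == port && get_not_zero(&local->usage))
                    break;
            pthread_mutex_unlock(&local_mutex);
            return local;
        }

        int main(void)
        {
            struct local_endpoint live = { .port = 7000, .next = NULL };
            struct local_endpoint dead = { .port = 7000, .next = &live };

            atomic_init(&dead.usage, 0);    /* last ref gone: not reusable */
            atomic_init(&live.usage, 1);
            local_endpoints = &dead;        /* dead one gets skipped */

            printf("lookup found the %s endpoint\n",
                   lookup_local(7000) == &live ? "live" : "wrong");
            return 0;
        }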

    Signed-off-by: David Howells

  • Separate local endpoint event handling out into its own file preparatory to
    overhauling the object management aspect (which remains in the original
    file).

    Signed-off-by: David Howells
