30 Dec, 2020

1 commit

  • [ Upstream commit e4c72201b6ec3173dfe13fa2e2335a3ad78d4921 ]

    Currently, we wake up tasks in strict priority-queue order, which means
    that we ignore the batching that is supposed to help with QoS issues
    (see the sketch below).

    Fixes: c049f8ea9a0d ("SUNRPC: Remove the bh-safe lock requirement on the rpc_wait_queue->lock")
    Signed-off-by: Trond Myklebust
    Signed-off-by: Sasha Levin

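    The batching being referred to works roughly as below. This is a
    simplified sketch of the dequeue logic (the real code is
    __rpc_find_next_queued_priority() in net/sunrpc/sched.c, which also
    gives the queue owner preferential treatment, omitted here); field
    names follow struct rpc_wait_queue loosely:

        static struct rpc_task *find_next_queued(struct rpc_wait_queue *queue)
        {
                struct list_head *q = &queue->tasks[queue->priority];
                struct rpc_task *task;

                /* Keep serving the current priority level while the batch
                 * allowance lasts, so related wakeups happen back to back. */
                if (!list_empty(q) && queue->nr) {
                        queue->nr--;
                        return list_first_entry(q, struct rpc_task,
                                                u.tk_wait.list);
                }

                /* Batch exhausted or level drained: rescan the levels in
                 * strict priority order... */
                do {
                        if (q == &queue->tasks[0])
                                q = &queue->tasks[queue->maxpriority];
                        else
                                q = q - 1;
                        if (!list_empty(q)) {
                                task = list_first_entry(q, struct rpc_task,
                                                        u.tk_wait.list);
                                goto new_queue;
                        }
                } while (q != &queue->tasks[queue->priority]);
                return NULL;

        new_queue:
                /* ...and start a fresh batch at the level we landed on. */
                queue->priority = q - &queue->tasks[0];
                queue->nr = RPC_BATCH_COUNT;
                return task;
        }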

21 Sep, 2020

5 commits


05 Apr, 2020

1 commit


16 Mar, 2020

1 commit


15 Jan, 2020

1 commit


23 Nov, 2019

1 commit

  • RPC tasks on the backchannel never invoke xprt_complete_rqst(), so
    there is no way to report their tk_status at completion. In addition,
    any RPC task that exits via rpc_exit_task() before it is replied to
    disappears without a trace.

    Introduce a trace point, symmetrical with rpc_task_begin, that
    captures the termination status of each RPC task (a sketch follows
    below).

    Sample trace output for callback requests initiated on the server:
    kworker/u8:12-448 [003] 127.025240: rpc_task_end: task:50@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
    kworker/u8:12-448 [002] 127.567310: rpc_task_end: task:51@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
    kworker/u8:12-448 [001] 130.506817: rpc_task_end: task:52@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task

    Odd, though, that I never see trace_rpc_task_complete, in either the
    forward channel or the backchannel. Should it be removed?

    Signed-off-by: Chuck Lever
    Signed-off-by: Trond Myklebust

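    For reference, a trace event symmetrical with rpc_task_begin can be
    shaped as below. This is a sketch, not the kernel's definition (which
    lives in include/trace/events/sunrpc.h), and it prints flags and
    runstate as raw hex where the real event decodes them symbolically,
    as in the sample above:

        TRACE_EVENT(rpc_task_end,
                TP_PROTO(const struct rpc_task *task, const void *action),
                TP_ARGS(task, action),
                TP_STRUCT__entry(
                        __field(unsigned int, task_id)
                        __field(unsigned int, client_id)
                        __field(unsigned long, runstate)
                        __field(unsigned long, flags)
                        __field(int, status)
                        __field(const void *, action)
                ),
                TP_fast_assign(
                        __entry->task_id = task->tk_pid;
                        __entry->client_id = task->tk_client->cl_clid;
                        __entry->runstate = task->tk_runstate;
                        __entry->flags = task->tk_flags;
                        __entry->status = task->tk_status;
                        __entry->action = action;
                ),
                TP_printk("task:%u@%u flags=0x%lx runstate=0x%lx status=%d action=%p",
                        __entry->task_id, __entry->client_id,
                        __entry->flags, __entry->runstate,
                        __entry->status, __entry->action)
        );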

06 Nov, 2019

1 commit

  • Jon Hunter: "I have been tracking down another suspend/NFS-related
    issue where again I am seeing random delays exiting suspend. The delays
    can be up to a couple of minutes in the worst case, and this is causing
    a suspend test we have to fail."

    Change the deferrable work item to a standard delayed work item (see
    the sketch below).

    Reported-by: Jon Hunter
    Tested-by: Jon Hunter
    Fixes: 7e0a0e38fcfea ("SUNRPC: Replace the queue timer with a delayed work function")
    Signed-off-by: Trond Myklebust

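    The distinction matters on an idle system: the timer behind a
    deferrable work item is allowed to wait until its CPU wakes up for
    some other reason, so it can fire far later than requested, which is
    exactly the resume delay reported above. A minimal sketch of the two
    initializers (queue_timeout and queue_timeout_fn are illustrative
    names, not the scheduler's own):

        #include <linux/workqueue.h>

        static struct delayed_work queue_timeout;

        static void queue_timeout_fn(struct work_struct *work)
        {
                /* process expired queue entries */
        }

        static void arm_queue_timeout(void)
        {
                /* Before the fix (deferrable; an idle CPU may postpone it):
                 *   INIT_DEFERRABLE_WORK(&queue_timeout, queue_timeout_fn);
                 */

                /* After the fix: a standard delayed work item, whose timer
                 * fires on schedule even if the CPU must be woken. */
                INIT_DELAYED_WORK(&queue_timeout, queue_timeout_fn);

                schedule_delayed_work(&queue_timeout, 5 * HZ);
        }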

18 Sep, 2019

1 commit


20 Aug, 2019

1 commit


13 Jul, 2019

2 commits


09 Jul, 2019

1 commit

  • Adapt and apply changes that were made to the TCP socket connect
    code. See the following commits for details on the purpose of
    these changes:

    Commit 7196dbb02ea0 ("SUNRPC: Allow changing of the TCP timeout parameters on the fly")
    Commit 3851f1cdb2b8 ("SUNRPC: Limit the reconnect backoff timer to the max RPC message timeout")
    Commit 02910177aede ("SUNRPC: Fix reconnection timeouts")

    Some common transport code is moved to xprt.c to satisfy the code
    duplication police. The backoff clamp these commits impose is
    sketched below.

    Signed-off-by: Chuck Lever
    Signed-off-by: Anna Schumaker

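    Of the carried-over changes, the backoff limit is the easiest to show
    in miniature. A sketch assuming fields along the lines the commits
    above introduce (reestablish_timeout, max_reconnect_timeout); this is
    not the literal transport code:

        /* Double the reconnect delay after each failed attempt, but
         * clamp it to the maximum RPC message timeout. */
        static unsigned long next_reestablish_timeout(struct rpc_xprt *xprt)
        {
                unsigned long timeout = xprt->reestablish_timeout << 1;

                if (timeout > xprt->max_reconnect_timeout)
                        timeout = xprt->max_reconnect_timeout;
                return timeout;
        }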

07 Jul, 2019

4 commits


22 Jun, 2019

1 commit

  • Jon Hunter reports:
    "I have been noticing intermittent failures with a system suspend test on
    some of our machines that have an NFS-mounted root file-system. Bisecting
    this issue points to your commit 431235818bc3 ("SUNRPC: Declare RPC
    timers as TIMER_DEFERRABLE") and reverting this on top of v5.2-rc3 does
    appear to resolve the problem.

    The cause of the suspend failure appears to be a long delay observed
    sometimes when resuming from suspend, and this is causing our test to
    timeout."

    This reverts commit 431235818bc3a919ca7487500c67c3144feece80.

    Reported-by: Jon Hunter
    Signed-off-by: Anna Schumaker

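    For context, this is roughly what the reverted commit had changed,
    sketched with loosely named scheduler types (queue->timer_list.timer
    and rpc_queue_timer_fn follow the patch's subject, not its literal
    code). A TIMER_DEFERRABLE timer lets an idle CPU leave it pending
    until the CPU wakes for another reason, which is the resume delay
    reported above:

        #include <linux/timer.h>

        static void rpc_queue_timer_fn(struct timer_list *t);

        static void rpc_init_queue_timer(struct rpc_wait_queue *queue)
        {
                /* Reverted: with TIMER_DEFERRABLE, expiry may be deferred
                 * while the CPU idles.
                 *
                 *   timer_setup(&queue->timer_list.timer,
                 *               rpc_queue_timer_fn, TIMER_DEFERRABLE);
                 */

                /* Restored: a plain timer that fires at the scheduled time. */
                timer_setup(&queue->timer_list.timer,
                            rpc_queue_timer_fn, 0);
        }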

21 May, 2019

1 commit

  • Add SPDX license identifiers to all files which:

    - Have no license information of any form

    - Have EXPORT_.*_SYMBOL_GPL inside, which was used in the
    initial scan/conversion to ignore the file

    These files fall under the project license, GPL v2 only. The resulting SPDX
    license identifier is:

    GPL-2.0-only

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

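    In practice the identifier is a single comment on the first line of
    each file, written in that file's comment style; for a C source file
    under this license:

        // SPDX-License-Identifier: GPL-2.0-only

    Headers conventionally carry the same identifier in /* ... */ comment
    form instead.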

26 Apr, 2019

7 commits


10 Mar, 2019

1 commit


03 Mar, 2019

1 commit


21 Feb, 2019

1 commit


20 Dec, 2018

2 commits

  • SUNRPC has two sorts of credentials, both of which appear as
    "struct rpc_cred".
    There are "generic credentials" which are supplied by clients
    such as NFS and passed in 'struct rpc_message' to indicate
    which user should be used to authorize the request, and there
    are low-level credentials such as AUTH_NULL, AUTH_UNIX, and AUTH_GSS,
    which describe the credential to be sent over the wire.

    This patch replaces all the generic credentials with 'struct cred'
    pointers - the credential structure used throughout Linux (the shape
    of the change is sketched below).

    For machine credentials, there is a special 'struct cred *' pointer
    which is statically allocated and recognized where needed as
    having a special meaning. A look-up of a low-level cred will
    map this to a machine credential.

    Signed-off-by: NeilBrown
    Acked-by: J. Bruce Fields
    Signed-off-by: Anna Schumaker

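    The shape of the change, sketched (the rpc_message fields are
    abridged; the "was" comment shows the old generic credential, and
    this is not a literal diff from the patch):

        #include <linux/cred.h>

        struct rpc_procinfo;

        struct rpc_message {
                const struct rpc_procinfo *rpc_proc;  /* procedure to call */
                void *rpc_argp;                       /* call arguments */
                void *rpc_resp;                       /* reply buffer */
                /* was: struct rpc_cred *rpc_cred;  -- generic credential */
                const struct cred *rpc_cred;          /* common kernel cred */
        };
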
  • The credential passed in rpc_message.rpc_cred is always a
    generic credential except in one instance.
    When gss_destroying_context() calls rpc_call_null(), it passes
    a specific credential that it needs to destroy.
    In this case the RPC acts *on* the credential rather than
    being authorized by it.

    This special case deserves explicit support and providing that will
    mean that rpc_message.rpc_cred is *always* generic, allowing
    some optimizations.

    So add "tk_op_cred" to rpc_task and "rpc_op_cred" to the setup data.
    Use this to pass the cred down from rpc_call_null(), and have
    rpcauth_bindcred() notice it and bind it in place (sketched below).

    Credit to the kernel test robot for finding a bug in an earlier
    version of this patch.

    Signed-off-by: NeilBrown
    Signed-off-by: Anna Schumaker

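    A sketch of the plumbing described above. The rpc_op_cred and
    tk_op_cred names come from the patch description; the surrounding
    helper (destroy_cred_call) and the flag choices are illustrative
    only:

        #include <linux/sunrpc/clnt.h>

        static struct rpc_task *destroy_cred_call(struct rpc_clnt *clnt,
                                                  const struct cred *cred)
        {
                struct rpc_task_setup task_setup_data = {
                        .rpc_client   = clnt,
                        .rpc_op_cred  = cred,  /* cred the RPC acts on */
                        .callback_ops = &rpc_default_ops,
                        .flags        = RPC_TASK_SOFT | RPC_TASK_ASYNC,
                };

                /* rpcauth_bindcred() sees task->tk_op_cred and binds this
                 * credential in place of a generic look-up. */
                return rpc_run_task(&task_setup_data);
        }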

01 Oct, 2018

3 commits


11 Apr, 2018

1 commit


09 Feb, 2018

1 commit

  • Hi folks,

    On a multi-core machine, is it expected that we can have parallel RPCs
    handled by each of the per-core workqueues?

    In testing a read workload, I observed via the "top" command that a
    single "kworker" thread was servicing the requests (no parallelism).
    It is more prominent while doing these operations over a krb5p mount.

    What Bruce suggested is to try this, and in my testing I then see the
    read workload spread among all the kworker threads (see the sketch
    below).

    Signed-off-by: Olga Kornievskaia
    Signed-off-by: Trond Myklebust

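    The "this" being tested above is allocating the transport workqueue
    as unbound, so work items are not pinned to the CPU that queued them
    and can spread across cores. A sketch of the idea (the exact flag set
    in the patch may differ):

        #include <linux/workqueue.h>

        static struct workqueue_struct *xprtiod_workqueue;

        static int rpciod_start_sketch(void)
        {
                /* WQ_UNBOUND lets the scheduler run work items on any
                 * CPU rather than only the submitting one. */
                xprtiod_workqueue = alloc_workqueue("xprtiod",
                                        WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
                return xprtiod_workqueue ? 0 : -ENOMEM;
        }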

08 Feb, 2018

1 commit