23 Sep, 2006

3 commits

  • The attached patch makes NFS share superblocks between mounts from the same
    server and FSID over the same protocol.

    It does this by creating each superblock with a false root and returning the
    real root dentry in the vfsmount presented by get_sb(). The root dentry that
    is set starts off as an anonymous dentry if we don't already have a dentry
    for its inode; otherwise the dentry we already have is used.

    We may thus end up with several trees of dentries in the superblock, and if at
    some later point one of the anonymous tree roots is discovered by normal
    filesystem activity to be located in another tree within the superblock, the
    anonymous root is named and materialises attached to the second tree at the
    appropriate point.
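
    As a minimal sketch of this scheme (not the patch text itself), the root
    dentry for a mount might be obtained along these lines, assuming the era's
    d_alloc_anon() helper, which returns the dentry already attached to an
    inode if there is one and otherwise allocates a disconnected, anonymous
    one:

        static struct dentry *nfs_sketch_get_root(struct super_block *sb,
                                                  struct nfs_fh *mntfh,
                                                  struct nfs_fattr *fattr)
        {
                struct inode *inode;
                struct dentry *mntroot;

                /* Set up (or find) the inode for the filehandle. */
                inode = nfs_fhget(sb, mntfh, fattr);
                if (IS_ERR(inode))
                        return ERR_PTR(PTR_ERR(inode));

                /* Reuse an existing dentry for this inode, or create an
                 * anonymous one that can be named later, once its true
                 * parent directory is discovered. */
                mntroot = d_alloc_anon(inode);
                if (!mntroot) {
                        iput(inode);
                        return ERR_PTR(-ENOMEM);
                }
                return mntroot;
        }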

    Why do it this way? Why not pass an extra argument to the mount() syscall to
    indicate the subpath and then pathwalk from the server root to the desired
    directory? You can't guarantee this will work for two reasons:

    (1) The root and intervening nodes may not be accessible to the client.

    With NFS2 and NFS3, for instance, mountd is called on the server to get
    the filehandle for the tip of a path. mountd won't give us handles for
    anything we don't have permission to access, and so we can't set up NFS
    inodes for such nodes, and so can't easily set up dentries (we'd have to
    have ghost inodes or something).

    With this patch we don't actually create dentries until we get handles
    from the server that we can use to set up their inodes, and we don't
    actually bind them into the tree until we know for sure where they go.

    (2) Inaccessible symbolic links.

    If we're asked to mount two exports from the server, eg:

    mount warthog:/warthog/aaa/xxx /mmm
    mount warthog:/warthog/bbb/yyy /nnn

    We may not be able to access anything nearer the root than xxx and yyy,
    but we may find out later that /mmm/www/yyy, say, is actually the same
    directory as the one mounted on /nnn. What we might then find out, for
    example, is that /warthog/bbb was actually a symbolic link to
    /warthog/aaa/xxx/www, but we can't actually determine that by talking to
    the server until /warthog is made available by NFS.

    This would lead to having constructed an erroneous dentry tree which we
    can't easily fix. We can end up with a dentry marked as a directory when
    it should actually be a symlink, or we could end up with an apparently
    hardlinked directory.

    With this patch we need not make assumptions about the type of a dentry
    for which we can't retrieve information, nor need we assume we know its
    place in the grand scheme of things until we actually see that place.

    This patch reduces the possibility of aliasing in the inode and page caches for
    inodes that may be accessed by more than one NFS export. It also reduces the
    number of superblocks required for NFS where there are many NFS exports being
    used from a server (home directory server + autofs for example).

    This in turn makes it simpler to do local caching of network filesystems, as it
    can then be guaranteed that there won't be links from multiple inodes in
    separate superblocks to the same cache file.

    Obviously, cache aliasing between different levels of NFS protocol could still
    be a problem, but at least that gives us another key to use when indexing the
    cache.

    This patch makes the following changes:

    (1) The server record construction/destruction has been abstracted out into
    its own set of functions to make things easier to get right. These have
    been moved into fs/nfs/client.c.

    All the code in fs/nfs/client.c has to do with the management of
    connections to servers, and doesn't touch superblocks in any way; the
    remaining code in fs/nfs/super.c has to do with VFS superblock management.

    (2) The sequence of events undertaken by NFS mount is now reordered:

    (a) A volume representation (struct nfs_server) is allocated.

    (b) A server representation (struct nfs_client) is acquired. This may be
    allocated or shared, and is keyed on server address, port and NFS
    version.

    (c) If allocated, the client representation is initialised. The state
    member variable of nfs_client is used to prevent a race during
    initialisation from two mounts.

    (d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find
    the root filehandle for the mount (fs/nfs/getroot.c). For NFS2/3 we
    are given the root FH in advance.

    (e) The volume FSID is probed for on the root FH.

    (f) The volume representation is initialised from the FSINFO record
    retrieved on the root FH.

    (g) sget() is called to acquire a superblock. This may be allocated or
    shared, keyed on client pointer and FSID (see the sketch after this
    list).

    (h) If allocated, the superblock is initialised.

    (i) If the superblock is shared, then the new nfs_server record is
    discarded.

    (j) The root dentry for this mount is looked up from the root FH.

    (k) The root dentry for this mount is assigned to the vfsmount.

    (3) nfs_readdir_lookup() creates dentries for each of the entries readdir()
    returns; this function now attaches disconnected trees from alternate
    roots that happen to be discovered attached to a directory being read (in
    the same way nfs_lookup() is made to do for lookup ops).

    The new d_materialise_unique() function is now used to do this, thus
    permitting the whole thing to be done under one set of locks, and thus
    avoiding any race between mount and lookup operations on the same
    directory.

    (4) The client management code uses a new debug facility: NFSDBG_CLIENT which
    is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.

    (5) Clone mounts are now called xdev mounts.

    (6) Use the dentry passed to the statfs() op as the handle for retrieving fs
    statistics rather than the root dentry of the superblock (which is now a
    dummy).
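
    To illustrate the superblock keying in step (2g), the compare and set
    callbacks handed to sget() could look roughly like this (NFS_SB() maps a
    superblock to its nfs_server; the function bodies are meant as a sketch,
    not the patch itself):

        static int nfs_compare_super(struct super_block *sb, void *data)
        {
                struct nfs_server *new = data;
                struct nfs_server *old = NFS_SB(sb);

                /* Must be talking to the same server connection... */
                if (old->nfs_client != new->nfs_client)
                        return 0;
                /* ...and looking at the same volume on it. */
                if (memcmp(&old->fsid, &new->fsid, sizeof(old->fsid)) != 0)
                        return 0;
                return 1;       /* share this superblock */
        }

        static int nfs_set_super(struct super_block *sb, void *data)
        {
                sb->s_fs_info = data;   /* bind the nfs_server to the new sb */
                return 0;
        }

        /* ... sb = sget(fs_type, nfs_compare_super, nfs_set_super, server); */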

    Signed-off-by: David Howells
    Signed-off-by: Trond Myklebust

    David Howells
     
  • Generalise the nfs_client structure by:

    (1) Moving nfs_client to a more general place (nfs_fs_sb.h).

    (2) Renaming its maintenance routines to be non-NFS4 specific.

    (3) Move those maintenance routines to a new non-NFS4 specific file (client.c)
    and move the declarations to internal.h.

    (4) Make nfs_find/get_client() take a full sockaddr_in to include the port
    number (will be required for NFS2/3).

    (5) Make nfs_find/get_client() take the NFS protocol version (again will be
    required to differentiate NFS2, 3 & 4 client records).

    Also:

    (6) Make nfs_client construction proceed in a manner akin to inode
    construction, marking records as under construction and providing a
    function to indicate completion.

    (7) Make nfs_get_client() wait interruptibly if it finds a client that it can
    share, but that client is currently being constructed (sketched after
    this list).

    (8) Make nfs4_create_client() use (6) and (7) instead of locking cl_sem.
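
    The find-or-allocate-and-wait pattern described in (6) and (7) might be
    sketched as follows; the lock, list, wait queue and state names here are
    illustrative rather than lifted verbatim from the patch:

        static struct nfs_client *nfs_get_client(const struct sockaddr_in *addr,
                                                 int nfsversion)
        {
                struct nfs_client *clp, *new = NULL;
                int error;

                for (;;) {
                        spin_lock(&nfs_client_lock);
                        clp = __nfs_find_client(addr, nfsversion);
                        if (clp)
                                goto found;     /* shareable record exists */
                        if (new) {
                                /* Publish our record, still marked as
                                 * being under construction. */
                                list_add(&new->cl_share_link, &nfs_client_list);
                                spin_unlock(&nfs_client_lock);
                                return new;     /* caller initialises it */
                        }
                        spin_unlock(&nfs_client_lock);
                        new = nfs_alloc_client(addr, nfsversion);
                        if (!new)
                                return ERR_PTR(-ENOMEM);
                }

        found:
                atomic_inc(&clp->cl_count);
                spin_unlock(&nfs_client_lock);
                if (new)
                        nfs_free_client(new);
                /* Sleep until whoever allocated the record has finished
                 * initialising it; give up if signalled. */
                error = wait_event_interruptible(nfs_client_active_wq,
                                clp->cl_cons_state != NFS_CS_INITING);
                if (error < 0) {
                        nfs_put_client(clp);
                        return ERR_PTR(error);
                }
                return clp;
        }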

    Signed-off-by: David Howells
    Signed-off-by: Trond Myklebust

    David Howells
     
  • Rename struct nfs4_client to struct nfs_client so that it can become the basis
    for a general client record for NFS2 and NFS3 in addition to NFS4.

    Signed-off-by: David Howells
    Signed-off-by: Trond Myklebust

    David Howells
     


05 Nov, 2005

2 commits

  • RFC 3530 states that for OPEN_DOWNGRADE "The share_access and share_deny
    bits specified must be exactly equal to the union of the share_access and
    share_deny bits specified for some subset of the OPENs in effect for
    current openowner on the current file."

    Setattr is currently violating the NFSv4 rules for OPEN_DOWNGRADE in that
    it may cause a downgrade from OPEN4_SHARE_ACCESS_BOTH to
    OPEN4_SHARE_ACCESS_WRITE despite the fact that there exists no open file
    with O_WRONLY access mode.

    Fix the problem by replacing nfs4_find_state() with a modified version of
    nfs_find_open_context().
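
    By way of illustration only, the check amounts to taking the union of the
    access modes of the nfs_open_contexts still attached to the inode; the
    helper name below is hypothetical:

        /* Hypothetical helper: compute the union of access modes over the
         * nfs_open_contexts still open on the inode.  A downgrade to
         * OPEN4_SHARE_ACCESS_WRITE is legitimate only if this union no
         * longer includes read access. */
        static int nfs_union_of_open_modes(struct inode *inode)
        {
                struct nfs_inode *nfsi = NFS_I(inode);
                struct nfs_open_context *ctx;
                int mode = 0;

                spin_lock(&inode->i_lock);
                list_for_each_entry(ctx, &nfsi->open_files, list)
                        mode |= ctx->mode & (FMODE_READ|FMODE_WRITE);
                spin_unlock(&inode->i_lock);
                return mode;
        }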

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • We must not remove the nfs4_state structure from the inode open lists
    before we hold the sequence lock.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     


19 Oct, 2005

5 commits

  • Storing a pointer to the struct rpc_task in the nfs_seqid is broken
    since the nfs_seqid may be freed well after the task has been destroyed.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • Make NFSv4 return the fully initialized file pointer with the
    stateid that it created in the lookup w/intent.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • Currently we fail to do so if the process was signalled.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • OPEN, CLOSE, etc. no longer need these semaphores to ensure ordering of
    requests.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • NFSv4 file state-changing functions such as OPEN, CLOSE, LOCK,... are all
    labelled with "sequence identifiers" in order to prevent the server from
    reordering RPC requests, as this could cause its file state to
    become out of sync with the client.

    Currently the NFS client code enforces this ordering locally using
    semaphores to restrict access to structures until the RPC call is done.
    This, of course, only works with synchronous RPC calls, since the
    user process must first grab the semaphore.
    By dropping semaphores, and instead teaching the RPC engine to hold
    the RPC calls until they are ready to be sent, we can extend this
    process to work nicely with asynchronous RPC calls too.

    This patch adds a new list called "rpc_sequence" that defines the order
    of the RPC calls to be sent. We add one such list for each state_owner.
    When an RPC call is ready to be sent, it checks whether it is at the top
    of the rpc_sequence list. If so, it proceeds. If not, it goes back to
    sleep, and loops until it reaches the top of the list.
    Once the RPC call has completed, it bumps the sequence id counter,
    removes itself from the rpc_sequence list, and wakes up the next
    sleeper.

    Note that the state_owner sequence ids and lock_owner sequence ids are
    all indexed to the same rpc_sequence list, so OPEN, LOCK,... requests
    are all ordered w.r.t. each other.
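
    A sketch of that head-of-queue test (the structure layout, with an
    rpc_sequence holding a wait queue, a lock and the list, reached through
    the seqid's counter, follows the description above; details here are
    illustrative):

        static int nfs_wait_on_sequence(struct nfs_seqid *seqid,
                                        struct rpc_task *task)
        {
                struct rpc_sequence *sequence = seqid->sequence->sequence;
                int status = 0;

                spin_lock(&sequence->lock);
                /* The seqid was queued on the rpc_sequence list when it was
                 * allocated; only the list head may transmit. */
                if (list_entry(sequence->list.next,
                               struct nfs_seqid, list) != seqid) {
                        /* Not our turn yet: park the task until the current
                         * holder completes and wakes the queue. */
                        rpc_sleep_on(&sequence->wait, task, NULL, NULL);
                        status = -EAGAIN;
                }
                spin_unlock(&sequence->lock);
                return status;
        }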

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
