15 May, 2007

1 commit

  • - fs/nfs/dir.c:610:8: warning: symbol 'nfs_llseek_dir' was not declared.
    Should it be static?
    - fs/nfs/dir.c:636:5: warning: symbol 'nfs_fsync_dir' was not declared.
    Should it be static?
    - fs/nfs/write.c:925:19: warning: symbol 'req' shadows an earlier one
    - fs/nfs/write.c:61:6: warning: symbol 'nfs_commit_rcu_free' was not
    declared. Should it be static?
    - fs/nfs/nfs4proc.c:793:5: warning: symbol 'nfs4_recover_expired_lease'
    was not declared. Should it be static?
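
    For context, warnings like these are normally resolved by giving the symbol
    internal linkage (or by declaring it in a shared header if it genuinely has
    external users). Schematically, and only as an illustration:

        -loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin)
        +static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin)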

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     

08 May, 2007

1 commit

  • Ensure pages are uptodate after returning from read_cache_page, which allows
    us to cut out most of the filesystem-internal PageUptodate calls.

    I didn't take a close look down the call chains, but this appears to fix 7
    possible use-before-uptodate bugs in hfs, 2 in hfsplus, 1 in jfs, a few in
    ecryptfs, 1 in jffs2, and a possible case of cleared data being overwritten
    by readpage in block2mtd. All of them depend on whether the filler is async
    and/or can return with a !uptodate page.
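
    As a rough illustration of what this allows on the caller side (a sketch
    only; the helper name is made up, while read_cache_page() and the readpage
    filler cast are the standard idiom):

        /* read_cache_page() now returns an uptodate page or an ERR_PTR(),
         * so the caller-side PageUptodate() re-check can go away. */
        static struct page *example_get_page(struct address_space *mapping,
                                             pgoff_t index)
        {
                struct page *page;

                page = read_cache_page(mapping, index,
                                       (filler_t *)mapping->a_ops->readpage, NULL);
                if (IS_ERR(page))
                        return page;
                /* previously: if (!PageUptodate(page)) { release; return -EIO; } */
                return page;
        }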

    Signed-off-by: Nick Piggin
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

01 May, 2007

2 commits

  • Try running this script in an NFS-mounted directory (with a relatively
    recent client; 2.6.18 has the problem, as does 2.6.20).

    ------------------------------------------------------
    #!/bin/bash
    #
    # This script will produce the following error message from tar:
    #
    # tar: newdir/innerdir/innerfile: file changed as we read it

    # create dirs
    rm -rf nfstest
    mkdir -p nfstest/dir/innerdir

    # create files (should not be empty)
    echo "Hello World!" >nfstest/dir/file
    echo "Hello World!" >nfstest/dir/innerdir/innerfile

    # problem only happens if we sleep before chmod
    sleep 1

    # change file modes
    chmod -R a+r nfstest

    # rename dir
    mv nfstest/dir nfstest/newdir

    # tar it
    tar -cf nfstest/nfstest.tar -C nfstest newdir

    # restore old dir name
    mv nfstest/newdir nfstest/dir
    --------------------------------------------------------

    What happens:

    The 'chmod -R' does a readdir_plus in each directory and the results
    get cached in the page cache. It then updates the ctime on each file
    by one second. When this happens, the post-op attributes are used to
    update the ctime stored on the client to match the value on the server.

    The 'mv' calls shrink_dcache_parent on the directory tree which
    flushes all the dentries (so a new lookup will be required) but
    doesn't flush the inodes or pagecache.

    The 'tar' does a readdir on each directory, but (in the case of
    'innerdir' at least) satisfies it from the pagecache and uses the
    READDIRPLUS data to update all the inodes. In the case of
    'innerdir/innerfile', the ctime is out of date.

    'tar' then calls 'lstat' on innerdir/innerfile getting an old ctime.
    It then opens the file (triggering a GETATTR), reads the content, and
    then calls fstat to see if anything has changed. It finds that ctime
    has changed and so complains.

    The problem seems to be that the cached readdirplus info is kept around
    for too long.

    My patch below discards pagecache data for directories when
    dentry_iput is called on them. This effectively removes the symptom,
    which convinces me that I correctly understand the problem. However,
    I'm not convinced that this is a proper solution, as there could easily be
    other races that trigger the same problem without being affected by
    this 'fix'.
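
    As a rough sketch of the idea (hedged: NFS_I(), cache_validity and
    NFS_INO_INVALID_DATA are the NFS client's existing invalidation mechanism,
    but the exact shape of the final patch may differ):

        /* Sketch only: in the NFS ->d_iput hook, flag a directory's cached
         * pages (including READDIRPLUS data) as needing revalidation. */
        static void nfs_dentry_iput(struct dentry *dentry, struct inode *inode)
        {
                if (S_ISDIR(inode->i_mode))
                        /* any cached readdir data may now be stale */
                        NFS_I(inode)->cache_validity |= NFS_INO_INVALID_DATA;

                /* ... existing sillyrename handling ... */
                iput(inode);
        }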

    One possibility would be to require that readdirplus pagecache data be
    only used *once* to instantiate an inode. Somehow it should then be
    invalidated so that if the dentry subsequently disappears, it will
    cause a new request to the server to fill in the stat data.

    Another possibility is to compare the cache_change_attribute on the
    inode with something similar for the readdirplus info and reject the
    info from readdirplus if it is too old.

    I haven't tried to implement these and would value other opinions
    before I do.

    Thanks,
    NeilBrown

    Signed-off-by: Neil Brown
    Signed-off-by: Trond Myklebust

    Neil Brown
     
  • Don't use an uninitialised value for fattr->time_start in readdirplus results.

    The 'fattr' structure filled in by nfs3_decode_direct does not get a
    value for ->time_start set.
    Thus if an entry is for an inode that we already have in cache,
    when nfs_readdir_lookup calls nfs_fhget, it will call nfs_refresh_inode
    and may update the inode with out-of-date information.

    Directories are read a page at a time, so each page could have a
    different timestamp that "should" be used to set the time_start for
    the fattr for info in that page. However, storing the timestamp per
    page is awkward. (We could stick it in the first 4 bytes and only read
    4092 bytes, but that is a bigger code change than I am interested in.)

    This patch ignores the readdir_plus attributes if a readdir finds the
    information already in cache, and otherwise sets ->time_start to the time
    the readdir request was sent to the server.

    It might be nice to store - in the directory inode - the time stamp for
    the earliest readdir request that is still in the page cache, so that we
    don't ignore attribute data that we don't have to. This patch doesn't do
    that.
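
    A hedged sketch of the second half of that (example_send_readdirplus, dir,
    page and fattr are placeholders; jiffies and fattr->time_start are the real
    pieces):

        /* Sketch only: remember when the readdir RPC was sent and stamp the
         * decoded attributes with that time, so nfs_refresh_inode() cannot
         * mistake them for fresher data than they really are. */
        static int example_readdir_with_timestamp(struct inode *dir,
                                                  struct page *page,
                                                  struct nfs_fattr *fattr)
        {
                unsigned long timestamp = jiffies; /* just before the RPC goes out */
                int error = example_send_readdirplus(dir, page);

                if (error == 0)
                        /* attributes decoded from this page are no newer than this */
                        fattr->time_start = timestamp;
                return error;
        }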

    Signed-off-by: Neil Brown
    Signed-off-by: Trond Myklebust

    Neil Brown
     

22 Oct, 2006

2 commits

  • If someone has renamed a directory on the server, triggering the d_move
    code in d_materialise_unique(), then we need to invalidate the cached
    directory information in the source parent directory.

    Signed-off-by: Trond Myklebust
    Cc: Miklos Szeredi
    Cc: Maneesh Soni
    Cc: Dipankar Sarma
    Cc: Neil Brown
    Cc: Al Viro
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Trond Myklebust
     
  • If the caller tries to instantiate a directory using an inode that already
    has a dentry alias, then we attempt to rename the existing dentry instead
    of instantiating a new one. Fail with an ELOOP error if the rename would
    affect one of our parent directories.

    This behaviour is needed in order to avoid issues such as

    http://bugzilla.kernel.org/show_bug.cgi?id=7178
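
    A rough sketch of the kind of check involved (illustrative only; the real
    code sits in the d_materialise_unique()/d_move() path and holds the
    appropriate locks while walking d_parent):

        /* Sketch only: moving 'alias' to where 'dentry' is being instantiated
         * must not place a directory inside its own subtree. */
        static int example_would_create_loop(struct dentry *alias,
                                             struct dentry *dentry)
        {
                struct dentry *p;

                for (p = dentry; !IS_ROOT(p); p = p->d_parent)
                        if (p == alias)
                                return -ELOOP;
                return 0;
        }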

    Signed-off-by: Trond Myklebust
    Cc: Miklos Szeredi
    Cc: Maneesh Soni
    Cc: Dipankar Sarma
    Cc: Neil Brown
    Cc: Al Viro
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Trond Myklebust
     

21 Oct, 2006

3 commits

  • on-the-wire data is big-endian

    [in large part pulled from Alexey's patch]

    Signed-off-by: Al Viro
    Acked-by: Trond Myklebust
    Acked-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Al Viro
     
  • The original code mistook a zero return code from pagevec_add() for success.

    Test plan:
    None.
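
    For background (general pagevec behaviour, not text from the patch):
    pagevec_add() returns the number of slots still free, so a return of zero
    means "added, and the pagevec is now full", not that the add failed. The
    usual idiom drains the pagevec at that point:

        /* Sketch only: a zero return from pagevec_add() is the cue to drain
         * the pagevec, not an error. */
        static void example_add_page(struct pagevec *pvec, struct page *page)
        {
                page_cache_get(page);
                if (pagevec_add(pvec, page) == 0)
                        __pagevec_lru_add(pvec);
        }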

    Signed-off-by: Chuck Lever
    Signed-off-by: Trond Myklebust
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chuck Lever
     
  • If invalidate_inode_pages2() fails, then it should in principle just be
    because the current process was signalled. In that case, we just want to
    ensure that the inode's page cache remains marked as invalid.

    Also add a helper to allow the O_DIRECT code to simply mark the page cache as
    invalid once it is finished writing, instead of calling
    invalidate_inode_pages2() itself.

    Signed-off-by: Trond Myklebust
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Trond Myklebust
     

01 Oct, 2006

2 commits

  • Some filesystems, instead of simply decrementing i_nlink, zero it outright
    during an unlink operation. We need to catch these in addition to the
    decrement operations.

    Signed-off-by: Dave Hansen
    Acked-by: Christoph Hellwig
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen
     
  • When a filesystem decrements i_nlink to zero, it means that a write must be
    performed in order to drop the inode from the filesystem.

    We're shortly going to have to keep filesystems from being remounted r/o
    between the time of this i_nlink decrement and the time that write occurs.

    So, add a little helper function to do the decrements. We'll tie into it in a
    bit to note when i_nlink hits zero.
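
    A hedged sketch of what such helpers look like (the real ones belong in
    include/linux/fs.h; this also covers the "zero it outright" case from the
    previous entry):

        /* Sketch only: funnel every i_nlink decrement, and every unlink that
         * zeroes the count outright, through one place so a later patch can
         * notice the moment the link count hits zero. */
        static inline void drop_nlink(struct inode *inode)
        {
                inode->i_nlink--;
        }

        static inline void clear_nlink(struct inode *inode)
        {
                inode->i_nlink = 0;
        }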

    Signed-off-by: Dave Hansen
    Acked-by: Christoph Hellwig
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen
     

25 Sep, 2006

1 commit

  • Some file systems want to manually d_move() the dentries involved in a
    rename. We can do this by making use of the FS_ODD_RENAME flag if we just
    have nfs_rename() unconditionally do the d_move(). While there, we rename
    the flag to be more descriptive.

    OCFS2 uses this to protect that part of the rename operation with a cluster
    lock.

    Signed-off-by: Mark Fasheh
    Cc: Trond Myklebust
    Cc: Al Viro
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton

    Mark Fasheh
     

23 Sep, 2006

10 commits

  • If the open intents tell us that a given lookup is going to result in an
    exclusive create, we currently optimize away the lookup call itself. The
    reason is that the lookup would not be atomic with the create RPC call, so
    why do it in the first place?

    A problem occurs, however, if the VFS aborts the exclusive create operation
    after the lookup, but before the call to create the file/directory: in this
    case we will end up with a hashed negative dentry in the dcache that has
    never been looked up.
    Fix this by only actually hashing the dentry once the create operation has
    been successfully completed.
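
    A rough sketch of the two halves of that (hedged: the open-intent fields
    come from the nameidata API of that era, and both helpers are illustrative
    rather than hunks from the patch):

        /* Sketch only: ->lookup() leaves the dentry unhashed for an exclusive
         * create, since nothing exists on the server yet. */
        static int example_is_exclusive_create(struct nameidata *nd)
        {
                return nd != NULL && (nd->flags & LOOKUP_CREATE) &&
                       (nd->intent.open.flags & O_EXCL);
        }

        /* Sketch only: only once the CREATE RPC has succeeded is the dentry
         * bound to its inode and hashed into the dcache. */
        static void example_finish_create(struct dentry *dentry, struct inode *inode)
        {
                d_instantiate(dentry, inode);
                d_rehash(dentry);
        }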

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • Now that we have a copy of the symlink path in the page cache, we can pass
    a struct page down to the XDR routines instead of a string buffer.

    Test plan:
    Connectathon, all NFS versions.

    Signed-off-by: Chuck Lever
    Signed-off-by: Trond Myklebust

    Chuck Lever
     
  • Currently the NFS client does not cache symlinks it creates. They get
    cached only when the NFS client reads them back from the server.

    Copy the symlink into the page cache before sending it.
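
    A rough sketch of the mechanism (the helper name and GFP flags are
    illustrative; alloc_page()/kmap() are the standard interfaces):

        /* Sketch only: build a page holding the symlink target so it can be
         * handed to the SYMLINK request and, on success, inserted into the
         * new symlink inode's page cache. */
        static struct page *example_symlink_page(const char *symname,
                                                 unsigned int pathlen)
        {
                struct page *page = alloc_page(GFP_KERNEL);
                char *kaddr;

                if (page == NULL)
                        return NULL;
                kaddr = kmap(page);
                memcpy(kaddr, symname, pathlen);
                if (pathlen < PAGE_SIZE)
                        memset(kaddr + pathlen, 0, PAGE_SIZE - pathlen);
                kunmap(page);
                return page;
        }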

    Test plan:
    Connectathon, all NFS versions.

    Signed-off-by: Chuck Lever
    Signed-off-by: Trond Myklebust

    Chuck Lever
     
  • If the LOOKUP or GETATTR in nfs_instantiate fail, nfs_instantiate will do a
    d_drop before returning. But some callers already do a d_drop in the case
    of an error return. Make certain we do only one d_drop in all error paths.

    This issue was introduced because over time, the symlink proc API diverged
    slightly from the create/mkdir/mknod proc API. To prevent other coding
    mistakes of this type, change the symlink proc API to be more like
    create/mkdir/mknod and move the nfs_instantiate call into the symlink proc
    routines so it is used in exactly the same way for create, mkdir, mknod,
    and symlink.

    Test plan:
    Connectathon, all versions of NFS.

    Signed-off-by: Chuck Lever
    Signed-off-by: Trond Myklebust

    Chuck Lever
     
  • In the early days of NFS, there was no duplicate reply cache on the server.
    Thus retransmitted non-idempotent requests often found that the request had
    already completed on the server. To avoid passing an unanticipated return
    code to unsuspecting applications, NFS clients would often shunt error
    codes that implied the request had been retried but already completed.

    Thanks to NFS over TCP, duplicate reply caches on the server, and network
    performance and reliability improvements, it is safe to remove such checks.

    Test plan:
    None.

    Signed-off-by: Chuck Lever
    Signed-off-by: Trond Myklebust

    Chuck Lever
     
  • The attached patch makes NFS share superblocks between mounts from the same
    server and FSID over the same protocol.

    It does this by creating each superblock with a false root and returning the
    real root dentry in the vfsmount presented by get_sb(). The root dentry set
    starts off as an anonymous dentry if we don't already have the dentry for its
    inode, otherwise it simply returns the dentry we already have.

    We may thus end up with several trees of dentries in the superblock, and if at
    some later point one of the anonymous tree roots is discovered by normal filesystem
    activity to be located in another tree within the superblock, the anonymous
    root is named and materialises attached to the second tree at the appropriate
    point.

    Why do it this way? Why not pass an extra argument to the mount() syscall to
    indicate the subpath and then pathwalk from the server root to the desired
    directory? You can't guarantee this will work for two reasons:

    (1) The root and intervening nodes may not be accessible to the client.

    With NFS2 and NFS3, for instance, mountd is called on the server to get
    the filehandle for the tip of a path. mountd won't give us handles for
    anything we don't have permission to access, and so we can't set up NFS
    inodes for such nodes, and so can't easily set up dentries (we'd have to
    have ghost inodes or something).

    With this patch we don't actually create dentries until we get handles
    from the server that we can use to set up their inodes, and we don't
    actually bind them into the tree until we know for sure where they go.

    (2) Inaccessible symbolic links.

    If we're asked to mount two exports from the server, eg:

    mount warthog:/warthog/aaa/xxx /mmm
    mount warthog:/warthog/bbb/yyy /nnn

    We may not be able to access anything nearer the root than xxx and yyy,
    but we may find out later that /mmm/www/yyy, say, is actually the same
    directory as the one mounted on /nnn. What we might then find out, for
    example, is that /warthog/bbb was actually a symbolic link to
    /warthog/aaa/xxx/www, but we can't actually determine that by talking to
    the server until /warthog is made available by NFS.

    This would lead to having constructed an erroneous dentry tree which we
    can't easily fix. We can end up with a dentry marked as a directory when
    it should actually be a symlink, or we could end up with an apparently
    hardlinked directory.

    With this patch we need not make assumptions about the type of a dentry
    for which we can't retrieve information, nor need we assume we know its
    place in the grand scheme of things until we actually see that place.

    This patch reduces the possibility of aliasing in the inode and page caches for
    inodes that may be accessed by more than one NFS export. It also reduces the
    number of superblocks required for NFS where there are many NFS exports being
    used from a server (home directory server + autofs for example).

    This in turn makes it simpler to do local caching of network filesystems, as it
    can then be guaranteed that there won't be links from multiple inodes in
    separate superblocks to the same cache file.

    Obviously, cache aliasing between different levels of NFS protocol could still
    be a problem, but at least that gives us another key to use when indexing the
    cache.

    This patch makes the following changes:

    (1) The server record construction/destruction has been abstracted out into
    its own set of functions to make things easier to get right. These have
    been moved into fs/nfs/client.c.

    All the code in fs/nfs/client.c has to do with the management of
    connections to servers, and doesn't touch superblocks in any way; the
    remaining code in fs/nfs/super.c has to do with VFS superblock management.

    (2) The sequence of events undertaken by NFS mount is now reordered:

    (a) A volume representation (struct nfs_server) is allocated.

    (b) A server representation (struct nfs_client) is acquired. This may be
    allocated or shared, and is keyed on server address, port and NFS
    version.

    (c) If allocated, the client representation is initialised. The state
    member variable of nfs_client is used to prevent a race during
    initialisation from two mounts.

    (d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find
    the root filehandle for the mount (fs/nfs/getroot.c). For NFS2/3 we
    are given the root FH in advance.

    (e) The volume FSID is probed for on the root FH.

    (f) The volume representation is initialised from the FSINFO record
    retrieved on the root FH.

    (g) sget() is called to acquire a superblock. This may be allocated or
    shared, keyed on client pointer and FSID (see the sketch after this list).

    (h) If allocated, the superblock is initialised.

    (i) If the superblock is shared, then the new nfs_server record is
    discarded.

    (j) The root dentry for this mount is looked up from the root FH.

    (k) The root dentry for this mount is assigned to the vfsmount.

    (3) nfs_readdir_lookup() creates dentries for each of the entries readdir()
    returns; this function now attaches disconnected trees from alternate
    roots that happen to be discovered attached to a directory being read (in
    the same way nfs_lookup() is made to do for lookup ops).

    The new d_materialise_unique() function is now used to do this, thus
    permitting the whole thing to be done under one set of locks, and thus
    avoiding any race between mount and lookup operations on the same
    directory.

    (4) The client management code uses a new debug facility: NFSDBG_CLIENT which
    is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.

    (5) Clone mounts are now called xdev mounts.

    (6) Use the dentry passed to the statfs() op as the handle for retrieving fs
    statistics rather than the root dentry of the superblock (which is now a
    dummy).
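
    A hedged sketch of the sget()-based sharing from step (2)(g) (the key
    structure is made up for illustration; sget() with test/set callbacks is
    the standard VFS interface, and NFS_SB() maps a superblock to its
    nfs_server):

        /* Sketch only: two mounts share a superblock iff they resolve to the
         * same nfs_client and the same FSID. */
        struct example_sb_key {
                struct nfs_client *client;
                struct nfs_fsid fsid;
        };

        static int example_sb_test(struct super_block *sb, void *data)
        {
                struct example_sb_key *key = data;
                struct nfs_server *server = NFS_SB(sb);

                return server->nfs_client == key->client &&
                       memcmp(&server->fsid, &key->fsid, sizeof(key->fsid)) == 0;
        }

        static int example_sb_set(struct super_block *sb, void *data)
        {
                return 0;       /* nothing to stash yet; filled in after sget() */
        }

        /* in get_sb():  s = sget(fs_type, example_sb_test, example_sb_set, &key); */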

    Signed-Off-By: David Howells
    Signed-off-by: Trond Myklebust

    David Howells
     
  • Move the rpc_ops from the nfs_server struct to the nfs_client struct as they're
    common to all server records of a particular NFS protocol version.

    Signed-Off-By: David Howells
    Signed-off-by: Trond Myklebust

    David Howells
     
  • A pinned inode may in theory end up filling memory with cached ACCESS
    calls. This patch ensures that the VM may shrink away the cache in these
    particular cases.
    The shrinker works by iterating through the list of inodes on the global
    nfs_access_lru_list, and removing the least recently used access
    cache entry until it is done (or until the entire cache is empty).
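
    A hedged sketch of the scan callback for such a shrinker (the global list
    name comes from the text above; the lock and per-inode list field are
    assumed, and registration and the return value are simplified):

        /* Sketch only: walk the global LRU of inodes holding ACCESS cache
         * entries, dropping the oldest entry from each until nr_to_scan
         * entries have been freed. */
        static int example_access_cache_shrinker(int nr_to_scan, gfp_t gfp_mask)
        {
                struct nfs_inode *nfsi, *next;

                spin_lock(&nfs_access_lru_lock);
                list_for_each_entry_safe(nfsi, next, &nfs_access_lru_list,
                                         access_cache_inode_lru) {
                        if (nr_to_scan-- == 0)
                                break;
                        /* free nfsi's least recently used access cache entry */
                }
                spin_unlock(&nfs_access_lru_lock);
                return 0;       /* a real shrinker returns a remaining-entry estimate */
        }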

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • ...in order to allow the addition of a memory shrinker.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • The current access cache only allows one entry at a time to be cached for each
    inode. Add a per-inode red-black tree in order to allow more than one to
    be cached at a time.

    Should significantly cut down the time spent in path traversal for shared
    directories such as ${PATH}, /usr/share, etc.
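
    A hedged sketch of the lookup side of such a tree (the entry struct and its
    fields are illustrative; rb_entry() and friends are the kernel's standard
    rbtree API):

        /* Sketch only: find the cached ACCESS result for a credential, with
         * the tree keyed simply on the credential pointer. */
        static struct nfs_access_entry *example_access_search(struct rb_root *root,
                                                              struct rpc_cred *cred)
        {
                struct rb_node *n = root->rb_node;

                while (n != NULL) {
                        struct nfs_access_entry *entry =
                                rb_entry(n, struct nfs_access_entry, rb_node);

                        if (cred < entry->cred)
                                n = n->rb_left;
                        else if (cred > entry->cred)
                                n = n->rb_right;
                        else
                                return entry;
                }
                return NULL;
        }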

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     

29 Mar, 2006

1 commit

  • This is a conversion to make the various file_operations structs in fs/
    const. Basically a regexp job, with a few manual fixups.

    The goal is both to increase correctness (it becomes harder to accidentally
    write to shared data structures) and to reduce false sharing of cachelines
    with things that get dirty in .data (while .rodata is nicely read-only and
    thus cache-clean).
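
    For illustration, the shape of the change on one of the NFS tables (a
    representative example rather than an exact hunk from the patch):

        /* Adding 'const' moves the ops table from .data into .rodata, so it
         * can never be written at runtime and never shares a dirty cacheline. */
        const struct file_operations nfs_dir_operations = {
                .read           = generic_read_dir,
                .readdir        = nfs_readdir,
                .open           = nfs_opendir,
                .release        = nfs_release,
                .fsync          = nfs_fsync_dir,
        };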

    Signed-off-by: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arjan van de Ven
     

21 Mar, 2006

1 commit