07 Jan, 2011

36 commits

  • The standard memcmp function shows up hot in profiles of the `git diff`
    workload (both parallel and single threaded) on a Westmere system. This
    is likely due to the cost of trapping into microcode, with little
    opportunity to improve memory access (a dentry name is unlikely to span
    more than a cacheline).

    So replace it with an open-coded byte comparison. This increases code
    size by 8 bytes in the critical __d_lookup_rcu function, but the
    speedup is huge, averaging 10 runs of each:

    git diff st    user    sys    elapsed    CPU
    before         1.15    2.57   3.82        97.1
    after          1.14    2.35   3.61        96.8

    git diff mt    user    sys    elapsed    CPU
    before         1.27    3.85   1.46       349
    after          1.26    3.54   1.43       333

    Elapsed time for single threaded git diff at 95.0% confidence:
    -0.21 +/- 0.01
    -5.45% +/- 0.24%

    It's -0.66% +/- 0.06% elapsed time on my Opteron, so rep cmp costs on the
    fam10h seem to be relatively smaller, but there is still a win.
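
    As a sketch of the idea (names illustrative, this is not the verbatim
    patch):

    /* Plain byte loop: no microcoded rep-cmps, and trivially inlined. */
    static inline int name_cmp(const unsigned char *cs,
                               const unsigned char *ct, unsigned int count)
    {
            while (count--) {
                    if (*cs++ != *ct++)
                            return 1;       /* mismatch */
            }
            return 0;                       /* match */
    }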

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • This makes single threaded git diff -1.25% +/- 0.05% elapsed time on my
    2s12c24t Westmere system, and -0.86% +/- 0.05% on my 2s8c Barcelona, by
    prefetching the important first cacheline of the inode while we do the
    actual name compare and other operations on the dentry.

    There was no measurable slowdown in the single file stat case, or the creat
    case (where negative dentries would be common).
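
    The mechanism amounts to something like this (an illustrative helper,
    not the exact hunk; prefetch() is the <linux/prefetch.h> primitive):

    /* Kick off the fetch of the inode's first cacheline early, so it
     * overlaps the name compare and other work on the dentry. */
    static inline void prefetch_dentry_inode(struct dentry *dentry)
    {
            struct inode *inode = dentry->d_inode;

            if (inode)
                    prefetch(inode);
    }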

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Regardless of how much we possibly try to scale dcache, there is likely
    always going to be some fundamental contention when adding or removing
    children under the same parent. Pseudo filesystems do not seem to need
    connected dentries, because by definition they are disconnected.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • dcache_inode_lock can be replaced with per-inode locking. Use existing
    inode->i_lock for this. This is slightly non-trivial because we sometimes
    need to find the inode from the dentry, which requires d_inode to be
    stabilised (either with refcount or d_lock).
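
    The non-trivial part can be sketched like this (the lock names are real,
    the helper and retry shape are illustrative):

    /* i_lock nests outside d_lock, so when we arrive holding only the
     * dentry we must trylock the inode and retry rather than deadlock. */
    static struct inode *lock_dentry_inode(struct dentry *dentry)
    {
            struct inode *inode;
    again:
            spin_lock(&dentry->d_lock);     /* stabilises dentry->d_inode */
            inode = dentry->d_inode;
            if (inode && !spin_trylock(&inode->i_lock)) {
                    spin_unlock(&dentry->d_lock);
                    cpu_relax();
                    goto again;             /* retry in the proper order */
            }
            return inode;   /* d_lock held; i_lock too if inode != NULL */
    }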

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • We can turn the dcache hash locking from a global dcache_hash_lock into
    per-bucket locking.
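
    Schematically it is the change below; the patch as merged actually packs
    a bit-spinlock into the low bit of each chain head (hlist_bl), so treat
    this spinlock-per-bucket version as an illustration only:

    struct dcache_hash_bucket {
            spinlock_t lock;                /* protects this chain only */
            struct hlist_head chain;
    };

    static void bucket_add(struct dcache_hash_bucket *b,
                           struct hlist_node *n)
    {
            spin_lock(&b->lock);    /* contention limited to one chain */
            hlist_add_head(n, &b->chain);
            spin_unlock(&b->lock);
    }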

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Put dentry and inode fields into top of data structure. This allows RCU path
    traversal to perform an RCU dentry lookup in a path walk by touching only the
    first 56 bytes of the dentry.

    We also fit in 8 bytes of inline name in the first 64 bytes, so for short
    names, only 64 bytes needs to be touched to perform the lookup. We should
    get rid of the hash->prev pointer from the first 64 bytes, and fit 16 bytes
    of name in there, which will take care of 81% rather than 32% of the kernel
    tree.

    inode is also rearranged so that RCU lookup will only touch a single cacheline
    in the inode, plus one in the i_ops structure.

    This is important for directory component lookups in RCU path walking. In
    the kernel source, directory names average around 6 characters, so the
    inline name covers the common case.

    When we reach the last element of the lookup, we need to lock it and take its
    refcount which requires another cacheline access.

    Align dentry and inode operations structs, so members will be at predictable
    offsets and we can group common operations into head of structure.
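
    Roughly, the resulting ordering looks like this (field set abridged and
    illustrative, 64-bit sizes assumed; not the verbatim kernel struct):

    struct dentry {
            /* hot: everything rcu-walk reads, first 56 bytes */
            unsigned int d_flags;
            seqcount_t d_seq;
            struct hlist_bl_node d_hash;
            struct dentry *d_parent;
            struct qstr d_name;
            struct inode *d_inode;
            unsigned char d_iname[40];      /* 8 bytes land in cacheline 0 */
            /* cold: ref-walk, teardown and list management only */
            unsigned int d_count;
            spinlock_t d_lock;
            const struct dentry_operations *d_op;
            struct super_block *d_sb;
    };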

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Reduce some branches and memory accesses in dcache lookup by adding dentry
    flags to indicate common d_ops are set, rather than having to check them.
    This saves a pointer memory access (dentry->d_op) in common path lookup
    situations, and saves another pointer load and branch in cases where we
    have d_op but not the particular operation.

    Patched with:

    git grep -E '[.>]([[:space:]])*d_op([[:space:]])*=' | xargs sed -e 's/\([^\t ]*\)->d_op = \(.*\);/d_set_d_op(\1, \2);/' -e 's/\([^\t ]*\)\.d_op = \(.*\);/d_set_d_op(\&\1, \2);/' -i
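
    Hedged sketch of the fast path this enables (DCACHE_OP_COMPARE is one of
    the flags the patch adds; the helpers are illustrative):

    /* Before: two dependent loads plus a branch. */
    static inline bool has_compare_slow(const struct dentry *dentry)
    {
            return dentry->d_op && dentry->d_op->d_compare;
    }

    /* After: one test on d_flags, already in cache from the lookup.
     * d_set_d_op() keeps the flag in sync with d_op. */
    static inline bool has_compare_fast(const struct dentry *dentry)
    {
            return dentry->d_flags & DCACHE_OP_COMPARE;
    }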

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Rather than keep a d_mounted count in the dentry, set a dentry flag instead.
    The flag can be cleared by checking the hash table to see if there are any
    mounts left, which is not time critical because it is performed at detach time.

    The mounted state of a dentry is only used to speculatively take a look in the
    mount hash table if it is set -- before following the mount, vfsmount lock is
    taken and mount re-checked without races.

    This saves 4 bytes on 32-bit, nothing on 64-bit but it does provide a hole I
    might use later (and some configs have larger than 32-bit spinlocks which might
    make use of the hole).
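
    The hint-then-verify pattern looks roughly like this (flag name per the
    description, details illustrative):

    /* The flag is only a hint that a mount *may* be here; callers take
     * vfsmount lock and re-check the mount hash before trusting it. */
    static inline int d_mountpoint(struct dentry *dentry)
    {
            return dentry->d_flags & DCACHE_MOUNTED;
    }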

    Autofs4 conversion and changelog by Ian Kent:
    In autofs4, when expiring direct (or offset) mounts we need to ensure that we
    block user path walks into the autofs mount, which is covered by another mount.
    To do this we clear the mounted status so that follows stop before walking into
    the mount and are essentially blocked until the expire is completed. The
    automount daemon still finds the correct dentry for the umount due to the
    follow mount logic in fs/autofs4/root.c:autofs4_follow_link(), which is set as
    an inode operation for direct and offset mounts only and is called following
    the lookup that stopped at the covered mount.

    At the end of the expire the covering mount probably has gone away so the
    mounted status need not be restored. But we need to check this and only restore
    the mounted status if the expire failed.

    XXX: autofs may not work right if we have other mounts go over the top of it?

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Perform common cases of path lookups without any stores or locking in the
    ancestor dentry elements. This is called rcu-walk, as opposed to the current
    algorithm which is a refcount based walk, or ref-walk.

    This results in far fewer atomic operations on every path element,
    significantly improving path lookup performance. It also avoids cacheline
    bouncing on common dentries, significantly improving scalability.

    The overall design is like this:
    * LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
    * Take the RCU lock for the entire path walk, starting with the acquiring
    of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
    not required for dentry persistence.
    * synchronize_rcu is called when unregistering a filesystem, so we can
    access d_ops and i_ops during rcu-walk.
    * Similarly take the vfsmount lock for the entire path walk. So now mnt
    refcounts are not required for persistence. Also we are free to perform mount
    lookups, and to assume dentry mount points and mount roots are stable up and
    down the path.
    * Have a per-dentry seqlock to protect the dentry name, parent, and inode,
    so we can load this tuple atomically, and also check whether any of its
    members have changed.
    * Dentry lookups (based on parent, candidate string tuple) recheck the parent
    sequence after the child is found in case anything changed in the parent
    during the path walk.
    * inode is also RCU protected so we can load d_inode and use the inode for
    limited things.
    * i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
    * i_op can be loaded.
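
    A minimal sketch of one rcu-walk step built from these pieces (the
    helper and its shape are hypothetical, not the patched lookup code):

    /* Sample the child's seqcount, use its (name, inode) tuple, then
     * verify nothing changed. -ECHILD means "redo this via ref-walk". */
    static int rcu_walk_step(struct dentry *child, struct inode **inodep)
    {
            unsigned int seq = read_seqcount_begin(&child->d_seq);
            struct inode *inode = child->d_inode;

            /* ...lock-free name compare and permission check here... */

            if (read_seqcount_retry(&child->d_seq, seq))
                    return -ECHILD;         /* raced with a change */
            *inodep = inode;
            return 0;
    }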

    When we reach the destination dentry, we lock it, recheck the lookup
    sequence, and increment its refcount and mountpoint refcount. RCU and
    vfsmount locks are dropped. This is termed "dropping rcu-walk". If the
    sequence does not match, we cannot drop rcu-walk gracefully at the
    current point in the lookup, so we instead return -ECHILD (for want of a
    better errno). This signals the path walking code to re-do the entire
    lookup with a ref-walk.

    Aside from the final dentry, there are other situations that may be
    encountered where we cannot continue rcu-walk. In that case, we drop
    rcu-walk (ie. take a reference on the last good dentry) and continue with
    a ref-walk. Again, if we cannot drop rcu-walk gracefully, we return
    -ECHILD and do the whole lookup using ref-walk. But it is very important
    that we can continue with ref-walk for most cases, particularly to avoid
    the overhead of double lookups, and to gain the scalability advantages on
    common path elements (like cwd and root).

    The cases where rcu-walk cannot continue are:
    * NULL dentry (ie. any uncached path element)
    * parent with d_inode->i_op->permission or ACLs
    * dentries with d_revalidate
    * Following links

    In future patches, permission checks and d_revalidate become rcu-walk aware. It
    may be possible eventually to make following links rcu-walk aware.

    Uncached path elements will always require dropping to ref-walk mode, at the
    very least because i_mutex needs to be grabbed, and objects allocated.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • The tricky locking for disposing of a dentry is duplicated 3 times in the
    dcache (dput, pruning a dentry from the LRU, and pruning its ancestors).
    Consolidate them all into a single function dentry_kill.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Signed-off-by: Nick Piggin

    Nick Piggin
     
  • prune_one_dentry can avoid quite a bit of locking in the common case where
    ancestors have an elevated refcount. Alternatively, we could have gone the
    other way and used fewer trylocks in the case where d_count goes to zero,
    but that case is probably less common.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Use RCU to simplify locking in dget_parent.
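
    Roughly (close in shape to the patched function, hedged as a sketch):

    static struct dentry *dget_parent_sketch(struct dentry *dentry)
    {
            struct dentry *parent;
    repeat:
            rcu_read_lock();        /* keeps the parent memory valid */
            parent = dentry->d_parent;
            spin_lock(&parent->d_lock);
            if (unlikely(parent != dentry->d_parent)) {
                    /* raced with rename: we sampled a stale parent */
                    spin_unlock(&parent->d_lock);
                    rcu_read_unlock();
                    goto repeat;
            }
            rcu_read_unlock();
            parent->d_count++;      /* safe: d_count is under d_lock */
            spin_unlock(&parent->d_lock);
            return parent;
    }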

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • dget_locked was a shortcut to avoid the lazy lru manipulation when we
    already held dcache_lock (lru manipulation was relatively cheap at that
    point). However, now that the lru lock is an innermost one, we never hold
    it at any caller, so the lock cost can now be avoided. We already have a
    well working lazy dcache LRU, so it should be fine to defer LRU
    manipulations to scan time.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • dcache_inode_lock can be avoided in d_delete() and d_materialise_unique()
    in cases where it is not required.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Signed-off-by: Nick Piggin

    Nick Piggin
     
  • It is possible to run dput without taking data structure locks up-front. In
    many cases where we don't kill the dentry anyway, these locks are not required.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Long lived dcache "multi-step" operations which retry on the rename
    seqcount can be starved by heavy rename activity. If they fail after the
    first pass, take the rename_lock for writing to avoid further starvation.
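
    The pattern, sketched (walk_ancestors is a hypothetical stand-in for
    the multi-step operation; rename_lock is the real seqlock):

    static void stable_walk(struct dentry *dentry)
    {
            unsigned int seq = read_seqbegin(&rename_lock);

            walk_ancestors(dentry);         /* optimistic first pass */
            if (read_seqretry(&rename_lock, seq)) {
                    /* raced: excluding renames guarantees progress */
                    write_seqlock(&rename_lock);
                    walk_ancestors(dentry);
                    write_sequnlock(&rename_lock);
            }
    }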

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • dcache_lock no longer protects anything. Remove it.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • The remaining usage of dcache_lock is to allow atomic, multi-step
    read-side operations over the directory tree by excluding modifications
    to the tree. It is also used to walk in the leaf->root direction in the
    tree, where we don't have a natural d_lock ordering.

    This could be accomplished by taking every d_lock, but this would mean a
    huge number of locks and actually gets very tricky.

    Solve this instead by using the rename seqlock for multi-step read-side
    operations, retrying in case of a rename so we don't walk up the wrong
    parent. Concurrent dentry insertions are not serialised against.
    Concurrent deletes are tricky when walking up the directory: our parent
    might have been deleted while we dropped locks, so we also need to check
    for that and retry.

    We can also use the rename lock in cases where livelock is a worry (this
    is introduced in a subsequent patch).

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Cover d_name with d_lock in more cases, where there may be concurrent
    modification to it.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Add a new lock, dcache_inode_lock, to protect the inode's i_dentry list
    from concurrent modification. d_alias is also protected by d_lock.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Protect d_subdirs and d_child with d_lock, except in filesystems that aren't
    using dcache_lock for these anyway (eg. using i_mutex).

    Note: if we change the locking rule in future so that ->d_child protection is
    provided only with ->d_parent->d_lock, it may allow us to reduce some locking.
    But it would be an exception to an otherwise regular locking scheme, so we'd
    have to see some good results. Probably not worthwhile.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Protect the d_unhashed(dentry) condition with d_lock. This means keeping
    the DCACHE_UNHASHED bit in sync with hash manipulations.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Make d_count non-atomic and protect it with d_lock. This allows us to ensure a
    0 refcount dentry remains 0 without dcache_lock. It is also fairly natural when
    we start protecting many other dentry members with d_lock.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Add a new lock, dcache_lru_lock, to protect the dcache LRU list from concurrent
    modification. d_lru is also protected by d_lock, which allows LRU lists to be
    accessed without the lru lock, using RCU in future patches.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Add a new lock, dcache_hash_lock, to protect the dcache hash table from
    concurrent modification. d_hash is also protected by d_lock.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Remove dcache_lock locking from hostfs filesystem, and move it into dcache
    helpers. All that is required is a coherent path name. Protection from
    concurrent modification of the namespace after path name generation is not
    provided in current code, because dcache_lock is dropped before the path is
    used.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Change d_hash so it may be called from lock-free RCU lookups. See similar
    patch for d_compare for details.

    For in-tree filesystems, this is just a mechanical change.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Change d_compare so it may be called from lock-free RCU lookups. This
    does put significant restrictions on what may be done from the callback,
    however there don't seem to have been any problems with in-tree fses.
    If some strange use case pops up that _really_ cannot cope with the
    rcu-walk rules, we can just add new rcu-unaware callbacks, which would
    cause name lookup to drop out of rcu-walk mode.

    For in-tree filesystems, this is just a mechanical change.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • smbfs and ncpfs want to update a live dentry name in-place. Rather than
    have them open code the locking, provide a documented dcache API.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • Change d_delete from a dentry deletion notification to a dentry caching
    advice, more like ->drop_inode. Require it to be constant and idempotent,
    and not to take d_lock. This is how all existing filesystems use the
    callback anyway.

    This makes fine grained dentry locking of dput and dentry lru scanning
    much simpler.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • The percpu_counter library generates quite nasty code, so unless you need
    to dynamically allocate counters or read an approximate value quickly, a
    simple set of per-cpu counters is much better.

    percpu_counter can never be made to work as well, because it has an
    indirection from a pointer to the percpu memory, and it can't use the
    direct this_cpu_inc interfaces because it doesn't use static PER_CPU
    data, so the generated code will always be worse.

    In the fastpath, it is the difference between this:

    incl %gs:nr_dentry # nr_dentry

    and this:

    movl percpu_counter_batch(%rip), %edx # percpu_counter_batch,
    movl $1, %esi #,
    movq $nr_dentry, %rdi #,
    call __percpu_counter_add # (plus I clobber registers)

    __percpu_counter_add:
    pushq %rbp #
    movq %rsp, %rbp #,
    subq $32, %rsp #,
    movq %rbx, -24(%rbp) #,
    movq %r12, -16(%rbp) #,
    movq %r13, -8(%rbp) #,
    movq %rdi, %rbx # fbc, fbc
    #APP
    # 216 "/home/npiggin/usr/src/linux-2.6/arch/x86/include/asm/thread_info.h" 1
    movq %gs:kernel_stack,%rax #, pfo_ret__
    # 0 "" 2
    #NO_APP
    incl -8124(%rax) # .preempt_count
    movq 32(%rdi), %r12 # .counters, tcp_ptr__
    #APP
    # 78 "lib/percpu_counter.c" 1
    add %gs:this_cpu_off, %r12 # this_cpu_off, tcp_ptr__
    # 0 "" 2
    #NO_APP
    movslq (%r12),%r13 #* tcp_ptr__, tmp73
    movslq %edx,%rax # batch, batch
    addq %rsi, %r13 # amount, count
    cmpq %rax, %r13 # batch, count
    jge .L27 #,
    negl %edx # tmp76
    movslq %edx,%rdx # tmp76, tmp77
    cmpq %rdx, %r13 # tmp77, count
    jg .L28 #,
    .L27:
    movq %rbx, %rdi # fbc,
    call _raw_spin_lock #
    addq %r13, 8(%rbx) # count, .count
    movq %rbx, %rdi # fbc,
    movl $0, (%r12) #,* tcp_ptr__
    call _raw_spin_unlock #
    .L29:
    #APP
    # 216 "/home/npiggin/usr/src/linux-2.6/arch/x86/include/asm/thread_info.h" 1
    movq %gs:kernel_stack,%rax #, pfo_ret__
    # 0 "" 2
    #NO_APP
    decl -8124(%rax) # .preempt_count
    movq -8136(%rax), %rax #, D.14625
    testb $8, %al #, D.14625
    jne .L32 #,
    .L31:
    movq -24(%rbp), %rbx #,
    movq -16(%rbp), %r12 #,
    movq -8(%rbp), %r13 #,
    leave
    ret
    .p2align 4,,10
    .p2align 3
    .L28:
    movl %r13d, (%r12) # count,*
    jmp .L29 #
    .L32:
    call preempt_schedule #
    .p2align 4,,6
    jmp .L31 #
    .size __percpu_counter_add, .-__percpu_counter_add
    .p2align 4,,15
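
    The simple scheme boils down to this (close to the actual nr_dentry
    patch, lightly abridged):

    static DEFINE_PER_CPU(unsigned int, nr_dentry);

    static inline void inc_nr_dentry(void)
    {
            this_cpu_inc(nr_dentry);        /* the single incl above */
    }

    /* Readers sum all CPUs: slow, but only /proc ever does it. */
    static int get_nr_dentry(void)
    {
            int i, sum = 0;

            for_each_possible_cpu(i)
                    sum += per_cpu(nr_dentry, i);
            return sum < 0 ? 0 : sum;
    }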

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • The nr_unused counters count the number of objects on an LRU, and as such they
    are synchronized with LRU object insertion and removal and scanning, and
    protected under the LRU lock.

    Making it per-cpu does not actually get any concurrency improvements because of
    this lock, and summing the counter is much slower, and
    incrementing/decrementing it costs more code size and is slower too.

    These counters should stay per-LRU, which currently means global.

    Signed-off-by: Nick Piggin

    Nick Piggin
     
  • d_validate has been broken for a long time.

    kmem_ptr_validate does not guarantee that a pointer can be dereferenced,
    because the object can go away at any time. Even rcu_read_lock doesn't
    help, because the pointer might already be queued in RCU callbacks but
    not yet executed.

    So the parent cannot be checked, nor the name hashed. The dentry pointer
    can not be touched until it can be verified under lock. Hashing simply
    cannot be used.

    Instead, verify the parent/child relationship by traversing parent's
    d_child list. It's slow, but only ncpfs and the destaged smbfs care
    about it, at this point.
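
    Sketch of the replacement check (locking per the post-series rules;
    helper name and exact list fields are illustrative):

    /* The dentry pointer is untrusted until it is found on the parent's
     * child list under the parent's lock; only then dereference it. */
    static int is_child_of(struct dentry *parent, struct dentry *dentry)
    {
            struct dentry *child;
            int found = 0;

            spin_lock(&parent->d_lock);
            list_for_each_entry(child, &parent->d_subdirs, d_u.d_child) {
                    if (child == dentry) {
                            found = 1;
                            break;
                    }
            }
            spin_unlock(&parent->d_lock);
            return found;
    }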

    Signed-off-by: Nick Piggin

    Nick Piggin
     

05 Jan, 2011

1 commit


26 Oct, 2010

3 commits

  • d_validate does a purely read lookup in the dentry hash, so use RCU
    read-side locking instead of dcache_lock. Split out from a larger patch
    by Nick Piggin.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
     
  • Always do a list_del_init on the LRU to make sure the list_empty
    invariant for not being on the LRU always holds true, and fold
    dentry_lru_del_init into dentry_lru_del. Replace the dentry_lru_add_tail
    primitive with a dentry_lru_move_tail operation, which is simpler when
    the dentry is already on the list, as it always is. Move the list_empty
    check into dentry_lru_add to fit the scheme of the other lru helpers, and
    simplify locking once we move to a separate LRU lock.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
     
  • Currently __shrink_dcache_sb has an extremely awkward calling convention
    because it tries to please very different callers. Split out the
    main loop into a shrink_dentry_list helper, which gets called directly
    from shrink_dcache_sb for the cases where all dentries need to be pruned,
    or from __shrink_dcache_sb for pruning only a certain number of dentries.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig