Commit b5c84bf6f6fa3a7dfdcb556023a62953574b60ee

Authored by Nick Piggin
1 parent 949854d024

fs: dcache remove dcache_lock

dcache_lock no longer protects anything. Remove it.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>

Showing 40 changed files with 109 additions and 307 deletions

Documentation/filesystems/Locking
... ... @@ -21,14 +21,14 @@
21 21 char *(*d_dname)(struct dentry *dentry, char *buffer, int buflen);
22 22  
23 23 locking rules:
24   - dcache_lock rename_lock ->d_lock may block
25   -d_revalidate: no no no yes
26   -d_hash no no no no
27   -d_compare: no yes no no
28   -d_delete: yes no yes no
29   -d_release: no no no yes
30   -d_iput: no no no yes
31   -d_dname: no no no no
  24 + rename_lock ->d_lock may block
  25 +d_revalidate: no no yes
  26 +d_hash no no no
  27 +d_compare: yes no no
  28 +d_delete: no yes no
  29 +d_release: no no yes
  30 +d_iput: no no yes
  31 +d_dname: no no no
32 32  
33 33 --------------------------- inode_operations ---------------------------
34 34 prototypes:
Documentation/filesystems/dentry-locking.txt
... ... @@ -31,6 +31,7 @@
31 31 doesn't acquire the dcache_lock for this and rely on RCU to ensure
32 32 that the dentry has not been *freed*.
33 33  
  34 +dcache_lock no longer exists, dentry locking is explained in fs/dcache.c
34 35  
35 36 Dcache locking details
36 37 ======================
37 38  
... ... @@ -50,14 +51,12 @@
50 51  
51 52 Dcache is a complex data structure with the hash table entries also
52 53 linked together in other lists. In 2.4 kernel, dcache_lock protected
53   -all the lists. We applied RCU only on hash chain walking. The rest of
54   -the lists are still protected by dcache_lock. Some of the important
55   -changes are :
  54 +all the lists. RCU dentry hash walking works like this:
56 55  
57 56 1. The deletion from hash chain is done using hlist_del_rcu() macro
58 57 which doesn't initialize next pointer of the deleted dentry and
59 58 this allows us to walk safely lock-free while a deletion is
60   - happening.
  59 + happening. This is a standard hlist_rcu iteration.
61 60  
62 61 2. Insertion of a dentry into the hash table is done using
63 62 hlist_add_head_rcu() which take care of ordering the writes - the
64 63  
... ... @@ -66,19 +65,18 @@
66 65 which has since been replaced by hlist_for_each_entry_rcu(), while
67 66 walking the hash chain. The only requirement is that all
68 67 initialization to the dentry must be done before
69   - hlist_add_head_rcu() since we don't have dcache_lock protection
70   - while traversing the hash chain. This isn't different from the
71   - existing code.
  68 + hlist_add_head_rcu() since we don't have lock protection
  69 + while traversing the hash chain.
72 70  
73   -3. The dentry looked up without holding dcache_lock by cannot be
74   - returned for walking if it is unhashed. It then may have a NULL
75   - d_inode or other bogosity since RCU doesn't protect the other
76   - fields in the dentry. We therefore use a flag DCACHE_UNHASHED to
77   - indicate unhashed dentries and use this in conjunction with a
78   - per-dentry lock (d_lock). Once looked up without the dcache_lock,
79   - we acquire the per-dentry lock (d_lock) and check if the dentry is
80   - unhashed. If so, the look-up is failed. If not, the reference count
81   - of the dentry is increased and the dentry is returned.
  71 +3. The dentry looked up without holding locks cannot be returned for
  72 + walking if it is unhashed. It then may have a NULL d_inode or other
  73 + bogosity since RCU doesn't protect the other fields in the dentry. We
  74 + therefore use a flag DCACHE_UNHASHED to indicate unhashed dentries
  75 + and use this in conjunction with a per-dentry lock (d_lock). Once
  76 + looked up without locks, we acquire the per-dentry lock (d_lock) and
  77 + check if the dentry is unhashed. If so, the look-up is failed. If not,
  78 + the reference count of the dentry is increased and the dentry is
  79 + returned.
82 80  
83 81 4. Once a dentry is looked up, it must be ensured during the path walk
84 82 for that component it doesn't go away. In pre-2.5.10 code, this was
... ... @@ -86,10 +84,10 @@
86 84 In some sense, dcache_rcu path walking looks like the pre-2.5.10
87 85 version.
88 86  
89   -5. All dentry hash chain updates must take the dcache_lock as well as
90   - the per-dentry lock in that order. dput() does this to ensure that
91   - a dentry that has just been looked up in another CPU doesn't get
92   - deleted before dget() can be done on it.
  87 +5. All dentry hash chain updates must take the per-dentry lock (see
  88 + fs/dcache.c). This excludes dput() to ensure that a dentry that has
  89 + been looked up concurrently does not get deleted before dget() can
  90 + take a ref.
93 91  
94 92 6. There are several ways to do reference counting of RCU protected
95 93 objects. One such example is in ipv4 route cache where deferred
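Step 3 above (a lock-free hash walk followed by a d_lock-protected unhashed check before taking a reference) can be sketched as a user-space model. This is not kernel code: the pthread mutex stands in for the spinlock d_lock, the flag value is an arbitrary stand-in, and `try_take_ref` is a hypothetical name for the check-then-reference step the text describes.

```c
#include <pthread.h>
#include <stdbool.h>

#define DCACHE_UNHASHED 0x0010          /* stand-in value; flag name from the text */

struct dentry {
    pthread_mutex_t d_lock;             /* per-dentry lock from the text */
    unsigned int    d_flags;
    int             d_count;
};

/*
 * Step 3 of the RCU walk: a candidate found without any locks may have
 * been unhashed concurrently. Take d_lock, fail the lookup if so,
 * otherwise bump the reference count while still holding d_lock.
 */
static bool try_take_ref(struct dentry *dentry)
{
    bool ok = false;

    pthread_mutex_lock(&dentry->d_lock);
    if (!(dentry->d_flags & DCACHE_UNHASHED)) {
        dentry->d_count++;              /* reference taken under d_lock */
        ok = true;
    }
    pthread_mutex_unlock(&dentry->d_lock);
    return ok;
}
```

The point of the pattern is that RCU only guarantees the memory is not freed; only the d_lock/unhashed check makes the dentry safe to hand back to a path walk.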
Documentation/filesystems/porting
... ... @@ -216,7 +216,6 @@
216 216 ->d_parent changes are not protected by BKL anymore. Read access is safe
217 217 if at least one of the following is true:
218 218 * filesystem has no cross-directory rename()
219   - * dcache_lock is held
220 219 * we know that parent had been locked (e.g. we are looking at
221 220 ->d_parent of ->lookup() argument).
222 221 * we are called from ->rename().
... ... @@ -340,4 +339,11 @@
340 339 .d_hash() calling convention and locking rules are significantly
341 340 changed. Read updated documentation in Documentation/filesystems/vfs.txt (and
342 341 look at examples of other filesystems) for guidance.
  342 +
  343 +---
  344 +[mandatory]
  345 + dcache_lock is gone, replaced by fine grained locks. See fs/dcache.c
  346 +for details of what locks to replace dcache_lock with in order to protect
  347 +particular things. Most of the time, a filesystem only needs ->d_lock, which
  348 +protects *all* the dcache state of a given dentry.
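The conversion most filesystems need is the one repeated throughout the diff below: delete the `spin_lock(&dcache_lock)`/`spin_unlock(&dcache_lock)` pair and keep the inner `->d_lock`. A minimal user-space model of the resulting d_drop()-style critical section, where the mutex and flag are stand-ins for the kernel primitives and `d_drop_model` is a hypothetical name:

```c
#include <pthread.h>

#define DCACHE_UNHASHED 0x0010          /* stand-in for the kernel flag */

struct dentry {
    pthread_mutex_t d_lock;             /* now the only lock needed */
    unsigned int    d_flags;
};

/*
 * Post-conversion shape of d_drop(): what used to be
 *   spin_lock(&dcache_lock); spin_lock(&dentry->d_lock); ...
 * is now just the per-dentry lock, which protects all of this
 * dentry's dcache state.
 */
static void d_drop_model(struct dentry *dentry)
{
    pthread_mutex_lock(&dentry->d_lock);
    dentry->d_flags |= DCACHE_UNHASHED; /* stand-in for __d_drop() */
    pthread_mutex_unlock(&dentry->d_lock);
}
```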
arch/powerpc/platforms/cell/spufs/inode.c
... ... @@ -159,21 +159,18 @@
159 159  
160 160 mutex_lock(&dir->d_inode->i_mutex);
161 161 list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_u.d_child) {
162   - spin_lock(&dcache_lock);
163 162 spin_lock(&dentry->d_lock);
164 163 if (!(d_unhashed(dentry)) && dentry->d_inode) {
165 164 dget_locked_dlock(dentry);
166 165 __d_drop(dentry);
167 166 spin_unlock(&dentry->d_lock);
168 167 simple_unlink(dir->d_inode, dentry);
169   - /* XXX: what is dcache_lock protecting here? Other
  168 + /* XXX: what was dcache_lock protecting here? Other
170 169 * filesystems (IB, configfs) release dcache_lock
171 170 * before unlink */
172   - spin_unlock(&dcache_lock);
173 171 dput(dentry);
174 172 } else {
175 173 spin_unlock(&dentry->d_lock);
176   - spin_unlock(&dcache_lock);
177 174 }
178 175 }
179 176 shrink_dcache_parent(dir);
drivers/infiniband/hw/ipath/ipath_fs.c
... ... @@ -277,18 +277,14 @@
277 277 goto bail;
278 278 }
279 279  
280   - spin_lock(&dcache_lock);
281 280 spin_lock(&tmp->d_lock);
282 281 if (!(d_unhashed(tmp) && tmp->d_inode)) {
283 282 dget_locked_dlock(tmp);
284 283 __d_drop(tmp);
285 284 spin_unlock(&tmp->d_lock);
286   - spin_unlock(&dcache_lock);
287 285 simple_unlink(parent->d_inode, tmp);
288   - } else {
  286 + } else
289 287 spin_unlock(&tmp->d_lock);
290   - spin_unlock(&dcache_lock);
291   - }
292 288  
293 289 ret = 0;
294 290 bail:
drivers/infiniband/hw/qib/qib_fs.c
... ... @@ -453,17 +453,14 @@
453 453 goto bail;
454 454 }
455 455  
456   - spin_lock(&dcache_lock);
457 456 spin_lock(&tmp->d_lock);
458 457 if (!(d_unhashed(tmp) && tmp->d_inode)) {
459 458 dget_locked_dlock(tmp);
460 459 __d_drop(tmp);
461 460 spin_unlock(&tmp->d_lock);
462   - spin_unlock(&dcache_lock);
463 461 simple_unlink(parent->d_inode, tmp);
464 462 } else {
465 463 spin_unlock(&tmp->d_lock);
466   - spin_unlock(&dcache_lock);
467 464 }
468 465  
469 466 ret = 0;
drivers/staging/pohmelfs/path_entry.c
... ... @@ -101,7 +101,6 @@
101 101 d = first;
102 102 seq = read_seqbegin(&rename_lock);
103 103 rcu_read_lock();
104   - spin_lock(&dcache_lock);
105 104  
106 105 if (!IS_ROOT(d) && d_unhashed(d))
107 106 len += UNHASHED_OBSCURE_STRING_SIZE; /* Obscure " (deleted)" string */
... ... @@ -110,7 +109,6 @@
110 109 len += d->d_name.len + 1; /* Plus slash */
111 110 d = d->d_parent;
112 111 }
113   - spin_unlock(&dcache_lock);
114 112 rcu_read_unlock();
115 113 if (read_seqretry(&rename_lock, seq))
116 114 goto rename_retry;
drivers/staging/smbfs/cache.c
... ... @@ -62,7 +62,6 @@
62 62 struct list_head *next;
63 63 struct dentry *dentry;
64 64  
65   - spin_lock(&dcache_lock);
66 65 spin_lock(&parent->d_lock);
67 66 next = parent->d_subdirs.next;
68 67 while (next != &parent->d_subdirs) {
... ... @@ -72,7 +71,6 @@
72 71 next = next->next;
73 72 }
74 73 spin_unlock(&parent->d_lock);
75   - spin_unlock(&dcache_lock);
76 74 }
77 75  
78 76 /*
... ... @@ -98,7 +96,6 @@
98 96 }
99 97  
100 98 /* If a pointer is invalid, we search the dentry. */
101   - spin_lock(&dcache_lock);
102 99 spin_lock(&parent->d_lock);
103 100 next = parent->d_subdirs.next;
104 101 while (next != &parent->d_subdirs) {
... ... @@ -115,7 +112,6 @@
115 112 dent = NULL;
116 113 out_unlock:
117 114 spin_unlock(&parent->d_lock);
118   - spin_unlock(&dcache_lock);
119 115 return dent;
120 116 }
121 117  
drivers/usb/core/inode.c
... ... @@ -343,7 +343,6 @@
343 343 {
344 344 struct list_head *list;
345 345  
346   - spin_lock(&dcache_lock);
347 346 spin_lock(&dentry->d_lock);
348 347 list_for_each(list, &dentry->d_subdirs) {
349 348 struct dentry *de = list_entry(list, struct dentry, d_u.d_child);
350 349  
... ... @@ -352,13 +351,11 @@
352 351 if (usbfs_positive(de)) {
353 352 spin_unlock(&de->d_lock);
354 353 spin_unlock(&dentry->d_lock);
355   - spin_unlock(&dcache_lock);
356 354 return 0;
357 355 }
358 356 spin_unlock(&de->d_lock);
359 357 }
360 358 spin_unlock(&dentry->d_lock);
361   - spin_unlock(&dcache_lock);
362 359 return 1;
363 360 }
364 361  
... ... @@ -270,13 +270,11 @@
270 270 {
271 271 struct dentry *dentry;
272 272  
273   - spin_lock(&dcache_lock);
274 273 spin_lock(&dcache_inode_lock);
275 274 /* Directory should have only one entry. */
276 275 BUG_ON(S_ISDIR(inode->i_mode) && !list_is_singular(&inode->i_dentry));
277 276 dentry = list_entry(inode->i_dentry.next, struct dentry, d_alias);
278 277 spin_unlock(&dcache_inode_lock);
279   - spin_unlock(&dcache_lock);
280 278 return dentry;
281 279 }
282 280  
... ... @@ -128,7 +128,6 @@
128 128 void *data = dentry->d_fsdata;
129 129 struct list_head *head, *next;
130 130  
131   - spin_lock(&dcache_lock);
132 131 spin_lock(&dcache_inode_lock);
133 132 head = &inode->i_dentry;
134 133 next = head->next;
... ... @@ -141,7 +140,6 @@
141 140 next = next->next;
142 141 }
143 142 spin_unlock(&dcache_inode_lock);
144   - spin_unlock(&dcache_lock);
145 143 }
146 144  
147 145  
fs/autofs4/autofs_i.h
... ... @@ -16,6 +16,7 @@
16 16 #include <linux/auto_fs4.h>
17 17 #include <linux/auto_dev-ioctl.h>
18 18 #include <linux/mutex.h>
  19 +#include <linux/spinlock.h>
19 20 #include <linux/list.h>
20 21  
21 22 /* This is the range of ioctl() numbers we claim as ours */
... ... @@ -59,6 +60,8 @@
59 60 printk(KERN_ERR "pid %d: %s: " fmt "\n", \
60 61 current->pid, __func__, ##args); \
61 62 } while (0)
  63 +
  64 +extern spinlock_t autofs4_lock;
62 65  
63 66 /* Unified info structure. This is pointed to by both the dentry and
64 67 inode structures. Each file in the filesystem has an instance of this
... ... @@ -102,7 +102,7 @@
102 102 if (prev == NULL)
103 103 return dget(prev);
104 104  
105   - spin_lock(&dcache_lock);
  105 + spin_lock(&autofs4_lock);
106 106 relock:
107 107 p = prev;
108 108 spin_lock(&p->d_lock);
... ... @@ -114,7 +114,7 @@
114 114  
115 115 if (p == root) {
116 116 spin_unlock(&p->d_lock);
117   - spin_unlock(&dcache_lock);
  117 + spin_unlock(&autofs4_lock);
118 118 dput(prev);
119 119 return NULL;
120 120 }
... ... @@ -144,7 +144,7 @@
144 144 dget_dlock(ret);
145 145 spin_unlock(&ret->d_lock);
146 146 spin_unlock(&p->d_lock);
147   - spin_unlock(&dcache_lock);
  147 + spin_unlock(&autofs4_lock);
148 148  
149 149 dput(prev);
150 150  
151 151  
... ... @@ -408,13 +408,13 @@
408 408 ino->flags |= AUTOFS_INF_EXPIRING;
409 409 init_completion(&ino->expire_complete);
410 410 spin_unlock(&sbi->fs_lock);
411   - spin_lock(&dcache_lock);
  411 + spin_lock(&autofs4_lock);
412 412 spin_lock(&expired->d_parent->d_lock);
413 413 spin_lock_nested(&expired->d_lock, DENTRY_D_LOCK_NESTED);
414 414 list_move(&expired->d_parent->d_subdirs, &expired->d_u.d_child);
415 415 spin_unlock(&expired->d_lock);
416 416 spin_unlock(&expired->d_parent->d_lock);
417   - spin_unlock(&dcache_lock);
  417 + spin_unlock(&autofs4_lock);
418 418 return expired;
419 419 }
420 420  
... ... @@ -23,6 +23,8 @@
23 23  
24 24 #include "autofs_i.h"
25 25  
  26 +DEFINE_SPINLOCK(autofs4_lock);
  27 +
26 28 static int autofs4_dir_symlink(struct inode *,struct dentry *,const char *);
27 29 static int autofs4_dir_unlink(struct inode *,struct dentry *);
28 30 static int autofs4_dir_rmdir(struct inode *,struct dentry *);
29 31  
30 32  
... ... @@ -142,15 +144,15 @@
142 144 * autofs file system so just let the libfs routines handle
143 145 * it.
144 146 */
145   - spin_lock(&dcache_lock);
  147 + spin_lock(&autofs4_lock);
146 148 spin_lock(&dentry->d_lock);
147 149 if (!d_mountpoint(dentry) && list_empty(&dentry->d_subdirs)) {
148 150 spin_unlock(&dentry->d_lock);
149   - spin_unlock(&dcache_lock);
  151 + spin_unlock(&autofs4_lock);
150 152 return -ENOENT;
151 153 }
152 154 spin_unlock(&dentry->d_lock);
153   - spin_unlock(&dcache_lock);
  155 + spin_unlock(&autofs4_lock);
154 156  
155 157 out:
156 158 return dcache_dir_open(inode, file);
157 159  
... ... @@ -255,11 +257,11 @@
255 257 /* We trigger a mount for almost all flags */
256 258 lookup_type = autofs4_need_mount(nd->flags);
257 259 spin_lock(&sbi->fs_lock);
258   - spin_lock(&dcache_lock);
  260 + spin_lock(&autofs4_lock);
259 261 spin_lock(&dentry->d_lock);
260 262 if (!(lookup_type || ino->flags & AUTOFS_INF_PENDING)) {
261 263 spin_unlock(&dentry->d_lock);
262   - spin_unlock(&dcache_lock);
  264 + spin_unlock(&autofs4_lock);
263 265 spin_unlock(&sbi->fs_lock);
264 266 goto follow;
265 267 }
... ... @@ -272,7 +274,7 @@
272 274 if (ino->flags & AUTOFS_INF_PENDING ||
273 275 (!d_mountpoint(dentry) && list_empty(&dentry->d_subdirs))) {
274 276 spin_unlock(&dentry->d_lock);
275   - spin_unlock(&dcache_lock);
  277 + spin_unlock(&autofs4_lock);
276 278 spin_unlock(&sbi->fs_lock);
277 279  
278 280 status = try_to_fill_dentry(dentry, nd->flags);
... ... @@ -282,7 +284,7 @@
282 284 goto follow;
283 285 }
284 286 spin_unlock(&dentry->d_lock);
285   - spin_unlock(&dcache_lock);
  287 + spin_unlock(&autofs4_lock);
286 288 spin_unlock(&sbi->fs_lock);
287 289 follow:
288 290 /*
289 291  
... ... @@ -353,14 +355,14 @@
353 355 return 0;
354 356  
355 357 /* Check for a non-mountpoint directory with no contents */
356   - spin_lock(&dcache_lock);
  358 + spin_lock(&autofs4_lock);
357 359 spin_lock(&dentry->d_lock);
358 360 if (S_ISDIR(dentry->d_inode->i_mode) &&
359 361 !d_mountpoint(dentry) && list_empty(&dentry->d_subdirs)) {
360 362 DPRINTK("dentry=%p %.*s, emptydir",
361 363 dentry, dentry->d_name.len, dentry->d_name.name);
362 364 spin_unlock(&dentry->d_lock);
363   - spin_unlock(&dcache_lock);
  365 + spin_unlock(&autofs4_lock);
364 366  
365 367 /* The daemon never causes a mount to trigger */
366 368 if (oz_mode)
... ... @@ -377,7 +379,7 @@
377 379 return status;
378 380 }
379 381 spin_unlock(&dentry->d_lock);
380   - spin_unlock(&dcache_lock);
  382 + spin_unlock(&autofs4_lock);
381 383  
382 384 return 1;
383 385 }
... ... @@ -432,7 +434,7 @@
432 434 const unsigned char *str = name->name;
433 435 struct list_head *p, *head;
434 436  
435   - spin_lock(&dcache_lock);
  437 + spin_lock(&autofs4_lock);
436 438 spin_lock(&sbi->lookup_lock);
437 439 head = &sbi->active_list;
438 440 list_for_each(p, head) {
439 441  
... ... @@ -465,14 +467,14 @@
465 467 dget_dlock(active);
466 468 spin_unlock(&active->d_lock);
467 469 spin_unlock(&sbi->lookup_lock);
468   - spin_unlock(&dcache_lock);
  470 + spin_unlock(&autofs4_lock);
469 471 return active;
470 472 }
471 473 next:
472 474 spin_unlock(&active->d_lock);
473 475 }
474 476 spin_unlock(&sbi->lookup_lock);
475   - spin_unlock(&dcache_lock);
  477 + spin_unlock(&autofs4_lock);
476 478  
477 479 return NULL;
478 480 }
... ... @@ -487,7 +489,7 @@
487 489 const unsigned char *str = name->name;
488 490 struct list_head *p, *head;
489 491  
490   - spin_lock(&dcache_lock);
  492 + spin_lock(&autofs4_lock);
491 493 spin_lock(&sbi->lookup_lock);
492 494 head = &sbi->expiring_list;
493 495 list_for_each(p, head) {
494 496  
... ... @@ -520,14 +522,14 @@
520 522 dget_dlock(expiring);
521 523 spin_unlock(&expiring->d_lock);
522 524 spin_unlock(&sbi->lookup_lock);
523   - spin_unlock(&dcache_lock);
  525 + spin_unlock(&autofs4_lock);
524 526 return expiring;
525 527 }
526 528 next:
527 529 spin_unlock(&expiring->d_lock);
528 530 }
529 531 spin_unlock(&sbi->lookup_lock);
530   - spin_unlock(&dcache_lock);
  532 + spin_unlock(&autofs4_lock);
531 533  
532 534 return NULL;
533 535 }
534 536  
... ... @@ -763,12 +765,12 @@
763 765  
764 766 dir->i_mtime = CURRENT_TIME;
765 767  
766   - spin_lock(&dcache_lock);
  768 + spin_lock(&autofs4_lock);
767 769 autofs4_add_expiring(dentry);
768 770 spin_lock(&dentry->d_lock);
769 771 __d_drop(dentry);
770 772 spin_unlock(&dentry->d_lock);
771   - spin_unlock(&dcache_lock);
  773 + spin_unlock(&autofs4_lock);
772 774  
773 775 return 0;
774 776 }
775 777  
776 778  
... ... @@ -785,20 +787,20 @@
785 787 if (!autofs4_oz_mode(sbi))
786 788 return -EACCES;
787 789  
788   - spin_lock(&dcache_lock);
  790 + spin_lock(&autofs4_lock);
789 791 spin_lock(&sbi->lookup_lock);
790 792 spin_lock(&dentry->d_lock);
791 793 if (!list_empty(&dentry->d_subdirs)) {
792 794 spin_unlock(&dentry->d_lock);
793 795 spin_unlock(&sbi->lookup_lock);
794   - spin_unlock(&dcache_lock);
  796 + spin_unlock(&autofs4_lock);
795 797 return -ENOTEMPTY;
796 798 }
797 799 __autofs4_add_expiring(dentry);
798 800 spin_unlock(&sbi->lookup_lock);
799 801 __d_drop(dentry);
800 802 spin_unlock(&dentry->d_lock);
801   - spin_unlock(&dcache_lock);
  803 + spin_unlock(&autofs4_lock);
802 804  
803 805 if (atomic_dec_and_test(&ino->count)) {
804 806 p_ino = autofs4_dentry_ino(dentry->d_parent);
... ... @@ -194,14 +194,15 @@
194 194 rename_retry:
195 195 buf = *name;
196 196 len = 0;
  197 +
197 198 seq = read_seqbegin(&rename_lock);
198 199 rcu_read_lock();
199   - spin_lock(&dcache_lock);
  200 + spin_lock(&autofs4_lock);
200 201 for (tmp = dentry ; tmp != root ; tmp = tmp->d_parent)
201 202 len += tmp->d_name.len + 1;
202 203  
203 204 if (!len || --len > NAME_MAX) {
204   - spin_unlock(&dcache_lock);
  205 + spin_unlock(&autofs4_lock);
205 206 rcu_read_unlock();
206 207 if (read_seqretry(&rename_lock, seq))
207 208 goto rename_retry;
... ... @@ -217,7 +218,7 @@
217 218 p -= tmp->d_name.len;
218 219 strncpy(p, tmp->d_name.name, tmp->d_name.len);
219 220 }
220   - spin_unlock(&dcache_lock);
  221 + spin_unlock(&autofs4_lock);
221 222 rcu_read_unlock();
222 223 if (read_seqretry(&rename_lock, seq))
223 224 goto rename_retry;
... ... @@ -112,7 +112,6 @@
112 112 dout("__dcache_readdir %p at %llu (last %p)\n", dir, filp->f_pos,
113 113 last);
114 114  
115   - spin_lock(&dcache_lock);
116 115 spin_lock(&parent->d_lock);
117 116  
118 117 /* start at beginning? */
... ... @@ -156,7 +155,6 @@
156 155 dget_dlock(dentry);
157 156 spin_unlock(&dentry->d_lock);
158 157 spin_unlock(&parent->d_lock);
159   - spin_unlock(&dcache_lock);
160 158  
161 159 dout(" %llu (%llu) dentry %p %.*s %p\n", di->offset, filp->f_pos,
162 160 dentry, dentry->d_name.len, dentry->d_name.name, dentry->d_inode);
163 161  
164 162  
... ... @@ -182,21 +180,19 @@
182 180  
183 181 filp->f_pos++;
184 182  
185   - /* make sure a dentry wasn't dropped while we didn't have dcache_lock */
  183 + /* make sure a dentry wasn't dropped while we didn't have parent lock */
186 184 if (!ceph_i_test(dir, CEPH_I_COMPLETE)) {
187 185 dout(" lost I_COMPLETE on %p; falling back to mds\n", dir);
188 186 err = -EAGAIN;
189 187 goto out;
190 188 }
191 189  
192   - spin_lock(&dcache_lock);
193 190 spin_lock(&parent->d_lock);
194 191 p = p->prev; /* advance to next dentry */
195 192 goto more;
196 193  
197 194 out_unlock:
198 195 spin_unlock(&parent->d_lock);
199   - spin_unlock(&dcache_lock);
200 196 out:
201 197 if (last)
202 198 dput(last);
... ... @@ -841,7 +841,6 @@
841 841 di->offset = ceph_inode(inode)->i_max_offset++;
842 842 spin_unlock(&inode->i_lock);
843 843  
844   - spin_lock(&dcache_lock);
845 844 spin_lock(&dir->d_lock);
846 845 spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED);
847 846 list_move(&dn->d_u.d_child, &dir->d_subdirs);
... ... @@ -849,7 +848,6 @@
849 848 dn->d_u.d_child.prev, dn->d_u.d_child.next);
850 849 spin_unlock(&dn->d_lock);
851 850 spin_unlock(&dir->d_lock);
852   - spin_unlock(&dcache_lock);
853 851 }
854 852  
855 853 /*
856 854  
... ... @@ -1233,13 +1231,11 @@
1233 1231 goto retry_lookup;
1234 1232 } else {
1235 1233 /* reorder parent's d_subdirs */
1236   - spin_lock(&dcache_lock);
1237 1234 spin_lock(&parent->d_lock);
1238 1235 spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED);
1239 1236 list_move(&dn->d_u.d_child, &parent->d_subdirs);
1240 1237 spin_unlock(&dn->d_lock);
1241 1238 spin_unlock(&parent->d_lock);
1242   - spin_unlock(&dcache_lock);
1243 1239 }
1244 1240  
1245 1241 di = dn->d_fsdata;
... ... @@ -809,17 +809,14 @@
809 809 {
810 810 struct dentry *dentry;
811 811  
812   - spin_lock(&dcache_lock);
813 812 spin_lock(&dcache_inode_lock);
814 813 list_for_each_entry(dentry, &inode->i_dentry, d_alias) {
815 814 if (!d_unhashed(dentry) || IS_ROOT(dentry)) {
816 815 spin_unlock(&dcache_inode_lock);
817   - spin_unlock(&dcache_lock);
818 816 return true;
819 817 }
820 818 }
821 819 spin_unlock(&dcache_inode_lock);
822   - spin_unlock(&dcache_lock);
823 820 return false;
824 821 }
825 822  
... ... @@ -93,7 +93,6 @@
93 93 struct list_head *child;
94 94 struct dentry *de;
95 95  
96   - spin_lock(&dcache_lock);
97 96 spin_lock(&parent->d_lock);
98 97 list_for_each(child, &parent->d_subdirs)
99 98 {
... ... @@ -104,7 +103,6 @@
104 103 coda_flag_inode(de->d_inode, flag);
105 104 }
106 105 spin_unlock(&parent->d_lock);
107   - spin_unlock(&dcache_lock);
108 106 return;
109 107 }
110 108  
fs/configfs/configfs_internal.h
... ... @@ -120,7 +120,6 @@
120 120 {
121 121 struct config_item * item = NULL;
122 122  
123   - spin_lock(&dcache_lock);
124 123 spin_lock(&dentry->d_lock);
125 124 if (!d_unhashed(dentry)) {
126 125 struct configfs_dirent * sd = dentry->d_fsdata;
... ... @@ -131,7 +130,6 @@
131 130 item = config_item_get(sd->s_element);
132 131 }
133 132 spin_unlock(&dentry->d_lock);
134   - spin_unlock(&dcache_lock);
135 133  
136 134 return item;
137 135 }
... ... @@ -250,18 +250,14 @@
250 250 struct dentry * dentry = sd->s_dentry;
251 251  
252 252 if (dentry) {
253   - spin_lock(&dcache_lock);
254 253 spin_lock(&dentry->d_lock);
255 254 if (!(d_unhashed(dentry) && dentry->d_inode)) {
256 255 dget_locked_dlock(dentry);
257 256 __d_drop(dentry);
258 257 spin_unlock(&dentry->d_lock);
259   - spin_unlock(&dcache_lock);
260 258 simple_unlink(parent->d_inode, dentry);
261   - } else {
  259 + } else
262 260 spin_unlock(&dentry->d_lock);
263   - spin_unlock(&dcache_lock);
264   - }
265 261 }
266 262 }
267 263  
... ... @@ -54,11 +54,10 @@
54 54 * - d_alias, d_inode
55 55 *
56 56 * Ordering:
57   - * dcache_lock
58   - * dcache_inode_lock
59   - * dentry->d_lock
60   - * dcache_lru_lock
61   - * dcache_hash_lock
  57 + * dcache_inode_lock
  58 + * dentry->d_lock
  59 + * dcache_lru_lock
  60 + * dcache_hash_lock
62 61 *
63 62 * If there is an ancestor relationship:
64 63 * dentry->d_parent->...->d_parent->d_lock
65 64  
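The ordering comment above is what keeps the fine-grained scheme deadlock-free: every path that needs more than one of these locks must nest them in the listed order. A user-space sketch under that assumption, with mutexes standing in for the spinlocks and a counter standing in for the real list work:

```c
#include <pthread.h>

/* Same names as the ordering comment; mutexes stand in for spinlocks. */
static pthread_mutex_t dcache_inode_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t d_lock            = PTHREAD_MUTEX_INITIALIZER; /* one dentry's lock */
static pthread_mutex_t dcache_lru_lock   = PTHREAD_MUTEX_INITIALIZER;
static int lru_len;

static int add_to_lru(void)
{
    /* Always nest in the documented order: inode -> d_lock -> lru. */
    pthread_mutex_lock(&dcache_inode_lock);
    pthread_mutex_lock(&d_lock);
    pthread_mutex_lock(&dcache_lru_lock);

    int len = ++lru_len;                /* stand-in for the list manipulation */

    /* Release in reverse order. */
    pthread_mutex_unlock(&dcache_lru_lock);
    pthread_mutex_unlock(&d_lock);
    pthread_mutex_unlock(&dcache_inode_lock);
    return len;
}
```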
... ... @@ -77,12 +76,10 @@
77 76 __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_inode_lock);
78 77 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_hash_lock);
79 78 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_lru_lock);
80   -__cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_lock);
81 79 __cacheline_aligned_in_smp DEFINE_SEQLOCK(rename_lock);
82 80  
83 81 EXPORT_SYMBOL(rename_lock);
84 82 EXPORT_SYMBOL(dcache_inode_lock);
85   -EXPORT_SYMBOL(dcache_lock);
86 83  
87 84 static struct kmem_cache *dentry_cache __read_mostly;
88 85  
... ... @@ -139,7 +136,7 @@
139 136 }
140 137  
141 138 /*
142   - * no dcache_lock, please.
  139 + * no locks, please.
143 140 */
144 141 static void d_free(struct dentry *dentry)
145 142 {
... ... @@ -162,7 +159,6 @@
162 159 static void dentry_iput(struct dentry * dentry)
163 160 __releases(dentry->d_lock)
164 161 __releases(dcache_inode_lock)
165   - __releases(dcache_lock)
166 162 {
167 163 struct inode *inode = dentry->d_inode;
168 164 if (inode) {
... ... @@ -170,7 +166,6 @@
170 166 list_del_init(&dentry->d_alias);
171 167 spin_unlock(&dentry->d_lock);
172 168 spin_unlock(&dcache_inode_lock);
173   - spin_unlock(&dcache_lock);
174 169 if (!inode->i_nlink)
175 170 fsnotify_inoderemove(inode);
176 171 if (dentry->d_op && dentry->d_op->d_iput)
... ... @@ -180,7 +175,6 @@
180 175 } else {
181 176 spin_unlock(&dentry->d_lock);
182 177 spin_unlock(&dcache_inode_lock);
183   - spin_unlock(&dcache_lock);
184 178 }
185 179 }
186 180  
187 181  
... ... @@ -235,14 +229,13 @@
235 229 *
236 230 * If this is the root of the dentry tree, return NULL.
237 231 *
238   - * dcache_lock and d_lock and d_parent->d_lock must be held by caller, and
239   - * are dropped by d_kill.
  232 + * dentry->d_lock and parent->d_lock must be held by caller, and are dropped by
  233 + * d_kill.
240 234 */
241 235 static struct dentry *d_kill(struct dentry *dentry, struct dentry *parent)
242 236 __releases(dentry->d_lock)
243 237 __releases(parent->d_lock)
244 238 __releases(dcache_inode_lock)
245   - __releases(dcache_lock)
246 239 {
247 240 dentry->d_parent = NULL;
248 241 list_del(&dentry->d_u.d_child);
249 242  
... ... @@ -285,11 +278,9 @@
285 278  
286 279 void d_drop(struct dentry *dentry)
287 280 {
288   - spin_lock(&dcache_lock);
289 281 spin_lock(&dentry->d_lock);
290 282 __d_drop(dentry);
291 283 spin_unlock(&dentry->d_lock);
292   - spin_unlock(&dcache_lock);
293 284 }
294 285 EXPORT_SYMBOL(d_drop);
295 286  
296 287  
... ... @@ -337,22 +328,11 @@
337 328 else
338 329 parent = dentry->d_parent;
339 330 if (dentry->d_count == 1) {
340   - if (!spin_trylock(&dcache_lock)) {
341   - /*
342   - * Something of a livelock possibility we could avoid
343   - * by taking dcache_lock and trying again, but we
344   - * want to reduce dcache_lock anyway so this will
345   - * get improved.
346   - */
347   -drop1:
  331 + if (!spin_trylock(&dcache_inode_lock)) {
  332 +drop2:
348 333 spin_unlock(&dentry->d_lock);
349 334 goto repeat;
350 335 }
351   - if (!spin_trylock(&dcache_inode_lock)) {
352   -drop2:
353   - spin_unlock(&dcache_lock);
354   - goto drop1;
355   - }
356 336 if (parent && !spin_trylock(&parent->d_lock)) {
357 337 spin_unlock(&dcache_inode_lock);
358 338 goto drop2;
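The hunk above shows why dput() uses trylocks: it already holds ->d_lock, but the documented order puts dcache_inode_lock first, so blocking on it here could deadlock. A user-space sketch of the trylock-and-retry pattern; `dput_model` and the counter are hypothetical stand-ins:

```c
#include <pthread.h>

static pthread_mutex_t dcache_inode_lock = PTHREAD_MUTEX_INITIALIZER;

struct dentry {
    pthread_mutex_t d_lock;
    int             d_count;
};

/*
 * dput()-style locking: d_lock is taken first, but the documented
 * order is dcache_inode_lock before d_lock. Rather than block (and
 * risk an ABBA deadlock), trylock the out-of-order lock and restart
 * from scratch on failure.
 */
static void dput_model(struct dentry *dentry)
{
repeat:
    pthread_mutex_lock(&dentry->d_lock);
    if (pthread_mutex_trylock(&dcache_inode_lock) != 0) {
        pthread_mutex_unlock(&dentry->d_lock);  /* drop and retry */
        goto repeat;
    }
    dentry->d_count--;                  /* stand-in for the real teardown */
    pthread_mutex_unlock(&dcache_inode_lock);
    pthread_mutex_unlock(&dentry->d_lock);
}
```

As the removed comment notes, this retry loop is a livelock possibility rather than a deadlock one, which is the trade the fine-grained scheme accepts.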
... ... @@ -363,7 +343,6 @@
363 343 spin_unlock(&dentry->d_lock);
364 344 if (parent)
365 345 spin_unlock(&parent->d_lock);
366   - spin_unlock(&dcache_lock);
367 346 return;
368 347 }
369 348  
... ... @@ -387,7 +366,6 @@
387 366 if (parent)
388 367 spin_unlock(&parent->d_lock);
389 368 spin_unlock(&dcache_inode_lock);
390   - spin_unlock(&dcache_lock);
391 369 return;
392 370  
393 371 unhash_it:
394 372  
... ... @@ -418,11 +396,9 @@
418 396 /*
419 397 * If it's already been dropped, return OK.
420 398 */
421   - spin_lock(&dcache_lock);
422 399 spin_lock(&dentry->d_lock);
423 400 if (d_unhashed(dentry)) {
424 401 spin_unlock(&dentry->d_lock);
425   - spin_unlock(&dcache_lock);
426 402 return 0;
427 403 }
428 404 /*
429 405  
... ... @@ -431,9 +407,7 @@
431 407 */
432 408 if (!list_empty(&dentry->d_subdirs)) {
433 409 spin_unlock(&dentry->d_lock);
434   - spin_unlock(&dcache_lock);
435 410 shrink_dcache_parent(dentry);
436   - spin_lock(&dcache_lock);
437 411 spin_lock(&dentry->d_lock);
438 412 }
439 413  
440 414  
441 415  
... ... @@ -450,19 +424,17 @@
450 424 if (dentry->d_count > 1) {
451 425 if (dentry->d_inode && S_ISDIR(dentry->d_inode->i_mode)) {
452 426 spin_unlock(&dentry->d_lock);
453   - spin_unlock(&dcache_lock);
454 427 return -EBUSY;
455 428 }
456 429 }
457 430  
458 431 __d_drop(dentry);
459 432 spin_unlock(&dentry->d_lock);
460   - spin_unlock(&dcache_lock);
461 433 return 0;
462 434 }
463 435 EXPORT_SYMBOL(d_invalidate);
464 436  
465   -/* This must be called with dcache_lock and d_lock held */
  437 +/* This must be called with d_lock held */
466 438 static inline struct dentry * __dget_locked_dlock(struct dentry *dentry)
467 439 {
468 440 dentry->d_count++;
... ... @@ -470,7 +442,7 @@
470 442 return dentry;
471 443 }
472 444  
473   -/* This should be called _only_ with dcache_lock held */
  445 +/* This must be called with d_lock held */
474 446 static inline struct dentry * __dget_locked(struct dentry *dentry)
475 447 {
476 448 spin_lock(&dentry->d_lock);
477 449  
... ... @@ -575,11 +547,9 @@
575 547 struct dentry *de = NULL;
576 548  
577 549 if (!list_empty(&inode->i_dentry)) {
578   - spin_lock(&dcache_lock);
579 550 spin_lock(&dcache_inode_lock);
580 551 de = __d_find_alias(inode, 0);
581 552 spin_unlock(&dcache_inode_lock);
582   - spin_unlock(&dcache_lock);
583 553 }
584 554 return de;
585 555 }
... ... @@ -593,7 +563,6 @@
593 563 {
594 564 struct dentry *dentry;
595 565 restart:
596   - spin_lock(&dcache_lock);
597 566 spin_lock(&dcache_inode_lock);
598 567 list_for_each_entry(dentry, &inode->i_dentry, d_alias) {
599 568 spin_lock(&dentry->d_lock);
600 569  
... ... @@ -602,14 +571,12 @@
602 571 __d_drop(dentry);
603 572 spin_unlock(&dentry->d_lock);
604 573 spin_unlock(&dcache_inode_lock);
605   - spin_unlock(&dcache_lock);
606 574 dput(dentry);
607 575 goto restart;
608 576 }
609 577 spin_unlock(&dentry->d_lock);
610 578 }
611 579 spin_unlock(&dcache_inode_lock);
612   - spin_unlock(&dcache_lock);
613 580 }
614 581 EXPORT_SYMBOL(d_prune_aliases);
615 582  
616 583  
617 584  
... ... @@ -625,17 +592,14 @@
625 592 __releases(dentry->d_lock)
626 593 __releases(parent->d_lock)
627 594 __releases(dcache_inode_lock)
628   - __releases(dcache_lock)
629 595 {
630 596 __d_drop(dentry);
631 597 dentry = d_kill(dentry, parent);
632 598  
633 599 /*
634   - * Prune ancestors. Locking is simpler than in dput(),
635   - * because dcache_lock needs to be taken anyway.
  600 + * Prune ancestors.
636 601 */
637 602 while (dentry) {
638   - spin_lock(&dcache_lock);
639 603 spin_lock(&dcache_inode_lock);
640 604 again:
641 605 spin_lock(&dentry->d_lock);
... ... @@ -653,7 +617,6 @@
653 617 spin_unlock(&parent->d_lock);
654 618 spin_unlock(&dentry->d_lock);
655 619 spin_unlock(&dcache_inode_lock);
656   - spin_unlock(&dcache_lock);
657 620 return;
658 621 }
659 622  
... ... @@ -702,8 +665,7 @@
702 665 spin_unlock(&dcache_lru_lock);
703 666  
704 667 prune_one_dentry(dentry, parent);
705   - /* dcache_lock, dcache_inode_lock and dentry->d_lock dropped */
706   - spin_lock(&dcache_lock);
  668 + /* dcache_inode_lock and dentry->d_lock dropped */
707 669 spin_lock(&dcache_inode_lock);
708 670 spin_lock(&dcache_lru_lock);
709 671 }
... ... @@ -725,7 +687,6 @@
725 687 LIST_HEAD(tmp);
726 688 int cnt = *count;
727 689  
728   - spin_lock(&dcache_lock);
729 690 spin_lock(&dcache_inode_lock);
730 691 relock:
731 692 spin_lock(&dcache_lru_lock);
... ... @@ -766,7 +727,6 @@
766 727 list_splice(&referenced, &sb->s_dentry_lru);
767 728 spin_unlock(&dcache_lru_lock);
768 729 spin_unlock(&dcache_inode_lock);
769   - spin_unlock(&dcache_lock);
770 730 }
771 731  
772 732 /**
... ... @@ -788,7 +748,6 @@
788 748  
789 749 if (unused == 0 || count == 0)
790 750 return;
791   - spin_lock(&dcache_lock);
792 751 if (count >= unused)
793 752 prune_ratio = 1;
794 753 else
795 754  
... ... @@ -825,11 +784,9 @@
825 784 if (down_read_trylock(&sb->s_umount)) {
826 785 if ((sb->s_root != NULL) &&
827 786 (!list_empty(&sb->s_dentry_lru))) {
828   - spin_unlock(&dcache_lock);
829 787 __shrink_dcache_sb(sb, &w_count,
830 788 DCACHE_REFERENCED);
831 789 pruned -= w_count;
832   - spin_lock(&dcache_lock);
833 790 }
834 791 up_read(&sb->s_umount);
835 792 }
... ... @@ -845,7 +802,6 @@
845 802 if (p)
846 803 __put_super(p);
847 804 spin_unlock(&sb_lock);
848   - spin_unlock(&dcache_lock);
849 805 }
850 806  
851 807 /**
... ... @@ -859,7 +815,6 @@
859 815 {
860 816 LIST_HEAD(tmp);
861 817  
862   - spin_lock(&dcache_lock);
863 818 spin_lock(&dcache_inode_lock);
864 819 spin_lock(&dcache_lru_lock);
865 820 while (!list_empty(&sb->s_dentry_lru)) {
... ... @@ -868,7 +823,6 @@
868 823 }
869 824 spin_unlock(&dcache_lru_lock);
870 825 spin_unlock(&dcache_inode_lock);
871   - spin_unlock(&dcache_lock);
872 826 }
873 827 EXPORT_SYMBOL(shrink_dcache_sb);
874 828  
875 829  
... ... @@ -885,12 +839,10 @@
885 839 BUG_ON(!IS_ROOT(dentry));
886 840  
887 841 /* detach this root from the system */
888   - spin_lock(&dcache_lock);
889 842 spin_lock(&dentry->d_lock);
890 843 dentry_lru_del(dentry);
891 844 __d_drop(dentry);
892 845 spin_unlock(&dentry->d_lock);
893   - spin_unlock(&dcache_lock);
894 846  
895 847 for (;;) {
896 848 /* descend to the first leaf in the current subtree */
... ... @@ -899,7 +851,6 @@
899 851  
900 852 /* this is a branch with children - detach all of them
901 853 * from the system in one go */
902   - spin_lock(&dcache_lock);
903 854 spin_lock(&dentry->d_lock);
904 855 list_for_each_entry(loop, &dentry->d_subdirs,
905 856 d_u.d_child) {
... ... @@ -910,7 +861,6 @@
910 861 spin_unlock(&loop->d_lock);
911 862 }
912 863 spin_unlock(&dentry->d_lock);
913   - spin_unlock(&dcache_lock);
914 864  
915 865 /* move to the first child */
916 866 dentry = list_entry(dentry->d_subdirs.next,
... ... @@ -977,8 +927,7 @@
977 927  
978 928 /*
979 929 * destroy the dentries attached to a superblock on unmounting
980   - * - we don't need to use dentry->d_lock, and only need dcache_lock when
981   - * removing the dentry from the system lists and hashes because:
  930 + * - we don't need to use dentry->d_lock because:
982 931 * - the superblock is detached from all mountings and open files, so the
983 932 * dentry trees will not be rearranged by the VFS
984 933 * - s_umount is write-locked, so the memory pressure shrinker will ignore
... ... @@ -1029,7 +978,6 @@
1029 978 this_parent = parent;
1030 979 seq = read_seqbegin(&rename_lock);
1031 980  
1032   - spin_lock(&dcache_lock);
1033 981 if (d_mountpoint(parent))
1034 982 goto positive;
1035 983 spin_lock(&this_parent->d_lock);
... ... @@ -1075,7 +1023,6 @@
1075 1023 if (this_parent != child->d_parent ||
1076 1024 read_seqretry(&rename_lock, seq)) {
1077 1025 spin_unlock(&this_parent->d_lock);
1078   - spin_unlock(&dcache_lock);
1079 1026 rcu_read_unlock();
1080 1027 goto rename_retry;
1081 1028 }
1082 1029  
... ... @@ -1084,12 +1031,10 @@
1084 1031 goto resume;
1085 1032 }
1086 1033 spin_unlock(&this_parent->d_lock);
1087   - spin_unlock(&dcache_lock);
1088 1034 if (read_seqretry(&rename_lock, seq))
1089 1035 goto rename_retry;
1090 1036 return 0; /* No mount points found in tree */
1091 1037 positive:
1092   - spin_unlock(&dcache_lock);
1093 1038 if (read_seqretry(&rename_lock, seq))
1094 1039 goto rename_retry;
1095 1040 return 1;
... ... @@ -1121,7 +1066,6 @@
1121 1066 this_parent = parent;
1122 1067 seq = read_seqbegin(&rename_lock);
1123 1068  
1124   - spin_lock(&dcache_lock);
1125 1069 spin_lock(&this_parent->d_lock);
1126 1070 repeat:
1127 1071 next = this_parent->d_subdirs.next;
... ... @@ -1185,7 +1129,6 @@
1185 1129 if (this_parent != child->d_parent ||
1186 1130 read_seqretry(&rename_lock, seq)) {
1187 1131 spin_unlock(&this_parent->d_lock);
1188   - spin_unlock(&dcache_lock);
1189 1132 rcu_read_unlock();
1190 1133 goto rename_retry;
1191 1134 }
... ... @@ -1195,7 +1138,6 @@
1195 1138 }
1196 1139 out:
1197 1140 spin_unlock(&this_parent->d_lock);
1198   - spin_unlock(&dcache_lock);
1199 1141 if (read_seqretry(&rename_lock, seq))
1200 1142 goto rename_retry;
1201 1143 return found;
... ... @@ -1297,7 +1239,6 @@
1297 1239 INIT_LIST_HEAD(&dentry->d_u.d_child);
1298 1240  
1299 1241 if (parent) {
1300   - spin_lock(&dcache_lock);
1301 1242 spin_lock(&parent->d_lock);
1302 1243 spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
1303 1244 dentry->d_parent = dget_dlock(parent);
... ... @@ -1305,7 +1246,6 @@
1305 1246 list_add(&dentry->d_u.d_child, &parent->d_subdirs);
1306 1247 spin_unlock(&dentry->d_lock);
1307 1248 spin_unlock(&parent->d_lock);
1308   - spin_unlock(&dcache_lock);
1309 1249 }
1310 1250  
1311 1251 this_cpu_inc(nr_dentry);
... ... @@ -1325,7 +1265,6 @@
1325 1265 }
1326 1266 EXPORT_SYMBOL(d_alloc_name);
1327 1267  
1328   -/* the caller must hold dcache_lock */
1329 1268 static void __d_instantiate(struct dentry *dentry, struct inode *inode)
1330 1269 {
1331 1270 spin_lock(&dentry->d_lock);
1332 1271  
... ... @@ -1354,11 +1293,9 @@
1354 1293 void d_instantiate(struct dentry *entry, struct inode * inode)
1355 1294 {
1356 1295 BUG_ON(!list_empty(&entry->d_alias));
1357   - spin_lock(&dcache_lock);
1358 1296 spin_lock(&dcache_inode_lock);
1359 1297 __d_instantiate(entry, inode);
1360 1298 spin_unlock(&dcache_inode_lock);
1361   - spin_unlock(&dcache_lock);
1362 1299 security_d_instantiate(entry, inode);
1363 1300 }
1364 1301 EXPORT_SYMBOL(d_instantiate);
1365 1302  
... ... @@ -1422,11 +1359,9 @@
1422 1359  
1423 1360 BUG_ON(!list_empty(&entry->d_alias));
1424 1361  
1425   - spin_lock(&dcache_lock);
1426 1362 spin_lock(&dcache_inode_lock);
1427 1363 result = __d_instantiate_unique(entry, inode);
1428 1364 spin_unlock(&dcache_inode_lock);
1429   - spin_unlock(&dcache_lock);
1430 1365  
1431 1366 if (!result) {
1432 1367 security_d_instantiate(entry, inode);
1433 1368  
... ... @@ -1515,12 +1450,11 @@
1515 1450 }
1516 1451 tmp->d_parent = tmp; /* make sure dput doesn't croak */
1517 1452  
1518   - spin_lock(&dcache_lock);
  1453 +
1519 1454 spin_lock(&dcache_inode_lock);
1520 1455 res = __d_find_alias(inode, 0);
1521 1456 if (res) {
1522 1457 spin_unlock(&dcache_inode_lock);
1523   - spin_unlock(&dcache_lock);
1524 1458 dput(tmp);
1525 1459 goto out_iput;
1526 1460 }
... ... @@ -1538,7 +1472,6 @@
1538 1472 spin_unlock(&tmp->d_lock);
1539 1473 spin_unlock(&dcache_inode_lock);
1540 1474  
1541   - spin_unlock(&dcache_lock);
1542 1475 return tmp;
1543 1476  
1544 1477 out_iput:
1545 1478  
1546 1479  
1547 1480  
... ... @@ -1568,21 +1501,18 @@
1568 1501 struct dentry *new = NULL;
1569 1502  
1570 1503 if (inode && S_ISDIR(inode->i_mode)) {
1571   - spin_lock(&dcache_lock);
1572 1504 spin_lock(&dcache_inode_lock);
1573 1505 new = __d_find_alias(inode, 1);
1574 1506 if (new) {
1575 1507 BUG_ON(!(new->d_flags & DCACHE_DISCONNECTED));
1576 1508 spin_unlock(&dcache_inode_lock);
1577   - spin_unlock(&dcache_lock);
1578 1509 security_d_instantiate(new, inode);
1579 1510 d_move(new, dentry);
1580 1511 iput(inode);
1581 1512 } else {
1582   - /* already taking dcache_lock, so d_add() by hand */
  1513 + /* already taking dcache_inode_lock, so d_add() by hand */
1583 1514 __d_instantiate(dentry, inode);
1584 1515 spin_unlock(&dcache_inode_lock);
1585   - spin_unlock(&dcache_lock);
1586 1516 security_d_instantiate(dentry, inode);
1587 1517 d_rehash(dentry);
1588 1518 }
1589 1519  
... ... @@ -1655,12 +1585,10 @@
1655 1585 * Negative dentry: instantiate it unless the inode is a directory and
1656 1586 * already has a dentry.
1657 1587 */
1658   - spin_lock(&dcache_lock);
1659 1588 spin_lock(&dcache_inode_lock);
1660 1589 if (!S_ISDIR(inode->i_mode) || list_empty(&inode->i_dentry)) {
1661 1590 __d_instantiate(found, inode);
1662 1591 spin_unlock(&dcache_inode_lock);
1663   - spin_unlock(&dcache_lock);
1664 1592 security_d_instantiate(found, inode);
1665 1593 return found;
1666 1594 }
... ... @@ -1672,7 +1600,6 @@
1672 1600 new = list_entry(inode->i_dentry.next, struct dentry, d_alias);
1673 1601 dget_locked(new);
1674 1602 spin_unlock(&dcache_inode_lock);
1675   - spin_unlock(&dcache_lock);
1676 1603 security_d_instantiate(found, inode);
1677 1604 d_move(new, found);
1678 1605 iput(inode);
... ... @@ -1843,7 +1770,6 @@
1843 1770 {
1844 1771 struct dentry *child;
1845 1772  
1846   - spin_lock(&dcache_lock);
1847 1773 spin_lock(&dparent->d_lock);
1848 1774 list_for_each_entry(child, &dparent->d_subdirs, d_u.d_child) {
1849 1775 if (dentry == child) {
1850 1776  
... ... @@ -1851,12 +1777,10 @@
1851 1777 __dget_locked_dlock(dentry);
1852 1778 spin_unlock(&dentry->d_lock);
1853 1779 spin_unlock(&dparent->d_lock);
1854   - spin_unlock(&dcache_lock);
1855 1780 return 1;
1856 1781 }
1857 1782 }
1858 1783 spin_unlock(&dparent->d_lock);
1859   - spin_unlock(&dcache_lock);
1860 1784  
1861 1785 return 0;
1862 1786 }
... ... @@ -1889,7 +1813,6 @@
1889 1813 /*
1890 1814 * Are we the only user?
1891 1815 */
1892   - spin_lock(&dcache_lock);
1893 1816 spin_lock(&dcache_inode_lock);
1894 1817 spin_lock(&dentry->d_lock);
1895 1818 isdir = S_ISDIR(dentry->d_inode->i_mode);
... ... @@ -1905,7 +1828,6 @@
1905 1828  
1906 1829 spin_unlock(&dentry->d_lock);
1907 1830 spin_unlock(&dcache_inode_lock);
1908   - spin_unlock(&dcache_lock);
1909 1831  
1910 1832 fsnotify_nameremove(dentry, isdir);
1911 1833 }
1912 1834  
... ... @@ -1932,13 +1854,11 @@
1932 1854  
1933 1855 void d_rehash(struct dentry * entry)
1934 1856 {
1935   - spin_lock(&dcache_lock);
1936 1857 spin_lock(&entry->d_lock);
1937 1858 spin_lock(&dcache_hash_lock);
1938 1859 _d_rehash(entry);
1939 1860 spin_unlock(&dcache_hash_lock);
1940 1861 spin_unlock(&entry->d_lock);
1941   - spin_unlock(&dcache_lock);
1942 1862 }
1943 1863 EXPORT_SYMBOL(d_rehash);
1944 1864  
1945 1865  
... ... @@ -1961,11 +1881,9 @@
1961 1881 BUG_ON(!mutex_is_locked(&dentry->d_inode->i_mutex));
1962 1882 BUG_ON(dentry->d_name.len != name->len); /* d_lookup gives this */
1963 1883  
1964   - spin_lock(&dcache_lock);
1965 1884 spin_lock(&dentry->d_lock);
1966 1885 memcpy((unsigned char *)dentry->d_name.name, name->name, name->len);
1967 1886 spin_unlock(&dentry->d_lock);
1968   - spin_unlock(&dcache_lock);
1969 1887 }
1970 1888 EXPORT_SYMBOL(dentry_update_name_case);
1971 1889  
1972 1890  
... ... @@ -2058,14 +1976,14 @@
2058 1976 * The hash value has to match the hash queue that the dentry is on..
2059 1977 */
2060 1978 /*
2061   - * d_move_locked - move a dentry
  1979 + * d_move - move a dentry
2062 1980 * @dentry: entry to move
2063 1981 * @target: new dentry
2064 1982 *
2065 1983 * Update the dcache to reflect the move of a file name. Negative
2066 1984 * dcache entries should not be moved in this way.
2067 1985 */
2068   -static void d_move_locked(struct dentry * dentry, struct dentry * target)
  1986 +void d_move(struct dentry * dentry, struct dentry * target)
2069 1987 {
2070 1988 if (!dentry->d_inode)
2071 1989 printk(KERN_WARNING "VFS: moving negative dcache entry\n");
... ... @@ -2114,22 +2032,6 @@
2114 2032 spin_unlock(&dentry->d_lock);
2115 2033 write_sequnlock(&rename_lock);
2116 2034 }
2117   -
2118   -/**
2119   - * d_move - move a dentry
2120   - * @dentry: entry to move
2121   - * @target: new dentry
2122   - *
2123   - * Update the dcache to reflect the move of a file name. Negative
2124   - * dcache entries should not be moved in this way.
2125   - */
2126   -
2127   -void d_move(struct dentry * dentry, struct dentry * target)
2128   -{
2129   - spin_lock(&dcache_lock);
2130   - d_move_locked(dentry, target);
2131   - spin_unlock(&dcache_lock);
2132   -}
2133 2035 EXPORT_SYMBOL(d_move);
2134 2036  
2135 2037 /**
2136 2038  
... ... @@ -2155,13 +2057,12 @@
2155 2057 * This helper attempts to cope with remotely renamed directories
2156 2058 *
2157 2059 * It assumes that the caller is already holding
2158   - * dentry->d_parent->d_inode->i_mutex and the dcache_lock
  2060 + * dentry->d_parent->d_inode->i_mutex and the dcache_inode_lock
2159 2061 *
2160 2062 * Note: If ever the locking in lock_rename() changes, then please
2161 2063 * remember to update this too...
2162 2064 */
2163 2065 static struct dentry *__d_unalias(struct dentry *dentry, struct dentry *alias)
2164   - __releases(dcache_lock)
2165 2066 __releases(dcache_inode_lock)
2166 2067 {
2167 2068 struct mutex *m1 = NULL, *m2 = NULL;
2168 2069  
... ... @@ -2185,11 +2086,10 @@
2185 2086 goto out_err;
2186 2087 m2 = &alias->d_parent->d_inode->i_mutex;
2187 2088 out_unalias:
2188   - d_move_locked(alias, dentry);
  2089 + d_move(alias, dentry);
2189 2090 ret = alias;
2190 2091 out_err:
2191 2092 spin_unlock(&dcache_inode_lock);
2192   - spin_unlock(&dcache_lock);
2193 2093 if (m2)
2194 2094 mutex_unlock(m2);
2195 2095 if (m1)
... ... @@ -2249,7 +2149,6 @@
2249 2149  
2250 2150 BUG_ON(!d_unhashed(dentry));
2251 2151  
2252   - spin_lock(&dcache_lock);
2253 2152 spin_lock(&dcache_inode_lock);
2254 2153  
2255 2154 if (!inode) {
... ... @@ -2295,7 +2194,6 @@
2295 2194 spin_unlock(&dcache_hash_lock);
2296 2195 spin_unlock(&actual->d_lock);
2297 2196 spin_unlock(&dcache_inode_lock);
2298   - spin_unlock(&dcache_lock);
2299 2197 out_nolock:
2300 2198 if (actual == dentry) {
2301 2199 security_d_instantiate(dentry, inode);
... ... @@ -2307,7 +2205,6 @@
2307 2205  
2308 2206 shouldnt_be_hashed:
2309 2207 spin_unlock(&dcache_inode_lock);
2310   - spin_unlock(&dcache_lock);
2311 2208 BUG();
2312 2209 }
2313 2210 EXPORT_SYMBOL_GPL(d_materialise_unique);
2314 2211  
... ... @@ -2421,11 +2318,9 @@
2421 2318 int error;
2422 2319  
2423 2320 prepend(&res, &buflen, "\0", 1);
2424   - spin_lock(&dcache_lock);
2425 2321 write_seqlock(&rename_lock);
2426 2322 error = prepend_path(path, root, &res, &buflen);
2427 2323 write_sequnlock(&rename_lock);
2428   - spin_unlock(&dcache_lock);
2429 2324  
2430 2325 if (error)
2431 2326 return ERR_PTR(error);
2432 2327  
... ... @@ -2487,14 +2382,12 @@
2487 2382 return path->dentry->d_op->d_dname(path->dentry, buf, buflen);
2488 2383  
2489 2384 get_fs_root(current->fs, &root);
2490   - spin_lock(&dcache_lock);
2491 2385 write_seqlock(&rename_lock);
2492 2386 tmp = root;
2493 2387 error = path_with_deleted(path, &tmp, &res, &buflen);
2494 2388 if (error)
2495 2389 res = ERR_PTR(error);
2496 2390 write_sequnlock(&rename_lock);
2497   - spin_unlock(&dcache_lock);
2498 2391 path_put(&root);
2499 2392 return res;
2500 2393 }
2501 2394  
... ... @@ -2520,14 +2413,12 @@
2520 2413 return path->dentry->d_op->d_dname(path->dentry, buf, buflen);
2521 2414  
2522 2415 get_fs_root(current->fs, &root);
2523   - spin_lock(&dcache_lock);
2524 2416 write_seqlock(&rename_lock);
2525 2417 tmp = root;
2526 2418 error = path_with_deleted(path, &tmp, &res, &buflen);
2527 2419 if (!error && !path_equal(&tmp, &root))
2528 2420 error = prepend_unreachable(&res, &buflen);
2529 2421 write_sequnlock(&rename_lock);
2530   - spin_unlock(&dcache_lock);
2531 2422 path_put(&root);
2532 2423 if (error)
2533 2424 res = ERR_PTR(error);
2534 2425  
... ... @@ -2594,11 +2485,9 @@
2594 2485 {
2595 2486 char *retval;
2596 2487  
2597   - spin_lock(&dcache_lock);
2598 2488 write_seqlock(&rename_lock);
2599 2489 retval = __dentry_path(dentry, buf, buflen);
2600 2490 write_sequnlock(&rename_lock);
2601   - spin_unlock(&dcache_lock);
2602 2491  
2603 2492 return retval;
2604 2493 }
... ... @@ -2609,7 +2498,6 @@
2609 2498 char *p = NULL;
2610 2499 char *retval;
2611 2500  
2612   - spin_lock(&dcache_lock);
2613 2501 write_seqlock(&rename_lock);
2614 2502 if (d_unlinked(dentry)) {
2615 2503 p = buf + buflen;
2616 2504  
... ... @@ -2619,12 +2507,10 @@
2619 2507 }
2620 2508 retval = __dentry_path(dentry, buf, buflen);
2621 2509 write_sequnlock(&rename_lock);
2622   - spin_unlock(&dcache_lock);
2623 2510 if (!IS_ERR(retval) && p)
2624 2511 *p = '/'; /* restore '/' overriden with '\0' */
2625 2512 return retval;
2626 2513 Elong:
2627   - spin_unlock(&dcache_lock);
2628 2514 return ERR_PTR(-ENAMETOOLONG);
2629 2515 }
2630 2516  
... ... @@ -2658,7 +2544,6 @@
2658 2544 get_fs_root_and_pwd(current->fs, &root, &pwd);
2659 2545  
2660 2546 error = -ENOENT;
2661   - spin_lock(&dcache_lock);
2662 2547 write_seqlock(&rename_lock);
2663 2548 if (!d_unlinked(pwd.dentry)) {
2664 2549 unsigned long len;
... ... @@ -2669,7 +2554,6 @@
2669 2554 prepend(&cwd, &buflen, "\0", 1);
2670 2555 error = prepend_path(&pwd, &tmp, &cwd, &buflen);
2671 2556 write_sequnlock(&rename_lock);
2672   - spin_unlock(&dcache_lock);
2673 2557  
2674 2558 if (error)
2675 2559 goto out;
... ... @@ -2690,7 +2574,6 @@
2690 2574 }
2691 2575 } else {
2692 2576 write_sequnlock(&rename_lock);
2693   - spin_unlock(&dcache_lock);
2694 2577 }
2695 2578  
2696 2579 out:
... ... @@ -2776,7 +2659,6 @@
2776 2659 rename_retry:
2777 2660 this_parent = root;
2778 2661 seq = read_seqbegin(&rename_lock);
2779   - spin_lock(&dcache_lock);
2780 2662 spin_lock(&this_parent->d_lock);
2781 2663 repeat:
2782 2664 next = this_parent->d_subdirs.next;
... ... @@ -2823,7 +2705,6 @@
2823 2705 if (this_parent != child->d_parent ||
2824 2706 read_seqretry(&rename_lock, seq)) {
2825 2707 spin_unlock(&this_parent->d_lock);
2826   - spin_unlock(&dcache_lock);
2827 2708 rcu_read_unlock();
2828 2709 goto rename_retry;
2829 2710 }
... ... @@ -2832,7 +2713,6 @@
2832 2713 goto resume;
2833 2714 }
2834 2715 spin_unlock(&this_parent->d_lock);
2835   - spin_unlock(&dcache_lock);
2836 2716 if (read_seqretry(&rename_lock, seq))
2837 2717 goto rename_retry;
2838 2718 }
... ... @@ -47,24 +47,20 @@
47 47 if (acceptable(context, result))
48 48 return result;
49 49  
50   - spin_lock(&dcache_lock);
51 50 spin_lock(&dcache_inode_lock);
52 51 list_for_each_entry(dentry, &result->d_inode->i_dentry, d_alias) {
53 52 dget_locked(dentry);
54 53 spin_unlock(&dcache_inode_lock);
55   - spin_unlock(&dcache_lock);
56 54 if (toput)
57 55 dput(toput);
58 56 if (dentry != result && acceptable(context, dentry)) {
59 57 dput(result);
60 58 return dentry;
61 59 }
62   - spin_lock(&dcache_lock);
63 60 spin_lock(&dcache_inode_lock);
64 61 toput = dentry;
65 62 }
66 63 spin_unlock(&dcache_inode_lock);
67   - spin_unlock(&dcache_lock);
68 64  
69 65 if (toput)
70 66 dput(toput);
... ... @@ -100,7 +100,6 @@
100 100 struct dentry *cursor = file->private_data;
101 101 loff_t n = file->f_pos - 2;
102 102  
103   - spin_lock(&dcache_lock);
104 103 spin_lock(&dentry->d_lock);
105 104 /* d_lock not required for cursor */
106 105 list_del(&cursor->d_u.d_child);
... ... @@ -116,7 +115,6 @@
116 115 }
117 116 list_add_tail(&cursor->d_u.d_child, p);
118 117 spin_unlock(&dentry->d_lock);
119   - spin_unlock(&dcache_lock);
120 118 }
121 119 }
122 120 mutex_unlock(&dentry->d_inode->i_mutex);
... ... @@ -159,7 +157,6 @@
159 157 i++;
160 158 /* fallthrough */
161 159 default:
162   - spin_lock(&dcache_lock);
163 160 spin_lock(&dentry->d_lock);
164 161 if (filp->f_pos == 2)
165 162 list_move(q, &dentry->d_subdirs);
166 163  
... ... @@ -175,13 +172,11 @@
175 172  
176 173 spin_unlock(&next->d_lock);
177 174 spin_unlock(&dentry->d_lock);
178   - spin_unlock(&dcache_lock);
179 175 if (filldir(dirent, next->d_name.name,
180 176 next->d_name.len, filp->f_pos,
181 177 next->d_inode->i_ino,
182 178 dt_type(next->d_inode)) < 0)
183 179 return 0;
184   - spin_lock(&dcache_lock);
185 180 spin_lock(&dentry->d_lock);
186 181 spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
187 182 /* next is still alive */
... ... @@ -191,7 +186,6 @@
191 186 filp->f_pos++;
192 187 }
193 188 spin_unlock(&dentry->d_lock);
194   - spin_unlock(&dcache_lock);
195 189 }
196 190 return 0;
197 191 }
... ... @@ -285,7 +279,6 @@
285 279 struct dentry *child;
286 280 int ret = 0;
287 281  
288   - spin_lock(&dcache_lock);
289 282 spin_lock(&dentry->d_lock);
290 283 list_for_each_entry(child, &dentry->d_subdirs, d_u.d_child) {
291 284 spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
... ... @@ -298,7 +291,6 @@
298 291 ret = 1;
299 292 out:
300 293 spin_unlock(&dentry->d_lock);
301   - spin_unlock(&dcache_lock);
302 294 return ret;
303 295 }
304 296  
... ... @@ -612,8 +612,8 @@
612 612 return 1;
613 613 }
614 614  
615   -/* no need for dcache_lock, as serialization is taken care in
616   - * namespace.c
  615 +/*
  616 + * serialization is taken care of in namespace.c
617 617 */
618 618 static int __follow_mount(struct path *path)
619 619 {
... ... @@ -645,9 +645,6 @@
645 645 }
646 646 }
647 647  
648   -/* no need for dcache_lock, as serialization is taken care in
649   - * namespace.c
650   - */
651 648 int follow_down(struct path *path)
652 649 {
653 650 struct vfsmount *mounted;
654 651  
... ... @@ -2131,12 +2128,10 @@
2131 2128 {
2132 2129 dget(dentry);
2133 2130 shrink_dcache_parent(dentry);
2134   - spin_lock(&dcache_lock);
2135 2131 spin_lock(&dentry->d_lock);
2136 2132 if (dentry->d_count == 2)
2137 2133 __d_drop(dentry);
2138 2134 spin_unlock(&dentry->d_lock);
2139   - spin_unlock(&dcache_lock);
2140 2135 }
2141 2136  
2142 2137 int vfs_rmdir(struct inode *dir, struct dentry *dentry)
... ... @@ -391,7 +391,6 @@
391 391 }
392 392  
393 393 /* If a pointer is invalid, we search the dentry. */
394   - spin_lock(&dcache_lock);
395 394 spin_lock(&parent->d_lock);
396 395 next = parent->d_subdirs.next;
397 396 while (next != &parent->d_subdirs) {
398 397  
... ... @@ -402,13 +401,11 @@
402 401 else
403 402 dent = NULL;
404 403 spin_unlock(&parent->d_lock);
405   - spin_unlock(&dcache_lock);
406 404 goto out;
407 405 }
408 406 next = next->next;
409 407 }
410 408 spin_unlock(&parent->d_lock);
411   - spin_unlock(&dcache_lock);
412 409 return NULL;
413 410  
414 411 out:
fs/ncpfs/ncplib_kernel.h
... ... @@ -193,7 +193,6 @@
193 193 struct list_head *next;
194 194 struct dentry *dentry;
195 195  
196   - spin_lock(&dcache_lock);
197 196 spin_lock(&parent->d_lock);
198 197 next = parent->d_subdirs.next;
199 198 while (next != &parent->d_subdirs) {
... ... @@ -207,7 +206,6 @@
207 206 next = next->next;
208 207 }
209 208 spin_unlock(&parent->d_lock);
210   - spin_unlock(&dcache_lock);
211 209 }
212 210  
213 211 static inline void
... ... @@ -217,7 +215,6 @@
217 215 struct list_head *next;
218 216 struct dentry *dentry;
219 217  
220   - spin_lock(&dcache_lock);
221 218 spin_lock(&parent->d_lock);
222 219 next = parent->d_subdirs.next;
223 220 while (next != &parent->d_subdirs) {
... ... @@ -227,7 +224,6 @@
227 224 next = next->next;
228 225 }
229 226 spin_unlock(&parent->d_lock);
230   - spin_unlock(&dcache_lock);
231 227 }
232 228  
233 229 struct ncp_cache_head {
... ... @@ -1718,11 +1718,9 @@
1718 1718 dfprintk(VFS, "NFS: unlink(%s/%ld, %s)\n", dir->i_sb->s_id,
1719 1719 dir->i_ino, dentry->d_name.name);
1720 1720  
1721   - spin_lock(&dcache_lock);
1722 1721 spin_lock(&dentry->d_lock);
1723 1722 if (dentry->d_count > 1) {
1724 1723 spin_unlock(&dentry->d_lock);
1725   - spin_unlock(&dcache_lock);
1726 1724 /* Start asynchronous writeout of the inode */
1727 1725 write_inode_now(dentry->d_inode, 0);
1728 1726 error = nfs_sillyrename(dir, dentry);
... ... @@ -1733,7 +1731,6 @@
1733 1731 need_rehash = 1;
1734 1732 }
1735 1733 spin_unlock(&dentry->d_lock);
1736   - spin_unlock(&dcache_lock);
1737 1734 error = nfs_safe_remove(dentry);
1738 1735 if (!error || error == -ENOENT) {
1739 1736 nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
... ... @@ -63,13 +63,11 @@
63 63 * This again causes shrink_dcache_for_umount_subtree() to
64 64 * Oops, since the test for IS_ROOT() will fail.
65 65 */
66   - spin_lock(&dcache_lock);
67 66 spin_lock(&dcache_inode_lock);
68 67 spin_lock(&sb->s_root->d_lock);
69 68 list_del_init(&sb->s_root->d_alias);
70 69 spin_unlock(&sb->s_root->d_lock);
71 70 spin_unlock(&dcache_inode_lock);
72   - spin_unlock(&dcache_lock);
73 71 }
74 72 return 0;
75 73 }
... ... @@ -60,7 +60,6 @@
60 60  
61 61 seq = read_seqbegin(&rename_lock);
62 62 rcu_read_lock();
63   - spin_lock(&dcache_lock);
64 63 while (!IS_ROOT(dentry) && dentry != droot) {
65 64 namelen = dentry->d_name.len;
66 65 buflen -= namelen + 1;
... ... @@ -71,7 +70,6 @@
71 70 *--end = '/';
72 71 dentry = dentry->d_parent;
73 72 }
74   - spin_unlock(&dcache_lock);
75 73 rcu_read_unlock();
76 74 if (read_seqretry(&rename_lock, seq))
77 75 goto rename_retry;
... ... @@ -91,7 +89,6 @@
91 89 memcpy(end, base, namelen);
92 90 return end;
93 91 Elong_unlock:
94   - spin_unlock(&dcache_lock);
95 92 rcu_read_unlock();
96 93 if (read_seqretry(&rename_lock, seq))
97 94 goto rename_retry;
fs/notify/fsnotify.c
... ... @@ -59,7 +59,6 @@
59 59 /* determine if the children should tell inode about their events */
60 60 watched = fsnotify_inode_watches_children(inode);
61 61  
62   - spin_lock(&dcache_lock);
63 62 spin_lock(&dcache_inode_lock);
64 63 /* run all of the dentries associated with this inode. Since this is a
65 64 * directory, there damn well better only be one item on this list */
... ... @@ -84,7 +83,6 @@
84 83 spin_unlock(&alias->d_lock);
85 84 }
86 85 spin_unlock(&dcache_inode_lock);
87   - spin_unlock(&dcache_lock);
88 86 }
89 87  
90 88 /* Notify this dentry's parent about a child's events. */
... ... @@ -169,7 +169,6 @@
169 169 struct list_head *p;
170 170 struct dentry *dentry = NULL;
171 171  
172   - spin_lock(&dcache_lock);
173 172 spin_lock(&dcache_inode_lock);
174 173 list_for_each(p, &inode->i_dentry) {
175 174 dentry = list_entry(p, struct dentry, d_alias);
... ... @@ -189,7 +188,6 @@
189 188 }
190 189  
191 190 spin_unlock(&dcache_inode_lock);
192   - spin_unlock(&dcache_lock);
193 191  
194 192 return dentry;
195 193 }
include/linux/dcache.h
... ... @@ -183,7 +183,6 @@
183 183 #define DCACHE_GENOCIDE 0x0200
184 184  
185 185 extern spinlock_t dcache_inode_lock;
186   -extern spinlock_t dcache_lock;
187 186 extern seqlock_t rename_lock;
188 187  
189 188 static inline int dname_external(struct dentry *dentry)
... ... @@ -296,8 +295,8 @@
296 295 * destroyed when it has references. dget() should never be
297 296 * called for dentries with zero reference counter. For these cases
298 297 * (preferably none, functions in dcache.c are sufficient for normal
299   - * needs and they take necessary precautions) you should hold dcache_lock
300   - * and call dget_locked() instead of dget().
  298 + * needs and they take necessary precautions) you should hold d_lock
  299 + * and call dget_dlock() instead of dget().
301 300 */
302 301 static inline struct dentry *dget_dlock(struct dentry *dentry)
303 302 {
... ... @@ -1378,7 +1378,7 @@
1378 1378 #else
1379 1379 struct list_head s_files;
1380 1380 #endif
1381   - /* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
  1381 + /* s_dentry_lru, s_nr_dentry_unused protected by dcache.c lru locks */
1382 1382 struct list_head s_dentry_lru; /* unused dentry lru */
1383 1383 int s_nr_dentry_unused; /* # of dentry on lru */
1384 1384  
... ... @@ -2446,6 +2446,10 @@
2446 2446 {
2447 2447 ino_t res;
2448 2448  
  2449 + /*
  2450 + * Don't strictly need d_lock here? If the parent ino could change
  2451 + * then surely we'd have a deeper race in the caller?
  2452 + */
2449 2453 spin_lock(&dentry->d_lock);
2450 2454 res = dentry->d_parent->d_inode->i_ino;
2451 2455 spin_unlock(&dentry->d_lock);
include/linux/fsnotify.h
... ... @@ -17,7 +17,6 @@
17 17  
18 18 /*
19 19 * fsnotify_d_instantiate - instantiate a dentry for inode
20   - * Called with dcache_lock held.
21 20 */
22 21 static inline void fsnotify_d_instantiate(struct dentry *dentry,
23 22 struct inode *inode)
... ... @@ -62,7 +61,6 @@
62 61  
63 62 /*
64 63 * fsnotify_d_move - dentry has been moved
65   - * Called with dcache_lock and dentry->d_lock held.
66 64 */
67 65 static inline void fsnotify_d_move(struct dentry *dentry)
68 66 {
include/linux/fsnotify_backend.h
... ... @@ -329,9 +329,15 @@
329 329 {
330 330 struct dentry *parent;
331 331  
332   - assert_spin_locked(&dcache_lock);
333 332 assert_spin_locked(&dentry->d_lock);
334 333  
  334 + /*
  335 + * Serialisation of setting PARENT_WATCHED on the dentries is provided
  336 + * by d_lock. If inotify_inode_watched changes after we have taken
  337 + * d_lock, the following __fsnotify_update_child_dentry_flags call will
  338 + * find our entry, so it will spin until we complete here, and update
  339 + * us with the new state.
  340 + */
335 341 parent = dentry->d_parent;
336 342 if (parent->d_inode && fsnotify_inode_watches_children(parent->d_inode))
337 343 dentry->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
338 344  
... ... @@ -341,14 +347,11 @@
341 347  
342 348 /*
343 349 * fsnotify_d_instantiate - instantiate a dentry for inode
344   - * Called with dcache_lock held.
345 350 */
346 351 static inline void __fsnotify_d_instantiate(struct dentry *dentry, struct inode *inode)
347 352 {
348 353 if (!inode)
349 354 return;
350   -
351   - assert_spin_locked(&dcache_lock);
352 355  
353 356 spin_lock(&dentry->d_lock);
354 357 __fsnotify_update_dcache_flags(dentry);
include/linux/namei.h
... ... @@ -41,7 +41,6 @@
41 41 * - require a directory
42 42 * - ending slashes ok even for nonexistent files
43 43 * - internal "there are more path components" flag
44   - * - locked when lookup done with dcache_lock held
45 44 * - dentry cache is untrusted; force a real lookup
46 45 */
47 46 #define LOOKUP_FOLLOW 1
... ... @@ -876,7 +876,6 @@
876 876 struct list_head *node;
877 877  
878 878 BUG_ON(!mutex_is_locked(&dentry->d_inode->i_mutex));
879   - spin_lock(&dcache_lock);
880 879 spin_lock(&dentry->d_lock);
881 880 node = dentry->d_subdirs.next;
882 881 while (node != &dentry->d_subdirs) {
883 882  
884 883  
... ... @@ -891,18 +890,15 @@
891 890 dget_locked_dlock(d);
892 891 spin_unlock(&d->d_lock);
893 892 spin_unlock(&dentry->d_lock);
894   - spin_unlock(&dcache_lock);
895 893 d_delete(d);
896 894 simple_unlink(dentry->d_inode, d);
897 895 dput(d);
898   - spin_lock(&dcache_lock);
899 896 spin_lock(&dentry->d_lock);
900 897 } else
901 898 spin_unlock(&d->d_lock);
902 899 node = dentry->d_subdirs.next;
903 900 }
904 901 spin_unlock(&dentry->d_lock);
905   - spin_unlock(&dcache_lock);
906 902 }
907 903  
908 904 /*
909 905  
... ... @@ -914,14 +910,12 @@
914 910  
915 911 cgroup_clear_directory(dentry);
916 912  
917   - spin_lock(&dcache_lock);
918 913 parent = dentry->d_parent;
919 914 spin_lock(&parent->d_lock);
920 915 spin_lock(&dentry->d_lock);
921 916 list_del_init(&dentry->d_u.d_child);
922 917 spin_unlock(&dentry->d_lock);
923 918 spin_unlock(&parent->d_lock);
924   - spin_unlock(&dcache_lock);
925 919 remove_dir(dentry);
926 920 }
927 921  
... ... @@ -102,9 +102,6 @@
102 102 * ->inode_lock (zap_pte_range->set_page_dirty)
103 103 * ->private_lock (zap_pte_range->__set_page_dirty_buffers)
104 104 *
105   - * ->task->proc_lock
106   - * ->dcache_lock (proc_pid_lookup)
107   - *
108 105 * (code doesn't rely on that order, so you could switch it around)
109 106 * ->tasklist_lock (memory_failure, collect_procs_ao)
110 107 * ->i_mmap_lock
security/selinux/selinuxfs.c
... ... @@ -1145,7 +1145,6 @@
1145 1145 {
1146 1146 struct list_head *node;
1147 1147  
1148   - spin_lock(&dcache_lock);
1149 1148 spin_lock(&de->d_lock);
1150 1149 node = de->d_subdirs.next;
1151 1150 while (node != &de->d_subdirs) {
1152 1151  
... ... @@ -1158,11 +1157,9 @@
1158 1157 dget_locked_dlock(d);
1159 1158 spin_unlock(&de->d_lock);
1160 1159 spin_unlock(&d->d_lock);
1161   - spin_unlock(&dcache_lock);
1162 1160 d_delete(d);
1163 1161 simple_unlink(de->d_inode, d);
1164 1162 dput(d);
1165   - spin_lock(&dcache_lock);
1166 1163 spin_lock(&de->d_lock);
1167 1164 } else
1168 1165 spin_unlock(&d->d_lock);
... ... @@ -1170,7 +1167,6 @@
1170 1167 }
1171 1168  
1172 1169 spin_unlock(&de->d_lock);
1173   - spin_unlock(&dcache_lock);
1174 1170 }
1175 1171  
1176 1172 #define BOOL_DIR_NAME "booleans"