include/linux/lru_cache.h
  /*
     lru_cache.c
  
     This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
  
     Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
     Copyright (C) 2003-2008, Philipp Reisner <philipp.reisner@linbit.com>.
     Copyright (C) 2003-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.
  
     drbd is free software; you can redistribute it and/or modify
     it under the terms of the GNU General Public License as published by
     the Free Software Foundation; either version 2, or (at your option)
     any later version.
  
     drbd is distributed in the hope that it will be useful,
     but WITHOUT ANY WARRANTY; without even the implied warranty of
     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     GNU General Public License for more details.
  
     You should have received a copy of the GNU General Public License
     along with drbd; see the file COPYING.  If not, write to
     the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
  
   */
  
  #ifndef LRU_CACHE_H
  #define LRU_CACHE_H
  
  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/bitops.h>
  #include <linux/string.h> /* for memset */
  #include <linux/seq_file.h>
  
  /*
  This header file (and its .c file; see there for kernel-doc of the functions)
    defines a helper framework to easily keep track of index:label associations,
    and changes to an "active set" of objects, as well as pending transactions,
    to persistently record those changes.

    We use an LRU policy if it is necessary to "cool down" a region currently in
    the active set before we can "heat" a previously unused region.

    Because of this latter property, it is called "lru_cache".
    As it actually Tracks Objects in an Active SeT, we could also call it
    toast (incidentally that is what may happen to the data on the
    backend storage upon next resync, if we don't get it right).
  
  What for?
  
  We replicate IO (more or less synchronously) to local and remote disk.
  
  For crash recovery after replication node failure,
    we need to resync all regions that have been target of in-flight WRITE IO
    (in use, or "hot", regions), as we don't know whether or not those WRITEs
    have made it to stable storage.
  
    To avoid a "full resync", we need to persistently track these regions.
  
    This is known as "write intent log", and can be implemented as on-disk
    (coarse or fine grained) bitmap, or other meta data.
  
    To avoid the overhead of frequent extra writes to this meta data area,
    usually the condition is softened to regions that _may_ have been target of
    in-flight WRITE IO, e.g. by only lazily clearing the on-disk write-intent
    bitmap, trading frequency of meta data transactions against amount of
    (possibly unnecessary) resync traffic.
  
    If we set a hard limit on the area that may be "hot" at any given time, we
    limit the amount of resync traffic needed for crash recovery.
  
  For recovery after replication link failure,
    we need to resync all blocks that have been changed on the other replica
    in the mean time, or, if both replicas have been changed independently [*],
    all blocks that have been changed on either replica in the mean time.
    [*] usually as a result of a cluster split-brain and insufficient protection,
        but there are valid use cases to do this on purpose.
  
    Tracking those blocks can be implemented as "dirty bitmap".
    Having it fine-grained reduces the amount of resync traffic.
    It should also be persistent, to allow for reboots (or crashes)
    while the replication link is down.
  
  There are various possible implementations for persistently storing
  write intent log information, three of which are mentioned here.
  
  "Chunk dirtying"
    The on-disk "dirty bitmap" may be re-used as "write-intent" bitmap as well.
    To reduce the frequency of bitmap updates for write-intent log purposes,
    one could dirty "chunks" (of some size) at a time of the (fine grained)
    on-disk bitmap, while keeping the in-memory "dirty" bitmap as clean as
    possible, flushing it to disk again when a previously "hot" (and on-disk
    dirtied as full chunk) area "cools down" again (no IO in flight anymore,
    and none expected in the near future either).
  
  "Explicit (coarse) write intent bitmap"
    Another implementation could choose a (probably coarse) explicit bitmap
    for write-intent log purposes, in addition to the fine-grained dirty bitmap.
  
  "Activity log"
    Yet another implementation may keep track of the hot regions, by starting
    with an empty set, and writing down a journal of region numbers that have
    become "hot", or have "cooled down" again.
  
    To be able to use a ring buffer for this journal of changes to the active
    set, we not only record the actual changes to that set, but also record the
    not changing members of the set in a round robin fashion. To do so, we use a
    fixed (but configurable) number of slots which we can identify by index, and
    associate region numbers (labels) with these indices.
    For each transaction recording a change to the active set, we record the
    change itself (index: -old_label, +new_label), and which index is associated
    with which label (index: current_label) within a certain sliding window that
    is moved further over the available indices with each such transaction.
  
    Thus, for crash recovery, if the ringbuffer is sufficiently large, we can
    accurately reconstruct the active set.
  
    "Sufficiently large" depends only on the maximum number of active objects,
    and on the size of the sliding window recording "index: current_label"
    associations within each transaction.
  
    This is what we call the "activity log".
  
    Currently we need one activity log transaction per single label change, which
    does not give much benefit over the "dirty chunks of bitmap" approach, other
    than potentially less seeks.
  
    We plan to change the transaction format to support multiple changes per
    transaction, which then would reduce several (disjoint, "random") updates to
    the bitmap into one transaction to the activity log ring buffer.
  */
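
  /* A concrete example (illustrative numbers only, not the actual on-disk
   * format) of the transaction scheme described above, assuming 4 slots and
   * a sliding window of 2 "index: current_label" records per transaction:
   *
   *	active set:     slot0:17  slot1:9  slot2:4  slot3:23
   *	transaction A:  change (slot1: -9, +12);  window (slot0:17, slot1:12)
   *	transaction B:  change (slot3: -23, +7);  window (slot2:4,  slot3:7)
   *
   * To recover, replay the journal tail: the most recent record mentioning a
   * slot (whether a change or a window entry) gives that slot's current
   * label, so a sufficiently large ring buffer reproduces the whole
   * active set.
   */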
  
  /* this defines an element in a tracked set
   * .colision is for hash table lookup.
   * When we process a new IO request, we know its sector, thus can deduce the
   * region number (label) easily.  To do the label -> object lookup without a
   * full list walk, we use a simple hash table.
   *
   * .list is on one of three lists:
   *  in_use: currently in use (refcnt > 0, lc_number != LC_FREE)
   *     lru: unused but ready to be reused or recycled
   *          (lc_refcnt == 0, lc_number != LC_FREE),
   *    free: unused but ready to be recycled
   *          (lc_refcnt == 0, lc_number == LC_FREE),
   *
   * an element is said to be "in the active set",
   * if it is on either "in_use" or "lru", i.e. lc_number != LC_FREE.
   *
   * DRBD currently (May 2009) only uses 61 elements on the resync lru_cache
   * (total memory usage 2 pages), and up to 3833 elements on the act_log
   * lru_cache, totalling ~215 kB for 64bit architecture, ~53 pages.
   *
   * We usually do not actually free these objects again, but only "recycle"
   * them, as the change "index: -old_label, +LC_FREE" would need a transaction
   * as well.  Which also means that using a kmem_cache to allocate the objects
   * from wastes some resources.
   * But it avoids high order page allocations in kmalloc.
   */
  struct lc_element {
  	struct hlist_node colision;
  	struct list_head list;		 /* LRU list or free list */
  	unsigned refcnt;
  	/* back "pointer" into lc_cache->element[index],
  	 * for paranoia, and for "lc_element_to_index" */
  	unsigned lc_index;
  	/* if we want to track a larger set of objects,
  	 * it needs to become an arch independent u64 */
  	unsigned lc_number;
  	/* special label when on free list */
  #define LC_FREE (~0U)
  
  	/* for pending changes */
  	unsigned lc_new_number;
  };
  
  struct lru_cache {
  	/* the least recently used item is kept at lru->prev */
  	struct list_head lru;
  	struct list_head free;
  	struct list_head in_use;
  	struct list_head to_be_changed;
  
  	/* the pre-created kmem cache to allocate the objects from */
  	struct kmem_cache *lc_cache;
  
  	/* size of tracked objects, used to memset(,0,) them in lc_reset */
  	size_t element_size;
  	/* offset of struct lc_element member in the tracked object */
  	size_t element_off;
  
  	/* number of elements (indices) */
  	unsigned int nr_elements;
  	/* Arbitrary limit on maximum tracked objects. Practical limit is much
  	 * lower due to allocation failures, probably. For typical use cases,
  	 * nr_elements should be a few thousand at most.
  	 * This also limits the maximum value of lc_element.lc_index, allowing the
  	 * 8 high bits of .lc_index to be overloaded with flags in the future. */
  #define LC_MAX_ACTIVE	(1<<24)
  	/* allow a few (index:label) changes to accumulate,
  	 * but no more than max_pending_changes */
  	unsigned int max_pending_changes;
  	/* number of elements currently on to_be_changed list */
  	unsigned int pending_changes;
  	/* statistics */
  	unsigned used; /* number of elements currently on in_use list */
  	unsigned long hits, misses, starving, locked, changed;
  
  	/* see below: flag-bits for lru_cache */
  	unsigned long flags;
  
  	void  *lc_private;
  	const char *name;
  
  	/* nr_elements there */
  	struct hlist_head *lc_slot;
  	struct lc_element **lc_element;
  };
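
  /* Illustrative sketch (the struct "bm_extent" and the kmem cache
   * "bm_ext_cache" below are hypothetical names) of how a tracked object
   * embeds a struct lc_element, and how the e_size and e_off arguments of
   * lc_create() relate to it:
   *
   *	struct bm_extent {
   *		int rs_left;
   *		struct lc_element lce;
   *	};
   *
   *	lc = lc_create("resync", bm_ext_cache, max_pending_changes,
   *		       nr_elements, sizeof(struct bm_extent),
   *		       offsetof(struct bm_extent, lce));
   *
   * Given a struct lc_element *e from lc_find() or lc_get(), the embedding
   * object is then recovered with lc_entry(e, struct bm_extent, lce).
   */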
  
  
  /* flag-bits for lru_cache */
  enum {
  	/* debugging aid, to catch concurrent access early.
  	 * user needs to guarantee exclusive access by proper locking! */
  	__LC_PARANOIA,
  
  	/* annotate that the set is "dirty", possibly accumulating further
  	 * changes, until a transaction is finally triggered */
  	__LC_DIRTY,
  
  	/* Locked, no further changes allowed.
  	 * Also used to serialize changing transactions. */
  	__LC_LOCKED,
  	/* if we need to change the set, but currently there is no free nor
  	 * unused element available, we are "starving", and must not give out
  	 * further references, to guarantee that eventually some refcnt will
  	 * drop to zero and we will be able to make progress again, changing
  	 * the set, writing the transaction.
  	 * if the statistics say we are frequently starving,
  	 * nr_elements is too small. */
  	__LC_STARVING,
  };
  #define LC_PARANOIA (1<<__LC_PARANOIA)
  #define LC_DIRTY    (1<<__LC_DIRTY)
  #define LC_LOCKED   (1<<__LC_LOCKED)
  #define LC_STARVING (1<<__LC_STARVING)
  
  extern struct lru_cache *lc_create(const char *name, struct kmem_cache *cache,
  		unsigned max_pending_changes,
  		unsigned e_count, size_t e_size, size_t e_off);
  extern void lc_reset(struct lru_cache *lc);
  extern void lc_destroy(struct lru_cache *lc);
  extern void lc_set(struct lru_cache *lc, unsigned int enr, int index);
  extern void lc_del(struct lru_cache *lc, struct lc_element *element);
  extern struct lc_element *lc_get_cumulative(struct lru_cache *lc, unsigned int enr);
  extern struct lc_element *lc_try_get(struct lru_cache *lc, unsigned int enr);
  extern struct lc_element *lc_find(struct lru_cache *lc, unsigned int enr);
  extern struct lc_element *lc_get(struct lru_cache *lc, unsigned int enr);
  extern unsigned int lc_put(struct lru_cache *lc, struct lc_element *e);
  extern void lc_committed(struct lru_cache *lc);
  
  struct seq_file;
  extern size_t lc_seq_printf_stats(struct seq_file *seq, struct lru_cache *lc);
  
  extern void lc_seq_dump_details(struct seq_file *seq, struct lru_cache *lc, char *utext,
  				void (*detail) (struct seq_file *, struct lc_element *));
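
  /* A typical calling sequence, as an illustrative sketch only; the caller
   * must guarantee exclusive access by proper locking, and must itself write
   * the on-disk transaction describing the accumulated changes:
   *
   *	struct lc_element *e = lc_get(lc, enr);
   *	if (e == NULL)
   *		... back off, we are starving or the set is locked ...
   *	else if (lc->pending_changes) {
   *		if (lc_try_lock_for_transaction(lc)) {
   *			... write transaction for to_be_changed elements ...
   *			lc_committed(lc);
   *			lc_unlock(lc);
   *		}
   *	}
   *	... do IO against the now "hot" region, then ...
   *	lc_put(lc, e);
   */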
  
  /**
   * lc_try_lock_for_transaction - can be used to stop lc_get() from changing the tracked set
   * @lc: the lru cache to operate on
   *
   * Allows (expects) the set to be "dirty".  Note that the reference counts and
   * order on the active and lru lists may still change.  Used to serialize
   * changing transactions.  Returns true if we acquired the lock.
   */
  static inline int lc_try_lock_for_transaction(struct lru_cache *lc)
  {
  	return !test_and_set_bit(__LC_LOCKED, &lc->flags);
  }
  
  /**
   * lc_try_lock - variant to stop lc_get() from changing the tracked set
   * @lc: the lru cache to operate on
   *
   * Note that the reference counts and order on the active and lru lists may
   * still change.  Only works on a "clean" set.  Returns true if we acquired
   * the lock, which means there are no pending changes, and any further
   * attempt to change the set will not succeed until the next lc_unlock().
   */
  extern int lc_try_lock(struct lru_cache *lc);
  
  /**
   * lc_unlock - unlock @lc, allow lc_get() to change the set again
   * @lc: the lru cache to operate on
   */
  static inline void lc_unlock(struct lru_cache *lc)
  {
  	clear_bit(__LC_DIRTY, &lc->flags);
  	clear_bit_unlock(__LC_LOCKED, &lc->flags);
  }
  extern bool lc_is_used(struct lru_cache *lc, unsigned int enr);
  
  #define lc_entry(ptr, type, member) \
  	container_of(ptr, type, member)
  
  extern struct lc_element *lc_element_by_index(struct lru_cache *lc, unsigned i);
  extern unsigned int lc_index_of(struct lru_cache *lc, struct lc_element *e);
  
  #endif