Commit a82b8db02f33b38a24fc3858968153940f7c4740

Authored by Thomas Hellstrom
Committed by Dave Airlie
1 parent 384cc2f968

kref: Add kref_get_unless_zero documentation

Document how kref_get_unless_zero should be used and how it helps
solve a typical kref / locking problem.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

Showing 1 changed file with 88 additions and 0 deletions

Documentation/kref.txt
... ... @@ -212,4 +212,93 @@
212 212 http://www.kroah.com/linux/talks/ols_2004_kref_paper/Reprint-Kroah-Hartman-OLS2004.pdf
213 213 and:
214 214 http://www.kroah.com/linux/talks/ols_2004_kref_talk/
  215 +
  216 +
  217 +The above example could also be optimized using kref_get_unless_zero() in
  218 +the following way:
  219 +
  220 +static struct my_data *get_entry(void)
  221 +{
  222 + struct my_data *entry = NULL;
  223 + mutex_lock(&mutex);
  224 + if (!list_empty(&q)) {
  225 + entry = container_of(q.next, struct my_data, link);
  226 + if (!kref_get_unless_zero(&entry->refcount))
  227 + entry = NULL;
  228 + }
  229 + mutex_unlock(&mutex);
  230 + return entry;
  231 +}
  232 +
  233 +static void release_entry(struct kref *ref)
  234 +{
  235 + struct my_data *entry = container_of(ref, struct my_data, refcount);
  236 +
  237 + mutex_lock(&mutex);
  238 + list_del(&entry->link);
  239 + mutex_unlock(&mutex);
  240 + kfree(entry);
  241 +}
  242 +
  243 +static void put_entry(struct my_data *entry)
  244 +{
  245 + kref_put(&entry->refcount, release_entry);
  246 +}
  247 +
  248 +This optimization is useful for removing the mutex locking around
  249 +kref_put() in put_entry(), but it is important that kref_get_unless_zero()
  250 +is enclosed in the same critical section that finds the entry in the lookup
  251 +table; otherwise kref_get_unless_zero() may reference already freed memory.
  252 +Note that it is illegal to use kref_get_unless_zero() without checking its
  253 +return value. If you are sure (by already having a valid pointer) that
  254 +kref_get_unless_zero() will return true, then use kref_get() instead.
  255 +
  256 +The function kref_get_unless_zero() also makes it possible to use RCU
  257 +locking for lookups in the above example:
  258 +
  259 +struct my_data
  260 +{
  261 + struct rcu_head rhead;
  262 + .
  263 + struct kref refcount;
  264 + .
  265 + .
  266 +};
  267 +
  268 +static struct my_data *get_entry_rcu(void)
  269 +{
  270 + struct my_data *entry = NULL;
  271 + rcu_read_lock();
  272 + if (!list_empty(&q)) {
  273 + entry = container_of(q.next, struct my_data, link);
  274 + if (!kref_get_unless_zero(&entry->refcount))
  275 + entry = NULL;
  276 + }
  277 + rcu_read_unlock();
  278 + return entry;
  279 +}
  280 +
  281 +static void release_entry_rcu(struct kref *ref)
  282 +{
  283 + struct my_data *entry = container_of(ref, struct my_data, refcount);
  284 +
  285 + mutex_lock(&mutex);
  286 + list_del_rcu(&entry->link);
  287 + mutex_unlock(&mutex);
  288 + kfree_rcu(entry, rhead);
  289 +}
  290 +
  291 +static void put_entry(struct my_data *entry)
  292 +{
  293 + kref_put(&entry->refcount, release_entry_rcu);
  294 +}
  295 +
  296 +But note that the struct kref member needs to remain in valid memory for
  297 +an RCU grace period after release_entry_rcu() was called. That can be
  298 +accomplished by using kfree_rcu(entry, rhead) as done above, or by calling
  299 +synchronize_rcu() before using kfree(), but note that synchronize_rcu()
  300 +may sleep for a substantial amount of time.
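The synchronize_rcu() alternative would look roughly like the following sketch (kernel-side code, not runnable standalone; it reuses the my_data, mutex and q names from the examples above, and the function name release_entry_sync is invented here):

```c
/* Hypothetical variant of release_entry_rcu(): wait out the grace period
 * explicitly with synchronize_rcu() instead of deferring the free with
 * kfree_rcu().  synchronize_rcu() may sleep for a substantial time, so
 * this must only run in a context that is allowed to sleep. */
static void release_entry_sync(struct kref *ref)
{
	struct my_data *entry = container_of(ref, struct my_data, refcount);

	mutex_lock(&mutex);
	list_del_rcu(&entry->link);
	mutex_unlock(&mutex);
	synchronize_rcu();	/* no rcu reader can still see the entry */
	kfree(entry);
}
```

kfree_rcu() is usually preferable since it does not block, but the explicit wait can be useful when the caller must know the object is gone before proceeding.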
  301 +
  302 +
  303 +Thomas Hellstrom <thellstrom@vmware.com>