16 Jun, 2020

1 commit

  • There is a regular need in the kernel to provide a way to declare having a
    dynamically sized set of trailing elements in a structure. Kernel code should
    always use “flexible array members”[1] for these cases. The older style of
    one-element or zero-length arrays should no longer be used[2].

    [1] https://en.wikipedia.org/wiki/Flexible_array_member
    [2] https://github.com/KSPP/linux/issues/21
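
    As a minimal illustration (hypothetical struct names; sizing via the
    kernel's struct_size() helper), the change looks like this:

        /* Old style - deprecated: */
        struct old_record {
                size_t count;
                unsigned char data[1];          /* or data[0] */
        };

        /* New style - C99 flexible array member: */
        struct new_record {
                size_t count;
                unsigned char data[];           /* must be the last member */
        };

        rec = kmalloc(struct_size(rec, data, count), GFP_KERNEL);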

    Signed-off-by: Gustavo A. R. Silva

    Gustavo A. R. Silva
     

05 May, 2020

1 commit

  • - Add a SPDX header;
    - Adjust document and section titles;
    - Some whitespace fixes and new line breaks;
    - Mark literal blocks as such;
    - Add table markups;
    - Add it to filesystems/caching/index.rst.

    Signed-off-by: Mauro Carvalho Chehab
    Link: https://lore.kernel.org/r/5d0a61abaa87bfe913b9e2f321e74ef7af0f3dfc.1588021877.git.mchehab+huawei@kernel.org
    Signed-off-by: Jonathan Corbet

    Mauro Carvalho Chehab
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

28 Nov, 2018

1 commit

  • The code in fscache_retrieval_complete is using atomic_sub followed by an
    atomic_read:

        atomic_sub(n_pages, &op->n_pages);
        if (atomic_read(&op->n_pages) <= 0)
                fscache_op_complete(&op->op, true);

    This allows two threads doing a decrement of n_pages to race with each
    other, both seeing op->n_pages reach 0 at the same time - and they end up
    calling fscache_op_complete() in both threads, leading to an assertion
    failure.

    Fix this by using atomic_sub_return_relaxed() instead of two calls. Note
    that I'm using 'relaxed' rather than, say, 'release' as there aren't
    multiple variables that appear to need ordering across the release.
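
    With that change, the decrement and test presumably collapse into a
    single atomic operation, along these lines:

        if (atomic_sub_return_relaxed(n_pages, &op->n_pages) <= 0)
                fscache_op_complete(&op->op, true);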

    The oops looks something like:

    FS-Cache: Assertion failed
    FS-Cache: 0 > 0 is false
    ...
    kernel BUG at /usr/src/linux-4.4.0/fs/fscache/operation.c:449!
    ...
    Workqueue: fscache_operation fscache_op_work_func [fscache]
    ...
    RIP: 0010:[] fscache_op_complete+0x10d/0x180 [fscache]
    ...
    Call Trace:
    [] cachefiles_read_copier+0x3a9/0x410 [cachefiles]
    [] fscache_op_work_func+0x22/0x50 [fscache]
    [] process_one_work+0x150/0x3f0
    [] worker_thread+0x11a/0x470
    [] ? __schedule+0x359/0x980
    [] ? rescuer_thread+0x310/0x310
    [] kthread+0xd6/0xf0
    [] ? kthread_park+0x60/0x60
    [] ret_from_fork+0x3f/0x70
    [] ? kthread_park+0x60/0x60

    This was seen in 4.4.x kernels, and the same bug affects fscache in the
    latest upstream kernels.

    Fixes: 1bb4b7f98f36 ("FS-Cache: The retrieval remaining-pages counter needs to be atomic_t")
    Signed-off-by: Kiran Kumar Modukuri
    Signed-off-by: David Howells

    kiran.modukuri
     

06 Apr, 2018

1 commit

  • Pass the object size in to fscache_acquire_cookie() and
    fscache_write_page() rather than the netfs providing a callback by which it
    can be received. This makes it easier to update the size of the object
    when a new page is written that extends the object.

    The current object size is also passed by fscache to the check_aux
    function, obviating the need to store it in the aux data.
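
    A sketch of the resulting call shape (the argument list here is
    illustrative of this era's API, not authoritative):

        cookie = fscache_acquire_cookie(parent_cookie, &my_cookie_def,
                                        index_key, index_key_len,
                                        aux_data, aux_data_len,
                                        netfs_data,
                                        i_size_read(inode), /* size now passed in */
                                        true);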

    Signed-off-by: David Howells
    Acked-by: Anna Schumaker
    Tested-by: Steve Dickson

    David Howells
     

04 Apr, 2018

3 commits

  • Add more tracepoints to fscache, including:

    (*) fscache_page - Tracks netfs pages known to fscache.

    (*) fscache_check_page - Tracks the netfs querying whether a page is
    pending storage.

    (*) fscache_wake_cookie - Tracks cookies being woken up after a page
    completes/aborts storage in the cache.

    (*) fscache_op - Tracks operations being initialised.

    (*) fscache_wrote_page - Tracks return of the backend write_page op.

    (*) fscache_gang_lookup - Tracks lookup of pages to be stored in the write
    operation.

    Signed-off-by: David Howells

    David Howells
     
  • Add some tracepoints to fscache:

    (*) fscache_cookie - Tracks a cookie's usage count.

    (*) fscache_netfs - Logs registration of a network filesystem, including
    the pointer to the cookie allocated.

    (*) fscache_acquire - Logs cookie acquisition.

    (*) fscache_relinquish - Logs cookie relinquishment.

    (*) fscache_enable - Logs enablement of a cookie.

    (*) fscache_disable - Logs disablement of a cookie.

    (*) fscache_osm - Tracks execution of states in the object state machine.

    and cachefiles:

    (*) cachefiles_ref - Tracks a cachefiles object's usage count.

    (*) cachefiles_lookup - Logs result of lookup_one_len().

    (*) cachefiles_mkdir - Logs result of vfs_mkdir().

    (*) cachefiles_create - Logs result of vfs_create().

    (*) cachefiles_unlink - Logs calls to vfs_unlink().

    (*) cachefiles_rename - Logs calls to vfs_rename().

    (*) cachefiles_mark_active - Logs an object becoming active.

    (*) cachefiles_wait_active - Logs a wait for an old object to be
    destroyed.

    (*) cachefiles_mark_inactive - Logs an object becoming inactive.

    (*) cachefiles_mark_buried - Logs the burial of an object.

    Signed-off-by: David Howells

    David Howells
     
  • The last parameter to fscache_op_complete() is a bool indicating whether or
    not the operation was cancelled. A lot of the time the inverse value is
    given or no differentiation is made. Fix this.

    Signed-off-by: David Howells

    David Howells
     

01 Feb, 2017

1 commit

  • Under some circumstances, an fscache object can become queued such that
    fscache_object_work_func() can be called once the object is in the
    OBJECT_DEAD state. This results in the kernel oopsing when it tries to
    invoke the handler for the state (which is hard-coded to 0x2).

    The way this comes about is something like the following:

    (1) The object dispatcher is processing a work state for an object. This
    is done in workqueue context.

    (2) An out-of-band event comes in that isn't masked, causing the object to
    be queued, say EV_KILL.

    (3) The object dispatcher finishes processing the current work state on
    that object and then sees there's another event to process, so,
    without returning to the workqueue core, it processes that event too.
    It then follows the chain of events that initiates until we reach
    OBJECT_DEAD without going through a wait state (such as
    WAIT_FOR_CLEARANCE).

    At this point, object->events may be 0, object->event_mask will be 0
    and oob_event_mask will be 0.

    (4) The object dispatcher returns to the workqueue processor, and in due
    course, this sees that the object's work item is still queued and
    invokes it again.

    (5) The current state is a work state (OBJECT_DEAD), so the dispatcher
    jumps to it - resulting in an OOPS.

    When I'm seeing this, the work state in (1) appears to have been either
    LOOK_UP_OBJECT or CREATE_OBJECT (object->oob_table is
    fscache_osm_lookup_oob).

    The window for (2) is very small:

    (A) object->event_mask is cleared whilst the event dispatch process is
    underway - though there's no memory barrier to force this to the top
    of the function.

    The window, therefore is from the time the object was selected by the
    workqueue processor and made requeueable to the time the mask was
    cleared.

    (B) fscache_raise_event() will only queue the object if it manages to set
    the event bit and the corresponding event_mask bit was set.

    The enqueuement is then deferred slightly whilst we get a ref on the
    object and get the per-CPU variable for workqueue congestion. This
    slight deferral slightly increases the probability by allowing extra
    time for the workqueue to make the item requeueable.

    Handle this by giving the dead state a processor function and checking
    for the dead state address rather than seeing if the processor function
    is address 0x2. The dead state processor function can then set a flag to
    indicate that it has occurred and give a warning if it occurs more than
    once per object.
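
    The fix plausibly looks something like the following sketch (the flag
    name is illustrative):

        static const struct fscache_state *fscache_object_dead(
                struct fscache_object *object, int event)
        {
                if (!test_and_set_bit(FSCACHE_OBJECT_RUN_AFTER_DEAD,
                                      &object->flags))
                        return NO_TRANSIT;

                WARN(true, "FS-Cache object redispatched after death");
                return NO_TRANSIT;
        }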

    If this race occurs, an oops similar to the following is seen (note the RIP
    value):

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000002
    IP: [] 0x1
    PGD 0
    Oops: 0010 [#1] SMP
    Modules linked in: ...
    CPU: 17 PID: 16077 Comm: kworker/u48:9 Not tainted 3.10.0-327.18.2.el7.x86_64 #1
    Hardware name: HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 12/27/2015
    Workqueue: fscache_object fscache_object_work_func [fscache]
    task: ffff880302b63980 ti: ffff880717544000 task.ti: ffff880717544000
    RIP: 0010:[] [] 0x1
    RSP: 0018:ffff880717547df8 EFLAGS: 00010202
    RAX: ffffffffa0368640 RBX: ffff880edf7a4480 RCX: dead000000200200
    RDX: 0000000000000002 RSI: 00000000ffffffff RDI: ffff880edf7a4480
    RBP: ffff880717547e18 R08: 0000000000000000 R09: dfc40a25cb3a4510
    R10: dfc40a25cb3a4510 R11: 0000000000000400 R12: 0000000000000000
    R13: ffff880edf7a4510 R14: ffff8817f6153400 R15: 0000000000000600
    FS: 0000000000000000(0000) GS:ffff88181f420000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000000000002 CR3: 000000000194a000 CR4: 00000000001407e0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    Stack:
    ffffffffa0363695 ffff880edf7a4510 ffff88093f16f900 ffff8817faa4ec00
    ffff880717547e60 ffffffff8109d5db 00000000faa4ec18 0000000000000000
    ffff8817faa4ec18 ffff88093f16f930 ffff880302b63980 ffff88093f16f900
    Call Trace:
    [] ? fscache_object_work_func+0xa5/0x200 [fscache]
    [] process_one_work+0x17b/0x470
    [] worker_thread+0x21c/0x400
    [] ? rescuer_thread+0x400/0x400
    [] kthread+0xcf/0xe0
    [] ? kthread_create_on_node+0x140/0x140
    [] ret_from_fork+0x58/0x90
    [] ? kthread_create_on_node+0x140/0x140

    Signed-off-by: David Howells
    Acked-by: Jeremy McNicoll
    Tested-by: Frank Sorenson
    Tested-by: Benjamin Coddington
    Reviewed-by: Benjamin Coddington
    Signed-off-by: Al Viro

    David Howells
     

01 Jun, 2016

1 commit

  • __fscache_check_consistency() calls the check_consistency() callback
    and returns the callback's return value. But the return type of
    check_consistency() is bool, so __fscache_check_consistency()
    returns 1 if the cache is inconsistent. This is inconsistent with
    the documentation, which says the function should return -ESTALE
    in that case.
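
    A minimal sketch of the fix, assuming the backend callback returns true
    when the object is inconsistent:

        /* in __fscache_check_consistency(): map the backend's bool result
         * onto the documented 0 / -ESTALE convention */
        if (object->cache->ops->check_consistency(op))
                ret = -ESTALE;
        else
                ret = 0;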

    Signed-off-by: Yan, Zheng
    Acked-by: David Howells

    Yan, Zheng
     

02 Apr, 2015

5 commits

  • Now that the retrieval operation may be disposed of by fscache_put_operation()
    before we actually set the context, the retrieval-specific cleanup operation
    can produce a NULL-pointer dereference when it tries to unconditionally clean
    up the netfs context.

    Given that it is expected that we'll get at least as far as the place where we
    currently set the context pointer and it is unlikely we'll go through the
    error handling paths prior to that point, retain the context right from the
    point that the retrieval op is allocated.

    Concomitant to this, we need to retain the cookie pointer in the retrieval op
    also so that we can call the netfs to release its context in the release
    method.

    In addition, we might now get into fscache_release_retrieval_op() with the op
    only initialised. To this end, set the operation to DEAD only after the
    release method has been called and skip the n_pages test upon cleanup if the
    op is still in the INITIALISED state.

    Without these changes, the following oops might be seen:

    BUG: unable to handle kernel NULL pointer dereference at 00000000000000b8
    ...
    RIP: 0010:[] fscache_release_retrieval_op+0xae/0x100
    ...
    Call Trace:
    [] fscache_put_operation+0x117/0x2e0
    [] __fscache_read_or_alloc_pages+0x351/0x3ac
    [] __nfs_readpages_from_fscache+0x59/0xbf [nfs]
    [] nfs_readpages+0x10c/0x185 [nfs]
    [] ? alloc_pages_current+0x119/0x13e
    [] ? __page_cache_alloc+0xfb/0x10a
    [] __do_page_cache_readahead+0x188/0x22c
    [] ondemand_readahead+0x29e/0x2af
    [] page_cache_sync_readahead+0x38/0x3a
    [] generic_file_read_iter+0x1a2/0x55a
    [] ? nfs_revalidate_mapping+0xd6/0x288 [nfs]
    [] nfs_file_read+0x49/0x70 [nfs]
    [] new_sync_read+0x78/0x9c
    [] __vfs_read+0x13/0x38
    [] vfs_read+0x95/0x121
    [] SyS_read+0x4c/0x8a
    [] system_call_fastpath+0x12/0x17

    Signed-off-by: David Howells
    Reviewed-by: Steve Dickson
    Acked-by: Jeff Layton

    David Howells
     
  • Any time an incomplete operation is cancelled, the operation cancellation
    function needs to be called to clean up. This is currently being passed
    directly to some of the functions that might want to call it, but not all.

    Instead, pass the cancellation method pointer to the fscache_operation_init()
    and have that cache it in the operation struct. Further, plug in a dummy
    cancellation handler if the caller declines to set one as this allows us to
    call the function unconditionally (the extra overhead isn't worth bothering
    about as we don't expect to be calling this typically).

    The cancellation method must thence be called everywhere the CANCELLED state
    is set. Note that we call it *before* setting the CANCELLED state such that
    the method can use the old state value to guide its operation.

    fscache_do_cancel_retrieval() needs moving higher up in the sources so that
    the init function can use it now.
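
    The initialiser presumably then gains the cancellation method in its
    argument list, along these lines:

        void fscache_operation_init(struct fscache_operation *op,
                                    fscache_operation_processor_t processor,
                                    fscache_operation_cancel_t cancel,
                                    fscache_operation_release_t release);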

    Without this, the following oops may be seen:

    FS-Cache: Assertion failed
    FS-Cache: 3 == 0 is false
    ------------[ cut here ]------------
    kernel BUG at ../fs/fscache/page.c:261!
    ...
    RIP: 0010:[] fscache_release_retrieval_op+0x77/0x100
    [] fscache_put_operation+0x114/0x2da
    [] __fscache_read_or_alloc_pages+0x358/0x3b3
    [] __nfs_readpages_from_fscache+0x59/0xbf [nfs]
    [] nfs_readpages+0x10c/0x185 [nfs]
    [] ? alloc_pages_current+0x119/0x13e
    [] ? __page_cache_alloc+0xfb/0x10a
    [] __do_page_cache_readahead+0x188/0x22c
    [] ondemand_readahead+0x29e/0x2af
    [] page_cache_sync_readahead+0x38/0x3a
    [] generic_file_read_iter+0x1a2/0x55a
    [] ? nfs_revalidate_mapping+0xd6/0x288 [nfs]
    [] nfs_file_read+0x49/0x70 [nfs]
    [] new_sync_read+0x78/0x9c
    [] __vfs_read+0x13/0x38
    [] vfs_read+0x95/0x121
    [] SyS_read+0x4c/0x8a
    [] system_call_fastpath+0x12/0x17

    The assertion is showing that the remaining number of pages (n_pages) is not 0
    when the operation is being released.

    Signed-off-by: David Howells
    Reviewed-by: Steve Dickson
    Acked-by: Jeff Layton

    David Howells
     
  • Out-of-line fscache_operation_init() so that it can access internal FS-Cache
    features, such as stats, in a later commit.

    Signed-off-by: David Howells
    Reviewed-by: Steve Dickson
    Acked-by: Jeff Layton

    David Howells
     
  • fscache_object_is_dead() returns true only if the object is marked dead and
    the cache got an I/O error. This should be a logical OR instead. Since two
    of the callers got split up into handling for separate subcases, expand the
    other callers and kill the function. This is probably the right thing to do
    anyway since one of the subcases isn't about the object at all, but rather
    about the cache.

    Signed-off-by: David Howells
    Reviewed-by: Steve Dickson
    Acked-by: Jeff Layton

    David Howells
     
  • When submitting an operation, prefer to cancel the operation immediately
    rather than queuing it for later processing if the object is marked as dying
    (ie. the object state machine has reached the KILL_OBJECT state).

    Whilst we're at it, change the series of related test_bit() calls into a
    READ_ONCE() and bitwise-AND operators to reduce the number of load
    instructions (test_bit() has a volatile address).

    Signed-off-by: David Howells
    Reviewed-by: Steve Dickson
    Acked-by: Jeff Layton

    David Howells
     

28 Sep, 2013

2 commits

  • Provide the ability to enable and disable fscache cookies. A disabled cookie
    will reject or ignore further requests to:

    Acquire a child cookie
    Invalidate and update backing objects
    Check the consistency of a backing object
    Allocate storage for backing page
    Read backing pages
    Write to backing pages

    but still allows:

    Checks/waits on the completion of already in-progress objects
    Uncaching of pages
    Relinquishment of cookies

    Two new operations are provided:

    (1) Disable a cookie:

        void fscache_disable_cookie(struct fscache_cookie *cookie,
                                    bool invalidate);

    If the cookie is not already disabled, this locks the cookie against other
    dis/enablement ops, marks the cookie as being disabled, discards or
    invalidates any backing objects and waits for cessation of activity on any
    associated object.

    This is a wrapper around a chunk split out of fscache_relinquish_cookie(),
    but it reinitialises the cookie such that it can be reenabled.

    All possible failures are handled internally. The caller should consider
    calling fscache_uncache_all_inode_pages() afterwards to make sure all page
    markings are cleared up.

    (2) Enable a cookie:

        void fscache_enable_cookie(struct fscache_cookie *cookie,
                                   bool (*can_enable)(void *data),
                                   void *data);

    If the cookie is not already enabled, this locks the cookie against other
    dis/enablement ops, invokes can_enable() and, if the cookie is not an
    index cookie, will begin the procedure of acquiring backing objects.

    The optional can_enable() function is passed the data argument and returns
    a ruling as to whether or not enablement should actually be permitted to
    begin.

    All possible failures are handled internally. The cookie will only be
    marked as enabled if provisional backing objects are allocated.
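
    As a purely hypothetical netfs usage sketch (my_can_enable() is made up
    for illustration): a filesystem that only caches files that aren't open
    for writing might do:

        if (file->f_mode & FMODE_WRITE)
                fscache_disable_cookie(cookie, true);
        else
                fscache_enable_cookie(cookie, my_can_enable, inode);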

    A later patch will introduce these to NFS. Cookie enablement during
    nfs_open() is then contingent on i_writecount <= 0.

    Signed-off-by: David Howells

    David Howells
     
  • Add wrapper functions for dealing with cookie->n_active:

    (*) __fscache_use_cookie() to increment it.

    (*) __fscache_unuse_cookie() to decrement and test against zero.

    (*) __fscache_wake_unused_cookie() to wake up anyone waiting for it to reach
    zero.

    The second and third are split so that the third can be done after cookie->lock
    has been released in case the waiter wakes up whilst we're still holding it and
    tries to get it.
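
    A minimal sketch of the first two wrappers, assuming n_active is an
    atomic_t:

        static inline void __fscache_use_cookie(struct fscache_cookie *cookie)
        {
                atomic_inc(&cookie->n_active);
        }

        static inline bool __fscache_unuse_cookie(struct fscache_cookie *cookie)
        {
                return atomic_dec_and_test(&cookie->n_active);
        }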

    We will need to wake-on-zero once the cookie disablement patch is applied
    because it will then be possible to see n_active become zero without the cookie
    being relinquished.

    Also move the cookie usage accounting out of fscache_attr_changed_op() and into
    fscache_attr_changed() and the operation struct so that cookie disablement
    will be able to track it.

    Whilst we're at it, only increment n_active if we're about to do
    fscache_submit_op() so that we don't have to deal with undoing it if anything
    earlier fails. Possibly this should be moved into fscache_submit_op() which
    could look at FSCACHE_OP_UNUSE_COOKIE.

    Signed-off-by: David Howells

    David Howells
     

06 Sep, 2013

1 commit

  • Extend the fscache netfs API so that the netfs can ask whether a cache
    object is up to date with respect to its corresponding netfs object:

    int fscache_check_consistency(struct fscache_cookie *cookie)

    This will call back to the netfs to check whether the auxiliary data associated
    with a cookie is correct. It returns 0 if it is and -ESTALE if it isn't; it
    may also return -ENOMEM and -ERESTARTSYS.

    The backends now have to implement a mandatory operation pointer:

    int (*check_consistency)(struct fscache_object *object)

    that corresponds to the above API call. FS-Cache takes care of pinning the
    object and the cookie in memory and managing this call with respect to the
    object state.

    Original-author: Hongyi Jia
    Signed-off-by: David Howells
    cc: Hongyi Jia
    cc: Milosz Tanski

    David Howells
     

19 Jun, 2013

5 commits

  • struct fscache_retrieval contains a count of the number of pages that still
    need some processing (n_pages). This is decremented as the pages are
    processed.

    However, this needs to be atomic as fscache_retrieval_complete() (I think) just
    occasionally may be called from cachefiles_read_backing_file() and
    cachefiles_read_copier() simultaneously.

    This happens when an fscache_read_or_alloc_pages() request containing a lot of
    pages (say a couple of hundred) is being processed. The read on each backing
    page is dispatched individually because we need to insert a monitor into the
    waitqueue to catch when the read completes. However, under low-memory
    conditions, we might be forced to wait in the allocator - and this gives the
    I/O on the backing page a chance to complete first.

    When the I/O completes, fscache_enqueue_retrieval() chucks the retrieval onto
    the workqueue without waiting for the operation to finish the initial I/O
    dispatch (we want to release any pages we can as soon as we can), thus both can
    end up running simultaneously and potentially attempting to partially complete
    the retrieval simultaneously (ENOMEM may occur, backing pages may already be in
    the page cache).

    This was demonstrated by parallelling the non-atomic counter with an atomic
    counter and printing both of them when the assertion fails. At this point, the
    atomic counter has reached zero, but the non-atomic counter has not.

    To fix this, make the counter an atomic_t.
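
    A sketch of the converted completion helper (note that the split
    sub-then-read here is exactly what the 28 Nov, 2018 fix above replaces
    with atomic_sub_return_relaxed()):

        static inline void fscache_retrieval_complete(struct fscache_retrieval *op,
                                                      int n_pages)
        {
                atomic_sub(n_pages, &op->n_pages);
                if (atomic_read(&op->n_pages) <= 0)
                        fscache_op_complete(&op->op, true);
        }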

    Without the fix, the race manifests as assertion failures like the following:

    FS-Cache: Assertion failed
    3 == 5 is false
    ------------[ cut here ]------------
    kernel BUG at fs/fscache/operation.c:421!

    or

    FS-Cache: Assertion failed
    3 == 5 is false
    ------------[ cut here ]------------
    kernel BUG at fs/fscache/operation.c:414!

    With a backtrace like the following:

    RIP: 0010:[] fscache_put_operation+0x1ad/0x240 [fscache]
    Call Trace:
    [] fscache_retrieval_work+0x55/0x270 [fscache]
    [] ? fscache_retrieval_work+0x0/0x270 [fscache]
    [] worker_thread+0x170/0x2a0
    [] ? autoremove_wake_function+0x0/0x40
    [] ? worker_thread+0x0/0x2a0
    [] kthread+0x96/0xa0
    [] child_rip+0xa/0x20
    [] ? kthread+0x0/0xa0
    [] ? child_rip+0x0/0x20

    Signed-off-by: David Howells
    Reviewed-and-tested-By: Milosz Tanski
    Acked-by: Jeff Layton

    David Howells
     
  • Simplify the way fscache cache objects retain their cookie. The way I
    implemented the cookie storage handling made synchronisation a pain (ie. the
    object state machine can't rely on the cookie actually still being there).

    Instead of the object being detached from the cookie and the cookie being
    freed in __fscache_relinquish_cookie(), we defer both operations:

    (*) The detachment of the object from the list in the cookie now takes place
    in fscache_drop_object() and is thus governed by the object state machine
    (fscache_detach_from_cookie() has been removed).

    (*) The release of the cookie is now in fscache_object_destroy() - which is
    called by the cache backend just before it frees the object.

    This means that the fscache_cookie struct is now available to the cache all the
    way through from ->alloc_object() to ->drop_object() and ->put_object() -
    meaning that it's no longer necessary to take object->lock to guarantee access.

    However, __fscache_relinquish_cookie() doesn't wait for the object to go all
    the way through to destruction before letting the netfs proceed. That would
    massively slow down the netfs. Since __fscache_relinquish_cookie() leaves the
    cookie around, it must therefore break all attachments to the netfs - which
    includes ->def, ->netfs_data and any outstanding page read/writes.

    To handle this, struct fscache_cookie now has an n_active counter:

    (1) This starts off initialised to 1.

    (2) Any time the cache needs to get at the netfs data, it calls
    fscache_use_cookie() to increment it - if it is not zero. If it was zero,
    then access is not permitted.

    (3) When the cache has finished with the data, it calls fscache_unuse_cookie()
    to decrement it. This does a wake-up on it if it reaches 0.

    (4) __fscache_relinquish_cookie() decrements n_active and then waits for it to
    reach 0. The initialisation to 1 in step (1) ensures that we only get
    wake ups when we're trying to get rid of the cookie.

    This leaves __fscache_relinquish_cookie() a lot simpler.
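
    Point (2) maps naturally onto atomic_inc_not_zero(); a sketch:

        static bool fscache_use_cookie(struct fscache_object *object)
        {
                struct fscache_cookie *cookie = object->cookie;

                return atomic_inc_not_zero(&cookie->n_active) != 0;
        }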

    ***
    This fixes a problem in the current code whereby if fscache_invalidate() is
    followed sufficiently quickly by fscache_relinquish_cookie() then it is
    possible for __fscache_relinquish_cookie() to have detached the cookie from the
    object and cleared the pointer before a thread is dispatched to process the
    invalidation state in the object state machine.

    Since the pending write clearance was deferred to the invalidation state to
    make it asynchronous, we need to either wait in relinquishment for the stores
    tree to be cleared in the invalidation state or we need to handle the clearance
    in relinquishment.

    Further, if the relinquishment code does clear the tree, then the invalidation
    state needs to make the clearance contingent on still having the cookie to hand
    (since that's where the tree is rooted) and we have to prevent the cookie from
    disappearing for the duration.

    This can lead to an oops like the following:

    BUG: unable to handle kernel NULL pointer dereference at 000000000000000c
    ...
    RIP: 0010:[] _spin_lock+0xe/0x30
    ...
    CR2: 000000000000000c ...
    ...
    Process kslowd002 (...)
    ....
    Call Trace:
    [] fscache_invalidate_writes+0x38/0xd0 [fscache]
    [] ? __switch_to+0xd0/0x320
    [] ? find_busiest_queue+0x69/0x150
    [] ? slow_work_enqueue+0x104/0x180
    [] fscache_object_slow_work_execute+0x5e3/0x9d0 [fscache]
    [] ? bit_waitqueue+0x17/0xd0
    [] slow_work_execute+0x233/0x310
    [] slow_work_thread+0x205/0x360
    [] ? autoremove_wake_function+0x0/0x40
    [] ? slow_work_thread+0x0/0x360
    [] kthread+0x96/0xa0
    [] child_rip+0xa/0x20
    [] ? kthread+0x0/0xa0
    [] ? child_rip+0x0/0x20

    The parameter to fscache_invalidate_writes() was object->cookie which is NULL.

    Signed-off-by: David Howells
    Tested-By: Milosz Tanski
    Acked-by: Jeff Layton

    David Howells
     
  • Fix object state machine to have separate work and wait states as that makes
    it easier to envision.

    There are now three kinds of state:

    (1) Work state. This is an execution state. No event processing is performed
    by a work state. The function attached to a work state returns a pointer
    indicating the next state to which the OSM should transition. Returning
    NO_TRANSIT repeats the current state, but goes back to the scheduler
    first.

    (2) Wait state. This is an event processing state. No execution is
    performed by a wait state. Wait states are just tables of "if event X
    occurs, clear it and transition to state Y". The dispatcher returns to
    the scheduler if none of the events in which the wait state has an
    interest are currently pending.

    (3) Out-of-band state. This is a special work state. Transitions to normal
    states can be overridden when an unexpected event occurs (eg. I/O error).
    Instead the dispatcher disables and clears the OOB event and transits to
    the specified work state. This then acts as an ordinary work state,
    though object->state points to the overridden destination. Returning
    NO_TRANSIT resumes the overridden transition.
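
    Such a state might be represented by a structure along these lines (a
    sketch; the field details are illustrative):

        struct fscache_transition {
                unsigned long events;
                const struct fscache_state *transit_to;
        };

        struct fscache_state {
                char name[24];
                char short_name[8];
                const struct fscache_state *(*work)(struct fscache_object *object,
                                                    int event);
                const struct fscache_transition transits[];
        };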

    In addition, the states have names in their definitions, so there's no need for
    tables of state names. Further, the EV_REQUEUE event is no longer necessary as
    that is automatic for work states.

    Since the states are now separate structs rather than values in an enum, it's
    not possible to use comparisons other than (non-)equality between them, so use
    some object->flags to indicate what phase an object is in.

    The EV_RELEASE, EV_RETIRE and EV_WITHDRAW events have been squished into one
    (EV_KILL). An object flag now carries the information about retirement.

    Similarly, the RELEASING, RECYCLING and WITHDRAWING states have been merged
    into a KILL_OBJECT state and additional states have been added for handling
    waiting dependent objects (JUMPSTART_DEPS and KILL_DEPENDENTS).

    A state has also been added for synchronising with parent object initialisation
    (WAIT_FOR_PARENT) and another for initiating look up (PARENT_READY).

    Signed-off-by: David Howells
    Tested-By: Milosz Tanski
    Acked-by: Jeff Layton

    David Howells
     
  • Wrap checks on object state (mostly outside of fs/fscache/object.c) with
    inline functions so that the mechanism can be replaced.

    Some of the state checks within object.c are left as-is as they will be
    replaced.

    Signed-off-by: David Howells
    Tested-By: Milosz Tanski
    Acked-by: Jeff Layton

    David Howells
     
  • Uninline fscache_object_init() so as not to expose some of the FS-Cache
    internals to the cache backend.

    Signed-off-by: David Howells
    Tested-By: Milosz Tanski
    Acked-by: Jeff Layton

    David Howells
     

21 Dec, 2012

6 commits

  • Mark as cancelled an operation that is in progress rather than pending at the
    time it is cancelled, and call fscache_complete_op() to cancel an operation so
    that blocked ops can be started.

    Signed-off-by: David Howells

    David Howells
     
  • Convert the fscache_object event IDs from #defines into an enum. Also add an
    extra label to the enum to carry the event count and redefine the event mask
    in terms of that.

    Signed-off-by: David Howells

    David Howells
     
  • Provide a proper invalidation method rather than relying on the netfs retiring
    the cookie it has and getting a new one. The problem with this is that it isn't
    easy for the netfs to make sure that it has completed/cancelled all its
    outstanding storage and retrieval operations on the cookie it is retiring.

    Instead, have the cache provide an invalidation method that will cancel or wait
    for all currently outstanding operations before invalidating the cache, and
    will cause new operations to queue up behind that. Whilst invalidation is in
    progress, some requests will be rejected until the cache can stack a barrier on
    the operation queue to cause new operations to be deferred behind it.

    Signed-off-by: David Howells

    David Howells
     
  • Fix the state management of internal fscache operations and the accounting of
    what operations are in what states.

    This is done by:

    (1) Give struct fscache_operation an enum variable that directly represents
    the state it's currently in, rather than spreading this knowledge over a
    bunch of flags, over who's processing the operation at the moment and over
    whether it is queued or not. (A sketch of such a state enum follows this
    list.)

    This makes it easier to write assertions to check the state at various
    points and to prevent invalid state transitions.

    (2) Add an 'operation complete' state and supply a function to indicate the
    completion of an operation (fscache_op_complete()) and make things call
    it. The final call to fscache_put_operation() can then check that an op is
    in the appropriate state (complete or cancelled).

    (3) Adjust the use of object->n_ops, ->n_in_progress, ->n_exclusive to better
    govern the state of an object:

    (a) The ->n_ops is now the number of extant operations on the object
    and is now decremented by fscache_put_operation() only.

    (b) The ->n_in_progress is simply the number of operations that have been
    taken off of the object's pending queue for the purposes of being
    run. This is decremented by fscache_op_complete() only.

    (c) The ->n_exclusive is the number of exclusive ops that have been
    submitted and queued or are in progress. It is decremented by
    fscache_op_complete() and by fscache_cancel_op().

    fscache_put_operation() and fscache_operation_gc() now no longer try to
    clean up ->n_exclusive and ->n_in_progress. That was leading to double
    decrements against fscache_cancel_op().

    fscache_cancel_op() now no longer decrements ->n_ops. That was leading to
    double decrements against fscache_put_operation().

    fscache_submit_exclusive_op() now decides whether it has to queue an op
    based on ->n_in_progress being > 0 rather than ->n_ops > 0 as the latter
    will persist in being true even after all preceding operations have been
    cancelled or completed. Furthermore, if an object is active and there are
    runnable ops against it, there must be at least one op running.

    (4) Add a remaining-pages counter (n_pages) to struct fscache_retrieval and
    provide a function to record completion of the pages as they complete.

    When n_pages reaches 0, the operation is deemed to be complete and
    fscache_op_complete() is called.

    Add calls to fscache_retrieval_complete() anywhere we've finished with a
    page we've been given to read or allocate for. This includes places where
    we just return pages to the netfs for reading from the server and where
    accessing the cache fails and we discard the proposed netfs page.
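
    The state enum from point (1) might look something like this sketch
    (names illustrative):

        enum fscache_operation_state {
                FSCACHE_OP_ST_BLANK,            /* just allocated */
                FSCACHE_OP_ST_INITIALISED,      /* initialised, not yet submitted */
                FSCACHE_OP_ST_PENDING,          /* queued, waiting to run */
                FSCACHE_OP_ST_IN_PROGRESS,      /* running */
                FSCACHE_OP_ST_COMPLETE,         /* completed successfully */
                FSCACHE_OP_ST_CANCELLED,        /* cancelled */
                FSCACHE_OP_ST_DEAD              /* released */
        };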

    The bugs in the unfixed state management manifest themselves as oopses like the
    following where the operation completion gets out of sync with return of the
    cookie by the netfs. This is possible because the cache unlocks and returns
    all the netfs pages before recording its completion - which means that there's
    nothing to stop the netfs discarding them and returning the cookie.

    FS-Cache: Cookie 'NFS.fh' still has outstanding reads
    ------------[ cut here ]------------
    kernel BUG at fs/fscache/cookie.c:519!
    invalid opcode: 0000 [#1] SMP
    CPU 1
    Modules linked in: cachefiles nfs fscache auth_rpcgss nfs_acl lockd sunrpc

    Pid: 400, comm: kswapd0 Not tainted 3.1.0-rc7-fsdevel+ #1090 /DG965RY
    RIP: 0010:[] [] __fscache_relinquish_cookie+0x170/0x343 [fscache]
    RSP: 0018:ffff8800368cfb00 EFLAGS: 00010282
    RAX: 000000000000003c RBX: ffff880023cc8790 RCX: 0000000000000000
    RDX: 0000000000002f2e RSI: 0000000000000001 RDI: ffffffff813ab86c
    RBP: ffff8800368cfb50 R08: 0000000000000002 R09: 0000000000000000
    R10: ffff88003a1b7890 R11: ffff88001df6e488 R12: ffff880023d8ed98
    R13: ffff880023cc8798 R14: 0000000000000004 R15: ffff88003b8bf370
    FS: 0000000000000000(0000) GS:ffff88003bd00000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 00000000008ba008 CR3: 0000000023d93000 CR4: 00000000000006e0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    Process kswapd0 (pid: 400, threadinfo ffff8800368ce000, task ffff88003b8bf040)
    Stack:
    ffff88003b8bf040 ffff88001df6e528 ffff88001df6e528 ffffffffa00b46b0
    ffff88003b8bf040 ffff88001df6e488 ffff88001df6e620 ffffffffa00b46b0
    ffff88001ebd04c8 0000000000000004 ffff8800368cfb70 ffffffffa00b2c91
    Call Trace:
    [] nfs_fscache_release_inode_cookie+0x3b/0x47 [nfs]
    [] nfs_clear_inode+0x3c/0x41 [nfs]
    [] nfs4_evict_inode+0x2f/0x33 [nfs]
    [] evict+0xa1/0x15c
    [] dispose_list+0x2c/0x38
    [] prune_icache_sb+0x28c/0x29b
    [] prune_super+0xd5/0x140
    [] shrink_slab+0x102/0x1ab
    [] balance_pgdat+0x2f2/0x595
    [] ? process_timeout+0xb/0xb
    [] kswapd+0x270/0x289
    [] ? __init_waitqueue_head+0x46/0x46
    [] ? balance_pgdat+0x595/0x595
    [] kthread+0x7f/0x87
    [] kernel_thread_helper+0x4/0x10
    [] ? finish_task_switch+0x45/0xc0
    [] ? retint_restore_args+0xe/0xe
    [] ? __init_kthread_worker+0x53/0x53
    [] ? gs_change+0xb/0xb

    Signed-off-by: David Howells

    David Howells
     
  • Make fscache_relinquish_cookie() log a warning and wait if there are any
    outstanding reads left on the cookie it was given.

    Signed-off-by: David Howells

    David Howells
     
  • Under some circumstances CacheFiles defers the marking of pages with PG_fscache
    so that it can take advantage of pagevecs to reduce the number of calls to
    fscache_mark_pages_cached() and the netfs's hook to keep track of this.

    There are, however, two problems with this:

    (1) It can lead to the PG_fscache mark being applied _after_ the page is set
    PG_uptodate and unlocked (by the call to fscache_end_io()).

    (2) CacheFiles's ref on the page is dropped immediately following
    fscache_end_io() - and so may not still be held when the mark is applied.
    This can lead to the page being passed back to the allocator before the
    mark is applied.

    Fix this by, where appropriate, marking the page before calling
    fscache_end_io() and releasing the page. This means that we can't take
    advantage of pagevecs and have to make a separate call for each page to the
    marking routines.

    The symptoms of this are Bad Page state errors cropping up under memory
    pressure, for example:

    BUG: Bad page state in process tar pfn:002da
    page:ffffea0000009fb0 count:0 mapcount:0 mapping: (null) index:0x1447
    page flags: 0x1000(private_2)
    Pid: 4574, comm: tar Tainted: G W 3.1.0-rc4-fsdevel+ #1064
    Call Trace:
    [] ? dump_page+0xb9/0xbe
    [] bad_page+0xd5/0xea
    [] get_page_from_freelist+0x35b/0x46a
    [] __alloc_pages_nodemask+0x362/0x662
    [] __do_page_cache_readahead+0x13a/0x267
    [] ? __do_page_cache_readahead+0xa2/0x267
    [] ra_submit+0x1c/0x20
    [] ondemand_readahead+0x28b/0x29a
    [] ? ondemand_readahead+0x163/0x29a
    [] page_cache_sync_readahead+0x38/0x3a
    [] generic_file_aio_read+0x2ab/0x67e
    [] nfs_file_read+0xa4/0xc9 [nfs]
    [] do_sync_read+0xba/0xfa
    [] ? security_file_permission+0x7b/0x84
    [] ? rw_verify_area+0xab/0xc8
    [] vfs_read+0xaa/0x13a
    [] sys_read+0x45/0x6c
    [] system_call_fastpath+0x16/0x1b

    As can be seen, PG_private_2 (== PG_fscache) is set in the page flags.

    Instrumenting fscache_mark_pages_cached() to verify whether page->mapping was
    set appropriately showed that sometimes it wasn't. This led to the discovery
    that sometimes the page has apparently been reclaimed by the time the marker
    got to see it.

    Reported-by: M. Stevens
    Signed-off-by: David Howells
    Reviewed-by: Jeff Layton

    David Howells
     

01 Nov, 2011

1 commit

  • Standardize the style for compiler based printf format verification.
    Standardized the location of __printf too.

    Done via script and a little typing.

    $ grep -rPl --include=*.[ch] -w "__attribute__" * | \
    grep -vP "^(tools|scripts|include/linux/compiler-gcc.h)" | \
    xargs perl -n -i -e 'local $/; while (<>) { s/\b__attribute__\s*\(\s*\(\s*format\s*\(\s*printf\s*,\s*(.+)\s*,\s*(.+)\s*\)\s*\)\s*\)/__printf($1, $2)/g ; print; }'
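
    For reference, the transformation turns a declaration like the first
    form below into the second (my_printk() is a made-up example):

        /* before: */
        extern int my_printk(const char *fmt, ...)
                __attribute__((format(printf, 1, 2)));

        /* after: */
        extern __printf(1, 2)
        int my_printk(const char *fmt, ...);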

    [akpm@linux-foundation.org: revert arch bits]
    Signed-off-by: Joe Perches
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

23 Jul, 2010

3 commits

  • fscache no longer uses slow-work. Drop references to it.

    Signed-off-by: Tejun Heo
    Acked-by: David Howells

    Tejun Heo
     
  • Make fscache operations use only a workqueue instead of a combination of
    workqueue and slow-work. FSCACHE_OP_SLOW is dropped and
    FSCACHE_OP_FAST is renamed to FSCACHE_OP_ASYNC and uses the newly added
    fscache_op_wq workqueue to execute op->processor().
    fscache_operation_init_slow() is dropped and fscache_operation_init()
    now takes the @processor argument directly.

    * Unbound workqueue is used.

    * fscache_retrieval_work() is no longer necessary as OP_ASYNC now does
    the equivalent thing.

    * sysctl fscache.operation_max_active added to control concurrency.
    The default value is nr_cpus clamped between 2 and
    WQ_UNBOUND_MAX_ACTIVE.

    * debugfs support is dropped for now. Tracing API based debug
    facility is planned to be added.

    Signed-off-by: Tejun Heo
    Acked-by: David Howells

    Tejun Heo
     
  • Make fscache object state transition callbacks use workqueue instead
    of slow-work. New dedicated unbound CPU workqueue fscache_object_wq
    is created. get/put callbacks are renamed and modified to take
    @object and called directly from the enqueue wrapper and the work
    function. While at it, make all open-coded instances of get/put use
    fscache_get/put_object().

    * Unbound workqueue is used.

    * work_busy() output is printed instead of slow-work flags in object
    debugging outputs. They mean basically the same thing bit-for-bit.

    * sysctl fscache.object_max_active added to control concurrency. The
    default value is nr_cpus clamped between 4 and
    WQ_UNBOUND_MAX_ACTIVE.

    * slow_work_sleep_till_thread_needed() is replaced with fscache
    private implementation fscache_object_sleep_till_congested() which
    waits on fscache_object_wq congestion.

    * debugfs support is dropped for now. Tracing API based debug
    facility is planned to be added.

    Signed-off-by: Tejun Heo
    Acked-by: David Howells

    Tejun Heo
     

20 Nov, 2009

2 commits

  • Catch an overly long wait for an old, dying active object when we want to
    replace it with a new one. The probability is that all the slow-work threads
    are hogged, and the delete can't get a look in.

    What we do instead is:

    (1) if there's nothing in the slow work queue, we sleep until either the dying
    object has finished dying or there is something in the slow work queue
    behind which we can queue our object.

    (2) if there is something in the slow work queue, we return ETIMEDOUT to
    fscache_lookup_object(), which then puts us back on the slow work queue,
    presumably behind the deletion that we're blocked by. We are then
    deferred for a while until we work our way back through the queue -
    without blocking a slow-work thread unnecessarily.

    A backtrace similar to the following may appear in the log without this patch:

    INFO: task kslowd004:5711 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kslowd004 D 0000000000000000 0 5711 2 0x00000080
    ffff88000340bb80 0000000000000046 ffff88002550d000 0000000000000000
    ffff88002550d000 0000000000000007 ffff88000340bfd8 ffff88002550d2a8
    000000000000ddf0 00000000000118c0 00000000000118c0 ffff88002550d2a8
    Call Trace:
    [] ? trace_hardirqs_on+0xd/0xf
    [] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
    [] cachefiles_wait_bit+0x9/0xd [cachefiles]
    [] __wait_on_bit+0x43/0x76
    [] ? ext3_xattr_get+0x1ec/0x270
    [] out_of_line_wait_on_bit+0x69/0x74
    [] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
    [] ? wake_bit_function+0x0/0x2e
    [] cachefiles_mark_object_active+0x203/0x23b [cachefiles]
    [] cachefiles_walk_to_object+0x558/0x827 [cachefiles]
    [] cachefiles_lookup_object+0xac/0x12a [cachefiles]
    [] fscache_lookup_object+0x1c7/0x214 [fscache]
    [] fscache_object_state_machine+0xa5/0x52d [fscache]
    [] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]
    [] slow_work_execute+0x18f/0x2d1
    [] slow_work_thread+0x1c5/0x308
    [] ? autoremove_wake_function+0x0/0x34
    [] ? slow_work_thread+0x0/0x308
    [] kthread+0x7a/0x82
    [] child_rip+0xa/0x20
    [] ? restore_args+0x0/0x30
    [] ? kthread+0x0/0x82
    [] ? child_rip+0x0/0x20
    1 lock held by kslowd004/5711:
    #0: (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [] cachefiles_walk_to_object+0x1b3/0x827 [cachefiles]

    Signed-off-by: David Howells

    David Howells
     
  • cachefiles_write_page() writes a full page to the backing file for the last
    page of the netfs file, even if the netfs file's last page is only a partial
    page.

    This causes the EOF on the backing file to be extended beyond the EOF of the
    netfs, and thus the backing file will be truncated by cachefiles_attr_changed()
    called from cachefiles_lookup_object().

    So we need to limit the write we make to the backing file on that last page
    such that it doesn't push the EOF too far.
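
    A sketch of the clamping, assuming the object records the netfs file
    size as a store limit (names illustrative):

        loff_t eof = object->fscache.store_limit_l;     /* netfs i_size */
        size_t len = PAGE_SIZE;

        if (pos + len > eof)
                len = eof > pos ? eof - pos : 0;        /* don't extend backing EOF */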

    Also, if a backing file that has a partial page at the end is expanded, we
    discard the partial page and refetch it on the basis that we then have a hole
    in the file with invalid data, and should the power go out... A better way to
    deal with this could be to record a note that the partial page contains invalid
    data until the correct data is written into it.

    This isn't a problem for netfs's that discard the whole backing file if the
    file size changes (such as NFS).

    Signed-off-by: David Howells

    David Howells