28 Sep, 2013

1 commit

  • Provide the ability to enable and disable fscache cookies. A disabled cookie
    will reject or ignore further requests to:

    Acquire a child cookie
    Invalidate and update backing objects
    Check the consistency of a backing object
    Allocate storage for backing page
    Read backing pages
    Write to backing pages

    but still allows:

    Checks/waits on the completion of already in-progress objects
    Uncaching of pages
    Relinquishment of cookies

    Two new operations are provided:

    (1) Disable a cookie:

    void fscache_disable_cookie(struct fscache_cookie *cookie,
                                bool invalidate);

    If the cookie is not already disabled, this locks the cookie against other
    dis/enablement ops, marks the cookie as being disabled, discards or
    invalidates any backing objects and waits for cessation of activity on any
    associated object.

    This is a wrapper around a chunk split out of fscache_relinquish_cookie(),
    but it reinitialises the cookie such that it can be reenabled.

    All possible failures are handled internally. The caller should consider
    calling fscache_uncache_all_inode_pages() afterwards to make sure all page
    markings are cleared up.

    (2) Enable a cookie:

    void fscache_enable_cookie(struct fscache_cookie *cookie,
                               bool (*can_enable)(void *data),
                               void *data)

    If the cookie is not already enabled, this locks the cookie against other
    dis/enablement ops, invokes can_enable() and, if the cookie is not an
    index cookie, will begin the procedure of acquiring backing objects.

    The optional can_enable() function is passed the data argument and returns
    a ruling as to whether or not enablement should actually be permitted to
    begin.

    All possible failures are handled internally. The cookie will only be
    marked as enabled if provisional backing objects are allocated.
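
    As a rough sketch of how a netfs might drive these two calls (the helper
    names here are hypothetical, and the i_writecount test merely mirrors the
    NFS usage mentioned below rather than code from this patch):

    #include <linux/fs.h>
    #include <linux/fscache.h>

    /* Hypothetical policy callback: only permit caching while nobody has
     * the file open for writing. */
    static bool example_can_enable(void *data)
    {
            struct inode *inode = data;

            return atomic_read(&inode->i_writecount) <= 0;
    }

    /* Hypothetical open-for-read path: a no-op if the cookie is already
     * enabled, otherwise backing-object acquisition begins. */
    static void example_enable_caching(struct fscache_cookie *cookie,
                                       struct inode *inode)
    {
            fscache_enable_cookie(cookie, example_can_enable, inode);
    }

    /* Hypothetical open-for-write path: stop caching, discard the backing
     * objects and clear any leftover page markings. */
    static void example_disable_caching(struct fscache_cookie *cookie,
                                        struct inode *inode)
    {
            fscache_disable_cookie(cookie, true /* invalidate */);
            fscache_uncache_all_inode_pages(cookie, inode);
    }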

    A later patch will introduce these to NFS. Cookie enablement during nfs_open()
    is then contingent on i_writecount.

    Signed-off-by: David Howells

    David Howells
     

06 Jan, 2012

1 commit

  • Reduce object size by deduplicating formats.

    Use vsprintf extension %pV.
    Rename P9_DPRINTK uses to p9_debug, align arguments.
    Add function for _p9_debug and macro to add __func__.
    Add missing "\n"s to p9_debug uses.
    Remove embedded function names as p9_debug adds it.
    Remove P9_EPRINTK macro and convert use to pr_<level>.
    Add and use pr_fmt and pr_<level>.

    $ size fs/9p/built-in.o*
    text data bss dec hex filename
    62133 984 16000 79117 1350d fs/9p/built-in.o.new
    67342 984 16928 85254 14d06 fs/9p/built-in.o.old
    $ size net/9p/built-in.o*
    text data bss dec hex filename
    88792 4148 22024 114964 1c114 net/9p/built-in.o.new
    94072 4148 23232 121452 1da6c net/9p/built-in.o.old
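
    The saving comes largely from the vsprintf %pV extension: instead of each
    P9_DPRINTK expansion embedding its own prefix format at the call site, one
    out-of-line helper forwards the caller's format and arguments to printk via
    struct va_format. A minimal sketch of that pattern (names are illustrative
    and the 9p debug-flag filtering is omitted):

    #include <linux/kernel.h>
    #include <linux/printk.h>

    /* Illustrative helper in the spirit of _p9_debug(): the "%s: %pV" string
     * below is the only format this path adds to the object file. */
    static void example_debug(const char *func, const char *fmt, ...)
    {
            struct va_format vaf;
            va_list args;

            va_start(args, fmt);
            vaf.fmt = fmt;
            vaf.va = &args;
            pr_debug("%s: %pV", func, &vaf);
            va_end(args);
    }

    /* A wrapper macro supplies __func__, so call sites need not embed their
     * own name in the message. */
    #define example_debug_f(fmt, ...) \
            example_debug(__func__, fmt, ##__VA_ARGS__)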

    Signed-off-by: Joe Perches
    Signed-off-by: Eric Van Hensbergen

    Joe Perches
     

23 Jul, 2011

1 commit

  • This makes sure we don't use the wrong inode from the inode hash. The inode
    number of a deleted file is reused by the next file system object created,
    and if we only use the inode number for the inode hash lookup we could end
    up with the wrong struct inode.

    Also compare the inode generation number. Not all Linux file systems
    provide st_gen to userspace, so it could be 0.
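
    A hypothetical iget5_locked() test callback illustrating the comparison;
    the lookup key structure and helper names below are made up for the
    example:

    #include <linux/fs.h>

    /* Hypothetical lookup key: what the server reported for the file. */
    struct example_ikey {
            unsigned long ino;
            u32 gen;
    };

    static int example_inode_test(struct inode *inode, void *opaque)
    {
            struct example_ikey *key = opaque;

            /* The inode number alone may by now belong to a newly created
             * object; the generation number disambiguates, except where the
             * filesystem reports st_gen as 0. */
            return inode->i_ino == key->ino &&
                   inode->i_generation == key->gen;
    }

    /* Used as:
     *   iget5_locked(sb, hashval, example_inode_test, example_inode_set, &key);
     */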

    Signed-off-by: Aneesh Kumar K.V
    Signed-off-by: Eric Van Hensbergen

    Aneesh Kumar K.V
     

15 Mar, 2011

2 commits

  • Switch the fscache code over to v9fs_inode. We will later use
    v9fs_inode in cache=loose mode to track the inode cache
    validity timeout, i.e. if we find an inode in the cache older
    than a specific jiffies range we will consider it stale.

    Signed-off-by: Aneesh Kumar K.V
    Signed-off-by: Venkateswararao Jujjuri
    Signed-off-by: Eric Van Hensbergen

    Aneesh Kumar K.V
     
  • We need to call fscache_wait_on_page_write in launder_page
    for fscache
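
    A minimal sketch of what such a launder_page handler can look like (the
    cookie accessor below is a stand-in for however the filesystem stores its
    per-inode cookie):

    #include <linux/fs.h>
    #include <linux/pagemap.h>
    #include <linux/fscache.h>

    /* Stand-in for the filesystem's per-inode cookie lookup. */
    static struct fscache_cookie *example_inode_cookie(struct inode *inode)
    {
            return NULL;    /* illustrative only */
    }

    static int example_launder_page(struct page *page)
    {
            struct inode *inode = page->mapping->host;
            struct fscache_cookie *cookie = example_inode_cookie(inode);

            /* Don't let the page be invalidated while FS-Cache is still
             * writing it out to the cache. */
            if (cookie && PageFsCache(page))
                    fscache_wait_on_page_write(cookie, page);
            return 0;
    }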

    Signed-off-by: Aneesh Kumar K.V
    Signed-off-by: Venkateswararao Jujjuri
    Signed-off-by: Eric Van Hensbergen

    Aneesh Kumar K.V
     

30 Mar, 2010

1 commit

  • include cleanup: Update gfp.h and slab.h includes to prepare for breaking
    implicit slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming availability. As this
    conversion needs to touch a large number of source files, the
    following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    include gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.
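
    In practice the per-file change is just stating the dependency explicitly;
    for a hypothetical source file that uses kmalloc() it amounts to:

    /* Previously this file compiled only because some other header dragged
     * slab.h in via percpu.h; now the dependency is spelled out. */
    #include <linux/slab.h>         /* kmalloc(), kfree(), GFP_KERNEL */

    static void *example_alloc_buffer(void)
    {
            /* kmalloc() and GFP_KERNEL both come from the slab/gfp headers. */
            return kmalloc(256, GFP_KERNEL);
    }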

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition while adding it to implementation .h or
    embedding .c file was more appropriate for others. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of
    the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

01 Dec, 2009

1 commit

  • While building 2.6.32-rc8-git2 for Fedora I noticed the following thinko
    in commit 201a15428bd54f83eccec8b7c64a04b8f9431204 ("FS-Cache: Handle
    pages pending storage that get evicted under OOM conditions"):

    fs/9p/cache.c: In function '__v9fs_fscache_release_page':
    fs/9p/cache.c:346: error: 'vnode' undeclared (first use in this function)
    fs/9p/cache.c:346: error: (Each undeclared identifier is reported only once
    fs/9p/cache.c:346: error: for each function it appears in.)
    make[2]: *** [fs/9p/cache.o] Error 1

    Fix the 9P filesystem to correctly construct the argument to
    fscache_maybe_release_page().

    Signed-off-by: Kyle McMartin
    Signed-off-by: Xiaotian Feng [from identical patch]
    Signed-off-by: Stefan Lippers-Hollmann [from identical patch]
    Signed-off-by: David Howells
    Signed-off-by: Linus Torvalds

    David Howells
     

20 Nov, 2009

1 commit

  • Handle netfs pages that the vmscan algorithm wants to evict from the pagecache
    under OOM conditions, but that are waiting for write to the cache. Under these
    conditions, vmscan calls the releasepage() function of the netfs, asking if a
    page can be discarded.

    The problem is typified by the following trace of a stuck process:

    kslowd005 D 0000000000000000 0 4253 2 0x00000080
    ffff88001b14f370 0000000000000046 ffff880020d0d000 0000000000000007
    0000000000000006 0000000000000001 ffff88001b14ffd8 ffff880020d0d2a8
    000000000000ddf0 00000000000118c0 00000000000118c0 ffff880020d0d2a8
    Call Trace:
    [] __fscache_wait_on_page_write+0x8b/0xa7 [fscache]
    [] ? autoremove_wake_function+0x0/0x34
    [] ? __fscache_check_page_write+0x63/0x70 [fscache]
    [] nfs_fscache_release_page+0x4e/0xc4 [nfs]
    [] nfs_release_page+0x3c/0x41 [nfs]
    [] try_to_release_page+0x32/0x3b
    [] shrink_page_list+0x316/0x4ac
    [] shrink_inactive_list+0x392/0x67c
    [] ? __mutex_unlock_slowpath+0x100/0x10b
    [] ? trace_hardirqs_on_caller+0x10c/0x130
    [] ? mutex_unlock+0x9/0xb
    [] shrink_list+0x8d/0x8f
    [] shrink_zone+0x278/0x33c
    [] ? ktime_get_ts+0xad/0xba
    [] try_to_free_pages+0x22e/0x392
    [] ? isolate_pages_global+0x0/0x212
    [] __alloc_pages_nodemask+0x3dc/0x5cf
    [] grab_cache_page_write_begin+0x65/0xaa
    [] ext3_write_begin+0x78/0x1eb
    [] generic_file_buffered_write+0x109/0x28c
    [] ? current_fs_time+0x22/0x29
    [] __generic_file_aio_write+0x350/0x385
    [] ? generic_file_aio_write+0x4a/0xae
    [] generic_file_aio_write+0x60/0xae
    [] do_sync_write+0xe3/0x120
    [] ? autoremove_wake_function+0x0/0x34
    [] ? __dentry_open+0x1a5/0x2b8
    [] ? dentry_open+0x82/0x89
    [] cachefiles_write_page+0x298/0x335 [cachefiles]
    [] fscache_write_op+0x178/0x2c2 [fscache]
    [] fscache_op_execute+0x7a/0xd1 [fscache]
    [] slow_work_execute+0x18f/0x2d1
    [] slow_work_thread+0x1c5/0x308
    [] ? autoremove_wake_function+0x0/0x34
    [] ? slow_work_thread+0x0/0x308
    [] kthread+0x7a/0x82
    [] child_rip+0xa/0x20
    [] ? restore_args+0x0/0x30
    [] ? tg_shares_up+0x171/0x227
    [] ? kthread+0x0/0x82
    [] ? child_rip+0x0/0x20

    In the above backtrace, the following is happening:

    (1) A page storage operation is being executed by a slow-work thread
    (fscache_write_op()).

    (2) FS-Cache farms the operation out to the cache to perform
    (cachefiles_write_page()).

    (3) CacheFiles is then calling Ext3 to perform the actual write, using Ext3's
    standard write (do_sync_write()) under KERNEL_DS directly from the netfs
    page.

    (4) However, for Ext3 to perform the write, it must allocate some memory, in
    particular, it must allocate at least one page cache page into which it
    can copy the data from the netfs page.

    (5) Under OOM conditions, the memory allocator can't immediately come up with
    a page, so it uses vmscan to find something to discard
    (try_to_free_pages()).

    (6) vmscan finds a clean netfs page it might be able to discard (possibly the
    one it's trying to write out).

    (7) The netfs is called to throw the page away (nfs_release_page()) - but it's
    called with __GFP_WAIT, so the netfs decides to wait for the store to
    complete (__fscache_wait_on_page_write()).

    (8) This blocks a slow-work processing thread - possibly against itself.

    The system ends up stuck because it can't write out any netfs pages to the
    cache without allocating more memory.

    To avoid this, we make FS-Cache cancel some writes that aren't in the middle of
    actually being performed. This means that some data won't make it into the
    cache this time. To support this, a new FS-Cache function,
    fscache_maybe_release_page(), is added to replace what the netfs releasepage()
    functions used to do with respect to the cache.
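
    On the netfs side, the corresponding releasepage() handler then reduces to
    something like the sketch below (the cookie accessor is a stand-in; the
    exact wiring differs per filesystem):

    #include <linux/fs.h>
    #include <linux/pagemap.h>
    #include <linux/fscache.h>

    /* Stand-in for the filesystem's per-inode cookie lookup. */
    static struct fscache_cookie *example_inode_cookie(struct inode *inode)
    {
            return NULL;    /* illustrative only */
    }

    static int example_release_page(struct page *page, gfp_t gfp)
    {
            struct fscache_cookie *cookie =
                    example_inode_cookie(page->mapping->host);

            /* Returns false if the page must be kept because a store to the
             * cache is actively in progress; a merely-pending store may be
             * cancelled instead of sleeping when gfp forbids waiting. */
            if (cookie && PageFsCache(page))
                    return fscache_maybe_release_page(cookie, page, gfp);

            return 1;       /* nothing of ours pins the page */
    }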

    The decisions fscache_maybe_release_page() makes are counted and displayed
    through /proc/fs/fscache/stats on a line labelled "VmScan". There are four
    counters provided: "nos=N" - pages that weren't pending storage; "gon=N" -
    pages that were pending storage when we first looked, but weren't by the time
    we got the object lock; "bsy=N" - pages that we ignored as they were actively
    being written when we looked; and "can=N" - pages that we cancelled the storage
    of.

    What I'd really like to do is alter the behaviour of the cancellation
    heuristics, depending on how necessary it is to expel pages. If there are
    plenty of other pages that aren't waiting to be written to the cache that
    could be ejected first, then it would be nice to hold up on immediate
    cancellation of cache writes - but I don't see a way of doing that.

    Signed-off-by: David Howells

    David Howells
     

24 Sep, 2009

1 commit

  • This patch adds a persistent, read-only caching facility for
    9p clients using the FS-Cache caching backend.

    When the fscache facility is enabled, each inode is associated
    with a corresponding vcookie which is an index into the FS-Cache
    indexing tree. The FS-Cache indexing tree is indexed at 3 levels:
    - session object associated with each mount.
    - inode/vcookie
    - actual data (pages)

    A cache tag is chosen randomly for each session. These tags can
    be read off /sys/fs/9p/caches and can be passed as a mount-time
    parameter to re-attach to the specified caching session.
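
    A loose sketch of how such a three-level index can be expressed with the
    FS-Cache netfs API of this era (the cookie definitions, names and key
    handling below are illustrative, not the actual 9p ones):

    #include <linux/fscache.h>

    /* Level 1: an index cookie per session, a child of the netfs's
     * primary index. */
    static const struct fscache_cookie_def example_session_def = {
            .name = "EXAMPLE.session",
            .type = FSCACHE_COOKIE_TYPE_INDEX,
    };

    /* Level 2: a per-inode data cookie (the "vcookie"); .get_key/.get_aux
     * callbacks would supply the file's identity here. */
    static const struct fscache_cookie_def example_inode_def = {
            .name = "EXAMPLE.inode",
            .type = FSCACHE_COOKIE_TYPE_DATAFILE,
    };

    /* Level 3 - the cached pages - hangs off the per-inode cookie. */
    static struct fscache_cookie *example_acquire_cookies(struct fscache_netfs *netfs,
                                                          void *session, void *inode)
    {
            struct fscache_cookie *session_cookie;

            session_cookie = fscache_acquire_cookie(netfs->primary_index,
                                                    &example_session_def, session);
            if (!session_cookie)
                    return NULL;

            return fscache_acquire_cookie(session_cookie,
                                          &example_inode_def, inode);
    }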

    Signed-off-by: Abhishek Kulkarni
    Signed-off-by: Eric Van Hensbergen

    Abhishek Kulkarni