13 Oct, 2017

1 commit

  • When the file /proc/fs/fscache/objects (available with
    CONFIG_FSCACHE_OBJECT_LIST=y) is opened, we request a user key with
    description "fscache:objlist", then access its payload. However, a
    revoked key has a NULL payload, and we failed to check for this.
    request_key() *does* skip revoked keys, but there is still a window
    where the key can be revoked before we access its payload.

    Fix it by checking for a NULL payload, treating it like a key which was
    already revoked at the time it was requested.
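
    As a rough sketch of the pattern described above - request the key, then
    treat a NULL payload as a revocation - the check could look like the
    following (the helper name and the copy-out buffer are invented for
    illustration, not taken from fs/fscache/object-list.c):

        #include <linux/kernel.h>
        #include <linux/key.h>
        #include <linux/string.h>
        #include <keys/user-type.h>

        /* Fetch the objlist restriction string, if a valid key exists. */
        static bool objlist_get_config(char *buf, size_t buflen)
        {
                const struct user_key_payload *confkey;
                struct key *key;
                size_t len;
                bool ok = false;

                key = request_key(&key_type_user, "fscache:objlist", NULL);
                if (IS_ERR(key))
                        return false;           /* no key: show everything */

                /* A NULL payload means the key was revoked between
                 * request_key() and this access; treat it like a key that
                 * was already revoked when it was requested. */
                rcu_read_lock();
                confkey = user_key_payload_rcu(key);
                if (confkey) {
                        len = min_t(size_t, confkey->datalen, buflen - 1);
                        memcpy(buf, confkey->data, len);
                        buf[len] = '\0';
                        ok = true;
                }
                rcu_read_unlock();
                key_put(key);
                return ok;
        }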

    Fixes: 4fbf4291aa15 ("FS-Cache: Allow the current state of all objects to be dumped")
    Reviewed-by: James Morris
    Cc: [v2.6.32+]
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells

    Eric Biggers
     

14 Sep, 2017

1 commit

  • gcc points out a minor bug in the handling of unknown cookie types,
    which could result in a string overflow when the integer is copied into
    a 3-byte string:

    fs/fscache/object-list.c: In function 'fscache_objlist_show':
    fs/fscache/object-list.c:265:19: error: 'sprintf' may write a terminating nul past the end of the destination [-Werror=format-overflow=]
    sprintf(_type, "%02u", cookie->def->type);
    ^~~~~~
    fs/fscache/object-list.c:265:4: note: 'sprintf' output between 3 and 4 bytes into a destination of size 3

    This is currently harmless as no code sets a type other than 0 or 1, but
    it makes sense to use snprintf() here to avoid overflowing the array if
    that changes.
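
    A minimal sketch of the shape of the fix: pass the type in and let
    snprintf() truncate rather than overflow the 3-byte buffer named in the
    warning (the "IX"/"DT" labels for the known types are illustrative here):

        #include <linux/kernel.h>

        /* 'buf' must be at least 3 bytes; only written for unknown types. */
        static const char *cookie_type_string(unsigned int type, char *buf)
        {
                switch (type) {
                case 0:
                        return "IX";
                case 1:
                        return "DT";
                default:
                        /* snprintf() truncates a 3-digit type to two digits
                         * instead of writing the terminating NUL past the
                         * end of the buffer, as sprintf() could. */
                        snprintf(buf, 3, "%02u", type);
                        return buf;
                }
        }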

    Link: http://lkml.kernel.org/r/20170714120720.906842-22-arnd@arndb.de
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

02 Mar, 2017

1 commit

  • rcu_dereference_key() and user_key_payload() are currently being used in
    two different, incompatible ways:

    (1) As a wrapper to rcu_dereference() - when only the RCU read lock is
    used to protect the key.

    (2) As a wrapper to rcu_dereference_protected() - when the key semaphore
    is used to protect the key and the key may be being modified.

    Fix this by splitting both of the key wrappers to produce:

    (1) RCU accessors for keys when caller has the key semaphore locked:

    dereference_key_locked()
    user_key_payload_locked()

    (2) RCU accessors for keys when caller holds the RCU read lock:

    dereference_key_rcu()
    user_key_payload_rcu()
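
    An illustrative sketch of which accessor pairs with which locking context
    (the two example functions and their -ENOKEY return convention are
    invented; only the accessors and the key locking follow the description
    above):

        #include <linux/key.h>
        #include <keys/user-type.h>

        /* Caller holds only the RCU read lock. */
        static int payload_len_rcu(struct key *key)
        {
                const struct user_key_payload *payload;
                int len = -ENOKEY;

                rcu_read_lock();
                payload = user_key_payload_rcu(key);
                if (payload)            /* NULL if the key has been revoked */
                        len = payload->datalen;
                rcu_read_unlock();
                return len;
        }

        /* Caller holds the key semaphore, e.g. around an update. */
        static int payload_len_locked(struct key *key)
        {
                const struct user_key_payload *payload;
                int len = -ENOKEY;

                down_read(&key->sem);
                payload = user_key_payload_locked(key);
                if (payload)
                        len = payload->datalen;
                up_read(&key->sem);
                return len;
        }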

    This should fix the following warning in the NFS idmapper:

    ===============================
    [ INFO: suspicious RCU usage. ]
    4.10.0 #1 Tainted: G W
    -------------------------------
    ./include/keys/user-type.h:53 suspicious rcu_dereference_protected() usage!
    other info that might help us debug this:
    rcu_scheduler_active = 2, debug_locks = 0
    1 lock held by mount.nfs/5987:
    #0: (rcu_read_lock){......}, at: [] nfs_idmap_get_key+0x15c/0x420 [nfsv4]
    stack backtrace:
    CPU: 1 PID: 5987 Comm: mount.nfs Tainted: G W 4.10.0 #1
    Call Trace:
    dump_stack+0xe8/0x154 (unreliable)
    lockdep_rcu_suspicious+0x140/0x190
    nfs_idmap_get_key+0x380/0x420 [nfsv4]
    nfs_map_name_to_uid+0x2a0/0x3b0 [nfsv4]
    decode_getfattr_attrs+0xfac/0x16b0 [nfsv4]
    decode_getfattr_generic.constprop.106+0xbc/0x150 [nfsv4]
    nfs4_xdr_dec_lookup_root+0xac/0xb0 [nfsv4]
    rpcauth_unwrap_resp+0xe8/0x140 [sunrpc]
    call_decode+0x29c/0x910 [sunrpc]
    __rpc_execute+0x140/0x8f0 [sunrpc]
    rpc_run_task+0x170/0x200 [sunrpc]
    nfs4_call_sync_sequence+0x68/0xa0 [nfsv4]
    _nfs4_lookup_root.isra.44+0xd0/0xf0 [nfsv4]
    nfs4_lookup_root+0xe0/0x350 [nfsv4]
    nfs4_lookup_root_sec+0x70/0xa0 [nfsv4]
    nfs4_find_root_sec+0xc4/0x100 [nfsv4]
    nfs4_proc_get_rootfh+0x5c/0xf0 [nfsv4]
    nfs4_get_rootfh+0x6c/0x190 [nfsv4]
    nfs4_server_common_setup+0xc4/0x260 [nfsv4]
    nfs4_create_server+0x278/0x3c0 [nfsv4]
    nfs4_remote_mount+0x50/0xb0 [nfsv4]
    mount_fs+0x74/0x210
    vfs_kern_mount+0x78/0x220
    nfs_do_root_mount+0xb0/0x140 [nfsv4]
    nfs4_try_mount+0x60/0x100 [nfsv4]
    nfs_fs_mount+0x5ec/0xda0 [nfs]
    mount_fs+0x74/0x210
    vfs_kern_mount+0x78/0x220
    do_mount+0x254/0xf70
    SyS_mount+0x94/0x100
    system_call+0x38/0xe0

    Reported-by: Jan Stancek
    Signed-off-by: David Howells
    Tested-by: Jan Stancek
    Signed-off-by: James Morris

    David Howells
     

21 Oct, 2015

1 commit

  • Merge the type-specific data with the payload data into one four-word chunk
    as it seems pointless to keep them separate.

    Use user_key_payload() for accessing the payloads of overloaded
    user-defined keys.

    Signed-off-by: David Howells
    cc: linux-cifs@vger.kernel.org
    cc: ecryptfs@vger.kernel.org
    cc: linux-ext4@vger.kernel.org
    cc: linux-f2fs-devel@lists.sourceforge.net
    cc: linux-nfs@vger.kernel.org
    cc: ceph-devel@vger.kernel.org
    cc: linux-ima-devel@lists.sourceforge.net

    David Howells
     

18 Feb, 2014

1 commit

  • When FS-Cache allocates an object, the following sequence of events can
    occur:

    -->fscache_alloc_object()
    -->cachefiles_alloc_object() [via cache->ops->alloc_object]
    fscache_attach_object() [fails]
    cachefiles_put_object() [via cache->ops->put_object]
    -->fscache_object_destroy()
    -->fscache_objlist_remove()
    -->rb_erase() to remove the object from fscache_object_list.

    resulting in a crash in the rbtree code.

    The problem is that the object is only added to fscache_object_list on
    the success path of fscache_attach_object() where it calls
    fscache_objlist_add().

    So if fscache_attach_object() fails, the object won't have been added to
    the objlist rbtree. We do, however, unconditionally try to remove the
    object from the tree.
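
    One way to make the removal tolerant of an object that was never inserted
    is the rb-node "empty" idiom sketched below. This is a hedged illustration
    of the general pattern, not necessarily the exact fix; the list, lock and
    field names follow my reading of fs/fscache/object-list.c:

        #include <linux/rbtree.h>

        /* At allocation time, mark the node as not being in any tree yet. */
        static void example_objlist_init(struct fscache_object *obj)
        {
                RB_CLEAR_NODE(&obj->objlist_link);
        }

        /* On destruction, only erase the node if it was actually added by
         * fscache_objlist_add() on the successful attach path. */
        static void example_objlist_remove(struct fscache_object *obj)
        {
                if (RB_EMPTY_NODE(&obj->objlist_link))
                        return;

                write_lock(&fscache_object_list_lock);
                rb_erase(&obj->objlist_link, &fscache_object_list);
                write_unlock(&fscache_object_list_lock);
        }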

    Thanks to NeilBrown for finding this and suggesting this solution.

    Reported-by: NeilBrown
    Signed-off-by: David Howells
    Tested-by: (a customer of) NeilBrown
    Signed-off-by: Linus Torvalds

    David Howells
     

19 Jun, 2013

2 commits

  • Simplify the way fscache cache objects retain their cookie. The way I
    implemented the cookie storage handling made synchronisation a pain (ie. the
    object state machine can't rely on the cookie actually still being there).

    Instead of the object being detached from the cookie and the cookie being
    freed in __fscache_relinquish_cookie(), we defer both operations:

    (*) The detachment of the object from the list in the cookie now takes place
    in fscache_drop_object() and is thus governed by the object state machine
    (fscache_detach_from_cookie() has been removed).

    (*) The release of the cookie is now in fscache_object_destroy() - which is
    called by the cache backend just before it frees the object.

    This means that the fscache_cookie struct is now available to the cache all the
    way through from ->alloc_object() to ->drop_object() and ->put_object() -
    meaning that it's no longer necessary to take object->lock to guarantee access.

    However, __fscache_relinquish_cookie() doesn't wait for the object to go all
    the way through to destruction before letting the netfs proceed. That would
    massively slow down the netfs. Since __fscache_relinquish_cookie() leaves the
    cookie around, it must therefore break all attachments to the netfs - which
    includes ->def, ->netfs_data and any outstanding page read/writes.

    To handle this, struct fscache_cookie now has an n_active counter:

    (1) This starts off initialised to 1.

    (2) Any time the cache needs to get at the netfs data, it calls
    fscache_use_cookie() to increment it - if it is not zero. If it was zero,
    then access is not permitted.

    (3) When the cache has finished with the data, it calls fscache_unuse_cookie()
    to decrement it. This does a wake-up on it if it reaches 0.

    (4) __fscache_relinquish_cookie() decrements n_active and then waits for it to
    reach 0. The initialisation to 1 in step (1) ensures that we only get
    wake ups when we're trying to get rid of the cookie.

    This leaves __fscache_relinquish_cookie() a lot simpler.
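
    A hedged sketch of how such a use/unuse counter can be implemented
    (simplified, and using the generic wait_var_event()/wake_up_var() helpers
    for brevity rather than whatever fscache actually waits on; the example_*
    function names are illustrative):

        #include <linux/atomic.h>
        #include <linux/wait_bit.h>

        /* The cache calls this before touching netfs data; it fails once
         * relinquishment has begun, i.e. once n_active has hit zero. */
        static bool example_use_cookie(struct fscache_cookie *cookie)
        {
                return atomic_inc_not_zero(&cookie->n_active) != 0;
        }

        /* The cache calls this when it is done; wake the relinquisher on
         * the final decrement. */
        static void example_unuse_cookie(struct fscache_cookie *cookie)
        {
                if (atomic_dec_and_test(&cookie->n_active))
                        wake_up_var(&cookie->n_active);
        }

        /* __fscache_relinquish_cookie() drops the initial count of 1 and,
         * if users remain, waits for the last of them to go away. */
        static void example_relinquish_wait(struct fscache_cookie *cookie)
        {
                if (!atomic_dec_and_test(&cookie->n_active))
                        wait_var_event(&cookie->n_active,
                                       !atomic_read(&cookie->n_active));
        }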

    ***
    This fixes a problem in the current code whereby if fscache_invalidate() is
    followed sufficiently quickly by fscache_relinquish_cookie() then it is
    possible for __fscache_relinquish_cookie() to have detached the cookie from the
    object and cleared the pointer before a thread is dispatched to process the
    invalidation state in the object state machine.

    Since the pending write clearance was deferred to the invalidation state to
    make it asynchronous, we need to either wait in relinquishment for the stores
    tree to be cleared in the invalidation state or we need to handle the clearance
    in relinquishment.

    Further, if the relinquishment code does clear the tree, then the invalidation
    state needs to make the clearance contingent on still having the cookie to hand
    (since that's where the tree is rooted) and we have to prevent the cookie from
    disappearing for the duration.

    This can lead to an oops like the following:

    BUG: unable to handle kernel NULL pointer dereference at 000000000000000c
    ...
    RIP: 0010:[] _spin_lock+0xe/0x30
    ...
    CR2: 000000000000000c ...
    ...
    Process kslowd002 (...)
    ....
    Call Trace:
    [] fscache_invalidate_writes+0x38/0xd0 [fscache]
    [] ? __switch_to+0xd0/0x320
    [] ? find_busiest_queue+0x69/0x150
    [] ? slow_work_enqueue+0x104/0x180
    [] fscache_object_slow_work_execute+0x5e3/0x9d0 [fscache]
    [] ? bit_waitqueue+0x17/0xd0
    [] slow_work_execute+0x233/0x310
    [] slow_work_thread+0x205/0x360
    [] ? autoremove_wake_function+0x0/0x40
    [] ? slow_work_thread+0x0/0x360
    [] kthread+0x96/0xa0
    [] child_rip+0xa/0x20
    [] ? kthread+0x0/0xa0
    [] ? child_rip+0x0/0x20

    The parameter to fscache_invalidate_writes() was object->cookie, which was NULL.

    Signed-off-by: David Howells
    Tested-By: Milosz Tanski
    Acked-by: Jeff Layton

    David Howells
     
  • Fix object state machine to have separate work and wait states as that makes
    it easier to envision.

    There are now three kinds of state:

    (1) Work state. This is an execution state. No event processing is performed
    by a work state. The function attached to a work state returns a pointer
    indicating the next state to which the OSM should transition. Returning
    NO_TRANSIT repeats the current state, but goes back to the scheduler
    first.

    (2) Wait state. This is an event processing state. No execution is
    performed by a wait state. Wait states are just tables of "if event X
    occurs, clear it and transition to state Y". The dispatcher returns to
    the scheduler if none of the events in which the wait state has an
    interest are currently pending.

    (3) Out-of-band state. This is a special work state. Transitions to normal
    states can be overridden when an unexpected event occurs (eg. I/O error).
    Instead the dispatcher disables and clears the OOB event and transits to
    the specified work state. This then acts as an ordinary work state,
    though object->state points to the overridden destination. Returning
    NO_TRANSIT resumes the overridden transition.
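
    A simplified illustration of the shape such a state table can take - work
    states carry a handler that returns the next state, wait states carry an
    event/destination table (all names here are invented, not the fscache
    definitions):

        struct osm_state;
        struct my_object;

        /* A work state's handler returns the next state, or NO_TRANSIT to
         * stay in the current state and go back to the scheduler. */
        typedef const struct osm_state *(*osm_work_fn)(struct my_object *obj,
                                                       int event);

        struct osm_transition {
                unsigned long            events;  /* events this entry handles */
                const struct osm_state  *to;      /* destination state */
        };

        struct osm_state {
                const char                   *name;
                osm_work_fn                   work;         /* work/OOB states */
                const struct osm_transition  *transitions;  /* wait states */
        };

        #define NO_TRANSIT ((const struct osm_state *)NULL)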

    In addition, the states have names in their definitions, so there's no need for
    tables of state names. Further, the EV_REQUEUE event is no longer necessary as
    that is automatic for work states.

    Since the states are now separate structs rather than values in an enum, it's
    not possible to use comparisons other than (non-)equality between them, so use
    some object->flags to indicate what phase an object is in.

    The EV_RELEASE, EV_RETIRE and EV_WITHDRAW events have been squished into one
    (EV_KILL). An object flag now carries the information about retirement.

    Similarly, the RELEASING, RECYCLING and WITHDRAWING states have been merged
    into a KILL_OBJECT state and additional states have been added for handling
    waiting dependent objects (JUMPSTART_DEPS and KILL_DEPENDENTS).

    A state has also been added for synchronising with parent object initialisation
    (WAIT_FOR_PARENT) and another for initiating look up (PARENT_READY).

    Signed-off-by: David Howells
    Tested-By: Milosz Tanski
    Acked-by: Jeff Layton

    David Howells
     

23 Jul, 2010

1 commit

  • Make fscache object state transition callbacks use workqueue instead
    of slow-work. New dedicated unbound CPU workqueue fscache_object_wq
    is created. get/put callbacks are renamed and modified to take
    @object and called directly from the enqueue wrapper and the work
    function. While at it, make all open-coded instances of get/put use
    fscache_get/put_object().

    * Unbound workqueue is used.

    * work_busy() output is printed instead of slow-work flags in object
    debugging outputs. They mean basically the same thing bit-for-bit.

    * sysctl fscache.object_max_active added to control concurrency. The
    default value is nr_cpus clamped between 4 and
    WQ_UNBOUND_MAX_ACTIVE.

    * slow_work_sleep_till_thread_needed() is replaced with fscache
    private implementation fscache_object_sleep_till_congested() which
    waits on fscache_object_wq congestion.

    * debugfs support is dropped for now. Tracing API based debug
    facility is planned to be added.
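
    A hedged sketch of the workqueue setup described above (the clamping of
    max_active follows the changelog; the workqueue and max_active names come
    from the description, the init function is illustrative, and the
    fscache.object_max_active sysctl registration is not shown):

        #include <linux/cpumask.h>
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/workqueue.h>

        struct workqueue_struct *fscache_object_wq;
        unsigned int fscache_object_max_active;

        static int __init example_init_object_wq(void)
        {
                /* Default concurrency: nr_cpus clamped to
                 * [4, WQ_UNBOUND_MAX_ACTIVE]. */
                fscache_object_max_active =
                        clamp_val(nr_cpu_ids, 4, WQ_UNBOUND_MAX_ACTIVE);

                /* Dedicated unbound workqueue for object state transitions. */
                fscache_object_wq = alloc_workqueue("fscache_object",
                                                    WQ_UNBOUND,
                                                    fscache_object_max_active);
                return fscache_object_wq ? 0 : -ENOMEM;
        }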

    Signed-off-by: Tejun Heo
    Acked-by: David Howells

    Tejun Heo
     

30 Mar, 2010

1 commit

  • include cleanup: Update gfp.h and slab.h includes to prepare for breaking
    implicit slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include those
    headers directly instead of assuming availability. As this conversion
    needs to touch a large number of source files, the following script is
    used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there. ie. if only gfp is used,
    gfp.h, if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surrounding. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition while adding it to implementation .h or
    embedding .c file was more appropriate for others. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of
    the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

16 Dec, 2009

1 commit

  • Andrew Morton's compiler sees the following warning in FS-Cache:

    fs/fscache/object-list.c: In function 'fscache_objlist_lookup':
    fs/fscache/object-list.c:94: warning: 'obj' may be used uninitialized in this function

    which my compiler doesn't. This is a false positive as obj can only be
    used in the comparison against minobj if minobj has been set to something
    other than NULL, but for that to happen, obj has to be first set to
    something.

    Deal with this by preclearing obj too.

    Reported-by: Andrew Morton
    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     

20 Nov, 2009

1 commit

  • Allow the current state of all fscache objects to be dumped by doing:

    cat /proc/fs/fscache/objects

    By default, all objects and all fields will be shown. This can be restricted
    by adding a suitable key to one of the caller's keyrings (such as the session
    keyring):

    keyctl add user fscache:objlist "" @s

    The letters are:

    K Show hexdump of object key (don't show if not given)
    A Show hexdump of object aux data (don't show if not given)

    And paired restrictions:

    C Show objects that have a cookie
    c Show objects that don't have a cookie
    B Show objects that are busy
    b Show objects that aren't busy
    W Show objects that have pending writes
    w Show objects that don't have pending writes
    R Show objects that have outstanding reads
    r Show objects that don't have outstanding reads
    S Show objects that have slow work queued
    s Show objects that don't have slow work queued

    If neither side of a restriction pair is given, then both are implied. For
    example:

    keyctl add user fscache:objlist KB @s

    shows objects that are busy, and lists their object keys, but does not dump
    their auxiliary data. It also implies "CcWwRrSs", but as 'B' is given, 'b' is
    not implied.

    Signed-off-by: David Howells

    David Howells