16 Jun, 2011

1 commit

  • There's no reason not to support cache flushing on external log devices.
    The only thing this really requires is flushing the data device first,
    both in fsync and in log commits. A side effect is that we also have to
    remove the barrier write test during mount, which has been superfluous
    since the new FLUSH+FUA code went in. Also use the chance to flush the
    RT subvolume write cache before the fsync commit, which is required
    for correct semantics.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
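
    A rough sketch of the ordering this describes, using the
    blkdev_issue_flush() interface of kernels from this era; the device
    pointer and function name are illustrative, not the actual XFS code:

    #include <linux/blkdev.h>

    /*
     * Hedged sketch: with an external log, the data device's volatile
     * write cache must be flushed before the log write goes out, so the
     * log never refers to data the disk has not yet made stable.
     */
    static int example_external_log_commit(struct block_device *datadev)
    {
            int error;

            /* 1. flush the data device's write cache */
            error = blkdev_issue_flush(datadev, GFP_NOFS, NULL);
            if (error)
                    return error;

            /* 2. the log write itself then goes out with FLUSH+FUA */
            return 0;
    }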
     

27 May, 2011

2 commits

  • Tell the filesystem if we just updated the timestamps (I_DIRTY_SYNC) or
    anything else, so that the filesystem can track internally if it
    needs to push out a transaction for fdatasync or not.

    This is just the prototype change with no user for it yet. I plan
    to push large XFS changes for the next merge window, and getting
    this trivial infrastructure in this window would help a lot to avoid
    tree interdependencies.

    Also remove incorrect comments that ->dirty_inode can't block. That
    has been changed a long time ago, and many implementations rely on it.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
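
    A minimal sketch of how a filesystem could use the new flags argument
    to ->dirty_inode(); the function name and behaviour here are
    illustrative, not XFS's eventual implementation:

    #include <linux/fs.h>

    /* Illustrative: note whether only timestamps were dirtied. */
    static void example_dirty_inode(struct inode *inode, int flags)
    {
            if (flags == I_DIRTY_SYNC) {
                    /*
                     * Timestamp-only update: fdatasync() may not need to
                     * force out a transaction for this inode.
                     */
                    return;
            }
            /* otherwise record a full inode change as usual */
    }

    static const struct super_operations example_sops = {
            .dirty_inode    = example_dirty_inode,
    };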
     
  • * 'for-linus' of git://oss.sgi.com/xfs/xfs:
    xfs: correctly decrement the extent buffer index in xfs_bmap_del_extent
    xfs: check for valid indices in xfs_iext_get_ext and xfs_iext_idx_to_irec
    xfs: fix up asserts in xfs_iflush_fork
    xfs: do not do pointer arithmetic on extent records
    xfs: do not use unchecked extent indices in xfs_bunmapi
    xfs: do not use unchecked extent indices in xfs_bmapi
    xfs: do not use unchecked extent indices in xfs_bmap_add_extent_*
    xfs: remove if_lastex
    xfs: remove the unused XFS_BMAPI_RSVBLOCKS flag
    xfs: do not discard alloc btree blocks
    xfs: add online discard support

    Linus Torvalds
     

25 May, 2011

2 commits

  • Change each shrinker's API by consolidating the existing parameters into
    a shrink_control struct. This will simplify adding any further features
    without having to touch every shrinker.

    [akpm@linux-foundation.org: fix build]
    [akpm@linux-foundation.org: fix warning]
    [kosaki.motohiro@jp.fujitsu.com: fix up new shrinker API]
    [akpm@linux-foundation.org: fix xfs warning]
    [akpm@linux-foundation.org: update gfs2]
    Signed-off-by: Ying Han
    Cc: KOSAKI Motohiro
    Cc: Minchan Kim
    Acked-by: Pavel Emelyanov
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Acked-by: Rik van Riel
    Cc: Johannes Weiner
    Cc: Hugh Dickins
    Cc: Dave Hansen
    Cc: Steven Whitehouse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ying Han
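
    For reference, the consolidated callback shape after this change looks
    roughly like the following; the names are placeholders:

    #include <linux/mm.h>

    /* The old (nr_to_scan, gfp_mask) arguments now arrive in one struct. */
    static int example_shrink(struct shrinker *shrink,
                              struct shrink_control *sc)
    {
            if (sc->nr_to_scan) {
                    /* free up to sc->nr_to_scan objects, honouring
                     * sc->gfp_mask */
            }
            /* return an estimate of the reclaimable objects remaining */
            return 0;
    }

    static struct shrinker example_shrinker = {
            .shrink = example_shrink,
            .seeks  = DEFAULT_SEEKS,
    };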
     
  • Now that we have reliable tracking of deleted extents in a
    transaction, we can easily implement "online" discard support
    which calls blkdev_issue_discard once a transaction commits.

    The actual discard is a two stage operation as we first have
    to mark the busy extent as not available for reuse before we
    can start the actual discard. Note that we don't bother
    supporting discard for the non-delaylog mode.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
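
    A hedged sketch of the core call involved; the names and sector
    arithmetic are simplified stand-ins for the real per-AG busy extent
    walk:

    #include <linux/blkdev.h>

    /* Discard one freed extent once its freeing transaction has committed. */
    static void example_discard_extent(struct block_device *bdev,
                                       sector_t start, sector_t nr_sectors)
    {
            int error;

            error = blkdev_issue_discard(bdev, start, nr_sectors,
                                         GFP_NOFS, 0);
            if (error && error != -EOPNOTSUPP)
                    pr_warn("discard failed, error %d\n", error);
    }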
     

20 May, 2011

4 commits

  • When we free a vmapped buffer, we need to ensure the vmap address
    and length we free is the same as when it was allocated. In various
    places in the log code we change the memory the buffer is pointing
    to before issuing IO, but we never reset the buffer to point back to
    its original memory (or no memory, if that is the case for the
    buffer).

    As a result, when we free the buffer it points to memory that is
    owned by something else and attempts to unmap and free it. Because
    the range does not match any known mapped range, it can trigger
    BUG_ON() traps in the vmap code, and potentially corrupt the vmap
    area tracking.

    Fix this by always resetting these buffers to their original state
    before freeing them.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
     
  • When the underlying inode buffer is locked and xfs_sync_inode_attr()
    is doing a non-blocking flush, xfs_iflush() can return EAGAIN. When
    this happens, clear the error rather than returning it to
    xfs_inode_ag_walk(), as returning EAGAIN will result in the AG walk
    delaying for a short while and trying again. This can result in
    background walks getting stuck on the one AG until the inode buffer is
    unlocked by some other means.

    This behaviour was noticed when analysing event traces followed by
    code inspection and verification of the fix via further traces.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
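
    A hedged sketch of the shape of the fix, using the names mentioned in
    the entry above and assuming the XFS-internal declarations; this is
    not the literal diff:

    static int example_sync_inode_attr(struct xfs_inode *ip, int flags)
    {
            int error = xfs_iflush(ip, flags);

            /*
             * A locked inode buffer during a non-blocking flush is
             * expected; don't report EAGAIN back to the AG walk, or it
             * will stall on this AG.  The next background pass retries.
             */
            if (error == EAGAIN && !(flags & SYNC_WAIT))
                    error = 0;
            return error;
    }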
     
  • The workqueue initialisation function is called twice when
    initialising the XFS subsystem. Remove the second initialisation
    call.

    Signed-off-by: Dave Chinner
    Signed-off-by: Alex Elder

    Dave Chinner
     
  • xfs_alert_tag() can be defined using xfs_alert(), and thereby avoid
    using xfs_printk() altogether. This is the only remaining use of
    xfs_printk(), so changing it this way means xfs_printk() can simply
    be eliminated.

    Also add format checking to the non-debug inline function xfs_debug.
    Miscellaneous function prototype argument alignment.

    (Updated to delete the definition of xfs_printk(), which is
    no longer used or needed.)

    Signed-off-by: Joe Perches
    Signed-off-by: Alex Elder

    Joe Perches
     

10 May, 2011

1 commit

  • On a 32 bit highmem PowerPC machine, the XFS inode cache was growing
    without bound and exhausting low memory causing the OOM killer to be
    triggered. After some effort, the problem was reproduced on a 32 bit
    x86 highmem machine.

    The problem is that the per-ag inode reclaim index cursor was not
    getting reset to the start of the AG if the radix tree tag lookup
    found no more reclaimable inodes. Hence every further reclaim
    attempt started at the same index beyond where any reclaimable
    inodes lay, and no further background reclaim ever occurred from the
    AG.

    Without background inode reclaim the VM driven cache shrinker
    simply cannot keep up with cache growth, and OOM is the result.

    While the change that exposed the problem was the conversion of the
    inode reclaim to use work queues for background reclaim, it was not
    the cause of the bug. The bug was introduced when the cursor code
    was added, just waiting for some weird configuration to strike....

    Signed-off-by: Dave Chinner
    Tested-By: Christian Kujau
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
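
    A hedged sketch of the cursor behaviour being fixed; the real code
    uses the per-AG reclaim tag and inode batching, so the names and
    types here are simplified:

    #include <linux/kernel.h>
    #include <linux/radix-tree.h>

    static unsigned int example_reclaim_pass(struct radix_tree_root *root,
                                             unsigned long *cursor,
                                             unsigned int tag)
    {
            void *batch[32];
            unsigned int nr;

            nr = radix_tree_gang_lookup_tag(root, batch, *cursor,
                                            ARRAY_SIZE(batch), tag);
            if (!nr) {
                    *cursor = 0;    /* the fix: wrap back to the AG start */
                    return 0;
            }

            /* ... reclaim batch[] and move *cursor past the last inode ... */
            return nr;
    }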
     

29 Apr, 2011

5 commits

  • XFS_IOC_ZERO_RANGE uses struct xfs_flock64, and thus requires argument
    translation for 32-bit binaries on x86. Add the required
    XFS_IOC_ZERO_RANGE_32 define and add it to the list of commands that
    require xfs_flock64 translation.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • xfs_fsblock_t may be a 32-bit type if XFS_BIG_BLKNOS is not set, so make
    sure to cast a value of this type to an unsigned long long before using
    the ll printk qualifier.

    Reported-by: Randy Dunlap
    Signed-off-by: Christoph Hellwig
    Reviewed-by: Dave Chinner
    Signed-off-by: Alex Elder

    Christoph Hellwig
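
    The fix is the standard cast for a type whose width depends on the
    configuration; a small illustrative example with a stand-in typedef:

    #include <linux/kernel.h>

    typedef unsigned int example_fsblock_t;     /* may be 32 bits wide */

    static void example_report_block(example_fsblock_t bno)
    {
            /* cast explicitly before using the ll qualifier */
            printk(KERN_DEBUG "first block 0x%llx\n",
                   (unsigned long long)bno);
    }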
     
  • Instead of finding the per-ag structure and then taking and releasing
    the pagb_lock for every single busy extent completed, sort the list of
    busy extents and only switch between AGs where necessary. This becomes
    especially important with the online discard support, which will hit
    this lock more often.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
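
    A hedged sketch of the sorting step using the kernel's list_sort();
    the structure and comparison key are illustrative:

    #include <linux/kernel.h>
    #include <linux/list.h>
    #include <linux/list_sort.h>
    #include <linux/types.h>

    struct example_busy_extent {
            struct list_head        list;
            u32                     agno;   /* allocation group */
            u64                     bno;    /* block within the AG */
    };

    /* Sort by AG so the per-AG pagb_lock is only taken once per group. */
    static int example_busy_cmp(void *priv, struct list_head *a,
                                struct list_head *b)
    {
            struct example_busy_extent *ba =
                    container_of(a, struct example_busy_extent, list);
            struct example_busy_extent *bb =
                    container_of(b, struct example_busy_extent, list);

            if (ba->agno != bb->agno)
                    return ba->agno < bb->agno ? -1 : 1;
            return 0;
    }

    /* usage: list_sort(NULL, &busy_extent_list, example_busy_cmp); */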
     
  • Update the extent tree in case we have to reuse a busy extent, so that it
    is always kept up to date. This is done by replacing the busy list
    searches with a new xfs_alloc_busy_reuse helper, which updates the busy
    extent tree in case of a reuse. This allows us to reuse metadata extents
    unconditionally, and thus avoid log forces, especially for allocation
    btree blocks.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • Every time we reallocate a busy extent, we cause a synchronous log force
    to occur to ensure the freeing transaction is on disk before we continue
    and use the newly allocated extent. This is extremely sub-optimal as we
    have to mark every transaction with blocks that get reused as synchronous.

    Instead of searching the busy extent list after deciding on the extent to
    allocate, check each candidate extent during the allocation decisions as
    to whether they are in the busy list. If they are in the busy list, we
    trim the busy range out of the extent we have found and determine if that
    trimmed range is still OK for allocation. In many cases, this check can
    be incorporated into the allocation extent alignment code which already
    does trimming of the found extent before determining if it is a valid
    candidate for allocation.

    Based on earlier patches from Dave Chinner.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
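
    A hedged, simplified sketch of the trimming idea described above;
    overlap handling is reduced to the front/back cases and the types are
    plain integers rather than the XFS block types:

    #include <linux/types.h>

    static bool example_trim_busy(u64 *bno, u64 *len,
                                  u64 busy_bno, u64 busy_len, u64 minlen)
    {
            u64 end = *bno + *len;
            u64 busy_end = busy_bno + busy_len;

            if (busy_end <= *bno || busy_bno >= end)
                    return *len >= minlen;          /* no overlap */

            if (busy_bno <= *bno) {
                    /* busy range covers the front: keep the tail */
                    *len = end > busy_end ? end - busy_end : 0;
                    *bno = busy_end;
            } else {
                    /* busy range covers the back (or middle): keep the front */
                    *len = busy_bno - *bno;
            }
            return *len >= minlen;
    }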
     

21 Apr, 2011

1 commit

  • Commit 957935dc ("xfs: fix xfs_debug warnings") broke the logic in
    __xfs_printk(). Instead of only printing one of two possible output
    strings based on whether the fs has a name or not, it outputs both.
    Fix it to only output one message again.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
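
    The corrected logic amounts to an if/else rather than two unconditional
    printk calls; a hedged sketch of the intended behaviour:

    #include <linux/kernel.h>

    /* Emit exactly one line, naming the filesystem only when it has one. */
    static void example_xfs_printk(const char *level, const char *fsname,
                                   struct va_format *vaf)
    {
            if (fsname)
                    printk("%sXFS (%s): %pV\n", level, fsname, vaf);
            else
                    printk("%sXFS: %pV\n", level, vaf);
    }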
     

12 Apr, 2011

2 commits

  • * 'for-linus' of git://oss.sgi.com/xfs/xfs:
    xfs: use proper interfaces for on-stack plugging
    xfs: fix xfs_debug warnings
    xfs: fix variable set but not used warnings
    xfs: convert log tail checking to a warning
    xfs: catch bad block numbers freeing extents.
    xfs: push the AIL from memory reclaim and periodic sync
    xfs: clean up code layout in xfs_trans_ail.c
    xfs: convert the xfsaild threads to a workqueue
    xfs: introduce background inode reclaim work
    xfs: convert ENOSPC inode flushing to use new syncd workqueue
    xfs: introduce a xfssyncd workqueue
    xfs: fix extent format buffer allocation size
    xfs: fix unreferenced var error in xfs_buf.c

    Also, applied patch from Tony Luck that fixes ia64:
    xfs_destroy_workqueues() should not be tagged with __exit
    in the branch before merging.

    Linus Torvalds
     
  • ia64 throws away .exit sections for the built-in CONFIG case, so routines
    that are used in other circumstances should not be tagged as __exit.

    Signed-off-by: Tony Luck
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder
    Signed-off-by: Linus Torvalds

    Luck, Tony
     

08 Apr, 2011

8 commits

  • Add proper blk_start_plug/blk_finish_plug pairs for the two places where
    we issue buffer I/O, and remove the blk_flush_plug in xfs_buf_lock and
    xfs_buf_iowait, given that context switches already flush the per-process
    plugging lists.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
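
    The on-stack plugging interface referred to here follows this pattern
    (the buffer submission itself is elided):

    #include <linux/blkdev.h>

    static void example_submit_buffers(void)
    {
            struct blk_plug plug;

            blk_start_plug(&plug);
            /* ... issue the buffer I/O; requests queue up per-task ... */
            blk_finish_plug(&plug);

            /*
             * No explicit flush is needed around blocking waits: a context
             * switch flushes the per-process plug list automatically.
             */
    }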
     
  • For a CONFIG_XFS_DEBUG=n build gcc complains about statements with no
    effect in xfs_debug:

    fs/xfs/quota/xfs_qm_syscalls.c: In function 'xfs_qm_scall_trunc_qfiles':
    fs/xfs/quota/xfs_qm_syscalls.c:291:3: warning: statement with no effect

    The reason for that is that the various new xfs message functions have a
    return value which is never used, and in the non-debug build the
    xfs_debug macro evaluates to a plain 0, which produces the above
    warning. This can be fixed by turning xfs_debug into an inline function
    instead of a macro, but in addition to that I've also changed all the
    message helpers to return void as we never use their return values.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Dave Chinner
    Signed-off-by: Alex Elder

    Christoph Hellwig
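
    A hedged sketch of the shape of the fix, not the exact XFS code: the
    non-debug variant becomes an empty inline returning void, which
    silences the warning while keeping printf-style format checking:

    #include <linux/kernel.h>

    struct example_mount;       /* stand-in for struct xfs_mount */

    #ifdef DEBUG
    #define example_debug(mp, fmt, ...) \
            printk(KERN_DEBUG "XFS: " fmt "\n", ##__VA_ARGS__)
    #else
    static inline __attribute__ ((format (printf, 2, 3)))
    void example_debug(const struct example_mount *mp, const char *fmt, ...)
    {
    }
    #endif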
     
  • GCC 4.6 now warns about variables that are set but not used. Fix the
    trivially fixable warnings of this sort.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • When we are short on memory, we want to expedite the cleaning of
    dirty objects. Hence when we run short on memory, we need to kick
    the AIL flushing into action to clean as many dirty objects as
    quickly as possible. To implement this, sample the lsn of the log
    item at the head of the AIL and use that as the push target for the
    AIL flush.

    Further, we keep dirty items in the AIL that are not tracked in any
    other way, so we can get objects sitting in the AIL that
    don't get written back until the AIL is pushed. Hence to get the
    filesystem to the idle state, we might need to push the AIL to flush
    out any remaining dirty objects sitting in the AIL. This requires
    the same push mechanism as the reclaim push.

    This patch also renames xfs_trans_ail_tail() to xfs_ail_min_lsn() to
    match the new xfs_ail_max_lsn() function introduced in this patch.
    Similarly for xfs_trans_ail_push -> xfs_ail_push.

    Signed-off-by: Dave Chinner
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • Similar to the xfssyncd, the per-filesystem xfsaild threads can be
    converted to a global workqueue and run periodically by delayed
    works. This makes sense for the AIL pushing because it uses
    variable timeouts depending on the work that needs to be done.

    By removing the xfsaild, we simplify the AIL pushing code and
    remove the need to spread the code to implement the threading
    and pushing across multiple files.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
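
    The delayed-work pattern this refers to looks roughly like this; the
    names are placeholders and the real code derives the timeout from the
    AIL state:

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    static struct workqueue_struct *example_ail_wq;
    static struct delayed_work example_ail_work;

    static void example_ail_push(struct work_struct *work)
    {
            /* ... push the AIL towards the current target lsn ... */

            /* re-arm with a timeout that depends on the remaining work */
            queue_delayed_work(example_ail_wq, &example_ail_work,
                               msecs_to_jiffies(50));
    }

    /*
     * Setup (e.g. at mount):
     *   example_ail_wq = alloc_workqueue("xfsaild", 0, 0);
     *   INIT_DELAYED_WORK(&example_ail_work, example_ail_push);
     *   queue_delayed_work(example_ail_wq, &example_ail_work, 0);
     */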
     
  • Background inode reclaim needs to run more frequently than the XFS
    syncd work is run, as 30s is too long between optimal reclaim runs.
    Add a new periodic work item to the xfs syncd workqueue to run a
    fast, non-blocking inode reclaim scan.

    Background inode reclaim is kicked by the act of marking inodes for
    reclaim. When an AG is first marked as having reclaimable inodes,
    the background reclaim work is kicked. It will continue to run
    periodically until it detects that there are no more reclaimable
    inodes. It will be kicked again when the first inode is queued for
    reclaim.

    To ensure shrinker-based inode reclaim throttles to the inode
    cleaning and reclaim rate but still reclaims inodes efficiently, make
    it kick the background inode reclaim so that when we are low on memory
    we are trying to reclaim inodes as efficiently as possible. This kick
    should not be necessary, but it will protect against failures to kick
    the background reclaim when inodes are first dirtied.

    To provide the rate throttling, make the shrinker pass do
    synchronous inode reclaim so that it blocks on inodes under IO. This
    means that the shrinker will reclaim inodes rather than just
    skipping over them, but it does not adversely affect the rate of
    reclaim because most dirty inodes are already under IO due to the
    background reclaim work the shrinker kicked.

    These two modifications solve one of the two OOM killer invocations
    Chris Mason reported recently when running a stress testing script.
    The particular workload trigger for the OOM killer invocation is
    where there are more threads than CPUs all unlinking files in an
    extremely memory constrained environment. Unlike other solutions,
    this one does not have a performance impact when memory is not
    constrained or the number of concurrent threads operating is low.

    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • One of the problems with the current inode flush at ENOSPC is that we
    queue a flush per ENOSPC event, regardless of how many are already
    queued. This can result in hundreds of queued flushes, most of
    which simply burn CPU scanning and do no real work. This simply slows
    down allocation at ENOSPC.

    We really only need one active flush at a time, and we can easily
    implement that via the new xfs_syncd_wq. All we need to do is queue
    a flush if one is not already active, then block waiting for the
    currently active flush to complete. The result is that we only ever
    have a single ENOSPC inode flush active at a time and this greatly
    reduces the overhead of ENOSPC processing.

    On my 2p test machine, this results in tests exercising ENOSPC
    conditions running significantly faster - 042 halves execution time,
    083 drops from 60s to 5s, etc - while not introducing test
    regressions.

    This allows us to remove the old xfssyncd threads and infrastructure
    as they are no longer used.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
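
    A hedged sketch of the "one active flush, everyone else waits for it"
    idea; the helper and work item names are made up:

    #include <linux/workqueue.h>

    static struct workqueue_struct *example_syncd_wq;
    static struct work_struct example_flush_work;
    /* assume INIT_WORK(&example_flush_work, ...) was done at init time */

    static void example_flush_inodes(void)
    {
            /*
             * queue_work() does nothing if the work is already pending, so
             * at most one flush runs; everyone else just waits for it.
             */
            queue_work(example_syncd_wq, &example_flush_work);
            flush_work(&example_flush_work);
    }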
     
  • All of the work xfssyncd does is background functionality. There is
    no need for a thread per filesystem to do this work - it can all be
    managed by a global workqueue now that workqueues manage concurrency
    effectively.

    Introduce a new global xfssyncd workqueue, and convert the periodic
    work to use this new functionality. To do this, use a delayed work
    construct to schedule the next running of the periodic sync work
    for the filesystem. When the sync work is complete, queue a new
    delayed work for the next running of the sync work.

    For laptop mode, we wait for completion of the sync work, so ensure
    that the sync work queuing interface can flush and wait for work to
    complete, to enable the workqueue infrastructure to replace the
    sequence number and wakeup mechanism currently used.

    Because the sync work does non-trivial amounts of work, mark the
    new work queue as CPU intensive.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
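
    A hedged sketch of the self-rescheduling periodic sync work and the
    CPU-intensive workqueue it runs on; the names and max_active value are
    illustrative:

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>
    #include <linux/errno.h>

    static struct workqueue_struct *example_syncd_wq;
    static struct delayed_work example_sync_work;

    static void example_sync_worker(struct work_struct *work)
    {
            /* ... periodic filesystem sync ... */

            /* queue the next run only once this one has completed */
            queue_delayed_work(example_syncd_wq, &example_sync_work, 30 * HZ);
    }

    /* called from module/filesystem init */
    static int example_init(void)
    {
            example_syncd_wq = alloc_workqueue("xfssyncd",
                                               WQ_CPU_INTENSIVE, 8);
            if (!example_syncd_wq)
                    return -ENOMEM;

            INIT_DELAYED_WORK(&example_sync_work, example_sync_worker);
            queue_delayed_work(example_syncd_wq, &example_sync_work, 30 * HZ);
            return 0;
    }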
     

29 Mar, 2011

1 commit

  • * 'for-linus' of git://oss.sgi.com/xfs/xfs:
    xfs: stop using the page cache to back the buffer cache
    xfs: register the inode cache shrinker before quotachecks
    xfs: xfs_trans_read_buf() should return an error on failure
    xfs: introduce inode cluster buffer trylocks for xfs_iflush
    vmap: flush vmap aliases when mapping fails
    xfs: preallocation transactions do not need to be synchronous

    Fix up trivial conflicts in fs/xfs/linux-2.6/xfs_buf.c due to plug removal.

    Linus Torvalds
     

26 Mar, 2011

5 commits

  • Now that the buffer cache has its own LRU, we do not need to use
    the page cache to provide persistent caching and reclaim
    infrastructure. Convert the buffer cache to use alloc_pages()
    instead of the page cache. This will remove all the overhead of page
    cache management from setup and teardown of the buffers, as well as
    needing to mark pages accessed as we find buffers in the buffer
    cache.

    By avoiding the page cache, we also remove the need to keep state in
    the page_private(page) field for persistent storage across buffer
    free/buffer rebuild and so all that code can be removed. This also
    fixes the long-standing problem of not having enough bits in the
    page_private field to track all the state needed for a 512
    sector/64k page setup.

    It also removes the need for page locking during reads as the pages
    are unique to the buffer and nobody else will be attempting to
    access them.

    Finally, it removes the buftarg address space lock as a point of
    global contention on workloads that allocate and free buffers
    quickly such as when creating or removing large numbers of inodes in
    parallel. This removes the 16TB limit on filesystem size on 32 bit
    machines as the page index (32 bit) is no longer used for lookups
    of metadata buffers - the buffer cache is now solely indexed by disk
    address which is stored in a 64 bit field in the buffer.

    Signed-off-by: Dave Chinner
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • During mount, we can do a quotacheck that involves a bulkstat pass
    on all inodes. If there are more inodes in the filesystem than can
    be held in memory, we require the inode cache shrinker to run to
    ensure that we don't run out of memory.

    Unfortunately, the inode cache shrinker is not registered until we
    get to the end of the superblock setup process, which is after a
    quotacheck is run if it is needed. Hence we need to register the
    inode cache shrinker earlier in the mount process so that we don't
    OOM during mount. This requires that we also initialise the syncd
    work before we register the shrinker, so we need to juggle that
    around as well.

    While there, make sure that we have set up the block sizes in the
    VFS superblock correctly before the quotacheck is run so that any
    inodes that are cached as a result of the quotacheck have their
    block size fields set up correctly.

    Cc: stable@kernel.org
    Signed-off-by: Dave Chinner
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • There is an ABBA deadlock between synchronous inode flushing in
    xfs_reclaim_inode and xfs_ifree_cluster. xfs_ifree_cluster locks the
    buffer, then takes inode ilocks, whilst synchronous reclaim takes
    the ilock followed by the buffer lock in xfs_iflush().

    To avoid this deadlock, separate the inode cluster buffer locking
    semantics from the synchronous inode flush semantics, allowing
    callers to attempt to lock the buffer but still issue synchronous IO
    if it can get the buffer. This requires xfs_iflush() calls that
    currently use non-blocking semantics to pass SYNC_TRYLOCK rather
    than 0 as the flags parameter.

    This allows xfs_reclaim_inode to avoid the deadlock on the buffer
    lock and detect the failure so that it can drop the inode ilock and
    restart the reclaim attempt on the inode. This allows
    xfs_ifree_cluster to obtain the inode lock, mark the inode stale and
    release it and hence defuse the deadlock situation. It also has the
    pleasant side effect of avoiding IO in xfs_reclaim_inode when it
    tries to next reclaim the inode as it is now marked stale.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • On 32 bit systems, vmalloc space is limited and XFS can chew through
    it quickly as the vmalloc space is lazily freed. This can result in
    failure to map buffers, even when there is apparently large amounts
    of vmalloc space available. Hence, if we fail to map a buffer, purge
    the aliases that have not yet been freed to hopefully free up enough
    vmalloc space to allow a retry to succeed.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
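
    The retry pattern described here, sketched with the generic vmap
    interfaces and simplified from the real xfs_buf mapping code:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    static void *example_map_pages(struct page **pages, unsigned int count)
    {
            void *addr;

            addr = vm_map_ram(pages, count, -1, PAGE_KERNEL);
            if (!addr) {
                    /* purge lazily-freed aliases, then try once more */
                    vm_unmap_aliases();
                    addr = vm_map_ram(pages, count, -1, PAGE_KERNEL);
            }
            return addr;
    }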
     
  • Preallocation and hole punch transactions are currently synchronous
    and this is causing performance problems in some cases. The
    transactions don't need to be synchronous as we don't need to
    guarantee the preallocation is persistent on disk until an
    fdatasync, fsync, or sync operation occurs. Only if the file is
    opened O_SYNC or O_DSYNC should the transaction be issued
    synchronously.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
     

25 Mar, 2011

1 commit

  • * 'for-2.6.39/core' of git://git.kernel.dk/linux-2.6-block: (65 commits)
    Documentation/iostats.txt: bit-size reference etc.
    cfq-iosched: removing unnecessary think time checking
    cfq-iosched: Don't clear queue stats when preempt.
    blk-throttle: Reset group slice when limits are changed
    blk-cgroup: Only give unaccounted_time under debug
    cfq-iosched: Don't set active queue in preempt
    block: fix non-atomic access to genhd inflight structures
    block: attempt to merge with existing requests on plug flush
    block: NULL dereference on error path in __blkdev_get()
    cfq-iosched: Don't update group weights when on service tree
    fs: assign sb->s_bdi to default_backing_dev_info if the bdi is going away
    block: Require subsystems to explicitly allocate bio_set integrity mempool
    jbd2: finish conversion from WRITE_SYNC_PLUG to WRITE_SYNC and explicit plugging
    jbd: finish conversion from WRITE_SYNC_PLUG to WRITE_SYNC and explicit plugging
    fs: make fsync_buffers_list() plug
    mm: make generic_writepages() use plugging
    blk-cgroup: Add unaccounted time to timeslice_used.
    block: fixup plugging stubs for !CONFIG_BLOCK
    block: remove obsolete comments for blkdev_issue_zeroout.
    blktrace: Use rq->cmd_flags directly in blk_add_trace_rq.
    ...

    Fix up conflicts in fs/{aio.c,super.c}

    Linus Torvalds
     

22 Mar, 2011

1 commit

  • * 'for-linus' of git://oss.sgi.com/xfs/xfs: (23 commits)
    xfs: don't name variables "panic"
    xfs: factor agf counter updates into a helper
    xfs: clean up the xfs_alloc_compute_aligned calling convention
    xfs: kill support/debug.[ch]
    xfs: Convert remaining cmn_err() callers to new API
    xfs: convert the quota debug prints to new API
    xfs: rename xfs_cmn_err_fsblock_zero()
    xfs: convert xfs_fs_cmn_err to new error logging API
    xfs: kill xfs_fs_mount_cmn_err() macro
    xfs: kill xfs_fs_repair_cmn_err() macro
    xfs: convert xfs_cmn_err to xfs_alert_tag
    xfs: Convert xlog_warn to new logging interface
    xfs: Convert linux-2.6/ files to new logging interface
    xfs: introduce new logging API.
    xfs: zero proper structure size for geometry calls
    xfs: enable delaylog by default
    xfs: more sensible inode refcounting for ialloc
    xfs: stop using xfs_trans_iget in the RT allocator
    xfs: check if device support discard in xfs_ioc_trim()
    xfs: prevent leaking uninitialized stack memory in FSGEOMETRY_V1
    ...

    Linus Torvalds
     

17 Mar, 2011

1 commit

  • …s/security-testing-2.6

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6: (33 commits)
    AppArmor: kill unused macros in lsm.c
    AppArmor: cleanup generated files correctly
    KEYS: Add an iovec version of KEYCTL_INSTANTIATE
    KEYS: Add a new keyctl op to reject a key with a specified error code
    KEYS: Add a key type op to permit the key description to be vetted
    KEYS: Add an RCU payload dereference macro
    AppArmor: Cleanup make file to remove cruft and make it easier to read
    SELinux: implement the new sb_remount LSM hook
    LSM: Pass -o remount options to the LSM
    SELinux: Compute SID for the newly created socket
    SELinux: Socket retains creator role and MLS attribute
    SELinux: Auto-generate security_is_socket_class
    TOMOYO: Fix memory leak upon file open.
    Revert "selinux: simplify ioctl checking"
    selinux: drop unused packet flow permissions
    selinux: Fix packet forwarding checks on postrouting
    selinux: Fix wrong checks for selinux_policycap_netpeer
    selinux: Fix check for xfrm selinux context algorithm
    ima: remove unnecessary call to ima_must_measure
    IMA: remove IMA imbalance checking
    ...

    Linus Torvalds
     

16 Mar, 2011

1 commit

  • * 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
    workqueue: fix build failure introduced by s/freezeable/freezable/
    workqueue: add system_freezeable_wq
    rds/ib: use system_wq instead of rds_ib_fmr_wq
    net/9p: replace p9_poll_task with a work
    net/9p: use system_wq instead of p9_mux_wq
    xfs: convert to alloc_workqueue()
    reiserfs: make commit_wq use the default concurrency level
    ocfs2: use system_wq instead of ocfs2_quota_wq
    ext4: convert to alloc_workqueue()
    scsi/scsi_tgt_lib: scsi_tgtd isn't used in memory reclaim path
    scsi/be2iscsi,qla2xxx: convert to alloc_workqueue()
    misc/iwmc3200top: use system_wq instead of dedicated workqueues
    i2o: use alloc_workqueue() instead of create_workqueue()
    acpi: kacpi*_wq don't need WQ_MEM_RECLAIM
    fs/aio: aio_wq isn't used in memory reclaim path
    input/tps6507x-ts: use system_wq instead of dedicated workqueue
    cpufreq: use system_wq instead of dedicated workqueues
    wireless/ipw2x00: use system_wq instead of dedicated workqueues
    arm/omap: use system_wq in mailbox
    workqueue: use WQ_MEM_RECLAIM instead of WQ_RESCUER

    Linus Torvalds
     

14 Mar, 2011

1 commit

  • The exportfs encode handle function should return the minimum required
    handle size. This helps the user find out the handle size by passing a
    0 handle size in the first step and then redoing the call with the
    returned handle size value.

    Acked-by: Serge Hallyn
    Signed-off-by: Aneesh Kumar K.V
    Signed-off-by: Al Viro

    Aneesh Kumar K.V
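
    From userspace, the two-step pattern this enables looks like the
    following sketch, using the name_to_handle_at() syscall introduced
    around the same time (assumes a libc wrapper is available):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
            struct file_handle tmp = { .handle_bytes = 0 };
            struct file_handle *fh;
            int mount_id;

            if (argc < 2)
                    return 1;

            /* step 1: a zero-sized handle makes the kernel report the size */
            if (name_to_handle_at(AT_FDCWD, argv[1], &tmp, &mount_id, 0) != -1
                || errno != EOVERFLOW) {
                    fprintf(stderr, "expected EOVERFLOW with the size\n");
                    return 1;
            }

            /* step 2: redo the call with the returned handle size */
            fh = malloc(sizeof(*fh) + tmp.handle_bytes);
            if (!fh)
                    return 1;
            fh->handle_bytes = tmp.handle_bytes;
            if (name_to_handle_at(AT_FDCWD, argv[1], fh, &mount_id, 0) == -1) {
                    perror("name_to_handle_at");
                    return 1;
            }
            printf("handle is %u bytes\n", fh->handle_bytes);
            free(fh);
            return 0;
    }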