04 Jan, 2012

1 commit

  • Seeing that just about every destructor got that INIT_LIST_HEAD() copied into
    it, there is no point whatsoever keeping this INIT_LIST_HEAD in inode_init_once();
    the cost of moving it into inode_init_always() will be negligible for pipes
    and sockets, and negative for everything else. Not to mention the removal of
    boilerplate code from ->destroy_inode() instances...

    Signed-off-by: Al Viro
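
    A minimal sketch of the relocation, assuming the list in question is
    inode->i_dentry (the log entry itself does not name it):

    /* inode_init_once() runs once per slab object, so a list head
     * initialised here had to be re-initialised by hand in every
     * ->destroy_inode() instance before the object was freed. */
    static void inode_init_once(struct inode *inode)
    {
        memset(inode, 0, sizeof(*inode));
        /* INIT_LIST_HEAD(&inode->i_dentry) no longer lives here */
    }

    /* inode_init_always() runs on every allocation, so the list is
     * freshly initialised each time and destructors can stay clean. */
    int inode_init_always(struct super_block *sb, struct inode *inode)
    {
        /* ... other per-allocation initialisation ... */
        INIT_LIST_HEAD(&inode->i_dentry);
        return 0;
    }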


05 Nov, 2011

1 commit


03 Nov, 2011

1 commit

  • This commit adds an option to set the device block size used by Squashfs to 4K.

    By default Squashfs sets the device block size (sb_min_blocksize) to 1K,
    or to the smallest block size supported by the block device if that is
    larger. Because blocks are packed together and unaligned in Squashfs,
    this should reduce latency.

    This, however, gives poor performance on MTD NAND devices where
    the optimal I/O size is 4K (even though the devices can support
    smaller block sizes).

    Using a 4K device block size may also improve overall I/O
    performance for some file access patterns (e.g. sequential
    accesses of files in filesystem order) on all media.

    Signed-off-by: Phillip Lougher
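
    A hedged sketch of how such an option slots into superblock setup; the
    CONFIG_SQUASHFS_4K_DEVBLK_SIZE name and the surrounding code are
    assumptions based on the description above, not a quote of the patch:

    /* Kconfig-selected default device block size (option name assumed) */
    #ifdef CONFIG_SQUASHFS_4K_DEVBLK_SIZE
    #define SQUASHFS_DEVBLK_SIZE 4096    /* optimal I/O size on MTD NAND */
    #else
    #define SQUASHFS_DEVBLK_SIZE 1024    /* packed, unaligned blocks favour 1K */
    #endif

    static int squashfs_fill_super(struct super_block *sb, void *data, int silent)
    {
        struct squashfs_sb_info *msblk = sb->s_fs_info;

        /* sb_min_blocksize() raises the request to the device's minimum
         * supported block size if that minimum is larger than asked for */
        msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
        msblk->devblksize_log2 = ffz(~msblk->devblksize);
        return 0;
    }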


02 Nov, 2011

1 commit


28 Sep, 2011

1 commit

  • There are numerous broken references to Documentation files (in other
    Documentation files, in comments, etc.). These broken references are
    caused by typos in the references, and by renames or removals of the
    Documentation files. Some broken references are simply odd.

    Fix these broken references, sometimes by dropping the irrelevant text
    they were part of.

    Signed-off-by: Paul Bolle
    Signed-off-by: Jiri Kosina


26 Jul, 2011

1 commit


22 Jul, 2011

1 commit

  • Squashfs now supports XZ and LZO compression in addition to ZLIB.
    As such it no longer makes sense to always include ZLIB support.
    In particular, embedded systems may only use LZO or XZ compression, and
    the ability to exclude ZLIB support will reduce kernel size.

    Signed-off-by: Phillip Lougher
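
    One plausible shape for making each decompressor conditional; the
    option and symbol names below mirror the existing Squashfs naming and
    are assumptions, not quoted from the patch:

    static const struct squashfs_decompressor *decompressor[] = {
    #ifdef CONFIG_SQUASHFS_ZLIB
        &squashfs_zlib_comp_ops,    /* now compiled in only on request */
    #endif
    #ifdef CONFIG_SQUASHFS_LZO
        &squashfs_lzo_comp_ops,
    #endif
    #ifdef CONFIG_SQUASHFS_XZ
        &squashfs_xz_comp_ops,
    #endif
        &squashfs_unknown_comp_ops  /* rejects compiled-out formats at mount */
    };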


20 Jul, 2011

2 commits


30 May, 2011

1 commit


29 May, 2011

1 commit


27 May, 2011

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/pkl/squashfs-linus:
    Squashfs: update email address
    Squashfs: add extra sanity checks at mount time
    Squashfs: add sanity checks to fragment reading at mount time
    Squashfs: add sanity checks to lookup table reading at mount time
    Squashfs: add sanity checks to id reading at mount time
    Squashfs: add sanity checks to xattr reading at mount time
    Squashfs: reverse order of filesystem table reading
    Squashfs: move table allocation into squashfs_read_table()

    Linus Torvalds
     

26 May, 2011

8 commits


10 May, 2011

1 commit


31 Mar, 2011

1 commit


23 Mar, 2011

1 commit


16 Mar, 2011

1 commit

  • Handle the rare case where a directory metadata block is uncompressed and
    corrupted, leading to a kernel oops in directory scanning (memcpy).
    Normally corruption is detected at the decompression stage and dealt with
    then; however, this will not happen if:

    - metadata isn't compressed (users can optionally request no metadata
    compression), or
    - the compressed metadata block was larger than the original, in which
    case the uncompressed version was used, or
    - the data was corrupt after decompression

    This patch fixes this by adding some sanity checks against known maximum
    values.

    Signed-off-by: Phillip Lougher
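
    A sketch of the kind of check involved; the field and constant names
    follow the Squashfs on-disk directory entry layout, but the exact code
    is an assumption:

    size = le16_to_cpu(dire->size) + 1;

    /* A corrupt metadata block (uncompressed, or damaged after
     * decompression) can claim a name longer than the format allows;
     * reject it here rather than oops in the memcpy below. */
    if (size > SQUASHFS_NAME_LEN + 1)
        goto failed;

    memcpy(name, dire->name, size);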


01 Mar, 2011

5 commits


26 Jan, 2011

1 commit

  • Fix potential use of uninitialised variable caused by recent
    decompressor code optimisations.

    In zlib_uncompress (zlib_wrapper.c) we have

    int zlib_err, zlib_init = 0;
    ...
    do {
        ...
        if (avail == 0) {
            offset = 0;
            put_bh(bh[k++]);
            continue;
        }
        ...
        zlib_err = zlib_inflate(stream, Z_SYNC_FLUSH);
        ...
    } while (zlib_err == Z_OK);

    If continue is executed (avail == 0), the while condition is evaluated,
    testing zlib_err, which is uninitialised the first time around the loop.

    Fix this by getting rid of the 'if (avail == 0)' condition test; this
    edge case should not be handled in the decompressor code. Instead,
    handle it generically in the caller code.

    Similarly for xz_wrapper.c.

    Incidentally, on most architectures (bar MIPS and PA-RISC) gcc generates
    no uninitialised-variable warning. This is because the while condition
    test on the continue path is optimised out and not performed: when
    executing continue, zlib_err has not been changed since entering the
    loop, so if the while condition was true previously, it is still true.

    Signed-off-by: Phillip Lougher
    Reported-by: Jesper Juhl
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
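
    A sketch of the resulting shape: with the empty-buffer case handled
    generically, zlib_err is assigned on every iteration before the loop
    condition reads it (the caller detail is an assumption):

    do {
        /* ... set up the next input buffer; the caller (assumed to be
         * squashfs_read_data) now skips zero-length buffer_heads, so
         * avail is never 0 here and every pass reaches zlib_inflate() ... */
        zlib_err = zlib_inflate(stream, Z_SYNC_FLUSH);
        /* ... consume the decompressed output ... */
    } while (zlib_err == Z_OK);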


14 Jan, 2011

7 commits


07 Jan, 2011

1 commit

  • RCU free the struct inode. This will allow:

    - Subsequent store-free path walking patch. The inode must be consulted for
    permissions when walking, so an RCU inode reference is a must.
    - sb_inode_list_lock to be moved inside i_lock because sb list walkers who want
    to take i_lock no longer need to take sb_inode_list_lock to walk the list in
    the first place. This will simplify and optimize locking.
    - Could remove some nested trylock loops in dcache code
    - Could potentially simplify things a bit in VM land. Do not need to take the
    page lock to follow page->mapping.

    The downside of this is the performance cost of using RCU. In a simple
    creat/unlink microbenchmark, performance drops by about 10% due to inability to
    reuse cache-hot slab objects. As iterations increase and RCU freeing starts
    kicking over, this increases to about 20%.

    In cases where inode lifetimes are longer (i.e. many inodes may be allocated
    during the average life span of a single inode), a lot of this cache reuse is
    not applicable, so the regression caused by this patch is smaller.

    The cache-hot regression could largely be avoided by using SLAB_DESTROY_BY_RCU,
    however this adds some complexity to list walking and store-free path walking,
    so I prefer to implement this at a later date, if it is shown to be a win in
    real situations. I haven't found a regression in any non-micro benchmark so I
    doubt it will be a problem.

    Signed-off-by: Nick Piggin
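
    A sketch of the mechanism described above; using i_callback as the RCU
    callback name and i_rcu as the rcu_head field is an assumption:

    static void i_callback(struct rcu_head *head)
    {
        struct inode *inode = container_of(head, struct inode, i_rcu);
        kmem_cache_free(inode_cachep, inode);
    }

    static void destroy_inode(struct inode *inode)
    {
        /* ... filesystem-specific teardown ... */

        /* defer the actual free until after an RCU grace period, so
         * store-free path walkers can still dereference the inode */
        call_rcu(&inode->i_rcu, i_callback);
    }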


29 Oct, 2010

2 commits