22 May, 2010

1 commit

  • Now that the last user passing a NULL file pointer is gone we can remove
    the redundant dentry argument and associated hacks inside vfs_fsync_range.

    The next step will be removing the dentry argument from ->fsync, but given
    the luck with the last round of method prototype changes I'd rather
    defer this until after the main merge window.
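
    In signature terms, the change amounts to the following (a sketch of the
    prototypes as described above, not a verbatim diff):

        /* Before: callers had to carry a dentry for the NULL-file case. */
        int vfs_fsync_range(struct file *file, struct dentry *dentry,
                            loff_t start, loff_t end, int datasync);

        /* After: the struct file alone is enough. */
        int vfs_fsync_range(struct file *file, loff_t start, loff_t end,
                            int datasync);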

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
     

06 Jan, 2009

1 commit

  • Fsync currently has a fdatawrite/fdatawait pair around the method call,
    and a mutex_lock/unlock of the inode mutex. All callers of fsync have
    to duplicate this, but we have a few and most of them don't quite get
    it right. This patch adds a new vfs_fsync that takes care of this.
    It's a little more complicated than usual because ->fsync might get a NULL
    file pointer and just a dentry from nfsd, but otherwise gets a file, and
    we want to take the mapping and file operations from it when it is there.
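
    In outline, the helper looks something like this (a condensed sketch of
    the shape described above, not the verbatim patch):

        int vfs_fsync(struct file *file, struct dentry *dentry, int datasync)
        {
                const struct file_operations *fop;
                struct address_space *mapping;
                int err, ret;

                /* nfsd may pass only a dentry; everyone else has a file */
                if (file) {
                        mapping = file->f_mapping;
                        fop = file->f_op;
                } else {
                        mapping = dentry->d_inode->i_mapping;
                        fop = dentry->d_inode->i_fop;
                }

                if (!fop || !fop->fsync)
                        return -EINVAL;

                ret = filemap_fdatawrite(mapping);

                /* protect against concurrent writers during the method call */
                mutex_lock(&mapping->host->i_mutex);
                err = fop->fsync(file, dentry, datasync);
                if (!ret)
                        ret = err;
                mutex_unlock(&mapping->host->i_mutex);

                err = filemap_fdatawait(mapping);
                if (!ret)
                        ret = err;
                return ret;
        }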

    Notes on the fsync callers:

    - ecryptfs wasn't calling filemap_fdatawrite / filemap_fdatawait on the
    lower file
    - coda wasn't calling filemap_fdatawrite / filemap_fdatawait on the host
    file, and was returning 0 when ->fsync was missing
    - shm was calling neither filemap_fdatawrite / filemap_fdatawait nor
    taking i_mutex. Given that shared memory has no disk backing, doing
    nothing in fsync seems fine, and I left it out of the vfs_fsync
    conversion for now; but in that case we might not pass it through to
    the lower file at all and instead call the no-op simple_sync_file
    directly.

    [and now actually export vfs_fsync]

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Al Viro

    Christoph Hellwig
     

22 May, 2007

1 commit

  • The first thing mm.h does is include sched.h, solely for the
    can_do_mlock() inline function, which dereferences "current" inside. By
    dealing with can_do_mlock(), mm.h can be detached from sched.h, which is
    good. See below for why.

    This patch
    a) removes the unconditional inclusion of sched.h from mm.h
    b) makes can_do_mlock() a normal function in mm/mlock.c (sketched below)
    c) exports can_do_mlock() so as not to break compilation
    d) adds sched.h inclusions back to files that were getting it indirectly
    e) adds less bloated headers (asm/signal.h, jiffies.h) to some files that
    were getting them indirectly
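
    The moved function itself is tiny; roughly as it then read in mm/mlock.c
    (a sketch, not the exact tree state):

        int can_do_mlock(void)
        {
                if (capable(CAP_IPC_LOCK))
                        return 1;
                /* the "current" dereference that dragged sched.h into mm.h */
                if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
                        return 1;
                return 0;
        }
        EXPORT_SYMBOL(can_do_mlock);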

    Net result is:
    a) mm.h users would get less code to open, read, preprocess, parse, ... if
    they don't need sched.h
    b) sched.h stops being a dependency for a significant number of files:
    on x86_64 allmodconfig, touching sched.h results in a recompile of 4083
    files; after the patch it's only 3744 (-8.3%).

    Cross-compile tested on

    all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
    alpha alpha-up
    arm
    i386 i386-up i386-defconfig i386-allnoconfig
    ia64 ia64-up
    m68k
    mips
    parisc parisc-up
    powerpc powerpc-up
    s390 s390-up
    sparc sparc-up
    sparc64 sparc64-up
    um-x86_64
    x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig

    as well as my two usual configs.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

26 Sep, 2006

1 commit

  • With the tracking of dirty pages properly done now, msync doesn't need to scan
    the PTEs anymore to determine the dirty status.

    From: Hugh Dickins

    In looking to do that, I made some other tidyups: several #includes can
    be removed, and sys_msync's loop termination was not quite right.

    Most of those points are criticisms of the existing sys_msync, not of your
    patch. In particular, the loop termination errors were introduced in 2.6.17:
    I did notice this shortly before it came out, but decided I was more likely to
    get it wrong myself, and make matters worse if I tried to rush a last-minute
    fix in. And it's not terribly likely to go wrong, nor disastrous if it does
    go wrong (it may miss reporting an unmapped area; it may also fsync the
    file of a following vma).

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

23 Jun, 2006

1 commit

  • A process flag to indicate whether we are doing sync io is incredibly
    ugly. It also causes performance problems when one does a lot of async
    io and then proceeds to sync it. Part of the io will go out as async,
    and the other part as sync. This causes a disconnect between the
    previously submitted io and the synced io. For io schedulers such as CFQ,
    this will cause us lost merges and suboptimal behaviour in scheduling.

    Remove PF_SYNCWRITE completely from the fsync/msync paths, and let
    the O_DIRECT path just directly indicate that the writes are sync
    by using WRITE_SYNC instead.
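
    The O_DIRECT side of that is a one-liner in spirit (a sketch; the exact
    hook point in fs/direct-io.c may differ):

        /* Tag direct-IO writes as synchronous up front, instead of
         * flagging the whole task with PF_SYNCWRITE for the duration. */
        if (rw & WRITE)
                rw = WRITE_SYNC;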

    Signed-off-by: Jens Axboe

    Jens Axboe
     

24 Mar, 2006

4 commits

  • No need to duplicate all that code.

    Cc: Hugh Dickins
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • msync() does a strange thing. Essentially:

        vma = find_vma();
        for ( ; ; ) {
                if (!vma)
                        return -ENOMEM;
                ...
                vma = vma->vm_next;
        }

    so an msync() request which starts within or before a valid VMA and which ends
    within or beyond the final VMA will incorrectly return -ENOMEM.

    Fix.
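
    A corrected walk reports -ENOMEM only for a genuine hole and terminates
    as soon as the requested range is covered; schematically (a sketch of
    the idea, not the patch itself):

        vma = find_vma(mm, start);
        for ( ; ; ) {
                if (!vma || start < vma->vm_start)
                        return -ENOMEM;         /* a real hole in the range */
                /* ... sync the part of [start, end) inside this vma ... */
                start = vma->vm_end;
                if (start >= end)
                        return 0;               /* request fully covered */
                vma = vma->vm_next;
        }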

    Cc: Hugh Dickins
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • It seems bad to hold mmap_sem while performing synchronous disk I/O. Alter
    the msync(MS_SYNC) code so that the lock is released while we sync the file.
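
    The shape of the change, schematically (a sketch of the idea, not the
    exact patch):

        if ((flags & MS_SYNC) && file && (vma->vm_flags & VM_SHARED)) {
                get_file(file);                 /* keep the file alive */
                up_read(&mm->mmap_sem);         /* not held over disk I/O */
                error = do_fsync(file, 0);
                fput(file);
                down_read(&mm->mmap_sem);
                vma = find_vma(mm, start);      /* our vma may be gone now */
        }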

    Cc: Hugh Dickins
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • It seems sensible to perform dirty page throttling in msync: as the application
    dirties pages we can kick off pdflush early, or even force the msync() caller
    to perform writeout, or even throttle the msync() caller.

    The main effect of this is to start disk writeback earlier if we've just
    discovered that a large amount of pagecache has been dirtied. (Otherwise it
    wouldn't happen for up to five seconds, until pdflush next wakes up).

    It also will cause the page-dirtying process to get penalised for
    dirtying those pages, rather than whacking someone else with the problem.

    We should do this for munmap() and possibly even exit(), too.

    We drop the mmap_sem while performing the dirty page balancing. It doesn't
    seem right to hold mmap_sem for that long.

    Note that this patch only affects MS_ASYNC. MS_SYNC will be syncing all the
    dirty pages anyway.

    We note that msync(MS_SYNC) does a full-file-sync inside mmap_sem, and always
    has. We can fix that up...

    The patch also tightens up the mmap_sem coverage in sys_msync(): no point in
    taking it while we perform the incoming arg checking.
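
    Schematically, the MS_ASYNC leg becomes something like this (a hedged
    sketch of the idea; the vma must be re-looked-up after retaking the
    semaphore, and the file pinned if it can go away):

        struct address_space *mapping = vma->vm_file->f_mapping;

        /* pte dirty bits have been transferred to the pages; now make
         * the dirtier share the writeback pain, outside mmap_sem */
        up_read(&mm->mmap_sem);
        balance_dirty_pages_ratelimited(mapping);
        down_read(&mm->mmap_sem);
        vma = find_vma(mm, start);      /* may have changed meanwhile */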

    Cc: Hugh Dickins
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

29 Nov, 2005

1 commit

  • This replaces the (in my opinion horrible) VM_UNPAGED logic with very
    explicit support for a "remapped page range" aka VM_PFNMAP. It allows a
    VM area to contain an arbitrary range of page table entries that the VM
    never touches, and never considers to be normal pages.

    Any user of "remap_pfn_range()" automatically gets this new
    functionality, and doesn't even have to mark the pages reserved or
    indeed mark them any other way. It just works. As a side effect, doing
    mmap() on /dev/mem works for arbitrary ranges.
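
    For a driver, nothing changes except that the reserved-page games go
    away; a typical mmap handler stays as simple as this (illustrative, with
    a made-up MYDRV_PHYS_BASE):

        static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
        {
                /* map a physical window straight into userspace; VM_PFNMAP
                 * keeps the VM from ever treating these as normal pages */
                return remap_pfn_range(vma, vma->vm_start,
                                       MYDRV_PHYS_BASE >> PAGE_SHIFT,
                                       vma->vm_end - vma->vm_start,
                                       vma->vm_page_prot);
        }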

    Sparc update from David in the next commit.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

23 Nov, 2005

1 commit

  • Although we tend to associate VM_RESERVED with remap_pfn_range, quite a few
    drivers set VM_RESERVED on areas which are then populated by nopage. The
    PageReserved removal in 2.6.15-rc1 changed VM_RESERVED not to free pages in
    zap_pte_range, without changing those drivers not to set it: so their pages
    just leak away.

    Let's not change miscellaneous drivers now: introduce VM_UNPAGED at the core,
    to flag the special areas where the ptes may have no struct page, or if they
    have then it's not to be touched. Replace most instances of VM_RESERVED in
    core mm by VM_UNPAGED. Force it on in remap_pfn_range, and the sparc and
    sparc64 io_remap_pfn_range.

    Revert addition of VM_RESERVED to powerpc vdso, it's not needed there. Is it
    needed anywhere? It still governs the mm->reserved_vm statistic, and special
    vmas not to be merged, and areas not to be core dumped; but could probably be
    eliminated later (the drivers are probably specifying it because in 2.4 it
    kept swapout off the vma, but in 2.6 we work from the LRU, which these pages
    don't get on).

    Use the VM_SHM slot for VM_UNPAGED, and define VM_SHM to 0: it serves no
    purpose whatsoever, and should be removed from drivers when we clean up.
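
    The flag shuffle at the end looks roughly like this (as best
    reconstructed from the description; values as in the 2.6.15-rc headers
    of the time):

        #define VM_UNPAGED  0x00000008  /* pages managed without map counts */
        #define VM_SHM      0x00000000  /* means nothing: delete it later */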

    Signed-off-by: Hugh Dickins
    Acked-by: William Irwin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

30 Oct, 2005

4 commits

  • Convert those common loops using page_table_lock on the outside and
    pte_offset_map within to use just pte_offset_map_lock within instead.

    These all hold mmap_sem (some exclusively, some not), so at no level can a
    page table be whipped away from beneath them. But whereas pte_alloc loops
    tested with the "atomic" pmd_present, these loops are testing with pmd_none,
    which on i386 PAE tests both lower and upper halves.

    That's now unsafe, so add a cast into pmd_none to test only the vital lower
    half: we lose a little sensitivity to a corrupt middle directory, but not
    enough to worry about. It appears that i386 and UML were the only
    architectures vulnerable in this way, and pgd and pud are no problem.
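
    The conversion itself is mechanical; in outline:

        /* before: one big lock around the walk */
        spin_lock(&mm->page_table_lock);
        pte = pte_offset_map(pmd, addr);
        /* ... walk ptes ... */
        pte_unmap(pte);
        spin_unlock(&mm->page_table_lock);

        /* after: the lock for this page table, taken and dropped
         * together with the kmap of the table */
        pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        /* ... walk ptes ... */
        pte_unmap_unlock(pte, ptl);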

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Remove PageReserved() calls from core code by tightening VM_RESERVED
    handling in mm/ to cover PageReserved functionality.

    PageReserved special casing is removed from get_page and put_page.

    All setting and clearing of PageReserved is retained, and it is now flagged
    in the page_alloc checks to help ensure we don't introduce any refcount
    based freeing of Reserved pages.

    MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
    deprecated. We never completely handled it correctly anyway, and it can
    be reintroduced in future if required (Hugh has a proof of concept).

    Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
    arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
    be trivially removed.

    Last real user of PageReserved is swsusp, which uses PageReserved to
    determine whether a struct page points to valid memory or not. This still
    needs to be addressed (a generic page_is_ram() should work).

    A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
    thus mapcounted and counted towards shared rss). These writes to the struct
    page could cause excessive cacheline bouncing on big systems. There are a
    number of ways this could be addressed if it is an issue.

    Signed-off-by: Nick Piggin

    Refcount bug fix for filemap_xip.c

    Signed-off-by: Carsten Otte
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Use latency breaking in msync_pte_range like that in copy_pte_range, instead
    of the ugly CONFIG_PREEMPT filemap_msync alternatives.
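
    The copy_pte_range-style pattern being adopted is, in outline (a sketch
    of the pattern, not the exact patch):

        int progress = 0;
    again:
        pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        do {
                if (progress >= 64) {
                        progress = 0;
                        /* back off periodically instead of special-casing
                         * CONFIG_PREEMPT at the filemap_msync level */
                        if (need_resched() || need_lockbreak(ptl))
                                break;
                }
                progress++;
                /* ... transfer this pte's dirty bit to the page ... */
        } while (pte++, addr += PAGE_SIZE, addr != end);
        pte_unmap_unlock(pte - 1, ptl);
        cond_resched();
        if (addr != end)
                goto again;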

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • This is not a problem as such, but sync_page_range() is used as an
    exported function for filesystems.

    The msync_xxx naming is more readable, at least to me.

    Signed-off-by: OGAWA Hirofumi
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    OGAWA Hirofumi
     

22 Jun, 2005

1 commit

  • It's common practice to msync a large address range regularly, in which
    often only a few ptes have actually been dirtied since the previous pass.

    sync_pte_range then goes much faster if it tests whether the pte is dirty
    before locating and accessing each struct page cacheline; and it is hardly
    slowed by ptep_clear_flush_dirty repeating that test in the opposite case,
    when every pte actually is dirty.

    But beware, s390's pte_dirty always says false, since its dirty bit is kept
    in the storage key, located via the struct page address. So skip this
    optimization in its case: use a pte_maybe_dirty macro which just says true
    if page_test_and_clear_dirty is implemented.
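
    The macro in question reduces to something like this in the generic
    pgtable header (a sketch):

        #ifdef __HAVE_ARCH_PAGE_TEST_AND_CLEAR_DIRTY
        /* s390: the dirty bit lives in the storage key, so a clean pte
         * proves nothing -- always go look at the page */
        #define pte_maybe_dirty(pte)    (1)
        #else
        #define pte_maybe_dirty(pte)    pte_dirty(pte)
        #endif

    and the fast path in the pte loop is then just:

        if (!pte_maybe_dirty(*pte))
                continue;       /* skip the struct page lookup entirely */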

    Signed-off-by: Abhijit Karmarkar
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Abhijit Karmarkar
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds