07 Mar, 2010

22 commits

  • And bring them back to 4-bit mode during resume.

    Signed-off-by: Daniel Drake
    Signed-off-by: Nicolas Pitre
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Drake
     
  • This patch series provides the core changes needed to allow SDIO cards to
    remain powered and active while the host system is suspended, and let them
    wake up the host system when needed. This is used to implement
    wake-on-lan with SDIO wireless cards at the moment. Patches to add that
    support to the libertas driver will be posted separately.

    This patch:

    Some SDIO cards have the ability to keep on running autonomously when the
    host system is suspended, and wake it up when needed. This however
    requires that the host controller preserve power to the card, and
    configure itself appropriately for wake-up.

    There are, however, four layers of abstraction involved: the host
    controller driver, the MMC core code, the SDIO card management code, and
    the actual SDIO function driver. To keep things simple and manageable,
    host drivers advertise their PM capabilities with a feature bitmask;
    function drivers can then query and set those features from their suspend
    method, and each layer in the suspend call chain is expected to act upon
    those bits accordingly (a sketch follows at the end of this entry).

    [akpm@linux-foundation.org: fix typo in comment]
    Signed-off-by: Nicolas Pitre
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nicolas Pitre
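
    A minimal sketch, assuming the sdio_get_host_pm_caps() /
    sdio_set_host_pm_flags() helpers and the MMC_PM_KEEP_POWER capability bit
    described above, of how an SDIO function driver's suspend handler might
    request that the card stay powered (my_wifi_suspend is a hypothetical
    driver function):

    #include <linux/errno.h>
    #include <linux/mmc/sdio_func.h>
    #include <linux/mmc/pm.h>

    static int my_wifi_suspend(struct device *dev)
    {
            struct sdio_func *func = dev_to_sdio_func(dev);
            mmc_pm_flag_t caps = sdio_get_host_pm_caps(func);

            /* Refuse to suspend if the host cannot keep the card powered. */
            if (!(caps & MMC_PM_KEEP_POWER))
                    return -ENOSYS;

            /* Ask the host and MMC core to preserve power during suspend. */
            return sdio_set_host_pm_flags(func, MMC_PM_KEEP_POWER);
    }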
     
  • Some SDIO cards expect byte transfers not to exceed the configured block
    transfer size. Add a quirk to that effect (the intended clamping is
    sketched at the end of this entry).

    Patches to make use of this quirk will be sent separately.

    Signed-off-by: Bing Zhao
    Signed-off-by: Nicolas Pitre
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bing Zhao
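
    A rough sketch of the intended effect: when the quirk bit is set, the
    core clamps the maximum byte-mode transfer to the function's current
    block size. The quirk name and field usage below follow the description
    above and should be treated as assumptions.

    /* Sketch only -- clamp byte-mode transfers for quirky cards. */
    static unsigned int max_byte_size(struct sdio_func *func)
    {
            unsigned int mval = min_t(unsigned int,
                                      func->card->host->max_blk_size, 512);

            if (func->card->quirks & MMC_QUIRK_BLKSZ_FOR_BYTE_MODE)
                    mval = min(mval, func->cur_blksize);

            return mval;
    }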
     
  • The function name must be followed by a space, hyphen, space, and a short
    description, as in the example below.

    Signed-off-by: Ben Hutchings
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ben Hutchings
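
    For reference, the expected kernel-doc form looks like this (foo_start is
    purely illustrative):

    /**
     * foo_start - begin a foo transaction
     * @dev: device that owns the transaction
     *
     * Returns 0 on success or a negative errno on failure.
     */
    int foo_start(struct device *dev);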
     
  • smp: Fix documentation.

    Fix documentation in include/linux/smp.h: smp_processor_id()

    Signed-off-by: Rakib Mullick
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rakib Mullick
     
  • The macro any_online_node() is prone to producing sparse warnings due to
    the local symbol 'node'. Since all the in-tree users really request the
    first online node (the mask argument is either NODE_MASK_ALL or
    node_online_map), just use the first_online_node macro and remove the
    now-unused any_online_node macro (the conversion pattern is sketched
    below).

    Signed-off-by: H Hartley Sweeten
    Acked-by: David Rientjes
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Lee Schermerhorn
    Acked-by: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Dave Hansen
    Cc: Milton Miller
    Cc: Nathan Fontenot
    Cc: Geoff Levand
    Cc: Grant Likely
    Cc: J. Bruce Fields
    Cc: Neil Brown
    Cc: Trond Myklebust
    Cc: David S. Miller
    Cc: Benny Halevy
    Cc: Chuck Lever
    Cc: Ricardo Labiaga
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    H Hartley Sweeten
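
    An illustrative before/after of the conversion pattern (call sites vary;
    nid stands in for whatever local variable each caller uses):

    /* before */
    nid = any_online_node(NODE_MASK_ALL);

    /* after */
    nid = first_online_node;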
     
  • Depending on CONFIG_SMP, the num_*_cpus() functions return either unsigned
    or signed values. Let them always return unsigned values to avoid strange
    casts (see the illustration at the end of this entry).

    Fixes at least one warning:

    kernel/kprobes.c: In function 'register_kretprobe':
    kernel/kprobes.c:1038: warning: comparison of distinct pointer types lacks a cast

    Signed-off-by: Heiko Carstens
    Cc: Heiko Carstens
    Cc: Ananth N Mavinakayanahalli
    Cc: Masami Hiramatsu
    Cc: Ingo Molnar
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
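
    A minimal illustration of the class of warning this fixes: the kernel's
    min()/max() macros type-check their arguments, so mixing a signed return
    value with an unsigned one triggers "comparison of distinct pointer types
    lacks a cast". With num_possible_cpus() always unsigned, an expression
    like the one below stays clean (the surrounding function is
    hypothetical):

    #include <linux/kernel.h>
    #include <linux/cpumask.h>

    static unsigned int pick_instance_count(void)
    {
            /* Both arguments unsigned: min() no longer warns. */
            return min(16U, num_possible_cpus());
    }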
     
  • __GFP_NOFAIL was deprecated in dab48dab, so add a comment that no new
    users should be added.

    Reviewed-by: KAMEZAWA Hiroyuki
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The VM currently assumes that an inactive, mapped and referenced file page
    is in use and promotes it to the active list.

    However, every mapped file page starts out like this and thus a problem
    arises when workloads create a stream of such pages that are used only for
    a short time. By flooding the active list with those pages, the VM
    quickly gets into trouble finding eligible reclaim candidates. The result
    is long allocation latencies and eviction of the wrong pages.

    This patch reuses the PG_referenced page flag (used for unmapped file
    pages) to implement a usage detection that scales with the speed of LRU
    list cycling (i.e. memory pressure).

    If the scanner encounters such a page, the flag is set and the page is
    cycled again on the inactive list. Only if it comes around with another
    page table reference is it activated; otherwise it is reclaimed as 'not
    recently used cache' (see the conceptual sketch at the end of this
    entry).

    This effectively changes the minimum lifetime of a used-once mapped file
    page from a full memory cycle to an inactive list cycle, which allows it
    to occur in linear streams without affecting the stable working set of the
    system.

    Signed-off-by: Johannes Weiner
    Reviewed-by: Rik van Riel
    Cc: Minchan Kim
    Cc: OSAKI Motohiro
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
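
    A conceptual sketch of that decision for an inactive, mapped file page
    (this is not the actual vmscan code; helpers such as
    referenced_by_page_tables(), keep_on_inactive_list() and reclaim_page()
    are illustrative placeholders, while the page-flag operations are real):

    if (referenced_by_page_tables(page)) {
            if (PageReferenced(page)) {
                    /* Second reference while inactive: genuinely in use. */
                    activate_page(page);
            } else {
                    /* First reference: mark it and give it one more lap. */
                    SetPageReferenced(page);
                    keep_on_inactive_list(page);
            }
    } else {
            /* No new references: treat as not recently used cache. */
            reclaim_page(page);
    }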
     
  • There are quite a few GFP_KERNEL memory allocations made during
    suspend/hibernation and resume that may cause the system to hang, because
    the I/O operations they depend on cannot be completed due to the
    underlying devices being suspended.

    Avoid this problem by clearing the __GFP_IO and __GFP_FS bits in
    gfp_allowed_mask before suspend/hibernation and restoring the original
    values of these bits in gfp_allowed_mask during the subsequent resume
    (sketched at the end of this entry).

    [akpm@linux-foundation.org: fix CONFIG_PM=n linkage]
    Signed-off-by: Rafael J. Wysocki
    Reported-by: Maxim Levitsky
    Cc: Sebastian Ott
    Cc: Benjamin Herrenschmidt
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
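
    The core idea, as a hedged sketch (the helper names are illustrative and
    not necessarily the ones used in the patch; gfp_allowed_mask, __GFP_IO
    and __GFP_FS are the real symbols):

    #include <linux/gfp.h>

    static gfp_t saved_gfp_mask;

    /* Called on the way into suspend/hibernation. */
    static void suspend_restrict_gfp_mask(void)
    {
            saved_gfp_mask = gfp_allowed_mask;
            gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
    }

    /* Called on the way out, during resume. */
    static void resume_restore_gfp_mask(void)
    {
            gfp_allowed_mask = saved_gfp_mask;
    }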
     
  • When a VMA is in an inconsistent state during setup or teardown, the worst
    that can happen is that the rmap code will not be able to find the page.

    The mapping is in the process of being torn down (PTEs just got
    invalidated by munmap), or set up (no PTEs have been instantiated yet).

    It is also impossible for the rmap code to follow a pointer to an already
    freed VMA, because the rmap code holds the anon_vma->lock, which the VMA
    teardown code needs to take before the VMA is removed from the anon_vma
    chain.

    Hence, we should not need the VM_LOCK_RMAP locking at all.

    Signed-off-by: Rik van Riel
    Cc: Nick Piggin
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Cc: Andrea Arcangeli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • When the parent process breaks the COW on a page, both the original page
    (which remains mapped in the child) and the new page (which is mapped in
    the parent) end up in the same anon_vma. Generally this won't be a
    problem, but for some workloads it can preserve the O(N) rmap scanning
    complexity.

    A simple fix is to ensure that, when a page mapped only in the child gets
    reused in do_wp_page, it is moved to the child's own exclusive anon_vma,
    since the child is already its exclusive owner (a sketch follows at the
    end of this entry).

    Signed-off-by: Rik van Riel
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Reviewed-by: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
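
    A simplified sketch of the reuse path described above; the exact checks
    in do_wp_page() are more involved, and the page_move_anon_rmap()
    signature shown here is an assumption based on this series:

    /* We are reusing old_page and know we are its only mapper. */
    if (PageAnon(old_page) && page_mapcount(old_page) == 1)
            page_move_anon_rmap(old_page, vma, address);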
     
  • The old anon_vma code can lead to scalability issues with heavily forking
    workloads. Specifically, each anon_vma will be shared between the parent
    process and all its child processes.

    In a workload with 1000 child processes and a VMA with 1000 anonymous
    pages per process that get COWed, this leads to a system with a million
    anonymous pages in the same anon_vma, each of which is mapped in just one
    of the 1000 processes. However, the current rmap code needs to walk them
    all, leading to O(N) scanning complexity for each page.

    This can result in systems where one CPU is walking the page tables of
    1000 processes in page_referenced_one, while all other CPUs are stuck on
    the anon_vma lock. This leads to catastrophic failure for a benchmark
    like AIM7, where the total number of processes can reach into the tens of
    thousands. Real workloads are still a factor of 10 less process-intensive
    than AIM7, but they are catching up.

    This patch changes the way anon_vmas and VMAs are linked, which allows us
    to associate multiple anon_vmas with a VMA (the linking structure is
    sketched at the end of this entry). At fork time, each child process gets
    its own anon_vmas, in which its COWed pages will be instantiated. The
    parent's anon_vma is also linked to the VMA, because non-COWed pages
    could be present in any of the children.

    This reduces rmap scanning complexity to O(1) for the pages of the 1000
    child processes, with O(N) complexity for at most 1/N pages in the system.
    This reduces the average scanning cost in heavily forking workloads from
    O(N) to 2.

    The only real complexity in this patch stems from the fact that linking a
    VMA to anon_vmas now involves memory allocations. This means vma_adjust
    can fail, if it needs to attach a VMA to anon_vma structures. This in
    turn means error handling needs to be added to the calling functions.

    A second source of complexity is that, because there can be multiple
    anon_vmas, the anon_vma linking in vma_adjust can no longer be done under
    "the" anon_vma lock. To prevent the rmap code from walking up an
    incomplete VMA, this patch introduces the VM_LOCK_RMAP VMA flag. This bit
    flag uses the same slot as the NOMMU VM_MAPPED_COPY, with an ifdef in mm.h
    to make sure it is impossible to compile a kernel that needs both symbolic
    values for the same bitflag.

    Some test results:

    Without the anon_vma changes, when AIM7 hits around 9.7k users (on a test
    box with 16GB RAM and not quite enough IO), the system ends up running
    >99% in system time, with every CPU on the same anon_vma lock in the
    pageout code.

    With these changes, AIM7 hits the cross-over point around 29.7k users.
    This happens with ~99% IO wait time; there never seems to be any spike in
    system time. The anon_vma lock contention appears to be resolved.

    [akpm@linux-foundation.org: cleanups]
    Signed-off-by: Rik van Riel
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Cc: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
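
    The linking object at the heart of this change, sketched from the
    description above (the field layout approximates the rmap header of this
    era):

    struct anon_vma_chain {
            struct vm_area_struct *vma;      /* the VMA this link belongs to */
            struct anon_vma *anon_vma;       /* one of the VMA's anon_vmas */
            struct list_head same_vma;       /* all anon_vmas of this VMA */
            struct list_head same_anon_vma;  /* all VMAs of this anon_vma */
    };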
     
  • It was tolerable until Eric went and added 8388608.

    Cc: Eric Paris
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • This fixes inefficient page-by-page reads on POSIX_FADV_RANDOM.

    POSIX_FADV_RANDOM used to set ra_pages=0, which leads to poor performance:
    a 16K read will be carried out in 4 _sync_ 1-page reads.

    In other places, ra_pages==0 means
    - it's ramfs/tmpfs/hugetlbfs/sysfs/configfs
    - some IO error happened
    where multi-page read IO won't help or should be avoided.

    POSIX_FADV_RANDOM actually wants different semantics: disable the
    *heuristic* readahead algorithm and use a dumb one that faithfully
    submits read IO for whatever the application requests.

    So introduce a flag, FMODE_RANDOM, for POSIX_FADV_RANDOM (the flag
    handling is sketched at the end of this entry).

    Note that the random hint is not likely to help random read performance
    noticeably, and it may be too permissive with huge request sizes (its IO
    size is not limited by read_ahead_kb).

    In Quentin's report (http://lkml.org/lkml/2009/12/24/145), the overall
    (NFS read) performance of the application increased by 313%!

    Tested-by: Quentin Barnes
    Signed-off-by: Wu Fengguang
    Cc: Nick Piggin
    Cc: Andi Kleen
    Cc: Steven Whitehouse
    Cc: David Howells
    Cc: Jonathan Corbet
    Cc: Al Viro
    Cc: Christoph Hellwig
    Cc: Trond Myklebust
    Cc: Chuck Lever
    Cc: [2.6.33.x]
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wu Fengguang
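
    A minimal sketch of what the fadvise path is expected to do with the new
    flag (set_file_random_hint is a hypothetical helper; the real code
    presumably toggles f_mode directly in the POSIX_FADV_RANDOM and
    POSIX_FADV_NORMAL cases):

    static void set_file_random_hint(struct file *file, bool random)
    {
            spin_lock(&file->f_lock);
            if (random)
                    file->f_mode |= FMODE_RANDOM;   /* dumb, faithful read IO */
            else
                    file->f_mode &= ~FMODE_RANDOM;  /* heuristic readahead */
            spin_unlock(&file->f_lock);
    }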
     
  • A memmap is a directory in sysfs which includes 3 text files: start, end
    and type. For example:

    start: 0x100000
    end: 0x7e7b1cff
    type: System RAM

    The firmware_map_add interface was not called explicitly anywhere; remove
    it and add firmware_map_add_hotplug as the hotplug interface for memmap
    entries.

    Each memory entry has a memmap in sysfs, but when we hot-add new memory,
    sysfs does not export a memmap entry for it. Add a call to
    firmware_map_add_hotplug in add_memory (sketched at the end of this
    entry).

    Add a new function, add_sysfs_fw_map_entry(), to create the memmap entry;
    it is called both when the memmap is initialized and when memory is
    hot-added.

    [akpm@linux-foundation.org: un-kernedoc a no longer kerneldoc comment]
    Signed-off-by: Shaohui Zheng
    Acked-by: Andi Kleen
    Acked-by: Yasunori Goto
    Reviewed-by: Wu Fengguang
    Cc: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    akpm@linux-foundation.org
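
    A hedged sketch of the hot-add hook described above (the argument form is
    an approximation):

    /* In add_memory(), once the new range has been added successfully: */
    firmware_map_add_hotplug(start, start + size, "System RAM");
    /* creates /sys/firmware/memmap/<N>/{start,end,type} for the new range */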
     
  • commit e815af95 ("change all_unreclaimable zone member to flags") changed
    the all_unreclaimable member to a bit flag. But it had an undesirable
    side effect: free_one_page() is one of the hottest paths in the kernel,
    and increasing the number of atomic ops in it can reduce kernel
    performance a bit.

    Thus, this patch partially reverts that commit; at the least,
    all_unreclaimable shouldn't share a memory word with other zone flags.

    [akpm@linux-foundation.org: fix patch interaction]
    Signed-off-by: KOSAKI Motohiro
    Cc: David Rientjes
    Cc: Wu Fengguang
    Cc: KAMEZAWA Hiroyuki
    Cc: Minchan Kim
    Cc: Huang Shijie
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • free_hot_page() is just a wrapper around free_hot_cold_page() with the
    parameter 'cold = 0'. After adding a clear comment to
    free_hot_cold_page(), it is reasonable to remove this level of
    indirection (the call-site conversion is sketched below).

    [akpm@linux-foundation.org: fix build]
    Signed-off-by: Li Hong
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Ingo Molnar
    Cc: Larry Woodman
    Cc: Peter Zijlstra
    Cc: Li Ming Chun
    Cc: KOSAKI Motohiro
    Cc: Americo Wang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Li Hong
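
    The call-site conversion is mechanical; roughly:

    /* before */
    free_hot_page(page);

    /* after: call the underlying helper directly, 0 selecting the hot path */
    free_hot_cold_page(page, 0);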
     
  • A frequent question from users about memory management is how many swap
    entries are used by each process. This information can also give some
    hints to the oom-killer.

    Although we can count the number of swap entries per process by scanning
    /proc/<pid>/smaps, this is very slow and not suitable for the usual
    process-information tools such as 'ps' or 'top' (which are already slow
    enough).

    This patch adds a swap-entry counter to mm_counter, updated at each swap
    event. The information is exported via the /proc/<pid>/status file as

    [kamezawa@bluextal memory]$ cat /proc/self/status
    Name: cat
    State: R (running)
    Tgid: 2910
    Pid: 2910
    PPid: 2823
    TracerPid: 0
    Uid: 500 500 500 500
    Gid: 500 500 500 500
    FDSize: 256
    Groups: 500
    VmPeak: 82696 kB
    VmSize: 82696 kB
    VmLck: 0 kB
    VmHWM: 432 kB
    VmRSS: 432 kB
    VmData: 172 kB
    VmStk: 84 kB
    VmExe: 48 kB
    VmLib: 1568 kB
    VmPTE: 40 kB
    VmSwap: 0 kB

    Reviewed-by: Minchan Kim
    Reviewed-by: Christoph Lameter
    Cc: Lee Schermerhorn
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Given the nature of per-mm stats, the counters are an object shared among
    threads and can be a cache-miss point in the page fault path.

    This patch adds a per-thread cache for mm_counter: RSS values are
    accumulated in a structure inside task_struct and synchronized with the
    mm's counters at certain events.

    In this patch, the synchronization event is the number of calls to
    handle_mm_fault: the per-thread values are folded into the mm every 64
    calls (see the conceptual sketch at the end of this entry).

    A rough estimate with a small benchmark on two parallel threads shows:
    [before]
    4.5 cache-misses/fault
    [after]
    4.0 cache-misses/fault
    In any case, the most contended object becomes mmap_sem as the number of
    threads grows.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Minchan Kim
    Cc: Christoph Lameter
    Cc: Lee Schermerhorn
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
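
    A conceptual sketch of the batching described above; the structure and
    names approximate the patch rather than quote it (NR_MM_COUNTERS refers
    to the per-mm counter indices, and task_struct is assumed to gain a
    'struct task_rss_stat rss_stat' member):

    #define TASK_RSS_EVENTS_THRESH  64      /* sync every 64 page faults */

    struct task_rss_stat {
            int events;                     /* faults since the last sync */
            int count[NR_MM_COUNTERS];      /* per-thread deltas */
    };

    /* Called from the page fault path. */
    static void check_sync_rss_stat(struct task_struct *task)
    {
            if (unlikely(task != current))
                    return;
            if (unlikely(task->rss_stat.events++ > TASK_RSS_EVENTS_THRESH))
                    sync_mm_rss(task, task->mm);  /* fold deltas into the mm */
    }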
     
  • Presently, the per-mm statistics counters are defined by macros in
    sched.h.

    This patch modifies them to be
    - defined in mm.h as inline functions
    - backed by an array instead of macro-generated names
    (a sketch of the direction follows at the end of this entry).

    The purpose is to reduce the size of future patches that modify the
    implementation of the per-mm counters.

    Signed-off-by: KAMEZAWA Hiroyuki
    Reviewed-by: Minchan Kim
    Cc: Christoph Lameter
    Cc: Lee Schermerhorn
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
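
    A sketch of that direction: an enum index plus an array of counters,
    accessed through inline helpers instead of macro-generated names. Details
    are approximate; among other things, the real code chooses between atomic
    and plain counters depending on configuration, and mm_struct is assumed
    to carry a 'struct mm_rss_stat rss_stat' member.

    enum {
            MM_FILEPAGES,
            MM_ANONPAGES,
            NR_MM_COUNTERS
    };

    struct mm_rss_stat {
            atomic_long_t count[NR_MM_COUNTERS];
    };

    static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
    {
            return atomic_long_read(&mm->rss_stat.count[member]);
    }

    static inline void inc_mm_counter(struct mm_struct *mm, int member)
    {
            atomic_long_inc(&mm->rss_stat.count[member]);
    }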
     
  • Rename for_each_bit to for_each_set_bit throughout the kernel source
    tree, to permit a for_each_clear_bit() should that ever be added.

    The patch includes a macro that maps the old for_each_bit() onto the new
    for_each_set_bit(). This is a (very) temporary shim to ease the migration
    (see the sketch below).

    [akpm@linux-foundation.org: add temporary for_each_bit()]
    Suggested-by: Alexey Dobriyan
    Suggested-by: Andrew Morton
    Signed-off-by: Akinobu Mita
    Cc: "David S. Miller"
    Cc: Russell King
    Cc: David Woodhouse
    Cc: Artem Bityutskiy
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
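
    The compatibility shim amounts to a one-line mapping, and typical usage
    is unchanged apart from the name (the printk loop is just an example):

    /* Temporary alias to ease migration. */
    #define for_each_bit(bit, addr, size) for_each_set_bit(bit, addr, size)

    /* Typical usage after the rename: */
    unsigned long mask = 0xf0;
    unsigned int bit;

    for_each_set_bit(bit, &mask, BITS_PER_LONG)
            printk(KERN_DEBUG "bit %u is set\n", bit);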
     

06 Mar, 2010

18 commits

  • * 'slab-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
    SLUB: Fix per-cpu merge conflict
    failslab: add ability to filter slab caches
    slab: fix regression in touched logic
    dma kmalloc handling fixes
    slub: remove impossible condition
    slab: initialize unused alien cache entry as NULL at alloc_alien_cache().
    SLUB: Make slub statistics use this_cpu_inc
    SLUB: this_cpu: Remove slub kmem_cache fields
    SLUB: Get rid of dynamic DMA kmalloc cache allocation
    SLUB: Use this_cpu operations in slub

    Linus Torvalds
     
  • * 'nfs-for-2.6.34' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6: (44 commits)
    NFS: Remove requirement for inode->i_mutex from nfs_invalidate_mapping
    NFS: Clean up nfs_sync_mapping
    NFS: Simplify nfs_wb_page()
    NFS: Replace __nfs_write_mapping with sync_inode()
    NFS: Simplify nfs_wb_page_cancel()
    NFS: Ensure inode is always marked I_DIRTY_DATASYNC, if it has unstable pages
    NFS: Run COMMIT as an asynchronous RPC call when wbc->for_background is set
    NFS: Reduce the number of unnecessary COMMIT calls
    NFS: Add a count of the number of unstable writes carried by an inode
    NFS: Cleanup - move nfs_write_inode() into fs/nfs/write.c
    nfs41 fix NFS4ERR_CLID_INUSE for exchange id
    NFS: Fix an allocation-under-spinlock bug
    SUNRPC: Handle EINVAL error returns from the TCP connect operation
    NFSv4.1: Various fixes to the sequence flag error handling
    nfs4: renewd renew operations should take/put a client reference
    nfs41: renewd sequence operations should take/put client reference
    nfs: prevent backlogging of renewd requests
    nfs: kill renewd before clearing client minor version
    NFS: Make close(2) asynchronous when closing NFS O_DIRECT files
    NFS: Improve NFS iostat byte count accuracy for writes
    ...

    Linus Torvalds
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs:
    fs/9p: Add hardlink support to .u extension
    9P2010.L handshake: .L protocol negotiation
    9P2010.L handshake: Remove "dotu" variable
    9P2010.L handshake: Add mount option
    9P2010.L handshake: Add VFS flags
    net/9p: Handle mount errors correctly.
    net/9p: Remove MAX_9P_CHAN limit
    net/9p: Add multi channel support.

    Linus Torvalds
     
  • * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs-2.6: (33 commits)
    quota: stop using QUOTA_OK / NO_QUOTA
    dquot: cleanup dquot initialize routine
    dquot: move dquot initialization responsibility into the filesystem
    dquot: cleanup dquot drop routine
    dquot: move dquot drop responsibility into the filesystem
    dquot: cleanup dquot transfer routine
    dquot: move dquot transfer responsibility into the filesystem
    dquot: cleanup inode allocation / freeing routines
    dquot: cleanup space allocation / freeing routines
    ext3: add writepage sanity checks
    ext3: Truncate allocated blocks if direct IO write fails to update i_size
    quota: Properly invalidate caches even for filesystems with blocksize < pagesize
    quota: generalize quota transfer interface
    quota: sb_quota state flags cleanup
    jbd: Delay discarding buffers in journal_unmap_buffer
    ext3: quota_write cross block boundary behaviour
    quota: drop permission checks from xfs_fs_set_xstate/xfs_fs_set_xquota
    quota: split out compat_sys_quotactl support from quota.c
    quota: split out netlink notification support from quota.c
    quota: remove invalid optimization from quota_sync_all
    ...

    Fixed trivial conflicts in fs/namei.c and fs/ufs/inode.c

    Linus Torvalds
     
  • * 'kvm-updates/2.6.34' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (145 commits)
    KVM: x86: Add KVM_CAP_X86_ROBUST_SINGLESTEP
    KVM: VMX: Update instruction length on intercepted BP
    KVM: Fix emulate_sys[call, enter, exit]()'s fault handling
    KVM: Fix segment descriptor loading
    KVM: Fix load_guest_segment_descriptor() to inject page fault
    KVM: x86 emulator: Forbid modifying CS segment register by mov instruction
    KVM: Convert kvm->requests_lock to raw_spinlock_t
    KVM: Convert i8254/i8259 locks to raw_spinlocks
    KVM: x86 emulator: disallow opcode 82 in 64-bit mode
    KVM: x86 emulator: code style cleanup
    KVM: Plan obsolescence of kernel allocated slots, paravirt mmu
    KVM: x86 emulator: Add LOCK prefix validity checking
    KVM: x86 emulator: Check CPL level during privilege instruction emulation
    KVM: x86 emulator: Fix popf emulation
    KVM: x86 emulator: Check IOPL level during io instruction emulation
    KVM: x86 emulator: fix memory access during x86 emulation
    KVM: x86 emulator: Add Virtual-8086 mode of emulation
    KVM: x86 emulator: Add group9 instruction decoding
    KVM: x86 emulator: Add group8 instruction decoding
    KVM: do not store wqh in irqfd
    ...

    Trivial conflicts in Documentation/feature-removal-schedule.txt

    Linus Torvalds
     
    Remove the 'dotu' variable and make everything dependent on the
    'proto_version' field.

    Signed-off-by: Sripathi Kodi
    Signed-off-by: Eric Van Hensbergen

    Sripathi Kodi
     
  • Add a new V9FS mount option to specify the protocol version

    This patch adds a new mount option to specify the protocol version. With
    this option it is possible to use the "-o version=" switch to specify
    which 9P protocol version to use. Valid values for version are:
    9p2000
    9p2000.u
    9p2010.L

    Signed-off-by: Sripathi Kodi
    Signed-off-by: Eric Van Hensbergen

    Sripathi Kodi
     
  • Use a list to track the channels instead of a statically allocated
    array.

    Signed-off-by: Aneesh Kumar K.V
    Signed-off-by: Eric Van Hensbergen

    Aneesh Kumar K.V
     
  • This is needed for supporting multiple mount points.

    We can find out the device names to be used with mount by checking the

    /sys/devices/virtio-pci/virtio*/device

    files: if a device file has the value 9, then that virtio device can be
    used for mounting.

    ex:
    #cat /sys/devices/virtio-pci/virtio1/device
    9

    now we can mount using
    # mount -t 9p -o trans=virtio virtio1 /mnt/

    Signed-off-by: Aneesh Kumar K.V
    Signed-off-by: Eric Van Hensbergen

    Aneesh Kumar K.V
     
  • Trond Myklebust
     
  • Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • Now that we have correct COMMIT semantics in writeback_single_inode, we can
    reduce and simplify nfs_wb_all(). Also replace nfs_wb_nocommit() with a
    call to filemap_write_and_wait(), which doesn't need to hold the
    inode->i_mutex.

    With that done, we can eliminate nfs_write_mapping() altogether.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • In all cases we should be able to just remove the request and call
    cancel_dirty_page().

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • In order to know when we should do opportunistic commits of the unstable
    writes, when the VM is doing a background flush, we add a field to count
    the number of unstable writes.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • The sole purpose of nfs_write_inode is to commit unstable writes, so
    move it into fs/nfs/write.c, and make nfs_commit_inode static.

    Signed-off-by: Trond Myklebust

    Trond Myklebust
     
  • * 'write_inode2' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
    pass writeback_control to ->write_inode
    make sure data is on disk before calling ->write_inode

    Linus Torvalds
     
  • …nel/git/tip/linux-2.6-tip

    * 'perf-probes-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86: Issue at least one memory barrier in stop_machine_text_poke()
    perf probe: Correct probe syntax on command line help
    perf probe: Add lazy line matching support
    perf probe: Show more lines after last line
    perf probe: Check function address range strictly in line finder
    perf probe: Use libdw callback routines
    perf probe: Use elfutils-libdw for analyzing debuginfo
    perf probe: Rename probe finder functions
    perf probe: Fix bugs in line range finder
    perf probe: Update perf probe document
    perf probe: Do not show --line option without dwarf support
    kprobes: Add documents of jump optimization
    kprobes/x86: Support kprobes jump optimization on x86
    x86: Add text_poke_smp for SMP cross modifying code
    kprobes/x86: Cleanup save/restore registers
    kprobes/x86: Boost probes when reentering
    kprobes: Jump optimization sysctl interface
    kprobes: Introduce kprobes jump optimization
    kprobes: Introduce generic insn_slot framework
    kprobes/x86: Cleanup RELATIVEJUMP_INSTRUCTION to RELATIVEJUMP_OPCODE

    Linus Torvalds