19 Jan, 2006

40 commits

  • v9fs mmap support was originally removed from v9fs at Al Viro's request,
    but recently there have been requests from folks who want readpage
    functionality (primarily to enable execution of files mounted via 9P).
    This patch adds readpage support (but not writepage, which contained
    most of the objectionable code). It passes fsx-linux (and other
    regression tests) so it should be relatively safe.

    Signed-off-by: Eric Van Hensbergen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Van Hensbergen
     
  • Correct a few whitespace problems while working here.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • When the user specifies both a COW file and its backing file, and the
    previous backing file is not found, UML currently tries to use it anyway
    and fails.

    This can be corrected by changing same_backing_files() return value in that
    case, so that the caller will try to change the COW file to point to the new
    location, as already done in other cases.

    Additionally, given the change in the function's meaning, rename it and
    invert its return value, so that all return values are inverted except
    when stat(from_cow,&buf2) fails. Also add some comments and two minor
    bugfixes: plug an fd leak (return err rather than goto out) and remove a
    repeated check.

    Tested well.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • *) mark as "EXPERIMENTAL" various items that either aren't very stable
    or that are actively crashing the setups of users who don't really need
    them (i.e. HIGHMEM and 3-level pagetables on x86 - nobody needs either,
    yet everybody reports "I'm using it and getting trouble").

    *) move net/Kconfig near to the rest of network configurations, and
    drivers/block/Kconfig near "Block layer" submenu.

    *) forcing NETDEVICES on and disabling the prompt, as is currently done,
    is useless and doesn't work well. Better remove the attempt, and change
    it to a simple "default y if UML".

    *) drop the warning about "report problems about HPPFS" - it's redundant
    anyway, as that's the usual procedure, and HPPFS users are especially
    technical (i.e. they know reporting bugs is _good_).

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • An ugly trick to keep malloc from sleeping - we can't do anything else.
    This is not yet optimal, since spinlocks aren't reflected in in_atomic()
    when preemption is disabled.

    Also, ugly as it is, this trick was already used in one place, and was
    even more bogus there. Fix it.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • In a previous patch I shifted an allocation to being atomic.

    In this patch, a better but more intrusive solution is implemented:
    hold the lock only when really needed, and in particular not across pipe
    operations, nor across the culprit allocation.

    Additionally, while at it, add a missing kfree in the failure path, and make
    sure that if we fail in forking, write_sigio_pid is -1 and not, say, -ENOMEM.

    And fix whitespace, at least for things I was touching anyway.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • In this error path, when the interface has had a problem, we call
    dev_close(), which is disallowed for two reasons:

    *) it takes the UML internal spinlock again, inside the ->stop method of
    this device
    *) it can be called in process context only, while we're in interrupt
    context.

    I've also thought that calling dev_close() may be a wrong policy to follow,
    but it's not up to me to decide that.

    However, we may end up with multiple dev_close() calls queued on the
    same device. The initial test for (dev->flags & IFF_UP) makes this
    harmless, and dev_close() is supposed to cope with races against itself.
    So there's no harm in delaying the shutdown, IMHO.

    Something to mark the interface as "going to shutdown" would be appreciated,
    but dev_deactivate has the same problems as dev_close(), so we can't use it
    either.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • Pre-clear the transport-specific private structure before passing it
    down.

    In fact, I just got a slab corruption and kernel panic on exit because kfree()
    was called on a pointer which probably was never allocated, BUT hadn't been
    set to NULL by the driver.

    As the code is full of such errors, I've decided for now to go the safe way
    (we're talking about drivers), and to do the simple thing. I'm also starting
    to fix drivers, and already sent a patch for the daemon transport.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • Avoid uninitialized data in the daemon_data structure. I used this
    transport without doing proper setup beforehand, and I got some very
    nice SLAB corruption due to freeing crap pointers. So just make sure to
    clear everything when appropriate.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • I added this line to share this file with UML, but now it's no longer
    shared so remove this useless leftover.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Acked-by: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • Some fixes to make softints work in tt mode.

    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bodo Stroesser
     
  • Now that we are doing soft interrupts, there's no point in using sigsetjmp and
    siglongjmp. Using setjmp and longjmp saves a sigprocmask on every jump.

    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jeff Dike
     
  • This patch implements soft interrupts. Interrupt enabling and disabling no
    longer map to sigprocmask. Rather, a flag is set indicating whether
    interrupts may be handled. If a signal comes in and interrupts are marked as
    OK, then it is handled normally. If interrupts are marked as off, then the
    signal handler simply returns after noting that a signal needs handling. When
    interrupts are enabled later on, this pending signals flag is checked, and the
    IRQ handlers are called at that point.

    The point of this is to reduce the cost of local_irq_save et al, since they
    are very much more common than the signals that they are enabling and
    disabling. Soft interrupts produce a speed-up of ~25% on a kernel build.

    Subtleties -

    UML uses sigsetjmp/siglongjmp to switch contexts. sigsetjmp has been
    wrapped in a save_flags-like macro which remembers the interrupt state at
    setjmp time, and restores it when it is longjmp-ed back to.

    The enable_signals function has to loop because the IRQ handler
    disables interrupts before returning. enable_signals has to return with
    signals enabled, and signals may come in between the disabling and the
    return to enable_signals. So, it loops for as long as there are pending
    signals, ensuring that signals are enabled when it finally returns, and
    that there are no pending signals that need to be dealt with.

    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jeff Dike
     
  • Stop using global variables to hold the file descriptor and offset used to map
    the skas0 stubs. Instead, calculate them using the page physical addresses.

    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jeff Dike
     
  • The serial UML OS-abstraction layer patch (um/kernel/skas dir).

    This moves all system calls from the skas/process.c file under the
    os-Linux dir and joins the skas/process.c and skas/process_kern.c files.

    Signed-off-by: Gennady Sharapov
    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gennady Sharapov
     
  • The serial UML OS-abstraction layer patch (um/kernel/skas dir).

    This moves all system calls from the skas/mem_user.c file under the
    os-Linux dir and joins the skas/mem_user.c and skas/mem.c files.

    Signed-off-by: Gennady Sharapov
    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gennady Sharapov
     
  • The serial UML OS-abstraction layer patch (um/kernel dir).

    This moves skas headers to arch/um/include.

    Signed-off-by: Gennady Sharapov
    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gennady Sharapov
     
  • The current implementation of boot_timer_handler isn't usable for s390,
    so I changed its name to do_boot_timer_handler, taking (struct
    sigcontext *)sc as an argument. do_boot_timer_handler is called from the
    new boot_timer_handler() in arch/um/os-Linux/signal.c, which uses the
    same mechanism as the other signal handlers to find the sigcontext
    pointer.

    Signed-off-by: Bodo Stroesser
    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bodo Stroesser
     
  • The serial UML OS-abstraction layer patch (um/kernel dir).

    This moves all system calls from the time.c file under the os-Linux dir
    and joins the time.c and time_kern.c files.

    Signed-off-by: Gennady Sharapov
    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gennady Sharapov
     
  • The serial UML OS-abstraction layer patch (um/kernel dir).

    This moves all system calls from the user_util.c file under the os-Linux
    dir.

    Signed-off-by: Gennady Sharapov
    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gennady Sharapov
     
  • s390 doesn't have an LDT, so MM_COPY_SEGMENTS will not be supported on
    s390.

    The only user of MM_COPY_SEGMENTS is new_mm(), but that's no longer
    useful, as arch/sys-i386/ldt.c defines init_new_ldt(), which is called
    immediately after new_mm(). So we should copy the host's LDT in
    init_new_ldt(), if /proc/mm is available, to keep this subarch-specific
    call in subarch code.

    Signed-off-by: Bodo Stroesser
    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bodo Stroesser
     
  • Add implementations of the write* and __raw_write* functions. __raw_writel is
    needed by lib/iocopy.c, which shouldn't be used in UML, but which is
    unconditionally linked in anyway.

    Signed-off-by: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jeff Dike
     
  • Move the interrupt check from slab_node into ___cache_alloc and add an
    "unlikely()" to avoid pipeline stalls on some architectures.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • This patch fixes a regression in 2.6.14 against 2.6.13 that causes an
    imbalance in memory allocation during bootup.

    The slab allocator in 2.6.13 is not numa aware and simply calls
    alloc_pages(). This means that memory policies may control the behavior of
    alloc_pages(). During bootup the memory policy is set to MPOL_INTERLEAVE
    resulting in the spreading out of allocations during bootup over all
    available nodes. The slab allocator in 2.6.13 has only a single list of
    slab pages. As a result the per cpu slab cache and the spinlock controlled
    page lists may contain slab entries from off node memory. The slab
    allocator in 2.6.13 makes no effort to discern the locality of an entry on
    its lists.

    The NUMA aware slab allocator in 2.6.14 controls locality of the slab pages
    explicitly by calling alloc_pages_node(). The NUMA slab allocator manages
    slab entries by having lists of available slab pages for each node. The
    per cpu slab cache can only contain slab entries associated with the node
    local to the processor. This guarantees that the default allocation mode
    of the slab allocator always assigns local memory if available.

    Setting MPOL_INTERLEAVE as a default policy during bootup no longer has
    any effect. In 2.6.14 all node-unspecific slab allocations are performed
    on the boot processor. This means that most of the key data structures
    are allocated on one node. Most processors will have to refer to these
    structures, making the boot node a potential bottleneck. This may reduce
    performance and cause unnecessary memory pressure on the boot node.

    This patch implements NUMA policies in the slab layer. The slab
    allocator itself must now apply NUMA memory policies explicitly, since
    the NUMA slab allocator no longer lets the page allocator control
    locality.

    The check for policies is made directly at the beginning of __cache_alloc
    using current->mempolicy. The memory policy is already frequently checked
    by the page allocator (alloc_page_vma() and alloc_page_current()). So it
    is highly likely that the cacheline is present. For MPOL_INTERLEAVE
    kmalloc() will spread out each request to one node after another so that an
    equal distribution of allocations can be obtained during bootup.

    It is not possible to push the policy check to lower layers of the NUMA
    slab allocator since the per-cpu caches now only contain slab entries
    from the current node. If the policy says that the local node is not to
    be preferred, or is forbidden, then there is no point in checking the
    slab cache or the local list of slab pages. The allocation is better
    directed immediately to the lists containing slab entries for the
    allowed set of nodes.

    This way of applying policy also fixes another strange behavior in
    2.6.13. alloc_pages() is controlled by the memory allocation policy of
    the current process. It could therefore be that one process running with
    MPOL_INTERLEAVE would, e.g., obtain a new page following that policy
    since no slab entries are in the lists anymore. A page can typically be
    used for multiple slab entries, but let's say that the current process
    is only using one. The other entries are then added to the slab lists.
    These are now non-local entries in the slab lists despite the possible
    availability of local pages that would provide faster access and
    increase the performance of the application.

    Another process without MPOL_INTERLEAVE may now run and expect a local
    slab entry from kmalloc(). However, there are still these free slab
    entries from the off-node page obtained by the other process via
    MPOL_INTERLEAVE in the cache. The process will then get an off-node slab
    entry although other slab entries may be available that are local to
    that process. This means that the policy of one process may contaminate
    the locality of the slab caches for other processes.

    This patch in effect ensures that a per-process policy is followed for
    the allocation of slab entries and that there cannot be a memory policy
    influence from one process on another. A process with the default policy
    will always get a local slab entry if one is available. And a process
    using memory policies will get its memory arranged as requested.
    Off-node slab allocation requires the use of spinlocks and precludes the
    use of per-cpu caches, so a process using memory policies to redirect
    allocations off-node will have to cope with additional lock overhead on
    top of the latency added by the need to access a remote slab entry.

    Changes V1->V2
    - Remove an #ifdef CONFIG_NUMA by moving a forward declaration into the
    prior #ifdef CONFIG_NUMA section.

    - Give the function determining the node number to use a saner
    name.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Convert mm/swapfile.c's swapon_sem to swapon_mutex.

    Signed-off-by: Ingo Molnar
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • proc support for zone reclaim

    This patch creates a proc entry /proc/sys/vm/zone_reclaim_mode that may be
    used to override the automatic determination of the zone reclaim made on
    bootup.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Some bits for zone reclaim exist in 2.6.15, but they are not usable.
    This patch fixes them up, removes unused code and makes zone reclaim
    usable.

    Zone reclaim allows the reclaiming of pages from a zone if the number of
    free pages falls below the watermarks even if other zones still have enough
    pages available. Zone reclaim is of particular importance for NUMA
    machines. It can be more beneficial to reclaim a page than taking the
    performance penalties that come with allocating a page on a remote zone.

    Zone reclaim is enabled if the maximum distance to another node is higher
    than RECLAIM_DISTANCE, which may be defined by an arch. By default
    RECLAIM_DISTANCE is 20. 20 is the distance to another node in the same
    component (enclosure or motherboard) on IA64. The meaning of the NUMA
    distance information seems to vary by arch.

    If zone reclaim is not successful then no further reclaim attempts will
    occur for a certain time period (ZONE_RECLAIM_INTERVAL).

    This patch was discussed before. See

    http://marc.theaimsgroup.com/?l=linux-kernel&m=113519961504207&w=2
    http://marc.theaimsgroup.com/?l=linux-kernel&m=113408418232531&w=2
    http://marc.theaimsgroup.com/?l=linux-kernel&m=113389027420032&w=2
    http://marc.theaimsgroup.com/?l=linux-kernel&m=113380938612205&w=2

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Zone reclaim has a huge impact on NUMA performance (e.g. our maximum
    throughput with XFS is raised from 4GB/sec to 6GB/sec; page cache
    contamination of NUMA nodes destroys locality if one just does a large
    copy operation, which results in performance dropping for good until
    reboot).

    This patch:

    Resurrect may_swap in struct scan_control

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Simplify migrate_page_add after feedback from Hugh. This also allows us to
    drop one parameter from migrate_page_add.

    Signed-off-by: Christoph Lameter
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The migration code currently does not take a reference to the target
    page properly, so between unlocking the pte and trying to take a new
    reference to the page with isolate_lru_page, anything could happen to
    it.

    Fix this by holding the pte lock until we get a chance to elevate the
    refcount.

    Other small cleanups while we're here.

    Signed-off-by: Nick Piggin
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Ravikiran reports that this variable is bouncing all around nodes on
    NUMA machines, causing measurable performance problems. Fix that up by
    only writing to it when it actually changes.

    Also put it in a new cacheline to prevent it from sharing with other
    things (this was happening).

    Signed-off-by: Ravikiran Thirumalai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Some pcnet32 hardware erroneously has the Vendor ID for Trident. The
    pcnet32 driver looks for the PCI ethernet class before grabbing the
    hardware, but the current trident driver does not check against the PCI
    audio class. This allows the trident driver to claim the pcnet32 hardware.
    This patch prevents that.

    This revised version of the OSS Trident patch includes PCI_DEVICE Macro
    usage.

    Signed-off-by: Jon Mason
    Signed-off-by: Muli Ben-Yehuda
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jon Mason
     
  • Fix incorrect variable size used to hold register value. This bug might
    wipe out a portion of the TCR value when setting the interface options.

    Signed-off-by: Paul Fulghum
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Fulghum
     
  • On alpha:

    In file included from drivers/scsi/sym53c8xx_2/sym_glue.h:59,
    from drivers/scsi/sym53c8xx_2/sym_fw.c:40:
    include/scsi/scsi_transport_spi.h:57: error: field `dv_mutex' has incomplete type

    Cc: James Bottomley
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Fix a typo/mis-merge in one of the previous patches.

    Signed-off-by: Jan Beulich
    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     
  • We have to check that the second checkpoint list is also non-empty
    before dropping the transaction.

    Signed-off-by: Jan Kara
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • While checkpointing we have to check that our transaction still is in the
    checkpoint list *and* (not or) that it's not just a different transaction
    with the same address.

    Signed-off-by: Jan Kara
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • Linus Torvalds
     
  • Linus Torvalds
     
  • Linus Torvalds