09 Sep, 2005

1 commit

  • Three changes were necessary to allow sparc64 to use setup-res.c:

    1) Sparc64 roots the PCI I/O and MEM address space using
    parent resources contained in the PCI controller structure.
    I'm actually surprised no other platforms do this, especially
    ones like Alpha and PPC{,64}. These resources get linked into the
    iomem/ioport tree when PCI controllers are probed.
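
    (In code terms that linking is just a request_resource() of each
    controller window into the global trees at probe time - a two-line
    sketch, with the sparc64 controller field names taken from the
    description below:)

        /* at PCI controller probe time */
        request_resource(&ioport_resource, &pbm->io_space);
        request_resource(&iomem_resource,  &pbm->mem_space);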

    So the hierarchy looks like this:

    iomem --|
            PCI controller 1 MEM space --|
                                         device 1
                                         device 2
                                         etc.
            PCI controller 2 MEM space --|
                                         ...
    ioport --|
             PCI controller 1 IO space --|
                                         ...
             PCI controller 2 IO space --|
                                         ...

    You get the idea. The drivers/pci/setup-res.c code allocates
    using plain iomem_space and ioport_space as the root, so that
    wouldn't work with the above setup.

    So I added a pcibios_select_root() that is used to handle this.
    It uses the PCI controller struct's io_space and mem_space on
    sparc64, and io{port,mem}_resource on every other platform to
    keep current behavior.
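
    A minimal sketch of the idea, assuming 2.6.13-era types (the exact
    signature and the sparc64 controller fields are taken from the text above
    rather than the tree):

        /* generic platforms: keep rooting allocations in the global trees */
        static inline struct resource *
        pcibios_select_root(struct pci_dev *pdev, struct resource *res)
        {
                if (res->flags & IORESOURCE_IO)
                        return &ioport_resource;
                return &iomem_resource;
        }

        /* sparc64-style platforms: root them in the per-controller windows
         * that were linked under ioport/iomem at probe time */
        static inline struct resource *
        pcibios_select_root(struct pci_dev *pdev, struct resource *res)
        {
                struct pci_pbm_info *pbm = pdev->bus->sysdata;  /* assumed */

                if (res->flags & IORESOURCE_IO)
                        return &pbm->io_space;
                return &pbm->mem_space;
        }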

    2) quirk_io_region() is buggy. It takes in raw BUS view addresses
    and tries to use them as a PCI resource.

    pci_claim_resource() expects the resource to be fully formed when
    it gets called. The sparc64 implementation would do the translation
    but that's absolutely wrong, because if the same resource gets
    released then re-claimed we'll adjust things twice.

    So I fixed up quirk_io_region() to do the proper pcibios_bus_to_resource()
    conversion before passing it on to pci_claim_resource().
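
    Roughly like this (a sketch of the corrected flow; the helper signatures
    are the ones I remember from that era, so treat the details as an
    assumption rather than the literal quirk):

        static void quirk_io_region(struct pci_dev *dev, unsigned region,
                                    unsigned size, int nr)
        {
                struct pci_bus_region bus_region;
                struct resource *res = dev->resource + nr;

                res->name  = pci_name(dev);
                res->flags = IORESOURCE_IO;

                /* the quirk is handed a raw BUS-view address... */
                bus_region.start = region;
                bus_region.end   = region + size - 1;

                /* ...so build the fully formed CPU-view resource here,
                 * before pci_claim_resource() ever sees it */
                pcibios_bus_to_resource(dev, res, &bus_region);

                pci_claim_resource(dev, nr);
        }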

    3) I was mistakenly __init'ing the function methods the PCI controller
    drivers provide on sparc64 to implement some parts of these
    routines. This was, of course, easy to fix.

    So we end up with the following, and that nasty SPARC64 makefile
    ifdef in drivers/pci/Makefile is finally zapped.

    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    David S. Miller
     

08 Sep, 2005

10 commits

  • This patch fixes a race condition in which the system would hang or
    sometimes crash within minutes when kprobes were inserted on an ISR
    routine and a task routine.

    The fix has been stress tested on i386, ia64, ppc64 and x86_64. To
    reproduce the problem, insert kprobes on the schedule() and do_IRQ()
    functions and you should see the hang or a system crash.

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Ananth N Mavinakayanahalli
    Acked-by: Prasanna S Panchamukhi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Keshavamurthy Anil S
     
  • This patch contains the ppc64 architecture specific changes to prevent the
    possible race conditions.

    Signed-off-by: Prasanna S Panchamukhi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Prasanna S Panchamukhi
     
  • This makes sense now that we have asm-powerpc.

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     
  • These two files are basically identical, so make one just include the other
    (protecting the 32-bit-only parts with __powerpc64__). Also remove some
    completely unused defines.

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     
  • This set of patches creates asm-generic/fcntl.h and consolidates as much as
    possible from the asm-*/fcntl.h files into it.

    This patch just gathers all the identical bits of the asm-*/fcntl.h files into
    asm-generic/fcntl.h.
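
    The eventual shape is the usual asm-generic pattern: the shared header
    defines each constant only if the arch header has not already overridden
    it, and the arch headers end by pulling the shared file in. A tiny
    illustrative excerpt (the values are the ones most arches shared; treat
    it as a sketch, not the actual file contents):

        /* asm-generic/fcntl.h (sketch) */
        #ifndef O_APPEND
        #define O_APPEND        00002000
        #endif
        #ifndef O_NONBLOCK
        #define O_NONBLOCK      00004000
        #endif

        /* asm-<arch>/fcntl.h then only defines what differs, plus: */
        #include <asm-generic/fcntl.h>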

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Yoichi Yuasa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     
  • Remove the deprecated (and unused) verify_area() from various uaccess.h
    headers.

    Signed-off-by: Jesper Juhl
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Juhl
     
  • IRQ_PER_CPU is not used by all architectures. This patch introduces the
    macros ARCH_HAS_IRQ_PER_CPU and CHECK_IRQ_PER_CPU() to avoid the generation
    of dead code in __do_IRQ().

    ARCH_HAS_IRQ_PER_CPU is defined by architectures using IRQ_PER_CPU in their
    include/asm_ARCH/irq.h file.
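
    In other words, something along these lines (a sketch based on the
    description above):

        /* include/asm-<arch>/irq.h, on the architectures listed below: */
        #define ARCH_HAS_IRQ_PER_CPU

        /* generic IRQ code: on every other arch the test folds to 0, so the
         * IRQ_PER_CPU branch in __do_IRQ() becomes dead code */
        #ifdef ARCH_HAS_IRQ_PER_CPU
        # define CHECK_IRQ_PER_CPU(var) ((var) & IRQ_PER_CPU)
        #else
        # define CHECK_IRQ_PER_CPU(var) 0
        #endif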

    Through grepping the tree I found the following architectures currently use
    IRQ_PER_CPU:

    cris, ia64, ppc, ppc64 and parisc.

    Signed-off-by: Karsten Wiese
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Karsten Wiese
     
  • The size of the auxiliary vector is fixed at 42 in linux/sched.h, but
    that isn't very obvious when looking at linux/elf.h. This patch adds
    AT_VECTOR_SIZE so that we can change it if necessary when a new vector is
    added.

    Because of include file ordering problems, doing this necessitated the
    extraction of the AT_* symbols into a standalone header file.
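
    The shape of the change is simply the following (a sketch; the standalone
    header ended up as linux/auxvec.h if memory serves, and the constant's
    exact derivation may differ):

        /* linux/auxvec.h */
        #define AT_NULL   0     /* end of vector */
        #define AT_PHDR   3     /* program headers for program */
        /* ... the rest of the AT_* entries ... */
        #define AT_VECTOR_SIZE  42      /* sized to the entries above */

        /* linux/sched.h */
        unsigned long saved_auxv[AT_VECTOR_SIZE];       /* was a bare 42 */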

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    H. J. Lu
     
  • When I first wrote the compat layer patches, I was somewhat cavalier about
    the definition of compat_uid_t and compat_gid_t (or maybe I just
    misunderstood :-)). This patch makes the compat types much more consistent
    with the types we are being compatible with and hopefully will fix a few
    bugs along the way.

    compat type            type in compat arch
    __compat_[ug]id_t      __kernel_[ug]id_t
    __compat_[ug]id32_t    __kernel_[ug]id32_t
    compat_[ug]id_t        [ug]id_t

    The difference is that compat_uid_t is always 32 bits (for the archs we
    care about) but __compat_uid_t may be 16 bits on some.
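
    As a concrete instance of the table, on a 64-bit arch whose compat arch
    is i386 the typedefs come out roughly like this (the widths are the i386
    ones; the exact spelling is a sketch):

        typedef __u16 __compat_uid_t;    /* i386 __kernel_uid_t is 16-bit   */
        typedef __u16 __compat_gid_t;
        typedef __u32 __compat_uid32_t;  /* i386 __kernel_uid32_t is 32-bit */
        typedef __u32 __compat_gid32_t;
        typedef __u32 compat_uid_t;      /* i386 uid_t itself is 32-bit     */
        typedef __u32 compat_gid_t;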

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     
  • At the moment pthread_cond_signal is unnecessarily slow, because it wakes one waiter
    (which at least on UP usually means an immediate context switch to one of
    the waiter threads). This waiter wakes up and after a few instructions it
    attempts to acquire the cv internal lock, but that lock is still held by
    the thread calling pthread_cond_signal. So it goes to sleep and eventually
    the signalling thread is scheduled in, unlocks the internal lock and wakes
    the waiter again.

    Now, before 2003-09-21 NPTL was using FUTEX_REQUEUE in pthread_cond_signal
    to avoid this performance issue, but it was removed when locks were
    redesigned to the 3 state scheme (unlocked, locked uncontended, locked
    contended).

    The following scenario shows why simply using FUTEX_REQUEUE in
    pthread_cond_signal together with using lll_mutex_unlock_force in place of
    lll_mutex_unlock is not enough, and probably why it was disabled at the
    time:

    The number on the left is the value in cv->__data.__lock; thr1, thr2 and
    thr3 mark which thread performs each step:
    0   thr1:  pthread_cond_wait
    1   thr1:    lll_mutex_lock (cv->__data.__lock)
    0   thr1:    lll_mutex_unlock (cv->__data.__lock)
    0   thr1:    lll_futex_wait (&cv->__data.__futex, futexval)
    0   thr2:  pthread_cond_signal
    1   thr2:    lll_mutex_lock (cv->__data.__lock)
    1   thr3:  pthread_cond_signal
    2   thr3:    lll_mutex_lock (cv->__data.__lock)
    2   thr3:      lll_futex_wait (&cv->__data.__lock, 2)
    2   thr2:    lll_futex_requeue (&cv->__data.__futex, 0, 1, &cv->__data.__lock)
                 # FUTEX_REQUEUE, not FUTEX_CMP_REQUEUE
    2   thr2:    lll_mutex_unlock_force (cv->__data.__lock)
    0   thr2:      cv->__data.__lock = 0
    0   thr2:      lll_futex_wake (&cv->__data.__lock, 1)
    1   thr1:    lll_mutex_lock (cv->__data.__lock)
    0   thr1:    lll_mutex_unlock (cv->__data.__lock)
                 # Here, lll_mutex_unlock doesn't know there are threads
                 # waiting on the internal cv's lock

    Now, I believe it is possible to use FUTEX_REQUEUE in pthread_cond_signal,
    but it will cost us not one but two extra syscalls and, what's worse, one
    of these extra syscalls will be done for every single waiting loop in
    pthread_cond_*wait.

    We would need to use lll_mutex_unlock_force in pthread_cond_signal after
    requeue and lll_mutex_cond_lock in pthread_cond_*wait after lll_futex_wait.

    Another alternative is to do the unlocking pthread_cond_signal needs to do
    (the lock can't be unlocked before lll_futex_wake, as that is racy) in the
    kernel.

    I have implemented both variants, futex-requeue-glibc.patch is the first
    one and futex-wake_op{,-glibc}.patch is the unlocking inside of the kernel.
    The kernel interface allows userland to specify exactly what the unlocking
    operation should look like (some atomic arithmetic operation with an
    optional constant argument, plus a comparison of the previous futex value
    against another constant).
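
    For reference, the encoding packs that operation into a single futex
    argument; the sketch below is from memory of the FUTEX_WAKE_OP interface,
    so treat the exact macro spellings as assumptions:

        /* The kernel atomically does: oldval = *uaddr2; *uaddr2 = op(oldval, oparg);
         * wakes up to nr_wake waiters on uaddr1, and additionally wakes up to
         * nr_wake2 waiters on uaddr2 if cmp(oldval, cmparg) is true. */
        #define FUTEX_OP(op, oparg, cmp, cmparg) \
                ((((op) & 0xf) << 28) | (((cmp) & 0xf) << 24) | \
                 (((oparg) & 0xfff) << 12) | ((cmparg) & 0xfff))

        /* pthread_cond_signal's case: set cv->__data.__lock to 0 and wake a
         * waiter on it only if the old value showed contention (> 1). */
        #define FUTEX_OP_CLEAR_WAKE_IF_GT_ONE \
                FUTEX_OP(FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1)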

    It has been implemented just for ppc*, x86_64 and i?86; for other
    architectures I'm including just a stub header which can be used as a
    starting point by maintainers to write support for their arches, and which
    for now will just return -ENOSYS for FUTEX_WAKE_OP. The requeue patch has
    been (lightly) tested just on x86_64; the wake_op patch on a ppc64 kernel
    running 32-bit and 64-bit NPTL and an x86_64 kernel running 32-bit and
    64-bit NPTL.

    With the following benchmark on UP x86-64 I get:

    for i in nptl-orig nptl-requeue nptl-wake_op; do echo time elf/ld.so --library-path .:$i /tmp/bench; \
    for j in 1 2; do ( time elf/ld.so --library-path .:$i /tmp/bench ) 2>&1; done; done
    time elf/ld.so --library-path .:nptl-orig /tmp/bench
    real 0m0.655s user 0m0.253s sys 0m0.403s
    real 0m0.657s user 0m0.269s sys 0m0.388s
    time elf/ld.so --library-path .:nptl-requeue /tmp/bench
    real 0m0.496s user 0m0.225s sys 0m0.271s
    real 0m0.531s user 0m0.242s sys 0m0.288s
    time elf/ld.so --library-path .:nptl-wake_op /tmp/bench
    real 0m0.380s user 0m0.176s sys 0m0.204s
    real 0m0.382s user 0m0.175s sys 0m0.207s

    The benchmark is at:
    http://sourceware.org/ml/libc-alpha/2005-03/txt00001.txt
    Older futex-requeue-glibc.patch version is at:
    http://sourceware.org/ml/libc-alpha/2005-03/txt00002.txt
    Older futex-wake_op-glibc.patch version is at:
    http://sourceware.org/ml/libc-alpha/2005-03/txt00003.txt
    Will post a new version (just x86-64 fixes so that the patch
    applies against pthread_cond_signal.S) to libc-hacker ml soon.

    Attached is the kernel FUTEX_WAKE_OP patch as well as a simple-minded
    testcase that will not test the atomicity of the operation, but at least
    check if the threads that should have been woken up are woken up and
    whether the arithmetic operation in the kernel gave the expected results.

    Acked-by: Ingo Molnar
    Cc: Ulrich Drepper
    Cc: Jamie Lokier
    Cc: Rusty Russell
    Signed-off-by: Yoichi Yuasa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jakub Jelinek
     

06 Sep, 2005

11 commits


05 Sep, 2005

4 commits

  • We need to indicate to the hypervisor that it needs to save our VMX
    registers when switching partitions on a shared-processor system, just as
    it needs to for FP and PMC registers.

    This could be made on-demand when VMX is used, but we don't do that for FP
    or PMC right now either, so let's not overcomplicate things.

    Signed-off-by: Olof Johansson
    Acked-by: Paul Mackerras
    Cc: Anton Blanchard
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Olof Johansson
     
  • This is used only in slab.c and each architecture gets to define which
    underlying type is to be used.

    Seems a bit silly - move it to slab.c and use the same type for all
    architectures: unsigned int.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kyle Moffett
     
  • Someone mentioned that almost all the architectures used basically the same
    implementation of get_order. This patch consolidates them into
    asm-generic/page.h and includes that in the appropriate places. The
    exceptions are ia64 and ppc which have their own (presumably optimised)
    versions.
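
    For reference, the implementation most of those architectures shared was
    essentially the classic loop below (reproduced from memory, so take it as
    a sketch of what moved into asm-generic/page.h rather than the literal
    file):

        /* Pure 2^n version of get_order: number of pages, rounded up to a
         * power of two, needed to hold 'size' bytes. */
        static __inline__ int get_order(unsigned long size)
        {
                int order;

                size = (size - 1) >> (PAGE_SHIFT - 1);
                order = -1;
                do {
                        size >>= 1;
                        order++;
                } while (size);
                return order;
        }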

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     
  • A new option for SPARSEMEM is ARCH_SPARSEMEM_EXTREME. Architecture
    platforms with a very sparse physical address space would likely want to
    select this option. For those architecture platforms that don't select the
    option, the code generated is equivalent to SPARSEMEM currently in -mm.
    I'll be posting a patch on ia64 ml which uses this new SPARSEMEM feature.

    ARCH_SPARSEMEM_EXTREME makes mem_section a one dimensional array of
    pointers to mem_sections. This two level layout scheme is able to achieve
    smaller memory requirements for SPARSEMEM with the tradeoff of an
    additional shift and load when fetching the memory section. The current
    SPARSEMEM -mm implementation is a one dimensional array of mem_sections
    which is the default SPARSEMEM configuration. The patch attempts to
    isolate the implementation details of the physical layout of the sparsemem
    section array.

    ARCH_SPARSEMEM_EXTREME depends on 64BIT and is by default boolean false.
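
    Structurally the two configurations look roughly like this (the names and
    array bounds are paraphrased from the description above, not the literal
    patch):

        #ifdef CONFIG_ARCH_SPARSEMEM_EXTREME
        /* one-dimensional array of pointers to blocks of mem_sections:
         * small static footprint, one extra shift and load per lookup */
        extern struct mem_section *mem_section[NR_SECTION_ROOTS];

        static inline struct mem_section *__nr_to_section(unsigned long nr)
        {
                return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
        }
        #else
        /* default SPARSEMEM: one flat, statically sized array */
        extern struct mem_section mem_section[NR_MEM_SECTIONS];

        static inline struct mem_section *__nr_to_section(unsigned long nr)
        {
                return &mem_section[nr];
        }
        #endif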

    I've boot tested ia64 configured for ARCH_SPARSEMEM_EXTREME under aim
    load. I've also boot tested a 4-way Opteron machine with
    !ARCH_SPARSEMEM_EXTREME and tested with aim.

    Signed-off-by: Andy Whitcroft
    Signed-off-by: Bob Picco
    Signed-off-by: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bob Picco
     

30 Aug, 2005

12 commits


29 Aug, 2005

2 commits

  • Paulus, I think this is now a reasonable candidate for the post-2.6.13
    queue.

    Relax address restrictions for hugepages on ppc64

    Presently, 64-bit applications on ppc64 may only use hugepages in the
    address region from 1-1.5T. Furthermore, if hugepages are enabled in
    the kernel config, they may only use hugepages and never normal pages
    in this area. This patch relaxes this restriction, allowing any
    address to be used with hugepages, but with a 1TB granularity. That
    is if you map a hugepage anywhere in the region 1TB-2TB, that entire
    area will be reserved exclusively for hugepages for the remainder of
    the process's lifetime. This works analogously to hugepages in 32-bit
    applications, where hugepages can be mapped anywhere, but with 256MB
    (mmu segment) granularity.

    This patch applies on top of the four level pagetable patch
    (http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).

    Signed-off-by: David Gibson
    Signed-off-by: Paul Mackerras

    David Gibson
     
  • This patch moves power4_enable_pmcs() to arch/ppc64/kernel/pmc.c.

    I've tested it on P5 LPAR and P4. It does what it used to.

    Signed-off-by: Michael Ellerman
    Signed-off-by: Paul Mackerras

    Michael Ellerman