02 Jul, 2007

1 commit

  • The recent change to cell_defconfig to enable cpufreq on Cell exposed
    the fact that the cbe_cpufreq driver currently needs the PMI interface
    code to compile, but Kconfig doesn't make sure that the PMI interface
    code gets built if cbe_cpufreq is enabled.

    In fact cbe_cpufreq can work without PMI, so this ifdefs out the code
    that deals with PMI. This is a minimal solution for 2.6.22; a more
    comprehensive solution will be merged for 2.6.23.

    Signed-off-by: Christian Krafft
    Signed-off-by: Paul Mackerras

    Christian Krafft
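
    A minimal sketch of the ifdef pattern described above, using a hypothetical
    CONFIG_CBE_CPUFREQ_PMI guard (the real Kconfig symbol may differ): when the
    PMI interface code is not built, the PMI hook compiles down to a no-op and
    the driver keeps working without it.

      /* Illustrative only; symbol and function names are made up. */
      #include <stdio.h>

      #ifdef CONFIG_CBE_CPUFREQ_PMI
      static void cbe_cpufreq_handle_pmi(int slow_mode)
      {
          /* would forward the throttle request to the PMI firmware */
          printf("PMI: limiting to slow mode %d\n", slow_mode);
      }
      #else
      static void cbe_cpufreq_handle_pmi(int slow_mode)
      {
          /* PMI support not built in: nothing to do */
          (void)slow_mode;
      }
      #endif

      int main(void)
      {
          cbe_cpufreq_handle_pmi(1);    /* safe to call either way */
          return 0;
      }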
     

07 Jun, 2007

9 commits

  • The error path in spufs_fill_dir() is broken. If d_alloc_name() or
    spufs_new_file() fails, spufs_prune_dir() gets called. At this point
    dir->inode is not yet set, and a NULL pointer is dereferenced by mutex_lock().
    This bugfix replaces spufs_prune_dir() with a shorter version that does
    not touch dir->inode but simply removes all children.

    Signed-off-by: Sebastian Siewior
    Signed-off-by: Jeremy Kerr
    Acked-by: Arnd Bergmann
    Signed-off-by: Paul Mackerras

    Sebastian Siewior
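
    A userspace sketch of the fix, with made-up types: the old cleanup locked
    dir->inode before it had been assigned, while the replacement simply walks
    and frees the children that were already created, never touching dir->inode.

      /* Sketch only: clean up children on the error path without
       * dereferencing dir->inode, which may still be NULL here. */
      #include <stdlib.h>

      struct child { struct child *next; };
      struct dir   { void *inode; struct child *children; };

      static void prune_children(struct dir *dir)
      {
          struct child *c = dir->children;

          while (c) {                      /* drop whatever was created so far */
              struct child *next = c->next;
              free(c);
              c = next;
          }
          dir->children = NULL;
      }

      int main(void)
      {
          struct dir d = { .inode = NULL, .children = NULL };

          prune_children(&d);              /* safe even though d.inode == NULL */
          return 0;
      }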
     
  • Nosched contexts should never be scheduled out, thus we must never
    deactivate them in spu_yield.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jeremy Kerr
    Signed-off-by: Paul Mackerras

    Christoph Hellwig
     
  • ... and get rid of the cpufreq_set_policy call that caused a build
    failure due to interfering commits.

    Signed-off-by: Thomas Renninger
    Signed-off-by: Christian Krafft
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Paul Mackerras

    Thomas Renninger
     
  • Fix the race between checking for contexts on the runqueue and actually
    waking them in spu_deactivate and spu_yield.

    The guts of spu_reschedule are split into a new helper called
    grab_runnable_context which checks whether there is a runnable thread below
    a specified priority and, if so, removes it from the runqueue and uses
    it. This function is used by the new __spu_deactivate helper shared
    by preemption and spu_yield to grab a new context before deactivating
    the old one.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Jeremy Kerr
    Signed-off-by: Paul Mackerras

    Christoph Hellwig
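
    A simplified, lock-free userspace sketch of the helper described above
    (names follow the commit text, the data structure is assumed): it scans a
    priority-indexed runqueue for a context with a better priority than the one
    given and dequeues it before handing it back.

      /* Minimal sketch, not the spufs scheduler: pick and remove a runnable
       * context below a given priority in one step. */
      #include <stddef.h>

      #define MAX_PRIO 140

      struct spu_ctx { int prio; struct spu_ctx *next; };

      static struct spu_ctx *runq[MAX_PRIO];    /* one list per priority */

      static struct spu_ctx *grab_runnable_context(int max_prio)
      {
          int p;

          for (p = 0; p < max_prio; p++) {
              struct spu_ctx *ctx = runq[p];

              if (ctx) {
                  runq[p] = ctx->next;    /* dequeue it under the same check */
                  return ctx;             /* caller will run this one */
              }
          }
          return NULL;                    /* nothing better to run */
      }

      int main(void)
      {
          struct spu_ctx waiting = { .prio = 10, .next = NULL };

          runq[waiting.prio] = &waiting;
          return grab_runnable_context(120) == &waiting ? 0 : 1;
      }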
     
  • Make sure the mapping_lock also protects access to the various address_space
    pointers used for tearing down the ptes on a spu context switch.

    Because unmap_mapping_range can sleep, we need to turn mapping_lock from
    a spinlock into a sleeping mutex.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Jeremy Kerr
    Signed-off-by: Paul Mackerras

    Christoph Hellwig
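
    A userspace analog of the locking change, with invented field names: the
    work done under mapping_lock can block, which is only legal under a
    sleeping lock, so a mutex (a pthread mutex here, a kernel mutex in the
    patch) replaces the spinlock around the address_space pointers.

      /* Sketch only: a blocking call inside the critical section requires
       * a sleeping lock rather than a spinlock. */
      #include <pthread.h>
      #include <unistd.h>

      struct ctx {
          pthread_mutex_t mapping_lock;    /* was a spinlock */
          void *mem_mapping;               /* stand-in for an address_space ptr */
      };

      static void teardown_mappings(struct ctx *c)
      {
          pthread_mutex_lock(&c->mapping_lock);
          usleep(1000);                    /* stand-in for unmap_mapping_range(),
                                              which may sleep */
          c->mem_mapping = NULL;
          pthread_mutex_unlock(&c->mapping_lock);
      }

      int main(void)
      {
          struct ctx c = { PTHREAD_MUTEX_INITIALIZER, (void *)0 };

          teardown_mappings(&c);
          return 0;
      }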
     
  • In case spufs_fill_dir() fails, only put_spu_context() gets called for
    cleanup and the acquired mm_struct never gets freed.

    Signed-off-by: Sebastian Siewior
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Jeremy Kerr
    Signed-off-by: Paul Mackerras

    Sebastian Siewior
     
  • Previously, closing a SPE gang that still has contexts would trigger
    a WARN_ON, and leak the allocated gang.

    This change fixes the problem by using the gang's reference counts to
    destroy the gang instead. The gangs will persist until their last
    reference (be it context or open file handle) is gone.

    Also, avoid using statements with side-effects in a WARN_ON().

    Signed-off-by: Jeremy Kerr
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Paul Mackerras

    Jeremy Kerr
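
    A userspace sketch of both points, with hypothetical names: the gang is
    torn down only when its reference count (contexts plus open file handles)
    drops to zero, and the value tested by the warning is computed outside the
    assertion so that the check itself has no side effects.

      /* Illustrative only: refcount-driven teardown, and keeping side
       * effects out of WARN_ON()/assert()-style checks. */
      #include <assert.h>
      #include <stdlib.h>

      struct gang {
          int refcount;                    /* contexts + open file handles */
      };

      static void put_gang(struct gang *g)
      {
          int remaining = --g->refcount;   /* side effect happens here...   */

          assert(remaining >= 0);          /* ...never inside the assertion */
          if (remaining == 0)
              free(g);                     /* last reference is gone */
      }

      int main(void)
      {
          struct gang *g = calloc(1, sizeof(*g));

          g->refcount = 2;                 /* e.g. one context + one open fd */
          put_gang(g);                     /* context destroyed */
          put_gang(g);                     /* file closed: gang finally freed */
          return 0;
      }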
     
  • Currently spufs_mem_release, the release method for the mem file, isn't
    hooked up, leading to leaks every time the file is used.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jeremy Kerr
    Signed-off-by: Paul Mackerras

    Christoph Hellwig
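
    A userspace analog of the bug, with made-up structures: if the release
    slot in the operations table is left empty, the per-open state allocated in
    open() is never freed, which is exactly the leak the patch plugs.

      /* Sketch: the operations table needs its .release hooked up,
       * otherwise every open leaks its private data. */
      #include <stdlib.h>

      struct file { void *private_data; };

      struct file_ops {
          int (*open)(struct file *);
          int (*release)(struct file *);
      };

      static int mem_open(struct file *f)
      {
          f->private_data = malloc(64);    /* per-open state */
          return f->private_data ? 0 : -1;
      }

      static int mem_release(struct file *f)
      {
          free(f->private_data);           /* undo what open() did */
          return 0;
      }

      static const struct file_ops mem_fops = {
          .open    = mem_open,
          .release = mem_release,          /* previously missing -> leak */
      };

      int main(void)
      {
          struct file f;

          mem_fops.open(&f);
          mem_fops.release(&f);
          return 0;
      }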
     
  • As noticed by David Woodhouse, it's currently possible to mount
    spufs on any machine, which means that it actually will get
    mounted by Fedora.
    This refuses to load the module on platforms that have no
    support for SPUs.

    Cc: David Woodhouse
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Jeremy Kerr
    Signed-off-by: Paul Mackerras

    Arnd Bergmann
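
    A hedged userspace sketch of the idea (the probe function is invented):
    the module's init path checks for SPU support up front and returns -ENODEV
    before registering anything, so the filesystem can never be mounted on a
    machine without SPEs.

      /* Illustrative analog, not the spufs code: refuse to initialise when
       * the platform lacks the required hardware support. */
      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>

      static bool platform_has_spus(void)
      {
          return false;                    /* pretend this is not a Cell box */
      }

      static int spufs_init(void)
      {
          if (!platform_has_spus())
              return -ENODEV;              /* bail out before registering */

          /* ... register the filesystem here ... */
          return 0;
      }

      int main(void)
      {
          if (spufs_init() == -ENODEV)
              fprintf(stderr, "spufs: no SPU support, not loading\n");
          return 0;
      }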
     

17 May, 2007

1 commit

  • SLAB_CTOR_CONSTRUCTOR is always specified. No point in checking it.

    Signed-off-by: Christoph Lameter
    Cc: David Howells
    Cc: Jens Axboe
    Cc: Steven French
    Cc: Michael Halcrow
    Cc: OGAWA Hirofumi
    Cc: Miklos Szeredi
    Cc: Steven Whitehouse
    Cc: Roman Zippel
    Cc: David Woodhouse
    Cc: Dave Kleikamp
    Cc: Trond Myklebust
    Cc: "J. Bruce Fields"
    Cc: Anton Altaparmakov
    Cc: Mark Fasheh
    Cc: Paul Mackerras
    Cc: Christoph Hellwig
    Cc: Jan Kara
    Cc: David Chinner
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

11 May, 2007

1 commit

  • This patch renames the raw hard_irq_{enable,disable} to
    __hard_irq_{enable,disable} and introduces a higher level hard_irq_disable()
    function that can be used by any code to enforce that IRQs are fully disabled,
    not only lazily disabled.

    The difference from the __ versions is that it also updates some per-processor
    fields so that the kernel keeps track and properly re-enables interrupts at
    the next local_irq_enable().

    This prepares powerpc for my next patch that introduces hard_irq_disable()
    generically.

    Signed-off-by: Benjamin Herrenschmidt
    Cc: Rusty Russell
    Cc: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benjamin Herrenschmidt
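
    A rough userspace model of the bookkeeping described above (the field
    names are invented for the sketch): the __ variant only touches the
    hardware state, while hard_irq_disable() also records the fact in the
    per-processor area so that the next local_irq_enable() knows it has to
    really turn interrupts back on.

      /* Sketch only, not the powerpc implementation. */
      #include <stdbool.h>

      struct paca {                        /* stand-in for the per-CPU area */
          bool soft_enabled;
          bool hard_enabled;
      };

      static struct paca paca = { true, true };

      static void __hard_irq_disable(void) { /* raw MSR:EE manipulation */ }
      static void __hard_irq_enable(void)  { /* raw MSR:EE manipulation */ }

      static void hard_irq_disable(void)
      {
          __hard_irq_disable();
          paca.soft_enabled = false;       /* counts as lazily disabled too */
          paca.hard_enabled = false;       /* remember the hard disable */
      }

      static void local_irq_enable(void)
      {
          paca.soft_enabled = true;
          if (!paca.hard_enabled) {        /* hard-disabled earlier? */
              paca.hard_enabled = true;
              __hard_irq_enable();         /* actually turn them back on */
          }
      }

      int main(void)
      {
          hard_irq_disable();
          local_irq_enable();
          return 0;
      }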
     

10 May, 2007

1 commit

  • * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
    [POWERPC] Further fixes for the removal of 4level-fixup hack from ppc32
    [POWERPC] EEH: log all PCI-X and PCI-E AER registers
    [POWERPC] EEH: capture and log pci state on error
    [POWERPC] EEH: Split up long error msg
    [POWERPC] EEH: log error only after driver notification.
    [POWERPC] fsl_soc: Make mac_addr const in fs_enet_of_init().
    [POWERPC] Don't use SLAB/SLUB for PTE pages
    [POWERPC] Spufs support for 64K LS mappings on 4K kernels
    [POWERPC] Add ability to 4K kernel to hash in 64K pages
    [POWERPC] Introduce address space "slices"
    [POWERPC] Small fixes & cleanups in segment page size demotion
    [POWERPC] iSeries: Make HVC_ISERIES the default
    [POWERPC] iSeries: suppress build warning in lparmap.c
    [POWERPC] Mark pages that don't exist as nosave
    [POWERPC] swsusp: Introduce register_nosave_region_late

    Linus Torvalds
     

09 May, 2007

5 commits

  • Signed-off-by: Michael Opdenacker
    Signed-off-by: Adrian Bunk

    Michael Opdenacker
     
  • This adds an option to spufs, when the kernel is configured for
    4K pages, to give it the ability to use 64K pages for SPE local store
    mappings.

    Currently, we are optimistic and try order 4 allocations when creating
    contexts. If that fails, the code will fallback to 4K automatically.

    Signed-off-by: Benjamin Herrenschmidt
    Signed-off-by: Paul Mackerras

    Benjamin Herrenschmidt
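
    A hedged userspace sketch of the optimistic allocation strategy: try one
    64K-sized (order 4 on a 4K base page) allocation first and silently fall
    back to a single 4K page when it fails.

      /* Sketch only: optimistic large allocation with automatic fallback. */
      #include <stdlib.h>

      #define BASE_PAGE_SIZE 4096
      #define LS_ORDER       4             /* 2^4 * 4K = 64K */

      static void *alloc_ls_page(int *order)
      {
          void *p = malloc(BASE_PAGE_SIZE << LS_ORDER);   /* try 64K first */

          if (p) {
              *order = LS_ORDER;
              return p;
          }
          *order = 0;                      /* fall back to a 4K page */
          return malloc(BASE_PAGE_SIZE);
      }

      int main(void)
      {
          int order = -1;
          void *ls = alloc_ls_page(&order);

          free(ls);                        /* free() handles NULL too */
          return (ls && order >= 0) ? 0 : 1;
      }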
     
  • The basic issue is to be able to do what hugetlbfs does but with
    different page sizes for some other special filesystems; more
    specifically, my need is:

    - Huge pages

    - SPE local store mappings using 64K pages on a 4K base page size
    kernel on Cell

    - Some special 4K segments in 64K-page kernels for mapping a dodgy
    type of powerpc-specific infiniband hardware that requires 4K MMU
    mappings for various reasons I won't explain here.

    The main issues are:

    - To maintain/keep track of the page size per "segment" (as we can
    only have one page size per segment on powerpc, which are 256MB
    divisions of the address space).

    - To make sure special mappings stay within their allotted
    "segments" (including MAP_FIXED crap)

    - To make sure everybody else doesn't mmap/brk/grow_stack into a
    "segment" that is used for a special mapping

    Some of the necessary mechanisms to handle that were present in the
    hugetlbfs code, but mostly in ways not suitable for anything else.

    The patch relies on some changes to the generic get_unmapped_area()
    that just got merged. It still hijacks hugetlb callbacks here or
    there as the generic code hasn't been entirely cleaned up yet but
    that shouldn't be a problem.

    So what is a slice? Well, I re-used the mechanism formerly used by our
    hugetlbfs implementation which divides the address space in
    "meta-segments" which I called "slices". The division is done using
    256MB slices below 4G, and 1T slices above. Thus the address space is
    divided currently into 16 "low" slices and 16 "high" slices. (Special
    case: high slice 0 is the area between 4G and 1T).

    Doing so significantly simplifies the tracking of segments and avoids
    having to keep track of all the 256MB segments in the address space.

    While I used the "concepts" of hugetlbfs, I mostly re-implemented
    everything in a more generic way and "ported" hugetlbfs to it.

    Slices can have an associated page size, which is encoded in the mmu
    context and used by the SLB miss handler to set the segment sizes. The
    hash code currently doesn't care; it has a specific check for hugepages,
    though I might add a mechanism to provide per-slice hash mapping
    functions in the future.

    The slice code provides a pair of "generic" get_unmapped_area() (bottom-up
    and top-down) functions that should work with any slice size. There is
    some trickiness here, so I would appreciate it if people could have a look
    at the implementation of these and let me know if I got something wrong.

    Signed-off-by: Benjamin Herrenschmidt
    Signed-off-by: Paul Mackerras

    Benjamin Herrenschmidt
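
    A small standalone illustration of the slice geometry described above: 16
    low slices of 256MB covering the first 4GB, and 1TB high slices above that
    (high slice 0 spanning 4GB-1TB), so the slice index falls out of a simple
    shift. The page-size-per-slice tracking in the mmu context is left out.

      /* Worked example of the slice division only. */
      #include <stdint.h>
      #include <stdio.h>

      #define SLICE_LOW_SHIFT   28                 /* 256MB low slices */
      #define SLICE_HIGH_SHIFT  40                 /* 1TB  high slices */
      #define SLICE_LOW_TOP     (1ULL << 32)       /* 4GB boundary     */

      static unsigned int addr_to_slice(uint64_t addr, int *is_high)
      {
          if (addr < SLICE_LOW_TOP) {
              *is_high = 0;
              return (unsigned int)(addr >> SLICE_LOW_SHIFT);    /* 0..15 */
          }
          *is_high = 1;
          return (unsigned int)(addr >> SLICE_HIGH_SHIFT);       /* 0 = 4GB-1TB */
      }

      int main(void)
      {
          int high;
          unsigned int s = addr_to_slice(0x123456789ULL, &high); /* just above 4GB */

          printf("slice %u (%s)\n", s, high ? "high" : "low");   /* slice 0 (high) */
          return 0;
      }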
     
  • * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (77 commits)
    [POWERPC] Abolish powerpc_flash_init()
    [POWERPC] Early serial debug support for PPC44x
    [POWERPC] Support for the Ebony 440GP reference board in arch/powerpc
    [POWERPC] Add device tree for Ebony
    [POWERPC] Add powerpc/platforms/44x, disable platforms/4xx for now
    [POWERPC] MPIC U3/U4 MSI backend
    [POWERPC] MPIC MSI allocator
    [POWERPC] Enable MSI mappings for MPIC
    [POWERPC] Tell Phyp we support MSI
    [POWERPC] RTAS MSI implementation
    [POWERPC] PowerPC MSI infrastructure
    [POWERPC] Rip out the existing powerpc msi stubs
    [POWERPC] Remove use of 4level-fixup.h for ppc32
    [POWERPC] Add powerpc PCI-E reset API implementation
    [POWERPC] Holly bootwrapper
    [POWERPC] Holly DTS
    [POWERPC] Holly defconfig
    [POWERPC] Add support for 750CL Holly board
    [POWERPC] Generalize tsi108 PCI setup
    [POWERPC] Generalize tsi108 PHY types
    ...

    Fixed conflict in include/asm-powerpc/kdebug.h manually

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • Remove includes of where it is not used/needed.
    Suggested by Al Viro.

    Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
    sparc64, and arm (all 59 defconfigs).

    Signed-off-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     

08 May, 2007

2 commits

  • Paul Mackerras
     
  • I have never seen a use of SLAB_DEBUG_INITIAL. It is only supported by
    SLAB.

    I think its purpose was to have a callback after an object has been freed
    to verify that the state is the constructor state again? The callback is
    performed before each freeing of an object.

    I would think that it is much easier to check the object state manually
    before the free. That also places the check near the code that actually
    manipulates the object.

    Also the SLAB_DEBUG_INITIAL callback is only performed if the kernel was
    compiled with SLAB debugging on. If there were code in a constructor
    handling SLAB_DEBUG_INITIAL, it would have to be conditional on
    SLAB_DEBUG; otherwise it would just be dead code. But there is no such code
    in the kernel. I think SLAB_DEBUG_INITIAL is too problematic to make real
    use of, difficult to understand, and there are easier ways to accomplish the
    same effect (i.e. add debug code before kfree).

    There is a related flag SLAB_CTOR_VERIFY that is frequently checked to be
    clear in fs inode caches. Remove the pointless checks (they would even be
    pointless without removal of SLAB_DEBUG_INITIAL) from the fs constructors.

    This is the last slab flag that SLUB did not support. Remove the check for
    unimplemented flags from SLUB.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
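
    A tiny sketch of the suggested alternative, with an invented field: check
    the object's state explicitly right before it is freed instead of relying
    on a constructor-time debug callback.

      /* Sketch: put the debug check next to the free. */
      #include <assert.h>
      #include <stdlib.h>

      struct obj {
          int in_use;                      /* constructor state: 0 */
      };

      static void obj_free(struct obj *o)
      {
          assert(o->in_use == 0);          /* object back in ctor state? */
          free(o);
      }

      int main(void)
      {
          struct obj *o = calloc(1, sizeof(*o));

          obj_free(o);
          return 0;
      }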
     

07 May, 2007

1 commit


30 Apr, 2007

3 commits


27 Apr, 2007

1 commit

  • check_legacy_ioport only makes sense on PReP, CHRP and pSeries.
    They may have an isa node with PS/2, parport, floppy and serial ports.

    Remove the check_legacy_ioport call from ppc_md; it's not needed
    anymore. Hardware capabilities come from the device-tree.

    Signed-off-by: Olaf Hering
    Signed-off-by: Paul Mackerras

    Olaf Hering
     

24 Apr, 2007

15 commits