12 Oct, 2011

1 commit

  • On FSL Book-E devices we support multiple large TLB sizes and so we can
    get into situations in which the initial 1G TLB size is too big and
    we're asked for a size that is not mappable by a single entry (like
    512M). The single entry is important because when we bring up secondary
    cores they need to ensure any data structure they need to access (e.g.
    the PACA or stack) is always mapped.

    So we really need to determine what size will actually be mapped by the
    first TLB entry to ensure we limit early memory references to that
    region. We refactor the map_mem_in_cams() code to provide a helper
    function that we can use to determine the size of the first TLB
    entry while taking size and alignment constraints into account.
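    The refactor described above amounts to a size computation. Below is a
    minimal sketch of such a helper (the name and the 4G cap are assumptions
    for illustration, not the kernel's exact API): FSL Book-E CAM entries map
    power-of-4 sizes and must be naturally aligned, so the first entry is
    limited by both the requested size and the alignment of the addresses.

```c
#include <stdint.h>

/* Sketch of a calc_cam_sz-style helper: the largest amount one FSL
 * Book-E TLB entry can map, given the requested size and the alignment
 * of the virtual/physical addresses. Entries map power-of-4 sizes
 * (4K, 16K, ...) and must be aligned to their own size.
 */
static uint64_t first_tlb_entry_size(uint64_t ram, uint64_t virt, uint64_t phys)
{
    uint64_t both = virt | phys;
    uint64_t align = both & (~both + 1);              /* lowest set bit */
    uint64_t max = align ? align : (uint64_t)1 << 32; /* address 0: cap at 4G */
    uint64_t sz = 4096;                               /* smallest entry: 4K */

    while ((sz << 2) <= ram && (sz << 2) <= max)
        sz <<= 2;                                     /* next power of 4 */
    return sz;
}
```

    For the 512M case from the commit message this returns 256M: 512M is not
    a power-of-4 entry size, so early memory references must be limited to
    the 256M the first entry actually maps.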

    Signed-off-by: Kumar Gala

    Kumar Gala
     

14 Oct, 2010

1 commit

  • Freescale parts typically have a TLB array for large mappings that we
    can bolt the linear mapping into. We reuse the code that already
    exists on PPC32 on the 64-bit side to set up the linear mapping to be
    covered by bolted TLB entries. We use a quarter of the variable-size
    TLB array for this purpose.

    Additionally, we limit the amount of memory to what we can cover via
    bolted entries so we don't get secondary faults in the TLB miss
    handlers. We should fix this limitation in the future.
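    The coverage limit described above reduces to simple arithmetic. The
    entry count and maximum entry size below are assumptions for
    illustration, not values taken from the kernel:

```c
#include <stdint.h>

/* Illustrative sketch: with a quarter of a 64-entry TLB1 array reserved
 * for bolted linear-mapping entries, each covering at most CAM_MAX_SIZE,
 * the linear mapping is clamped to what those entries can cover so the
 * TLB miss handlers never take secondary faults on lowmem.
 */
#define TLB1_ENTRIES 64             /* assumption: e500mc-style array */
#define CAM_MAX_SIZE (256u << 20)   /* assumption: 256M max per entry */

static uint64_t linear_map_limit(uint64_t ram)
{
    uint64_t cover = (uint64_t)(TLB1_ENTRIES / 4) * CAM_MAX_SIZE;
    return ram < cover ? ram : cover;
}
```

    With 16 of 64 entries bolted at up to 256M each, anything beyond 4G of
    RAM would simply go unused under this scheme, which is the limitation
    the commit says should be fixed later.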

    Signed-off-by: Kumar Gala

    Kumar Gala
     

17 May, 2010

1 commit


05 May, 2010

1 commit

  • This patch adds the base support for the 476 processor. The code was
    primarily written by Ben Herrenschmidt and Torez Smith, but I've been
    maintaining it for a while.

    The goal is to have a single binary that will run on 44x and 47x, but
    we still have some details to work out. The biggest is that the L1 cache
    line size differs on the two platforms, but it's currently a compile-time
    option.

    Signed-off-by: Benjamin Herrenschmidt
    Signed-off-by: Torez Smith
    Signed-off-by: Dave Kleikamp
    Signed-off-by: Josh Boyer

    Dave Kleikamp
     

17 Dec, 2009

1 commit

  • * 'next' of git://git.secretlab.ca/git/linux-2.6: (23 commits)
    powerpc: fix up for mmu_mapin_ram api change
    powerpc: wii: allow ioremap within the memory hole
    powerpc: allow ioremap within reserved memory regions
    wii: use both mem1 and mem2 as ram
    wii: bootwrapper: add fixup to calc useable mem2
    powerpc: gamecube/wii: early debugging using usbgecko
    powerpc: reserve fixmap entries for early debug
    powerpc: wii: default config
    powerpc: wii: platform support
    powerpc: wii: hollywood interrupt controller support
    powerpc: broadway processor support
    powerpc: wii: bootwrapper bits
    powerpc: wii: device tree
    powerpc: gamecube: default config
    powerpc: gamecube: platform support
    powerpc: gamecube/wii: flipper interrupt controller support
    powerpc: gamecube/wii: udbg support for usbgecko
    powerpc: gamecube/wii: do not include PCI support
    powerpc: gamecube/wii: declare as non-coherent platforms
    powerpc: gamecube/wii: introduce GAMECUBE_COMMON
    ...

    Fix up conflicts in arch/powerpc/mm/fsl_booke_mmu.c.

    Hopefully even close to correctly.

    Linus Torvalds
     

15 Dec, 2009

1 commit

  • Today's linux-next build (powerpc ppc44x_defconfig) failed like this:

    arch/powerpc/mm/pgtable_32.c: In function 'mapin_ram':
    arch/powerpc/mm/pgtable_32.c:318: error: too many arguments to function 'mmu_mapin_ram'

    Caused by commit de32400dd26e743c5d500aa42d8d6818b79edb73 ("wii: use both
    mem1 and mem2 as ram").

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Grant Likely

    Stephen Rothwell
     

13 Dec, 2009

2 commits

  • Add a flag to let a platform ioremap memory regions marked as reserved.

    This flag will be used later by the Nintendo Wii support code to allow
    ioremapping the I/O region sitting between MEM1 and MEM2 and marked
    as reserved RAM in the patch "wii: use both mem1 and mem2 as ram".

    This will no longer be needed when proper discontig memory support
    for 32-bit PowerPC is added to the kernel.

    Signed-off-by: Albert Herranz
    Acked-by: Benjamin Herrenschmidt
    Signed-off-by: Grant Likely

    Albert Herranz
     
  • The Nintendo Wii video game console has two discontiguous RAM regions:
    - MEM1: 24MB @ 0x00000000
    - MEM2: 64MB @ 0x10000000

    Unfortunately, the kernel currently does not support discontiguous RAM
    memory regions on 32-bit PowerPC platforms.

    This patch adds a series of workarounds to allow the use of the second
    memory region (MEM2) as RAM by the kernel.
    Basically, a single range of memory from the beginning of MEM1 to the
    end of MEM2 is reported to the kernel, and a memory reservation is
    created for the hole between MEM1 and MEM2.

    With this patch the system is able to use all the available RAM and not
    just ~27% of it.

    This will no longer be needed when proper discontig memory support
    for 32-bit PowerPC is added to the kernel.
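    The layout described above, reduced to arithmetic (the bases and sizes
    are the ones quoted in this commit message):

```c
#include <stdint.h>

/* One range from the start of MEM1 to the end of MEM2 is reported to
 * the kernel, and the hole between the two regions is reserved.
 */
#define MEM1_BASE 0x00000000u
#define MEM1_SIZE (24u << 20)   /* 24MB */
#define MEM2_BASE 0x10000000u
#define MEM2_SIZE (64u << 20)   /* 64MB */

static uint32_t reported_ram(void) { return MEM2_BASE + MEM2_SIZE; }
static uint32_t hole_start(void)   { return MEM1_BASE + MEM1_SIZE; }
static uint32_t hole_size(void)    { return MEM2_BASE - hole_start(); }
```

    Without the workaround only MEM1's 24MB of the 88MB total is usable,
    i.e. 24/88, which is the ~27% figure quoted above.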

    Signed-off-by: Albert Herranz
    Acked-by: Benjamin Herrenschmidt
    Signed-off-by: Grant Likely

    Albert Herranz
     

21 Nov, 2009

1 commit

  • Re-write the code so it's more standalone and fix some issues:
    * Bump the number of CAM entries to 64 to support e500mc
    * Make the code handle MAS7 properly
    * Use pr_cont instead of building up a string as we go

    Signed-off-by: Kumar Gala

    Kumar Gala
     

20 Aug, 2009

3 commits


13 Jan, 2009

1 commit


08 Jan, 2009

3 commits

  • This is a brown-paper-bag fix for one of my earlier patches that
    breaks the build on 40x and 8xx.

    And yes, I've now added 40x and 8xx to my list of test configs :-)

    Signed-off-by: Benjamin Herrenschmidt

    Benjamin Herrenschmidt
     
  • This is a global variable defined in fsl_booke_mmu.c with a value that gets
    initialized in assembly code in head_fsl_booke.S.

    It's never used.

    If some code ever does want to know the number of entries in TLB1, then
    "numcams = mfspr(SPRN_TLB1CFG) & 0xfff", is a whole lot simpler than a
    global initialized during kernel boot from assembly.

    Signed-off-by: Trent Piepho
    Signed-off-by: Kumar Gala

    Trent Piepho
     
  • Some assembly code in head_fsl_booke.S hard-coded the size of struct tlbcam
    to 20 when it indexed the TLBCAM table. Anyone changing the size of struct
    tlbcam would not know to expect that.

    The kernel already has a system to get the size of C structures into
    assembly language files, asm-offsets, so let's use it.

    The definition of the struct gets moved to a header, so that asm-offsets.c
    can include it.
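    As an illustration of why asm-offsets beats a hard-coded 20, here is a
    simplified, runnable stand-in. The field list is an assumption for
    illustration; the real asm-offsets.c would emit the constant into
    assembly with something like DEFINE(TLBCAM_SIZE, sizeof(struct tlbcam)):

```c
#include <stddef.h>

/* A struct mirroring the MAS fields a TLBCAM entry carries (field list
 * assumed for illustration). Deriving the size from the struct means
 * head_fsl_booke.S can index the table without a magic number that
 * silently breaks when someone adds a field.
 */
struct tlbcam {
    unsigned long MAS0;
    unsigned long MAS1;
    unsigned long MAS2;
    unsigned long MAS3;
    unsigned long MAS7;
};

/* On 32-bit, where unsigned long is 4 bytes, this is the hard-coded 20. */
static size_t tlbcam_entry_size(void) { return sizeof(struct tlbcam); }
```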

    Signed-off-by: Trent Piepho
    Signed-off-by: Kumar Gala

    Trent Piepho
     

21 Dec, 2008

1 commit

  • Currently, the various forms of low level TLB invalidations are all
    implemented in misc_32.S for 32-bit processors, in a fairly scary
    mess of #ifdef's and with interesting duplication such as a whole
    bunch of code for FSL _tlbie and _tlbia which are no longer used.

    This moves things around such that _tlbie is now defined in
    hash_low_32.S and is only used by the 32-bit hash code, and all
    nohash CPUs use the various _tlbil_* forms that are now moved to
    a new file, tlb_nohash_low.S.

    I moved all the definitions for that stuff out of
    include/asm/tlbflush.h as they are really internal mm stuff, into
    mm/mmu_decl.h

    The code should have no functional changes. I kept some variants
    inline for trivial forms on things like 40x and 8xx.

    Signed-off-by: Benjamin Herrenschmidt
    Acked-by: Kumar Gala
    Signed-off-by: Paul Mackerras

    Benjamin Herrenschmidt
     

16 Dec, 2008

1 commit

  • The function flush_HPTE() is used in only one place, the implementation
    of DEBUG_PAGEALLOC on ppc32.

    It's actually a dup of flush_tlb_page() though it's -slightly- more
    efficient on hash based processors. We remove it and replace it by
    a direct call to the hash flush code on those processors and to
    flush_tlb_page() for everybody else.

    Signed-off-by: Benjamin Herrenschmidt
    Signed-off-by: Paul Mackerras

    Benjamin Herrenschmidt
     

10 Jul, 2008

1 commit


30 Jun, 2008

1 commit

  • Currently, the physical address is an unsigned long, but it should
    be phys_addr_t in set_bat, [v/p]_mapped_by_bat. Also, create a
    macro that can convert a large physical address into the correct
    format for programming the BAT registers.
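    A sketch of the kind of conversion macro the commit describes. The exact
    bit placement below is an assumption for illustration, not a
    datasheet-verified BAT layout; the point is that a 36-bit phys_addr_t no
    longer fits the 32-bit register fields directly, so the high bits must be
    folded into otherwise-unused positions:

```c
#include <stdint.h>

/* Illustrative packing of a 36-bit physical address into a 32-bit BAT
 * field: the low block number goes in its usual place, and the four
 * high physical bits are folded into spare low-order positions.
 */
typedef uint64_t phys_addr_t;

static uint32_t bat_phys_addr(phys_addr_t x)
{
    return (uint32_t)((x & 0x00000000fffe0000ULL)          /* block number  */
                    | ((x & 0x0000000e00000000ULL) >> 24)  /* phys bits 35..33 */
                    | ((x & 0x0000000100000000ULL) >> 30)); /* phys bit 32   */
}
```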

    Signed-off-by: Becky Bruce
    Signed-off-by: Paul Mackerras

    Becky Bruce
     

17 Apr, 2008

3 commits

  • We always use __initial_memory_limit as an address so rename it
    to be clear.

    Signed-off-by: Kumar Gala
    Signed-off-by: Paul Mackerras

    Kumar Gala
     
  • total_lowmem represents the amount of low memory, not the physical
    address that low memory ends at. If the start of memory is at 0 it
    happens that total_lowmem can be used as both the size and the address
    that lowmem ends at (or more specifically one byte beyond the end).

    To make the code a bit more clear and deal with the case when the start of
    memory isn't at physical 0, we introduce lowmem_end_addr that represents
    one byte beyond the last physical address in the lowmem region.
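    The distinction the commit draws, as one line of arithmetic (the names
    follow the commit message):

```c
#include <stdint.h>

/* total_lowmem is a size, not an end address. Only when memory starts at
 * physical 0 do the two coincide; lowmem_end_addr makes the end explicit.
 */
typedef uint64_t phys_addr_t;

static phys_addr_t lowmem_end(phys_addr_t memstart_addr, phys_addr_t total_lowmem)
{
    /* one byte beyond the last physical address in the lowmem region */
    return memstart_addr + total_lowmem;
}
```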

    Signed-off-by: Kumar Gala
    Signed-off-by: Paul Mackerras

    Kumar Gala
     
  • A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always
    use 0 as we don't support booting these kernels at non-zero physical
    addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).

    For the sub-arches that support relocatable interrupt vectors
    (book-e), it's reasonable to have memory start at a non-zero physical
    address. For those cases use the variable memstart_addr instead of
    the #define PPC_MEMSTART since the only uses of PPC_MEMSTART are for
    initialization and in the future we can set memstart_addr at runtime
    to have a relocatable kernel.

    Signed-off-by: Kumar Gala
    Signed-off-by: Paul Mackerras

    Kumar Gala
     

20 Nov, 2007

1 commit


01 Nov, 2007

1 commit

  • On 4xx CPUs, the current implementation of flush_tlb_page() uses
    a low level _tlbie() assembly function that only works for the
    current PID. Thus, invalidations caused by, for example, a COW
    fault triggered by get_user_pages() from a different context will
    not work properly, causing among other things, gdb breakpoints
    to fail.

    This patch adds a "pid" argument to _tlbie() on 4xx processors,
    and uses it to flush entries in the right context. FSL BookE
    also gets the argument but it seems they don't need it (their
    tlbivax form ignores the PID when invalidating according to the
    document I have).
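    A toy software-TLB model of the bug (purely illustrative, not the
    kernel's code): flushing "for the current PID" misses an entry installed
    under another context, while the pid-aware form hits it.

```c
#include <stdbool.h>
#include <stdint.h>

#define NENT 8
struct tlbent { uint32_t va; uint32_t pid; bool valid; };
static struct tlbent tlb[NENT];
static uint32_t current_pid = 1;

static void tlb_insert(int i, uint32_t va, uint32_t pid)
{
    tlb[i] = (struct tlbent){ .va = va, .pid = pid, .valid = true };
}

/* old behaviour: _tlbie(va) implicitly flushes only the current PID */
static void tlbie_old(uint32_t va)
{
    for (int i = 0; i < NENT; i++)
        if (tlb[i].valid && tlb[i].va == va && tlb[i].pid == current_pid)
            tlb[i].valid = false;
}

/* fixed behaviour: _tlbie(va, pid) flushes the mapping's own context */
static void tlbie_new(uint32_t va, uint32_t pid)
{
    for (int i = 0; i < NENT; i++)
        if (tlb[i].valid && tlb[i].va == va && tlb[i].pid == pid)
            tlb[i].valid = false;
}
```

    A COW fault triggered from another context (the gdb breakpoint case
    above) installs or invalidates entries under a PID that is not the
    current one, which is exactly where the old form falls down.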

    Signed-off-by: Benjamin Herrenschmidt
    Acked-by: Kumar Gala
    Signed-off-by: Josh Boyer

    Benjamin Herrenschmidt
     

14 Jun, 2007

3 commits

  • Using typedefs to rename structure types is frowned on by CodingStyle.
    However, we do so for the hash PTE structure on both ppc32 (where it's
    called "PTE") and ppc64 (where it's called "hpte_t"). On ppc32 we
    also have such a typedef for the BATs ("BAT").

    This removes this unhelpful use of typedefs, in the process
    bringing ppc32 and ppc64 closer together, by using the name "struct
    hash_pte" in both cases.

    Signed-off-by: David Gibson
    Signed-off-by: Paul Mackerras

    David Gibson
     
  • APUS (the Amiga Power-Up System) is not supported under arch/powerpc
    and it's unlikely it ever will be. Therefore, this patch removes the
    fragments of APUS support code from arch/powerpc which have been
    copied from arch/ppc.

    A few APUS references are left in asm-powerpc in .h files which are
    still used from arch/ppc.

    Signed-off-by: David Gibson
    Signed-off-by: Paul Mackerras

    David Gibson
     
  • This rewrites pretty much from scratch the handling of MMIO and PIO
    space allocations on powerpc64. The main goals are:

    - Get rid of imalloc and use more common code where possible
    - Simplify the current mess so that PIO space is allocated and
    mapped in a single place for PCI bridges
    - Handle allocation constraints of PIO for all bridges including
    hot plugged ones within the 2GB space reserved for IO ports,
    so that devices on hotplugged busses will now work with drivers
    that assume IO ports fit in an int.
    - Cleanup and separate tracking of the ISA space in the reserved
    low 64K of IO space. No ISA -> Nothing mapped there.

    I booted a cell blade with IDE on PIO and MMIO and a dual G5 so
    far, that's it :-)

    With this patch, all allocations are done using the code in
    mm/vmalloc.c, though we use the low level __get_vm_area with
    explicit start/stop constraints in order to manage separate
    areas for vmalloc/vmap, ioremap, and PCI IOs.

    This greatly simplifies a lot of things, as you can see in the
    diffstat of that patch :-)

    A new pair of functions pcibios_map/unmap_io_space() now replace
    all of the previous code that used to manipulate PCI IOs space.
    The allocation is done at mapping time, which is now called from
    scan_phb's, just before the devices are probed (instead of after,
    which is by itself a bug fix). The only other caller is the PCI
    hotplug code for hot adding PCI-PCI bridges (slots).

    imalloc is gone, as is the "sub-allocation" thing, but I do believe
    that hotplug should still work in the sense that the space allocation
    is always done by the PHB, but if you unmap a child bus of this PHB
    (which seems to be possible), then the code should properly tear
    down all the HPTE mappings for that area of the PHB allocated IO space.

    I now always reserve the first 64K of IO space for the bridge with
    the ISA bus on it. I have moved the code for tracking ISA into a
    separate file, which should also make it smarter if we ever become
    capable of hot unplugging or re-plugging an ISA bridge.

    This should have a side effect on platforms like powermac where VGA IOs
    will no longer work. This is done on purpose though as they would have
    worked semi-randomly before. The idea at this point is to isolate drivers
    that might need to access those and fix them by providing a proper
    function to obtain an offset to the legacy IOs of a given bus.

    Signed-off-by: Benjamin Herrenschmidt
    Signed-off-by: Paul Mackerras

    Benjamin Herrenschmidt
     

02 May, 2007

1 commit

  • This patch takes the definitions for the PPC44x MMU (a software loaded
    TLB) from asm-ppc/mmu.h, cleans them up of things no longer necessary
    in arch/powerpc and puts them in a new asm-powerpc/mmu_44x.h file. It
    also substantially simplifies arch/powerpc/mm/44x_mmu.c and makes a
    couple of small fixes necessary for the 44x MMU code to build and work
    properly in arch/powerpc.

    Signed-off-by: David Gibson
    Signed-off-by: Paul Mackerras

    David Gibson
     

24 Apr, 2007

1 commit

  • BenH's commit a741e67969577163a4cfc78d7fd2753219087ef1 in powerpc.git,
    although (AFAICT) only intended to affect ppc64, also has side-effects
    which break 44x. I think 40x, 8xx and Freescale Book E are also
    affected, though I haven't tested them.

    The problem lies in unconditionally removing flush_tlb_pending() from
    the versions of flush_tlb_mm(), flush_tlb_range() and
    flush_tlb_kernel_range() used on ppc64 - which are also used the
    embedded platforms mentioned above.

    The patch below cleans up the convoluted #ifdef logic in tlbflush.h,
    in the process restoring the necessary flushes for the software TLB
    platforms. There are three sets of definitions for the flushing
    hooks: the software TLB versions (revised to avoid using names which
    appear to be related to TLB batching), the 32-bit hash based versions
    (external functions) and the 64-bit hash based versions (which
    implement batching).

    It also moves the declaration of update_mmu_cache() to always be in
    tlbflush.h (previously it was in tlbflush.h except for PPC64, where it
    was in pgtable.h).

    Booted on Ebony (440GP) and compiled for 64-bit and 32-bit
    multiplatform.

    Signed-off-by: David Gibson
    Acked-by: Benjamin Herrenschmidt
    Signed-off-by: Paul Mackerras

    David Gibson
     

13 Apr, 2007

1 commit

  • On hash-table-based 32-bit powerpcs, the hash management code runs with
    a big spinlock. It's thus important that it never causes itself a hash
    fault. That code is generally safe (it does memory accesses in real mode
    among other things) with the exception of the actual access to the code
    itself. That is, the kernel text needs to be accessible without taking
    hash miss exceptions.

    This is currently guaranteed by having a BAT register mapping part of the
    linear mapping permanently, which includes the kernel text. But this is
    not true if using the "nobats" kernel command line option (which can be
    useful for debugging) and will not be true when using DEBUG_PAGEALLOC
    implemented in a subsequent patch.

    This patch fixes this by pre-faulting in the hash table pages that hit
    the kernel text, and making sure we never evict such a page under hash
    pressure.

    Signed-off-by: Benjamin Herrenschmidt

    arch/powerpc/mm/hash_low_32.S | 22 ++++++++++++++++++++--
    arch/powerpc/mm/mem.c | 3 ---
    arch/powerpc/mm/mmu_decl.h | 4 ++++
    arch/powerpc/mm/pgtable_32.c | 11 +++++++----
    4 files changed, 31 insertions(+), 9 deletions(-)
    Signed-off-by: Paul Mackerras

    Benjamin Herrenschmidt
     

19 Nov, 2005

1 commit

  • asm-ppc64/imalloc.h is only included from files in arch/powerpc/mm.
    We already have a header for mm local definitions,
    arch/powerpc/mm/mmu_decl.h. Thus, this patch moves the contents of
    imalloc.h into mmu_decl.h. The only exception are the definitions of
    PHBS_IO_BASE, IMALLOC_BASE and IMALLOC_END. Those are moved into
    pgtable.h, next to similar definitions of VMALLOC_START and
    VMALLOC_SIZE.

    Built for multiplatform 32bit and 64bit (ARCH=powerpc).

    Signed-off-by: David Gibson
    Signed-off-by: Paul Mackerras

    David Gibson
     

10 Oct, 2005

1 commit


06 Oct, 2005

1 commit

  • This also creates merged versions of do_init_bootmem, paging_init
    and mem_init and moves them to arch/powerpc/mm/mem.c. It gets rid
    of the mem_pieces stuff.

    I made memory_limit a parameter to lmb_enforce_memory_limit rather
    than a global referenced by that function. This will require some
    small changes to ppc64 if we want to continue building ARCH=ppc64
    using the merged lmb.c.

    Signed-off-by: Paul Mackerras

    Paul Mackerras
     

26 Sep, 2005

1 commit

  • This creates the directory structure under arch/powerpc and a bunch
    of Kconfig files. It does a first-cut merge of arch/powerpc/mm,
    arch/powerpc/lib and arch/powerpc/platforms/powermac. This is enough
    to build a 32-bit powermac kernel with ARCH=powerpc.

    For now we are getting some unmerged files from arch/ppc/kernel and
    arch/ppc/syslib, or arch/ppc64/kernel. This makes some minor changes
    to files in those directories and files outside arch/powerpc.

    The boot directory is still not merged. That's going to be interesting.

    Signed-off-by: Paul Mackerras

    Paul Mackerras