09 Dec, 2011

1 commit

  • Now all ARCH_POPULATES_NODE_MAP archs select HAVE_MEMBLOCK_NODE_MAP -
    there's no user of early_node_map[] left. Kill early_node_map[] and
    replace ARCH_POPULATES_NODE_MAP with HAVE_MEMBLOCK_NODE_MAP. Also,
    relocate for_each_mem_pfn_range() and its helpers from mm.h to
    memblock.h, as page_alloc.c no longer hosts an alternative
    implementation.

    This change is ultimately a one-to-one mapping and shouldn't cause any
    observable difference; however, after the recent changes, some
    functions now fit memblock.c better than page_alloc.c, and for some of
    them a dependency on HAVE_MEMBLOCK_NODE_MAP instead of HAVE_MEMBLOCK
    doesn't make much sense. Further cleanup of the functions under
    HAVE_MEMBLOCK_NODE_MAP in mm.h would be nice.

    -v2: Fix compile bug introduced by mis-spelling
    CONFIG_HAVE_MEMBLOCK_NODE_MAP to CONFIG_MEMBLOCK_HAVE_NODE_MAP in
    mmzone.h. Reported by Stephen Rothwell.

    Signed-off-by: Tejun Heo
    Cc: Stephen Rothwell
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu
    Cc: Tony Luck
    Cc: Ralf Baechle
    Cc: Martin Schwidefsky
    Cc: Chen Liqin
    Cc: Paul Mundt
    Cc: "David S. Miller"
    Cc: "H. Peter Anvin"


04 Nov, 2010

1 commit

  • The nommu code has regressed somewhat in that 29BIT gets set for the
    SH-2/2A configs even though they are really 32BIT sans MMU or PMB.
    This does a bit of tidying so that nommu properly selects 32BIT, as
    it did before.

    Signed-off-by: Paul Mundt


15 Oct, 2010

1 commit

  • This sets up a generic SRAM pool for CPUs and platform code to insert
    their otherwise unused memories into. A simple alloc/free interface is
    provided (lifted from avr32) for generic code.

    This only applies to tiny SRAMs that are otherwise unmanaged, and does
    not take into account the more complex SRAMs sitting behind transfer
    engines, or those that employ an I/D split.

    Signed-off-by: Paul Mundt


18 Feb, 2010

1 commit

  • This implements a bit of rework for the PMB code, which permits us to
    kill off the legacy PMB mode completely. Rather than trusting the boot
    loader to do the right thing, we do a quick verification of the PMB
    contents to determine whether to have the kernel setup the initial
    mappings or whether it needs to mangle them later on instead.

    If we're booting from legacy mappings, the kernel will now take control
    of them and make them match the kernel's initial mapping configuration.
    This is accomplished by breaking the initialization phase out into
    multiple steps: synchronization, merging, and resizing. With the recent
    rework, the synchronization code establishes page links for compound
    mappings already, so we build on top of this for promoting mappings and
    reclaiming unused slots.

    At the same time, the changes introduced for the uncached helpers also
    permit us to dynamically resize the uncached mapping without any
    particular headaches. The smallest page size is more than sufficient for
    mapping all of kernel text, and as we're careful not to jump to any far
    off locations in the setup code the mapping can safely be resized
    regardless of whether we are executing from it or not.

    Signed-off-by: Paul Mundt


12 Feb, 2010

1 commit

  • This splits out the uncached mapping support under its own config option,
    presently only used by 29-bit mode and 32-bit + PMB. This will make it
    possible to optionally add an uncached mapping on sh64 as well as booting
    without an uncached mapping for 32-bit.

    Signed-off-by: Paul Mundt


16 Jan, 2010

1 commit

  • Some devices need to be ioremap'd and accessed very early in the boot
    process. It is not possible to use the standard ioremap() function in
    this case because that requires kmalloc()'ing some virtual address space
    and kmalloc() may not be available so early in boot.

    This patch provides fixmap mappings that allow physical address ranges
    to be remapped into the kernel address space during the early boot
    stages.

    Signed-off-by: Matt Fleming


13 Jan, 2010

2 commits

  • All SH-X2 and SH-X3 parts support an extended TLB mode, which has been
    left as experimental since support was originally merged. Now that it's
    had some time to stabilize and get some exposure to various platforms,
    we can drop it as an option and default enable it across the board.

    This is also good future-proofing for newer parts that will drop
    support for the legacy TLB mode completely.

    This will also force 3-level page tables for all newer parts, which is
    necessary both for the varying page sizes and larger memories.

    Signed-off-by: Paul Mundt

  • This introduces some much overdue chainsawing of the fixed PMB support.
    Fixed PMB was introduced initially to work around the fact that dynamic
    PMB mode was relatively broken, though they were never intended to
    converge. The main areas where there are differences are whether the
    system is booted in 29-bit mode or 32-bit mode, and whether legacy
    mappings are to be preserved. Any system booting in true 32-bit mode will
    not care about legacy mappings, so these are roughly decoupled.

    Regardless of the entry point, PMB and 32BIT are directly related as far
    as the kernel is concerned, so we also switch back to having one select
    the other.

    With legacy mappings iterated through and applied in the initialization
    path it's now possible to finally merge the two implementations and
    permit dynamic remapping on top of remaining entries regardless of
    whether boot mappings are crafted by hand or inherited from the boot
    loader.

    Signed-off-by: Paul Mundt


05 Jan, 2010

1 commit


04 Jan, 2010

3 commits


02 Jan, 2010

1 commit

  • The previous expressions were wrong which made free_pmd_range() explode
    when using anything other than 4KB pages (which is why 8KB and 64KB
    pages were disabled with the 3-level page table layout).

    The problem was that pmd_offset() was returning an index of non-zero
    when it should have been returning 0. This non-zero offset was used to
    calculate the address of the pmd table to free in free_pmd_range(),
    which ended up trying to free an object that was not aligned on a page
    boundary.

    Now 3-level page tables should work with 4KB, 8KB and 64KB pages.

    Signed-off-by: Matt Fleming


17 Dec, 2009

1 commit

  • If using 64-bit PTEs and 4K pages then each page table has 512 entries
    (as opposed to 1024 entries with 32-bit PTEs). Unlike MIPS, SH follows
    the convention that all structures in the page table (pgd_t, pmd_t,
    pgprot_t, etc) must be the same size. Therefore, 64-bit PTEs require
    64-bit PGD entries, etc. Using 2-levels of page tables and 64-bit PTEs
    it is only possible to map 1GB of virtual address space.

    In order to map all 4GB of virtual address space we need to adopt a
    3-level page table layout. This actually works out better for
    CONFIG_SUPERH32 because we only waste 2 PGD entries on the P1 and P2
    areas (which are untranslated) instead of 256.

    Signed-off-by: Matt Fleming
    Signed-off-by: Paul Mundt


11 Nov, 2009

1 commit


27 Oct, 2009

2 commits

  • Paul Mundt
     
  • The hugetlb dependencies presently depend on SUPERH && MMU while the
    hugetlb page size definitions depend on CPU_SH4 or CPU_SH5. This
    unfortunately allows SH-3 + MMU configurations to enable hugetlbfs
    without a corresponding HPAGE_SHIFT definition, resulting in the build
    blowing up.

    As SH-3 doesn't support variable page sizes, we tighten up the
    dependencies a bit to prevent hugetlbfs from being enabled. These days
    we also have a shiny new SYS_SUPPORTS_HUGETLBFS, so switch to using
    that rather than adding to the list of corner cases in fs/Kconfig.

    Reported-by: Kristoffer Ericson
    Signed-off-by: Paul Mundt


16 Oct, 2009

1 commit

  • This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
    just a simple wrapper around the possible map, but this allows for
    tying in support for some of the more exotic NUMA clusters where we can
    actually do something with the topology.

    Signed-off-by: Paul Mundt


10 Oct, 2009

1 commit


21 Aug, 2009

1 commit


14 May, 2009

1 commit

  • Several platforms want to be able to do large physically contiguous
    allocations (primarily nommu and video codecs on SH-Mobile), provide a
    MAX_ORDER override for those cases.

    Tested-by: Conrad Parker
    Signed-off-by: Paul Mundt


10 May, 2009

1 commit


02 Apr, 2009

1 commit

  • Forcing direct-mapped worked on certain older 2-way set associative
    parts, but was always error prone on 4-way parts. As these are the
    norm these days, there is not much point in continuing to support this
    mode. Most of the folks that used direct-mapped mode generally just
    wanted writethrough caching in the first place.

    Signed-off-by: Paul Mundt


10 Mar, 2009

1 commit

  • This provides a method for supporting fixed PMB mappings inherited from
    the bootloader, as an alternative to the dynamic PMB mapping currently
    used by the kernel. In the future these methods will be combined.

    The P1/P2 area is handled like a regular 29-bit physical address, and
    local bus devices are assigned P3 area addresses.

    Signed-off-by: Yoshihiro Shimoda
    Signed-off-by: Paul Mundt


17 Sep, 2008

1 commit


08 Sep, 2008

1 commit


11 Aug, 2008

1 commit


04 Aug, 2008

1 commit


28 Jul, 2008

3 commits


06 Mar, 2008

1 commit


28 Jan, 2008

6 commits


07 Nov, 2007

1 commit

  • The ST40 stuff in-tree hasn't built for some time, and hasn't been
    updated for over 3 years. ST maintains their own out-of-tree changes
    and rebases occasionally, and that's ultimately where all of the ST40
    users go anyways.

    In order for the ST40 code to be brought up to date most of the stuff
    removed in this changeset would have to be rewritten anyways, so there's
    very little benefit in keeping the remnants around either.

    Signed-off-by: Paul Mundt


27 Sep, 2007

1 commit