28 Aug, 2013

1 commit

  • The map_mem() function restricts the current memblock limit to
    PGDIR_SIZE (the initial swapper_pg_dir mapping) to prevent
    create_mapping() from allocating memory from unmapped areas. However,
    if the first memory bank lies within PGDIR_SIZE but does not end on a
    PMD_SIZE boundary, create_mapping() will try to allocate a pte page
    when the 4K page configuration is enabled. Such a page may be returned
    by memblock_alloc() from the end of that bank (or of any subsequent
    bank within PGDIR_SIZE), which is not yet mapped.

    The patch limits the current memblock limit to the PMD_SIZE-aligned
    end of the first bank and gradually raises it as more memory is
    mapped. It also ensures that the start of the first bank is aligned to
    PMD_SIZE so that no pte page allocation is needed for that mapping.
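
    The bank-walking strategy above can be sketched as plain address
    arithmetic: keep the memblock limit at the PMD_SIZE-aligned end of
    whatever has been mapped so far, and raise it bank by bank. The
    constants and helper names below are illustrative, not the kernel's
    actual symbols.

```c
#include <stdint.h>

/* Illustrative constants modelling a 4K-page arm64 configuration. */
#define PMD_SIZE   (2UL * 1024 * 1024)         /* 2MB section size */
#define PGDIR_SIZE (1UL * 1024 * 1024 * 1024)  /* initial swapper mapping */

/* Round an address down to a PMD boundary, as the patch does for the
 * memblock limit so no pte page is needed inside the unmapped tail. */
static uint64_t pmd_align_down(uint64_t addr)
{
	return addr & ~(PMD_SIZE - 1);
}

/* Model of the patch's strategy: start with the limit at the aligned end
 * of the first bank, then raise it as each further bank gets mapped. */
static uint64_t next_limit(uint64_t current_limit, uint64_t bank_end)
{
	uint64_t aligned = pmd_align_down(bank_end);

	return aligned > current_limit ? aligned : current_limit;
}
```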

    Signed-off-by: Catalin Marinas
    Reported-by: "Leizhen (ThunderTown, Euler)"
    Tested-by: "Leizhen (ThunderTown, Euler)"

    Catalin Marinas
     

01 Jul, 2013

1 commit

  • …ux into upstream-hugepages

    * 'for-next/hugepages' of git://git.linaro.org/people/stevecapper/linux:
    ARM64: mm: THP support.
    ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
    ARM64: mm: HugeTLB support.
    ARM64: mm: Move PTE_PROT_NONE bit.
    ARM64: mm: Make PAGE_NONE pages read only and no-execute.
    ARM64: mm: Restore memblock limit when map_mem finished.
    mm: thp: Correct the HPAGE_PMD_ORDER check.
    x86: mm: Remove general hugetlb code from x86.
    mm: hugetlb: Copy general hugetlb code from x86 to mm.
    x86: mm: Remove x86 version of huge_pmd_share.
    mm: hugetlb: Copy huge_pmd_share from x86 to mm.

    Conflicts:
    arch/arm64/Kconfig
    arch/arm64/include/asm/pgtable-hwdef.h
    arch/arm64/include/asm/pgtable.h

    Catalin Marinas
     

14 Jun, 2013

1 commit

  • In paging_init(), the memblock limit is set to restrict any addresses
    returned by early_alloc() to within the initial direct kernel mapping
    in swapper_pg_dir. This allows map_mem() to allocate puds, pmds and
    ptes from that initial direct kernel mapping.

    The limit remains low after paging_init(), however, meaning that any
    bootmem allocations come from a restricted subset of memory. Gigabyte
    huge pages, for instance, are normally allocated from bootmem because
    their order (18) exceeds the limit of the default buddy allocator
    (MAX_ORDER = 11).

    This patch restores the memblock limit when map_mem has finished,
    allowing gigabyte huge pages (and other objects) to be allocated
    from all of bootmem.
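
    The order arithmetic behind the gigabyte-page example can be checked
    directly: with 4K pages, a 1GB allocation needs order 18, which is at
    or above MAX_ORDER and therefore out of reach of the buddy allocator.
    A minimal sketch, using illustrative helper names:

```c
#include <stdint.h>

#define PAGE_SHIFT 12   /* 4K pages */
#define MAX_ORDER  11   /* default buddy allocator limit */

/* Smallest buddy-allocator order whose block covers 'size' bytes. */
static int alloc_order(uint64_t size)
{
	int order = 0;

	while (((uint64_t)1 << (order + PAGE_SHIFT)) < size)
		order++;
	return order;
}

/* An order at or above MAX_ORDER cannot be satisfied by the buddy
 * allocator, so such allocations (e.g. 1GB pages) fall back to bootmem. */
static int needs_bootmem(uint64_t size)
{
	return alloc_order(size) >= MAX_ORDER;
}
```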

    Signed-off-by: Steve Capper
    Acked-by: Catalin Marinas

    Steve Capper
     

08 Jun, 2013

1 commit


30 Apr, 2013

1 commit

  • The sparse code, when asking the architecture to populate the vmemmap,
    specifies the section range as a starting page and a number of pages.

    This is an awkward interface, because none of the arch-specific code
    actually thinks of the range in 'struct page' units; every
    implementation translates it to bytes first.

    In addition, later patches mix huge page and regular page backing for
    the vmemmap. For this, they need to call vmemmap_populate_basepages()
    on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind. But these
    are not necessarily multiples of the 'struct page' size and so this unit
    is too coarse.

    Just translate the section range into bytes once in the generic sparse
    code, then pass byte ranges down the stack.
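
    The one-time translation described above amounts to pointer
    arithmetic over the memmap array. A minimal sketch, with a stand-in
    'struct page' (the real layout is kernel-internal) and an illustrative
    helper name:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for struct page; the real layout is kernel-internal. */
struct page {
	uint64_t flags;
	void *lru_prev, *lru_next;
};

/* The generic sparse code now converts (start page, nr pages) into a
 * byte range [start, end) once, instead of each architecture redoing
 * the same translation. */
static void section_range_to_bytes(struct page *map, unsigned long nr_pages,
				   uintptr_t *start, uintptr_t *end)
{
	*start = (uintptr_t)map;
	*end = (uintptr_t)(map + nr_pages);
}
```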

    Signed-off-by: Johannes Weiner
    Cc: Ben Hutchings
    Cc: Bernhard Schmidt
    Cc: Johannes Weiner
    Cc: Russell King
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: "Luck, Tony"
    Cc: Heiko Carstens
    Acked-by: David S. Miller
    Tested-by: David S. Miller
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

26 Mar, 2013

1 commit


24 Feb, 2013

1 commit

  • Introduce a new API, vmemmap_free(), to free and remove vmemmap
    pagetables. Since page table implementations differ, each architecture
    has to provide its own version of vmemmap_free(), just like
    vmemmap_populate().

    Note: vmemmap_free() is not implemented for ia64, ppc, s390, and sparc.
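
    The per-architecture pairing can be pictured as a hook table where
    some entries leave the free hook unimplemented, as the note says for
    ia64, ppc, s390 and sparc. Everything below is illustrative: the
    signatures are simplified and are not the kernel's exact prototypes.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative per-arch hook table; not the kernel's real mechanism. */
struct vmemmap_ops {
	const char *arch;
	void (*populate)(unsigned long start, unsigned long end);
	void (*free)(unsigned long start, unsigned long end); /* may be NULL */
};

static void noop_populate(unsigned long s, unsigned long e) { (void)s; (void)e; }
static void noop_free(unsigned long s, unsigned long e) { (void)s; (void)e; }

/* Per the commit note, vmemmap_free() is not implemented on ia64, ppc,
 * s390 and sparc, modelled here as a NULL free hook. */
static const struct vmemmap_ops ops[] = {
	{ "x86",   noop_populate, noop_free },
	{ "ia64",  noop_populate, NULL },
	{ "s390",  noop_populate, NULL },
	{ "sparc", noop_populate, NULL },
};

static const struct vmemmap_ops *vmemmap_lookup(const char *arch)
{
	for (size_t i = 0; i < sizeof(ops) / sizeof(ops[0]); i++)
		if (strcmp(ops[i].arch, arch) == 0)
			return &ops[i];
	return NULL;
}
```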

    [mhocko@suse.cz: fix implicit declaration of remove_pagetable]
    Signed-off-by: Yasuaki Ishimatsu
    Signed-off-by: Jianguo Wu
    Signed-off-by: Wen Congyang
    Signed-off-by: Tang Chen
    Cc: KOSAKI Motohiro
    Cc: Jiang Liu
    Cc: Kamezawa Hiroyuki
    Cc: Lai Jiangshan
    Cc: Wu Jianguo
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Signed-off-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tang Chen
     

23 Jan, 2013

1 commit

  • This patch adds support for the "earlyprintk=" parameter on the
    kernel command line. The format is:

    earlyprintk=<name>[,<addr>][,<options>]

    where <name> is the name of the (UART) device, e.g. "pl011", and
    <addr> is the I/O address. The <options> aren't currently used.
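
    Parsing a parameter of that shape is a matter of splitting on commas
    and reading the address in base 0 so that "0x..." prefixes work. The
    parser below is a hypothetical sketch, not the kernel's actual code
    (which lives in the arm64 early_printk implementation):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical parsed form of earlyprintk=<name>[,<addr>][,<options>]. */
struct earlycon {
	char name[16];
	unsigned long addr;
};

static int parse_earlyprintk(const char *arg, struct earlycon *con)
{
	const char *comma = strchr(arg, ',');
	size_t n = comma ? (size_t)(comma - arg) : strlen(arg);

	if (n == 0 || n >= sizeof(con->name))
		return -1;
	memcpy(con->name, arg, n);
	con->name[n] = '\0';
	/* base 0 accepts both decimal and "0x..." hexadecimal addresses */
	con->addr = comma ? strtoul(comma + 1, NULL, 0) : 0;
	return 0;
}
```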

    The mapping of the earlyprintk device is done very early during kernel
    boot and there are restrictions on which functions it can call. A
    special early_io_map() function is added which creates the mapping from
    the pre-defined EARLY_IOBASE to the device I/O address passed via the
    kernel parameter. The pgd entry corresponding to EARLY_IOBASE is
    pre-populated in head.S during kernel boot.

    Only PL011 is currently supported and it is assumed that the interface
    is already initialised by the boot loader before the kernel is started.

    Signed-off-by: Catalin Marinas
    Acked-by: Arnd Bergmann

    Catalin Marinas
     

17 Sep, 2012

1 commit

  • This patch contains the initialisation of the memory blocks, MMU
    attributes and the memory map. Only five memory types are defined:
    Device nGnRnE (equivalent to Strongly Ordered), Device nGnRE (classic
    Device memory), Device GRE, Normal Non-cacheable and Normal Cacheable.
    Cache policies are supported via the memory attributes register
    (MAIR_EL1) and only affect the Normal Cacheable mappings.

    This patch also adds the SPARSEMEM_VMEMMAP initialisation.
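
    The five memory types map onto MAIR_EL1 as one 8-bit attribute field
    per index. The attribute encodings below are the ARMv8 architectural
    values for those five types; the index ordering is illustrative:

```c
#include <stdint.h>

/* Attribute indices for the five memory types (illustrative order). */
enum {
	MT_DEVICE_nGnRnE,	/* Device nGnRnE (Strongly Ordered-like) */
	MT_DEVICE_nGnRE,	/* Device nGnRE (classic Device memory) */
	MT_DEVICE_GRE,		/* Device GRE */
	MT_NORMAL_NC,		/* Normal Non-cacheable */
	MT_NORMAL,		/* Normal Cacheable */
};

/* ARMv8 MAIR attribute encodings for each type. */
static const uint8_t mair_attr[] = {
	[MT_DEVICE_nGnRnE] = 0x00,
	[MT_DEVICE_nGnRE]  = 0x04,
	[MT_DEVICE_GRE]    = 0x0c,
	[MT_NORMAL_NC]     = 0x44,	/* Inner/Outer Non-cacheable */
	[MT_NORMAL]        = 0xff,	/* Inner/Outer Write-Back */
};

/* Pack the per-type attributes into a MAIR_EL1 value: attribute index i
 * occupies bits [8*i+7 : 8*i]. */
static uint64_t mair_el1_value(void)
{
	uint64_t mair = 0;
	int i;

	for (i = 0; i < 5; i++)
		mair |= (uint64_t)mair_attr[i] << (i * 8);
	return mair;
}
```

    The page table entry for a mapping then only stores the 3-bit
    attribute index, and cache policy changes touch MAIR_EL1 alone.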

    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Acked-by: Tony Lindgren
    Acked-by: Nicolas Pitre
    Acked-by: Olof Johansson
    Acked-by: Santosh Shilimkar
    Acked-by: Arnd Bergmann

    Catalin Marinas