05 Aug, 2010

2 commits

  • The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
    server ppc64, though I hijack it on embedded ppc64 for similar purposes)
    and represents the area of memory that can be accessed in real mode
    (i.e. with the MMU off) or, on embedded, from the exception vectors
    (which are bolted in the TLB), which pretty much boils down to the
    same thing.

    We take that out of the generic MEMBLOCK data structure and move it into
    arch/powerpc where it belongs, renaming it to "RMA" while at it.

    Signed-off-by: Benjamin Herrenschmidt

  • This introduces memblock.current_limit, which is used to limit allocations
    from memblock_alloc() or memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE).

    The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still
    be used with memblock_alloc_base() to allocate really anywhere.

    It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT, which disappears.

    Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE. I
    strongly recommend that you set an appropriate limit during boot in order
    to guarantee that a memblock_alloc() at any time results in something that
    is accessible with a simple __va().

    The reason is that a subsequent patch will introduce the ability for
    the array to resize itself by reallocating. The MEMBLOCK core will
    honor the current limit when performing those allocations. (A short usage
    sketch follows this entry.)

    Signed-off-by: Benjamin Herrenschmidt

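    A minimal usage sketch of the limit described in the entry above, assuming
    the memblock_set_current_limit() / memblock_alloc_base() interface from this
    series (prototypes have changed in later kernels); the function name and the
    lowmem_top parameter below are hypothetical:

        /* Assumes <linux/memblock.h> and <linux/init.h>.  Cap early
         * allocations during boot so that anything later returned by
         * memblock_alloc() stays reachable through a simple __va(). */
        void __init example_arch_limit_memblock(phys_addr_t lowmem_top)
        {
                /* Default is MEMBLOCK_ALLOC_ANYWHERE; tighten it early. */
                memblock_set_current_limit(lowmem_top);

                /* memblock_alloc(size, align) now honours this limit, so
                 * __va() on its result is safe.  Callers that really can
                 * handle any address may still request:
                 *   memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE);
                 */
        }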

14 Jul, 2010

1 commit

  • Rename lmb to memblock via the following scripts:

    FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

    sed -i \
        -e 's/lmb/memblock/g' \
        -e 's/LMB/MEMBLOCK/g' \
        $FILES

    for N in $(find . -name 'lmb.[ch]'); do
            M=$(echo "$N" | sed 's/lmb/memblock/g')
            mv "$N" "$M"
    done

    and then revert unintended renames such as lmbench and dlmb.

    Also move memblock.c from lib/ to mm/.

    Suggested-by: Ingo Molnar
    Acked-by: "H. Peter Anvin"
    Acked-by: Benjamin Herrenschmidt
    Acked-by: Linus Torvalds
    Signed-off-by: Yinghai Lu
    Signed-off-by: Benjamin Herrenschmidt


15 Dec, 2009

1 commit

  • Today's linux-next build (powerpc ppc44x_defconfig) failed like this:

    arch/powerpc/mm/pgtable_32.c: In function 'mapin_ram':
    arch/powerpc/mm/pgtable_32.c:318: error: too many arguments to function 'mmu_mapin_ram'

    Caused by commit de32400dd26e743c5d500aa42d8d6818b79edb73 ("wii: use both
    mem1 and mem2 as ram").

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Grant Likely


27 Aug, 2009

1 commit

  • This is an attempt at cleaning up a bit the way we handle execute
    permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only
    defined by CPUs that can do something with it, and the myriad of
    #ifdef's in the I$/D$ coherency code is reduced to 2 cases that
    hopefully should cover everything.

    The logic on BookE is a little bit different from what it was, though
    not by much. Since _PAGE_EXEC will now be set by the generic code for
    executable pages, we need to filter it out when the page isn't cache
    clean yet and restore it once it is. However, I don't expect the code
    to be more bloated than it already was in that area due to that change.

    I could boast that this brings proper enforcement of per-page execute
    permissions to all BookE and 40x, but in fact we've had that for some
    time as a side effect of my previous rework in that area (and I didn't
    even know it :-) We would only enable execute permission if the page
    was cache clean, and we would only cache-clean it if we took an exec
    fault. Since we now enforce that the latter only works if VM_EXEC is
    part of the VMA flags, we de facto already enforce per-page execute
    permissions... unless I missed something. (A simplified sketch of the
    check follows this entry.)

    Signed-off-by: Benjamin Herrenschmidt

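    A simplified sketch of the rule described in the entry above. The helper
    name is hypothetical; VM_EXEC and powerpc's use of PG_arch_1 as the
    "caches are clean" marker are real, but this is an illustration, not the
    actual powerpc code:

        /* Assumes <linux/mm.h> and <linux/page-flags.h>. */
        static bool example_pte_allow_exec(struct vm_area_struct *vma,
                                           struct page *pg)
        {
                /* Only executable mappings may ever get _PAGE_EXEC... */
                if (!(vma->vm_flags & VM_EXEC))
                        return false;

                /* ...and only once the I$/D$ have been made coherent, which
                 * the exec-fault path records via PG_arch_1. */
                return test_bit(PG_arch_1, &pg->flags);
        }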

13 Nov, 2008

1 commit

  • If the size of DRAM is not an exact power of two, we may not have
    covered DRAM in its entirety with large 16 and 4 MiB pages. If that
    is the case, we can get non-recoverable page faults when doing the
    final PTE mappings for the non-large page PTEs.

    Consequently, we restrict the top end of DRAM currently allocable
    by updating '__initial_memory_limit_addr' so that calls to the LMB to
    allocate PTEs for "tail" coverage with normal-sized pages (or other
    reasons) do not attempt to allocate outside the allowed range.

    Signed-off-by: Grant Erickson
    Signed-off-by: Josh Boyer

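    A small sketch of the limit computation the entry above describes,
    simplified to a single 16 MiB pinned-page size; the names are illustrative
    and not the actual 44x code:

        /* Only the part of DRAM covered by the large pinned pages is safe
         * for early (LMB) allocations, so round the total down to a
         * large-page boundary. */
        #define EXAMPLE_PINNED_PAGE_SIZE        (16UL << 20)

        static inline unsigned long example_initial_memory_limit(unsigned long total_ram)
        {
                return total_ram & ~(EXAMPLE_PINNED_PAGE_SIZE - 1);
        }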

17 Apr, 2008

1 commit

  • A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always
    use 0 as we don't support booting these kernels at non-zero physical
    addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).

    For the sub-arches that support relocatable interrupt vectors
    (book-e), it's reasonable to have memory start at a non-zero physical
    address. For those cases, use the variable memstart_addr instead of
    the #define PPC_MEMSTART, since the only uses of PPC_MEMSTART are for
    initialization; in the future we can set memstart_addr at runtime to
    support a relocatable kernel.

    Signed-off-by: Kumar Gala
    Signed-off-by: Paul Mackerras

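    Roughly, the shape of the change described above (illustrative, not the
    full patch; the initializer is a placeholder):

        /* Before: a compile-time constant, pinning the start of memory. */
        /* #define PPC_MEMSTART    0 */

        /* After: a variable that early boot code can assign, so book-e
         * parts with relocatable interrupt vectors may eventually run with
         * memory starting at a non-zero physical address. */
        phys_addr_t memstart_addr = 0;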

01 Nov, 2007

1 commit

  • mmu_mapin_ram() loops over total_lowmem to set up page tables. However, if
    total_lowmem is less than 16M, the subtraction rolls over and results in
    a number just under 4G (because total_lowmem is an unsigned value).

    This patch rejigs the loop from count-up to count-down to eliminate the
    bug (a standalone illustration follows this entry).

    Special thanks to Magnus Hjorth who wrote the original patch to fix this
    bug. This patch improves on his by making the loop code simpler (which
    also eliminates the possibility of another rollover at the high end)
    and also applies the change to arch/powerpc.

    Signed-off-by: Grant Likely
    Signed-off-by: Josh Boyer

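    A standalone illustration of the rollover and the count-down shape of the
    fix (hypothetical and simplified; not the kernel loop itself):

        #include <stdio.h>

        int main(void)
        {
                unsigned long total_lowmem = 8UL << 20;   /* only 8 MiB of lowmem */
                unsigned long block        = 16UL << 20;  /* 16 MiB mapping size  */

                /* A count-up bound like "total_lowmem - block" wraps around,
                 * because the subtraction is unsigned (just under 4G on 32-bit). */
                printf("total_lowmem - block = %#lx\n", total_lowmem - block);

                /* Counting the remaining size down avoids the subtraction: */
                unsigned long mapped = 0, left;
                for (left = total_lowmem; left >= block; left -= block)
                        mapped += block;   /* never runs here, which is correct */

                printf("mapped %#lx of %#lx\n", mapped, total_lowmem);
                return 0;
        }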

20 Aug, 2007

2 commits