09 Dec, 2011

1 commit

  • The only remaining function of memblock_analyze() is to allow resizing
    of the memblock region arrays. Rename it to memblock_allow_resize() and
    update its users.

    * The following users are unchanged apart from the rename.

    arm/mm/init.c::arm_memblock_init()
    microblaze/kernel/prom.c::early_init_devtree()
    powerpc/kernel/prom.c::early_init_devtree()
    openrisc/kernel/prom.c::early_init_devtree()
    sh/mm/init.c::paging_init()
    sparc/mm/init_64.c::paging_init()
    unicore32/mm/init.c::uc32_memblock_init()

    * In the following users, memblock_analyze() was only used to update the
    total size, which is no longer necessary.

    powerpc/kernel/machine_kexec.c::reserve_crashkernel()
    powerpc/kernel/prom.c::early_init_devtree()
    powerpc/mm/init_32.c::MMU_init()
    powerpc/mm/tlb_nohash.c::__early_init_mmu()
    powerpc/platforms/ps3/mm.c::ps3_mm_add_memory()
    powerpc/platforms/embedded6xx/wii.c::wii_memory_fixups()
    sh/kernel/machine_kexec.c::reserve_crashkernel()

    * x86/kernel/e820.c::memblock_x86_fill() was directly setting
    memblock_can_resize before populating memblock and was calling
    memblock_analyze() afterwards. Call memblock_allow_resize() before it
    starts populating (see the sketch below).

    memblock_can_resize is now static inside memblock.c.
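
    A minimal sketch of the resulting flow in memblock_x86_fill(), assuming
    the e820 structures of that era; this is illustrative, not the exact diff:

    void __init memblock_x86_fill(void)
    {
            int i;

            /*
             * Previously memblock_can_resize was set directly here and
             * memblock_analyze() was called after the loop; now the region
             * arrays are simply allowed to resize before entries are added.
             */
            memblock_allow_resize();

            for (i = 0; i < e820.nr_map; i++) {
                    struct e820entry *ei = &e820.map[i];

                    if (ei->type != E820_RAM)
                            continue;

                    memblock_add(ei->addr, ei->size);
            }

            memblock_dump_all();
    }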

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu
    Cc: Russell King
    Cc: Michal Simek
    Cc: Paul Mundt
    Cc: "David S. Miller"
    Cc: Guan Xuetao
    Cc: "H. Peter Anvin"

    Tejun Heo
     

10 Jun, 2011

1 commit


14 Jul, 2010

1 commit

  • Rename lmb to memblock via the following scripts:

    # Rewrite lmb/LMB to memblock/MEMBLOCK, skipping oprofile and the
    # config/defconfig files (Kconfig itself is still processed):
    FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

    sed -i \
    -e 's/lmb/memblock/g' \
    -e 's/LMB/MEMBLOCK/g' \
    $FILES

    # Rename the lmb.[ch] files themselves:
    for N in $(find . -name lmb.[ch]); do
        M=$(echo $N | sed 's/lmb/memblock/g')
        mv $N $M
    done

    Then revert the unintended changes the scripts made to unrelated strings
    such as lmbench and dlmb, etc.

    Also move memblock.c from lib/ to mm/.

    Suggested-by: Ingo Molnar
    Acked-by: "H. Peter Anvin"
    Acked-by: Benjamin Herrenschmidt
    Acked-by: Linus Torvalds
    Signed-off-by: Yinghai Lu
    Signed-off-by: Benjamin Herrenschmidt

    Yinghai Lu
     

13 May, 2010

1 commit


10 May, 2010

1 commit

  • This reworks the memory limit handling to tie in through the available
    LMB infrastructure. This requires a bit of reordering as we need to have
    all of the LMB reservations taken care of prior to establishing the
    limits.

    While we're at it, the crash kernel reservation semantics are reworked
    so that we allocate from the bottom up and reduce the risk of having
    to disable the memory limit due to a clash with the crash kernel
    reservation.

    Signed-off-by: Paul Mundt

    Paul Mundt
     

07 May, 2010

2 commits


20 Jan, 2010

1 commit

  • This provides a machine_ops-based reboot interface loosely cloned from
    x86, and converts the native sh32 and sh64 cases over to it.

    This is necessary both for tying in SMP support and for enabling
    platforms like SDK7786 to add support for their microcontroller-based
    power managers (see the sketch below).
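
    A minimal sketch of what such an ops-based interface looks like; the
    struct layout and function names below are illustrative assumptions, not
    the exact sh code:

    /* Hypothetical machine_ops layout; field names are illustrative. */
    struct machine_ops {
            void (*restart)(char *cmd);
            void (*halt)(void);
            void (*power_off)(void);
            void (*shutdown)(void);
    };

    /*
     * Defaults; a board (e.g. SDK7786 with a microcontroller-based power
     * manager) can override individual entries during its setup.
     */
    struct machine_ops machine_ops = {
            .restart   = native_machine_restart,
            .halt      = native_machine_halt,
            .power_off = native_machine_power_off,
            .shutdown  = native_machine_shutdown,
    };

    void machine_restart(char *cmd)
    {
            machine_ops.restart(cmd);
    }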

    Signed-off-by: Paul Mundt

    Paul Mundt
     

12 Jan, 2010

1 commit

  • This moves the VBR handling out of the main trap handling code and into
    the sh-bios helper code. A couple of accessors are added in order to
    permit other kernel code to get at the VBR value for state save/restore
    paths (sketched below).
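
    A rough sketch of what such accessors could look like; the function names
    and the backing variable here are assumptions for illustration, not the
    exact sh-bios code:

    /* Hypothetical accessors around a saved VBR value. */
    static unsigned long sh_bios_vbr;

    unsigned long sh_bios_get_vbr(void)
    {
            return sh_bios_vbr;
    }

    void sh_bios_set_vbr(unsigned long vbr)
    {
            sh_bios_vbr = vbr;
    }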

    Signed-off-by: Paul Mundt

    Paul Mundt
     

26 Oct, 2009

1 commit


10 Oct, 2009

1 commit

  • Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
    idea that all addresses in P1SEG are untranslated, so we can access an
    address's physical page as an offset from P1SEG. This doesn't work for
    CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
    for PMB mappings and so can be translated to any physical address.

    Likewise, replace a P1SEGADDR() use with virt_to_phys().
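
    To illustrate the distinction, a rough paraphrase of the two conversions
    (not the literal sh header definitions):

    #define PHYSADDR(a)  (((unsigned long)(a)) & 0x1fffffff) /* strip P1/P2 segment bits */
    #define __pa(x)      ((unsigned long)(x) - PAGE_OFFSET)  /* roughly; the real virt->phys path */

    /*
     * PHYSADDR() only works while the P1/P2 segments are fixed identity
     * windows onto physical memory.  With PMB mappings those segments can
     * translate anywhere, so masking off the segment bits no longer yields
     * the physical address; __pa()/virt_to_phys() go through the proper
     * conversion and keep working.
     */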

    Signed-off-by: Matt Fleming
    Signed-off-by: Paul Mundt

    Matt Fleming
     

20 Mar, 2009

1 commit

  • Older versions of kexec-tools have a zImage loader that passes a virtual
    address as the entry point. The ELF loader, on the other hand, passes a
    physical address as the entry point, and pages are always passed as
    physical addresses as well.

    Only allow physical addresses from now on.

    Signed-off-by: Magnus Damm
    Signed-off-by: Paul Mundt

    Magnus Damm
     

18 Mar, 2009

5 commits

  • Save and restore ftrace state when returning from kexec jump in
    machine_kexec(). Follows the x86 change.

    Signed-off-by: Paul Mundt

    Paul Mundt
     
  • For the time being, this creates far more problems than it solves, as
    evidenced by the second local_irq_disable(). Kill all of this off and
    rely on IRQ disabling to protect against the VBR reload.

    Signed-off-by: Paul Mundt

    Paul Mundt
     
  • Add kexec jump support to the SuperH architecture.

    Similar to the x86 implementation, with the following
    exceptions:

    - Instead of separating the assembly code flow into two parts for
    regular kexec and kexec jump, we use a single code path. In the assembly
    snippet, regular kexec is just a kexec jump that never comes back.

    - Instead of using a swap page when moving data between
    pages the page copy assembly routine has been modified
    to exchange the data between the pages using registers.

    - We walk the page list twice in machine_kexec() to
    do and undo physical to virtual address conversion.

    Signed-off-by: Magnus Damm
    Signed-off-by: Paul Mundt

    Magnus Damm
     
  • Rework the kexec code to avoid using P2SEG. Instead
    we walk the page list in machine_kexec() and convert
    the addresses from physical to virtual using C.
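
    A sketch of the page-list walk described above, using the generic kimage
    entry flags; this is an approximation of the idiom with a hypothetical
    helper name, not the exact sh code:

    /*
     * Convert each entry in the kimage list from a physical to a virtual
     * address; an inverse walk (virt_to_phys) would undo the conversion.
     */
    static void kexec_list_phys_to_virt(struct kimage *image)
    {
            kimage_entry_t *ptr, entry;

            for (ptr = &image->head; (entry = *ptr) && !(entry & IND_DONE);
                 ptr = (entry & IND_INDIRECTION) ?
                       phys_to_virt(entry & PAGE_MASK) : ptr + 1) {
                    if (entry & (IND_DESTINATION | IND_INDIRECTION | IND_SOURCE))
                            *ptr = (kimage_entry_t)phys_to_virt(entry & PAGE_MASK) |
                                   (entry & ~PAGE_MASK);
            }
    }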

    Signed-off-by: Magnus Damm
    Signed-off-by: Paul Mundt

    Magnus Damm
     
  • Set up the vbr register in machine_kexec() instead of passing values to
    the assembly snippet.

    Signed-off-by: Magnus Damm
    Signed-off-by: Paul Mundt

    Magnus Damm
     

28 Aug, 2008

1 commit

  • The crash kernel entry point is currently checked by the kexec kernel
    code and only physical addresses in the reserved memory window are
    accepted. This means that we can't pass P2 or P1 addresses as entry
    points in the case of crash kernels. This patch makes sure we can start
    crash kernels by adding support for physical address entry points.

    Signed-off-by: Magnus Damm
    Signed-off-by: Paul Mundt

    Magnus Damm
     

04 Aug, 2008

1 commit


27 Jul, 2008

1 commit

  • This patch provides an enhancement to kexec/kdump. It implements the
    following features:

    - Backup/restore memory used by the original kernel before/after
    kexec.

    - Save/restore CPU state before/after kexec.

    The features of this patch can be used as a general method to call a
    program in physical mode (with paging turned off). This can be used, for
    example, to call BIOS code under Linux.

    kexec-tools needs to be patched to support kexec jump. The patches and
    the precompiled kexec can be downloaded from the following URLs:

    source: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-src_git_kh10.tar.bz2
    patches: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-patches_git_kh10.tar.bz2
    binary: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec_git_kh10

    Usage example for calling some physical mode code and returning:

    1. Compile and install a patched kernel with the following options selected:

    CONFIG_X86_32=y
    CONFIG_KEXEC=y
    CONFIG_PM=y
    CONFIG_KEXEC_JUMP=y

    2. Build the patched kexec-tools or download the pre-built binary.

    3. Build a physical mode executable, named e.g. "phy_mode".

    4. Boot the kernel compiled in step 1.

    5. Load the physical mode executable with /sbin/kexec. The shell command
    line can be as follows:

    /sbin/kexec --load-preserve-context --args-none phy_mode

    6. Call the physical mode executable with the following shell command line:

    /sbin/kexec -e

    Implementation point:

    To support jumping without reserving memory, one shadow backup page
    (source page) is allocated for each page used by the kexeced code image
    (destination page). During kexec_load, the image of the kexeced code is
    loaded into the source pages, and before executing, the destination pages
    and the source pages are swapped, so that the contents of the destination
    pages are backed up. Before jumping to the kexeced code image and after
    jumping back to the original kernel, the destination pages and the source
    pages are swapped again.

    C ABI (calling convention) is used as communication protocol between
    kernel and called code.

    A flag named KEXEC_PRESERVE_CONTEXT for sys_kexec_load is added to
    indicate that the loaded kernel image is used for jumping back.

    Now, only the i386 architecture is supported.
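
    As an illustration of the KEXEC_PRESERVE_CONTEXT flag mentioned above, a
    minimal userspace sketch of how kexec-tools might invoke sys_kexec_load
    for --load-preserve-context; the segment setup is omitted and the wrapper
    name is hypothetical:

    #include <linux/kexec.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Hypothetical wrapper; real kexec-tools builds the segments first. */
    static long load_preserve_context(unsigned long entry,
                                      unsigned long nr_segments,
                                      struct kexec_segment *segments)
    {
            return syscall(SYS_kexec_load, entry, nr_segments, segments,
                           KEXEC_ARCH_DEFAULT | KEXEC_PRESERVE_CONTEXT);
    }

    After the load, "/sbin/kexec -e" jumps to the loaded image, and because
    the context was preserved, control can come back to the original kernel.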

    Signed-off-by: Huang Ying
    Acked-by: Vivek Goyal
    Cc: "Eric W. Biederman"
    Cc: Pavel Machek
    Cc: Nigel Cunningham
    Cc: "Rafael J. Wysocki"
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     

20 Oct, 2007

1 commit

  • This patch removes the crashkernel parsing from
    arch/sh/kernel/machine_kexec.c and instead calls the generic function,
    introduced in the generic patch, from setup_bootmem_allocator().

    This is necessary because, with the new crashkernel= syntax, the amount
    of System RAM must already be known in this function.
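
    A rough sketch of such a call site, assuming the generic
    parse_crashkernel() interface; the surrounding bootmem handling and the
    exact System RAM expression are assumptions:

    unsigned long long crash_size = 0, crash_base = 0;
    int ret;

    /* The generic parser needs the total System RAM to resolve the new
     * range-based crashkernel= syntax. */
    ret = parse_crashkernel(boot_command_line, memory_end - memory_start,
                            &crash_size, &crash_base);
    if (ret == 0 && crash_size > 0) {
            crashk_res.start = crash_base;
            crashk_res.end   = crash_base + crash_size - 1;
    }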

    Signed-off-by: Bernhard Walle
    Acked-by: Paul Mundt
    Cc: Vivek Goyal
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bernhard Walle
     

07 May, 2007

1 commit


27 Sep, 2006

1 commit


27 Jun, 2006

1 commit


17 Jan, 2006

1 commit