09 Dec, 2011

14 commits

  • Now that all early memory information is in memblock when enabled, we
    can implement a reverse free area iterator and use it to implement a
    NUMA aware allocator, which is then wrapped for the simpler variants,
    instead of the confusing and inefficient duplication of logic in a
    separate NUMA aware allocator.

    Implement for_each_free_mem_range_reverse(), use it to reimplement
    memblock_find_in_range_node() which in turn is used by all allocators.

    The visible allocator interface is inconsistent and can probably use
    some cleanup too.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
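    The top-down allocation idea behind the reverse iterator can be
    sketched outside the kernel. This is a toy model, not the memblock
    implementation: the structure and function names below are
    illustrative stand-ins.

```c
#include <stdint.h>

/* Toy model: sorted, non-overlapping free ranges (not the kernel's
 * memblock arrays; layout and names are illustrative only). */
struct range { uint64_t base, size; };

/* Walk free ranges from the top down and return the highest
 * sufficiently large, aligned address, or 0 on failure - the idea
 * behind allocating via a reverse free-area iterator. */
static uint64_t find_top_down(const struct range *free_rs, int nr,
                              uint64_t size, uint64_t align)
{
    for (int i = nr - 1; i >= 0; i--) {          /* reverse iteration */
        uint64_t end = free_rs[i].base + free_rs[i].size;
        uint64_t cand;

        if (free_rs[i].size < size)
            continue;
        cand = (end - size) & ~(align - 1);      /* align down */
        if (cand >= free_rs[i].base)
            return cand;
    }
    return 0;
}
```

    Top-down placement keeps early allocations away from low memory,
    which is the behavior the real allocator wants by default.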
     
  • Now all ARCH_POPULATES_NODE_MAP archs select HAVE_MEMBLOCK_NODE_MAP -
    there's no user of early_node_map[] left. Kill early_node_map[] and
    replace ARCH_POPULATES_NODE_MAP with HAVE_MEMBLOCK_NODE_MAP. Also,
    relocate for_each_mem_pfn_range() and helper from mm.h to memblock.h
    as page_alloc.c would no longer host an alternative implementation.

    This change is ultimately a one-to-one mapping and shouldn't cause any
    observable difference; however, after the recent changes, there are
    some functions which now would fit memblock.c better than page_alloc.c
    and dependency on HAVE_MEMBLOCK_NODE_MAP instead of HAVE_MEMBLOCK
    doesn't make much sense on some of them. Further cleanups for
    functions inside HAVE_MEMBLOCK_NODE_MAP in mm.h would be nice.

    -v2: Fix compile bug introduced by mis-spelling
    CONFIG_HAVE_MEMBLOCK_NODE_MAP to CONFIG_MEMBLOCK_HAVE_NODE_MAP in
    mmzone.h. Reported by Stephen Rothwell.

    Signed-off-by: Tejun Heo
    Cc: Stephen Rothwell
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu
    Cc: Tony Luck
    Cc: Ralf Baechle
    Cc: Martin Schwidefsky
    Cc: Chen Liqin
    Cc: Paul Mundt
    Cc: "David S. Miller"
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • Implement memblock_add_node() which can add a new memblock memory
    region with specific node ID.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
     
  • The only function of memblock_analyze() is now allowing resize of
    memblock region arrays. Rename it to memblock_allow_resize() and
    update its users.

    * The following users remain the same other than renaming.

    arm/mm/init.c::arm_memblock_init()
    microblaze/kernel/prom.c::early_init_devtree()
    powerpc/kernel/prom.c::early_init_devtree()
    openrisc/kernel/prom.c::early_init_devtree()
    sh/mm/init.c::paging_init()
    sparc/mm/init_64.c::paging_init()
    unicore32/mm/init.c::uc32_memblock_init()

    * In the following users, analyze was used to update the total size,
    which is no longer necessary.

    powerpc/kernel/machine_kexec.c::reserve_crashkernel()
    powerpc/kernel/prom.c::early_init_devtree()
    powerpc/mm/init_32.c::MMU_init()
    powerpc/mm/tlb_nohash.c::__early_init_mmu()
    powerpc/platforms/ps3/mm.c::ps3_mm_add_memory()
    powerpc/platforms/embedded6xx/wii.c::wii_memory_fixups()
    sh/kernel/machine_kexec.c::reserve_crashkernel()

    * x86/kernel/e820.c::memblock_x86_fill() was directly setting
    memblock_can_resize before populating memblock and calling analyze
    afterwards. Call memblock_allow_resize() before starting to populate.

    memblock_can_resize is now static inside memblock.c.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu
    Cc: Russell King
    Cc: Michal Simek
    Cc: Paul Mundt
    Cc: "David S. Miller"
    Cc: Guan Xuetao
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • The total size of memory regions was calculated by memblock_analyze(),
    which had to be called explicitly between any operation that changes
    memory regions and any user of the total size - cumbersome and
    fragile.

    This patch makes each memblock_type track its total size automatically
    with minor modifications to the memblock manipulation functions,
    removing the need to call memblock_analyze(). [__]memblock_dump_all()
    now also dumps the total size of reserved regions.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
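    The bookkeeping idea is simply to update the running total at every
    mutation. A minimal sketch, with simplified structures that are not
    the kernel's:

```c
#include <stdint.h>

/* Illustrative only: a region container that keeps its total size in
 * sync as regions are added and removed, so no separate "analyze"
 * pass is ever needed. */
struct region { uint64_t base, size; };
struct rtype {
    struct region r[16];
    int cnt;
    uint64_t total_size;    /* kept in sync by every manipulation */
};

static void rtype_add(struct rtype *t, uint64_t base, uint64_t size)
{
    t->r[t->cnt].base = base;
    t->r[t->cnt].size = size;
    t->cnt++;
    t->total_size += size;  /* update at mutation time */
}

static void rtype_remove(struct rtype *t, int idx)
{
    t->total_size -= t->r[idx].size;
    for (int i = idx; i < t->cnt - 1; i++)   /* close the gap */
        t->r[i] = t->r[i + 1];
    t->cnt--;
}
```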
     
  • With recent updates, the basic memblock operations are robust enough
    that there's no reason for memblock_enforce_memory_limit() to directly
    manipulate memblock region arrays. Reimplement it using
    __memblock_remove().

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
     
  • Allow memblock users to specify a range where @base + @size overflows
    and automatically cap it at the maximum. This makes the interface more
    robust and makes specifying till-the-end-of-memory easier.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
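    The capping logic is small enough to show in full. A sketch with a
    hypothetical helper name (the real code does this inline in the
    add/reserve/remove paths):

```c
#include <stdint.h>

/* Hypothetical helper showing the capping idea: if @base + @size
 * overflows the address type, clamp the range to end at the maximum
 * address instead of rejecting it, so "till the end of memory" is
 * easy to specify. */
static uint64_t cap_size(uint64_t base, uint64_t size)
{
    if (base + size < base)            /* unsigned wraparound */
        size = UINT64_MAX - base;      /* cap at maximum */
    return size;
}
```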
     
  • __memblock_remove()'s open coded region manipulation can be trivially
    replaced with memblock_isolate_range(). This increases code sharing
    and eases improving region tracking.

    This pulls memblock_isolate_range() out of HAVE_MEMBLOCK_NODE_MAP.
    Make it use memblock_get_region_node() instead of assuming rgn->nid is
    available.

    -v2: Fixed build failure on !HAVE_MEMBLOCK_NODE_MAP caused by direct
    rgn->nid access.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
     
  • memblock_set_node() operates in three steps - break regions crossing
    boundaries, set nid and merge back regions. This patch separates the
    first part into a separate function - memblock_isolate_range() - which
    breaks regions crossing the range boundaries and returns the index
    range of the regions properly contained in the specified memory range.

    This doesn't introduce any behavior change and will be used to further
    unify region handling.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
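    The isolate step can be sketched as a standalone routine. This is a
    toy model, not the kernel code: simplified structures, no capacity
    checking, and no nid handling.

```c
#include <stdint.h>
#include <string.h>

/* Toy model of the isolate idea: split any region crossing @base or
 * @base + @size so that afterwards a contiguous run of array entries
 * lies fully inside the range; report that run as [*start, *end). */
struct region { uint64_t base, size; };

static void split_at(struct region *r, int *cnt, int idx, uint64_t at)
{
    /* split r[idx] into [base, at) and [at, base + size) */
    memmove(&r[idx + 1], &r[idx], (*cnt - idx) * sizeof(*r));
    (*cnt)++;
    r[idx].size = at - r[idx].base;
    r[idx + 1].base = at;
    r[idx + 1].size -= r[idx].size;
}

static void isolate_range(struct region *r, int *cnt,
                          uint64_t base, uint64_t size,
                          int *start, int *end)
{
    uint64_t rend = base + size;

    *start = *end = 0;
    for (int i = 0; i < *cnt; i++) {
        uint64_t b = r[i].base, e = r[i].base + r[i].size;

        if (e <= base)
            continue;                  /* entirely below the range */
        if (b >= rend)
            break;                     /* entirely above the range */
        if (b < base) {
            split_at(r, cnt, i, base); /* crosses lower boundary */
            continue;                  /* upper half handled next */
        }
        if (e > rend)
            split_at(r, cnt, i, rend); /* crosses upper boundary */
        /* r[i] now lies fully inside [base, rend) */
        if (*end == 0)
            *start = i;
        *end = i + 1;
    }
}
```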
     
  • memblock_init() initializes arrays for regions and memblock itself;
    however, all these can be done with struct initializers and
    memblock_init() can be removed. This patch kills memblock_init() and
    initializes memblock with struct initializer.

    The only difference is that the first dummy entries don't have .nid
    set to MAX_NUMNODES initially. This doesn't cause any behavior
    difference.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu
    Cc: Russell King
    Cc: Michal Simek
    Cc: Paul Mundt
    Cc: "David S. Miller"
    Cc: Guan Xuetao
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • memblock no longer depends on having one extra entry at the end of the
    region arrays during addition, so the sentinel entries there are no
    longer useful. Remove the sentinels. This eases further updates.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
     
  • Add __memblock_dump_all() which dumps memblock configuration whether
    memblock_debug is enabled or not.

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
     
  • Make memblock_double_array(), __memblock_alloc_base() and
    memblock_alloc_nid() use memblock_reserve() instead of calling
    memblock_add_region() with reserved array directly. This eases
    debugging and updates to memblock_add_region().

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
     
  • memblock_{add|remove|free|reserve}() return either 0 or -errno but had
    long as return type. Change it to int. Also, drop 'extern' from all
    prototypes in memblock.h - they are unnecessary and used
    inconsistently (especially if mm.h is included in the picture).

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu

    Tejun Heo
     

29 Nov, 2011

1 commit

  • Conflicts & resolutions:

    * arch/x86/xen/setup.c

    dc91c728fd "xen: allow extra memory to be in multiple regions"
    24aa07882b "memblock, x86: Replace memblock_x86_reserve/free..."

    conflicted on xen_add_extra_mem() updates. The resolution is
    trivial as the latter just wants to replace
    memblock_x86_reserve_range() with memblock_reserve().

    * drivers/pci/intel-iommu.c

    166e9278a3f "x86/ia64: intel-iommu: move to drivers/iommu/"
    5dfe8660a3d "bootmem: Replace work_with_active_regions() with..."

    conflicted as the former moved the file under drivers/iommu/.
    Resolved by applying the changes from the latter to the moved
    file.

    * mm/Kconfig

    6661672053a "memblock: add NO_BOOTMEM config symbol"
    c378ddd53f9 "memblock, x86: Make ARCH_DISCARD_MEMBLOCK a config option"

    conflicted trivially. Both added config options. Just
    letting both add their own options resolves the conflict.

    * mm/memblock.c

    d1f0ece6cdc "mm/memblock.c: small function definition fixes"
    ed7b56a799c "memblock: Remove memblock_memory_can_coalesce()"

    conflicted. The former updates a function removed by the
    latter. Resolution is trivial.

    Signed-off-by: Tejun Heo

    Tejun Heo
     

01 Nov, 2011

3 commits

  • Quiet the following sparse noise in this file:

    warning: symbol 'memblock_overlaps_region' was not declared. Should it be static?

    Signed-off-by: H Hartley Sweeten
    Cc: Yinghai Lu
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: Tomi Valkeinen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    H Hartley Sweeten
     
  • Quiet the following warning: function 'memblock_memory_can_coalesce'
    with external linkage has definition.

    Signed-off-by: Jonghwan Choi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jonghwan Choi
     
  • SPARC32 requires access to the start address. Add a new helper
    memblock_start_of_DRAM() to give access to the address of the first
    memblock - which contains the lowest address.

    The awkward name was chosen to match the already present
    memblock_end_of_DRAM().

    Signed-off-by: Sam Ravnborg
    Cc: "David S. Miller"
    Cc: Yinghai Lu
    Acked-by: Tejun Heo
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sam Ravnborg
     

26 Jul, 2011

1 commit

  • RED_INACTIVE is a slab thing, and reusing it for memblock was
    inappropriate, because memblock is dealing with phys_addr_t's which have a
    Kconfigurable sizeof().

    Create a new poison type for this application. Fixes the sparse warning

    warning: cast truncates bits from constant value (9f911029d74e35b becomes 9d74e35b)

    Reported-by: H Hartley Sweeten
    Tested-by: H Hartley Sweeten
    Acked-by: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

15 Jul, 2011

11 commits

  • phys_addr_t is not necessarily the same thing as unsigned long long.
    It is, however, easier to cast it to unsigned long long for printf
    purposes than it is to deal with different printf formats.

    Signed-off-by: H. Peter Anvin
    Cc: Tejun Heo
    Link: http://lkml.kernel.org/r/4E1F4D2C.3000507@zytor.com

    H. Peter Anvin
     
  • Other than sanity checks and debug messages, the x86-specific versions
    of the memblock reserve/free functions are simple wrappers around the
    generic versions - memblock_reserve/free().

    This patch adds debug messages with caller identification to the
    generic versions, replaces the x86-specific ones and kills them.
    arch/x86/include/asm/memblock.h and arch/x86/mm/memblock.c are empty
    after this change and removed.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310462166-31469-14-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • Make ARCH_DISCARD_MEMBLOCK a config option so that it can be handled
    together with other MEMBLOCK options.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/20110714094603.GH3455@htj.dyndns.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • Implement for_each_free_mem_range() which iterates over free memory
    areas according to memblock (memory && !reserved). This will be used
    to simplify memblock users.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310462166-31469-7-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
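    The "memory && !reserved" computation can be sketched for one memory
    range. This is an illustrative model with simplified structures, not
    the kernel's iterator macro:

```c
#include <stdint.h>

/* Sketch of the free-area idea: subtract a sorted, non-overlapping
 * reserved list from one memory range and collect the free gaps into
 * @out. Returns the number of free ranges found. */
struct range { uint64_t base, size; };

static int free_ranges(struct range mem,
                       const struct range *res, int nres,
                       struct range *out)
{
    int n = 0;
    uint64_t cur = mem.base, mend = mem.base + mem.size;

    for (int i = 0; i < nres && cur < mend; i++) {
        uint64_t rb = res[i].base, re = res[i].base + res[i].size;

        if (re <= cur)
            continue;                 /* reserved block is below us */
        if (rb >= mend)
            break;                    /* reserved block is above us */
        if (rb > cur) {               /* gap before this reservation */
            out[n].base = cur;
            out[n].size = rb - cur;
            n++;
        }
        cur = re > cur ? re : cur;    /* skip over the reservation */
    }
    if (cur < mend) {                 /* trailing free area */
        out[n].base = cur;
        out[n].size = mend - cur;
        n++;
    }
    return n;
}
```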
     
  • Add optional region->nid which can be enabled by arch using
    CONFIG_HAVE_MEMBLOCK_NODE_MAP. When enabled, memblock also carries
    NUMA node information and replaces early_node_map[].

    Newly added memblocks have MAX_NUMNODES as nid. Arch can then call
    memblock_set_node() to set node information. memblock takes care of
    merging and node affine allocations w.r.t. node information.

    When MEMBLOCK_NODE_MAP is enabled, early_node_map[], related data
    structures and functions to manipulate and iterate it are disabled.
    memblock version of __next_mem_pfn_range() is provided such that
    for_each_mem_pfn_range() behaves the same and its users don't have to
    be updated.

    -v2: Yinghai spotted section mismatch caused by missing
    __init_memblock in memblock_set_node(). Fixed.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/20110714094342.GF3455@htj.dyndns.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • memblock_add_region() carefully checked for merge and overlap
    conditions while adding a new region, which is complicated and makes
    it difficult to allow arbitrary overlaps or add more merge conditions
    (e.g. node ID).

    This re-implements memblock_add_region() such that insertion is done
    in two steps - all non-overlapping portions of the new area are
    inserted as separate regions first, and then memblock_merge_regions()
    scans and merges all neighbouring compatible regions.

    This makes addition logic simpler and more versatile and enables
    adding node information to memblock.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310462166-31469-3-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
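    The second step is the easy one to show in isolation. A toy sketch
    of the merge pass over a sorted region array; "compatible" here is
    just adjacency, whereas the kernel also compares node IDs:

```c
#include <stdint.h>

/* Sketch of the merge pass: after the non-overlapping pieces have
 * been inserted, one scan absorbs each region's exactly-adjacent
 * neighbour. Returns the new region count. */
struct region { uint64_t base, size; };

static int merge_regions(struct region *r, int cnt)
{
    int i = 0;

    while (i < cnt - 1) {
        if (r[i].base + r[i].size == r[i + 1].base) {
            r[i].size += r[i + 1].size;         /* absorb neighbour */
            for (int j = i + 1; j < cnt - 1; j++)
                r[j] = r[j + 1];                /* close the gap */
            cnt--;                              /* re-check same i */
        } else {
            i++;
        }
    }
    return cnt;
}
```

    Keeping insertion and merging separate is what makes it easy to add
    new merge conditions later, such as matching node IDs.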
     
  • Arch could implement memblock_memory_can_coalesce() to veto merging of
    adjacent or overlapping memblock regions; however, no arch did and any
    vetoing would trigger WARN_ON(). Memblock regions are supposed to
    deal with proper memory anyway. Remove the unused hook.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310462166-31469-2-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • Node affine memblock allocation logic is currently implemented across
    memblock_alloc_nid() and memblock_alloc_nid_region(). This
    reorganizes it so that it resembles the non-NUMA allocation API.

    Area finding is collected and moved into new exported function
    memblock_find_in_range_node() which is symmetrical to non-NUMA
    counterpart - it handles @start/@end and understands ANYWHERE and
    ACCESSIBLE. memblock_alloc_nid() now simply calls
    memblock_find_in_range_node() and reserves the returned area.

    This makes memblock_alloc[_try]_nid() observe ACCESSIBLE limit on node
    affine allocations too (again, this doesn't make any difference for
    the current sole user - sparc64).

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310460395-30913-8-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • NUMA aware memblock alloc functions - memblock_alloc_[try_]nid() -
    weren't properly top-down because memblock_nid_range() scanned
    forward. This patch reverses memblock_nid_range(), renames it to
    memblock_nid_range_rev() and updates related functions to implement
    proper top-down allocation.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310460395-30913-7-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • memblock_nid_range() is used to implement memblock_[try_]alloc_nid().
    The generic version determines the range by walking early_node_map
    with for_each_mem_pfn_range(). The generic version is defined __weak
    to allow arch override.

    Currently, only sparc overrides it; however, with the previous update
    to the generic implementation, there isn't much to be gained with arch
    override. Sparc would behave exactly the same with the generic
    implementation.

    This patch disallows arch override for memblock_nid_range() and makes
    both the generic and sparc versions static.

    sparc is only compile tested.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310460395-30913-6-git-send-email-tj@kernel.org
    Cc: "David S. Miller"
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • Given an address range, memblock_nid_range() determines the node the
    start of the range belongs to and up to where the range stays in the
    same node.

    It's implemented by calling get_pfn_range_for_nid(), which determines
    the min and max pfns for a given node, for each node and testing
    whether the start address falls in there. This is not only inefficient
    but also incorrect when nodes interleave, as the min-max ranges of
    nodes overlap.

    This patch reimplements memblock_nid_range() using
    for_each_mem_pfn_range(). It's simpler, walks the mem ranges once and
    can find the exact range the start address falls in.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310460395-30913-5-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
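    The single-walk lookup can be sketched directly. A toy version with
    illustrative names, assuming sorted, node-annotated ranges:

```c
#include <stdint.h>

/* Toy version of the lookup described above: walk the ranges once
 * and report the node the start address falls in, plus how far the
 * range stays on that node. Returns @start unchanged (and nid -1)
 * when no range covers the address. */
struct pfn_range { uint64_t start, end; int nid; };

static uint64_t nid_range(const struct pfn_range *r, int nr,
                          uint64_t start, int *nid)
{
    for (int i = 0; i < nr; i++) {
        if (start >= r[i].start && start < r[i].end) {
            *nid = r[i].nid;
            return r[i].end;   /* same-node extent ends here */
        }
    }
    *nid = -1;
    return start;              /* not covered by any range */
}
```

    Because the lookup matches the exact range containing the address,
    it stays correct even when nodes interleave.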
     

14 Jul, 2011

4 commits

  • memblock_find_base() is a static function with two callers in
    memblock.c and memblock_find_in_range() is a wrapper around it which
    just changes the types and order of parameters.

    Make memblock_find_in_range() take phys_addr_t instead of u64 for
    consistency and replace memblock_find_base() with it.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310457490-3356-7-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • 25818f0f28 (memblock: Make MEMBLOCK_ERROR be 0) thankfully made
    MEMBLOCK_ERROR 0, and there is already code which expects an error
    return of 0. There's no point in keeping MEMBLOCK_ERROR around. End
    its misery.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310457490-3356-6-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310457490-3356-5-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     
  • After node affine allocation fails, memblock_alloc_try_nid() calls
    memblock_alloc_base() with @max_addr set to MEMBLOCK_ALLOC_ANYWHERE.
    This is inconsistent with memblock_alloc() and what the function's
    sole user - sparc/mm/init_64 - expects, although it doesn't make any
    difference as sparc64 doesn't have highmem and ACCESSIBLE equals
    ANYWHERE.

    This patch makes memblock_alloc_try_nid() use ACCESSIBLE instead of
    ANYWHERE. This isn't complete as node affine allocation doesn't
    consider memblock.current_limit. It will be handled with future
    changes.

    This patch doesn't introduce any behavior difference.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310457490-3356-4-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
     

23 Mar, 2011

1 commit

  • Currently memblock_reserve() or memblock_free() don't handle overlaps of
    any kind. There is some special casing for coalescing exactly adjacent
    regions but that's about it.

    This is annoying because typically memblock_reserve() is used to mark
    regions passed by the firmware as reserved and we all know how much we can
    trust our firmwares...

    Also, with the current code, if we do something it doesn't handle right
    such as trying to memblock_reserve() a large range spanning multiple
    existing smaller reserved regions for example, or doing overlapping
    reservations, it can silently corrupt the internal region array, causing
    odd errors much later on, such as allocations returning reserved regions
    etc...

    This patch rewrites the underlying functions that add or remove a region
    to the arrays. The new code is a lot more robust as it fully handles
    overlapping regions. It's also, imho, simpler than the previous
    implementation.

    In addition, while doing so, I found a bug where if we fail to double the
    array while adding a region, we would remove the last region of the array
    rather than the region we just allocated. This fixes it too.

    Signed-off-by: Benjamin Herrenschmidt
    Acked-by: Yinghai Lu
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benjamin Herrenschmidt
     

12 Feb, 2011

1 commit

  • While applying a patch to use memblock to find the aperture for
    64-bit x86, Ingo found a system with 1g + force_iommu:

    > No AGP bridge found
    > Node 0: aperture @ 38000000 size 32 MB
    > Aperture pointing to e820 RAM. Ignoring.
    > Your BIOS doesn't leave a aperture memory hole
    > Please enable the IOMMU option in the BIOS setup
    > This costs you 64 MB of RAM
    > Cannot allocate aperture memory hole (0,65536K)

    the corresponding code:

    addr = memblock_find_in_range(0, 1ULL<<32, aper_size, 512ULL<<20);
    if (addr == MEMBLOCK_ERROR || addr + aper_size > 0xffffffff) {
            printk(KERN_ERR
                    "Cannot allocate aperture memory hole (%lx,%uK)\n",
                            addr, aper_size>>10);
            return 0;
    }
    memblock_x86_reserve_range(addr, addr + aper_size, "aperture64")

    fails because the memblock core code aligns the size to 512M, which
    can make the size way too big.

    So don't align the size in that case.

    Actually __memblock_alloc_base(), the other caller, already aligns
    the size before calling this function.

    BTW, x86 does not use __memblock_alloc_base...

    Signed-off-by: Yinghai Lu
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: Dave Airlie
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     

21 Jan, 2011

1 commit

  • memblock_is_region_memory() uses reserved memblocks to search for the
    given region, while it should use the memory memblocks.

    I encountered the problem with OMAP's framebuffer ram allocation.
    Normally the ram is allocated dynamically, and this function is not
    called. However, if we want to pass the framebuffer from the bootloader
    to the kernel (to retain the boot image), this function is used to check
    the validity of the kernel parameters for the framebuffer ram area.

    Signed-off-by: Tomi Valkeinen
    Acked-by: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tomi Valkeinen
     

12 Oct, 2010

2 commits

  • Stephen found

    WARNING: mm/built-in.o(.text+0x25ab8): Section mismatch in reference from the function memblock_find_base() to the function .init.text:memblock_find_region()
    The function memblock_find_base() references
    the function __init memblock_find_region().
    This is often because memblock_find_base lacks a __init
    annotation or the annotation of memblock_find_region is wrong.

    So let memblock_find_region() use __init_memblock instead of __init
    directly.

    Also fix one function that lacked an __init* annotation to be
    __init_memblock.

    Reported-by: Stephen Rothwell
    Signed-off-by: Yinghai Lu
    LKML-Reference:
    Signed-off-by: H. Peter Anvin

    Yinghai Lu
     
  • The Xen setup code needs to call memblock_x86_reserve_range() very early,
    so allow it to initialize the memblock subsystem before doing so. The
    second memblock_init() is ignored.

    Signed-off-by: Jeremy Fitzhardinge
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    LKML-Reference:
    Signed-off-by: H. Peter Anvin

    Jeremy Fitzhardinge
     

06 Oct, 2010

1 commit

  • When trying to find a huge range for crashkernel, we get:

    [ 0.000000] ------------[ cut here ]------------
    [ 0.000000] WARNING: at arch/x86/mm/memblock.c:248 memblock_x86_reserve_range+0x40/0x7a()
    [ 0.000000] Hardware name: Sun Fire x4800
    [ 0.000000] memblock_x86_reserve_range: wrong range [0xffffffff37000000, 0x137000000)
    [ 0.000000] Modules linked in:
    [ 0.000000] Pid: 0, comm: swapper Not tainted 2.6.36-rc5-tip-yh-01876-g1cac214-dirty #59
    [ 0.000000] Call Trace:
    [ 0.000000] [] ? memblock_x86_reserve_range+0x40/0x7a
    [ 0.000000] [] warn_slowpath_common+0x85/0x9e
    [ 0.000000] [] warn_slowpath_fmt+0x6e/0x70
    [ 0.000000] [] ? memblock_find_region+0x40/0x78
    [ 0.000000] [] ? memblock_find_base+0x9a/0xb9
    [ 0.000000] [] memblock_x86_reserve_range+0x40/0x7a
    [ 0.000000] [] setup_arch+0x99d/0xb2a
    [ 0.000000] [] ? trace_hardirqs_off+0xd/0xf
    [ 0.000000] [] ? _raw_spin_unlock_irqrestore+0x3d/0x4c
    [ 0.000000] [] start_kernel+0xde/0x3f1
    [ 0.000000] [] x86_64_start_reservations+0xa0/0xa4
    [ 0.000000] [] x86_64_start_kernel+0x106/0x10d
    [ 0.000000] ---[ end trace a7919e7f17c0a725 ]---
    [ 0.000000] Reserving 8192MB of memory at 17592186041200MB for crashkernel (System RAM: 526336MB)

    This is caused by a wraparound in the test due to size > end;
    explicitly check for this condition and fail.

    Signed-off-by: Yinghai Lu
    LKML-Reference:
    Signed-off-by: H. Peter Anvin

    Yinghai Lu
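    The failure mode is the classic unsigned subtraction wraparound:
    computing "end - size" when size > end produces a huge bogus base
    like the 0xffffffff37000000 above. A minimal sketch of the guard,
    with a hypothetical function name:

```c
#include <stdint.h>

/* Sketch of the fix: when placing @size bytes at the top of
 * [start, end), reject ranges too small to hold the allocation
 * before subtracting, so "end - size" can never wrap around. */
static int find_region_top(uint64_t start, uint64_t end,
                           uint64_t size, uint64_t *base)
{
    if (size > end - start)    /* range too small: would wrap */
        return -1;
    *base = end - size;
    return 0;
}
```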