21 May, 2016

2 commits

  • Comparing a u64 variable against >= 0 is always true, so the check can
    be removed. This issue was detected using the -Wtype-limits gcc flag.

    This patch fixes the following type-limits warning:

    mm/memblock.c: In function `__next_reserved_mem_region':
    mm/memblock.c:843:11: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
    if (*idx >= 0 && *idx < type->cnt) {
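
    For illustration, here is a minimal stand-alone sketch of the pattern the
    warning flags (not the kernel source): with an unsigned index, the
    ">= 0" half of the test is a tautology, so the two forms below behave
    identically.

    #include <stdint.h>

    int in_range_before(uint64_t idx, uint64_t cnt)
    {
        return idx >= 0 && idx < cnt;   /* "idx >= 0" is always true */
    }

    int in_range_after(uint64_t idx, uint64_t cnt)
    {
        return idx < cnt;               /* same behaviour, no -Wtype-limits warning */
    }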

    Link: http://lkml.kernel.org/r/20160510103625.3a7f8f32@g0hl1n.net
    Signed-off-by: Richard Leitner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Richard Leitner
     
  • memblock_add_region() and memblock_reserve_region() do nothing specific
    before calling memblock_add_range(); they only print debug output.

    We can do the same directly in memblock_add() and memblock_reserve(),
    since memblock_add_region() and memblock_reserve_region() are not used
    by anybody outside of memblock.c and memblock_{add,reserve}() take the
    same set of flags and nids.

    Since memblock_add_region() and memblock_reserve_region() would simply
    be inlined, there is no functional change, but code readability
    improves a little.
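
    As a hedged sketch of the fold described above (simplified; the real
    debug print and flag handling differ in detail), memblock_add() ends up
    doing the debug print and calling memblock_add_range() itself:

    int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
    {
        phys_addr_t end = base + size - 1;

        memblock_dbg("memblock_add: [%pa-%pa] %pF\n",
                     &base, &end, (void *)_RET_IP_);

        return memblock_add_range(&memblock.memory, base, size,
                                  MAX_NUMNODES, 0);
    }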

    Signed-off-by: Alexander Kuleshov
    Acked-by: Ard Biesheuvel
    Cc: Mel Gorman
    Cc: Pekka Enberg
    Cc: Tony Luck
    Cc: Tang Chen
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     

18 Mar, 2016

1 commit

  • Kernel style prefers a single string over split strings when the string is
    'user-visible'.

    Miscellanea:

    - Add a missing newline
    - Realign arguments
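
    For illustration of the single-string preference (printf stands in for
    the kernel's pr_* helpers, and the message text is made up):

    #include <stdio.h>

    void report(unsigned long base, unsigned long size)
    {
        /* discouraged: the user-visible message is split across two
         * literals, so grepping the sources for the full text fails */
        printf("example: cannot allocate %lu bytes "
               "below %#lx\n", size, base);

        /* preferred: one string, arguments realigned */
        printf("example: cannot allocate %lu bytes below %#lx\n",
               size, base);
    }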

    Signed-off-by: Joe Perches
    Acked-by: Tejun Heo [percpu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

16 Mar, 2016

1 commit

  • We define struct memblock_type *type in the memblock_add_region() and
    memblock_reserve_region() functions only to pass it to the
    memblock_add_range() and memblock_reserve_range() functions. Let's
    remove these variables and pass the type directly.

    Signed-off-by: Alexander Kuleshov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     

06 Feb, 2016

1 commit

  • At the moment memblock_phys_mem_size() is marked as __init, and so is
    discarded after boot. This is different from most of the memblock
    functions which are marked __init_memblock, and are only discarded after
    boot if memory hotplug is not configured.

    To allow for upcoming code which will need memblock_phys_mem_size() in
    the hotplug path, change it from __init to __init_memblock.
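
    In sketch form, the change described is just the annotation on this
    accessor (the body is shown as commonly implemented; treat it as
    illustrative):

    phys_addr_t __init_memblock memblock_phys_mem_size(void)
    {
        return memblock.memory.total_size;
    }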

    Signed-off-by: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Gibson
     

15 Jan, 2016

3 commits

  • We already have the for_each_memblock() macro in <linux/memblock.h>,
    which provides the ability to iterate over the memblock regions of a
    known, named type. It does not let us pass a pointer to a struct
    memblock_type; instead we have to pass the name of the type.

    This patch introduces a new macro, for_each_memblock_type(), which
    allows us to iterate over the regions of a memblock_type that is only
    known through a pointer.
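
    A self-contained model of the idea (the real definitions live in
    include/linux/memblock.h and differ in detail; the index variable is
    passed explicitly here only for the sake of the example):

    #include <stdio.h>

    struct memblock_region {
        unsigned long long base;
        unsigned long long size;
    };

    struct memblock_type {
        unsigned long cnt;
        struct memblock_region *regions;
    };

    #define for_each_memblock_type(i, type, rgn)            \
        for (i = 0, rgn = &(type)->regions[0];              \
             i < (type)->cnt;                               \
             i++, rgn = &(type)->regions[i])

    int main(void)
    {
        struct memblock_region regions[] = {
            { 0x1000, 0x1000 },
            { 0x8000, 0x4000 },
        };
        struct memblock_type type = { 2, regions };
        struct memblock_region *rgn;
        unsigned long i;

        /* works with any memblock_type reached through a pointer */
        for_each_memblock_type(i, &type, rgn)
            printf("[%#llx-%#llx]\n",
                   rgn->base, rgn->base + rgn->size - 1);
        return 0;
    }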

    Signed-off-by: Alexander Kuleshov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     
  • Remove the rgnbase and rgnsize variables from memblock_overlaps_region().
    We use them only to pass values to memblock_addrs_overlap(), so let's
    drop them.

    Signed-off-by: Alexander Kuleshov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     
  • Make memblock_is_memory() and memblock_is_reserved() return bool to
    improve readability, since these particular functions only ever return
    either one or zero.

    No functional change.

    Signed-off-by: Yaowei Bai
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yaowei Bai
     

10 Dec, 2015

1 commit

  • This introduces the MEMBLOCK_NOMAP attribute and the required plumbing
    to make it usable as an indicator that some parts of normal memory
    should not be covered by the kernel direct mapping. It is up to the
    arch to actually honor the attribute when laying out this mapping,
    but the memblock code itself is modified to disregard these regions
    for allocations and other general use.
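
    A hedged sketch of how a caller might use the new attribute (assuming
    the memblock_mark_nomap() interface this patch introduces; the helper
    name reserve_firmware_region() is made up for the example):

    static int __init reserve_firmware_region(phys_addr_t base,
                                              phys_addr_t size)
    {
        memblock_add(base, size);               /* still accounted as memory */
        return memblock_mark_nomap(base, size); /* but kept out of the
                                                 * direct mapping */
    }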

    Cc: linux-mm@kvack.org
    Cc: Alexander Kuleshov
    Cc: Andrew Morton
    Reviewed-by: Matt Fleming
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Will Deacon

    Ard Biesheuvel
     

06 Nov, 2015

1 commit


09 Sep, 2015

6 commits

  • Signed-off-by: Alexander Kuleshov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     
  • s/succees/success/

    Signed-off-by: Alexander Kuleshov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     
  • Since commit e3239ff92a17 ("memblock: Rename memblock_region to
    memblock_type and memblock_property to memblock_region"), all local
    variables of type memblock_type were renamed to 'type'. This commit
    renames the remaining local variables of type memblock_type to follow
    the same convention.

    Signed-off-by: Alexander Kuleshov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     
  • When parsing SRAT, all memory ranges are added into numa_meminfo. In
    numa_init(), before entering numa_cleanup_meminfo(), all possible memory
    ranges are in numa_meminfo. numa_cleanup_meminfo() then removes all
    ranges that are empty or above max_pfn.

    But this only works if the nodes are contiguous. Let's have a look at
    the following example:

    We have an SRAT like this:
    SRAT: Node 0 PXM 0 [mem 0x00000000-0x5fffffff]
    SRAT: Node 0 PXM 0 [mem 0x100000000-0x1ffffffffff]
    SRAT: Node 1 PXM 1 [mem 0x20000000000-0x3ffffffffff]
    SRAT: Node 4 PXM 2 [mem 0x40000000000-0x5ffffffffff] hotplug
    SRAT: Node 5 PXM 3 [mem 0x60000000000-0x7ffffffffff] hotplug
    SRAT: Node 2 PXM 4 [mem 0x80000000000-0x9ffffffffff] hotplug
    SRAT: Node 3 PXM 5 [mem 0xa0000000000-0xbffffffffff] hotplug
    SRAT: Node 6 PXM 6 [mem 0xc0000000000-0xdffffffffff] hotplug
    SRAT: Node 7 PXM 7 [mem 0xe0000000000-0xfffffffffff] hotplug

    On boot, only node 0,1,2,3 exist.

    And the numa_meminfo will look like this:
    numa_meminfo.nr_blks = 9
    1. on node 0: [0, 60000000]
    2. on node 0: [100000000, 20000000000]
    3. on node 1: [20000000000, 40000000000]
    4. on node 4: [40000000000, 60000000000]
    5. on node 5: [60000000000, 80000000000]
    6. on node 2: [80000000000, a0000000000]
    7. on node 3: [a0000000000, a0800000000]
    8. on node 6: [c0000000000, e0000000000]
    9. on node 7: [e0000000000, 100000000000]

    And numa_cleanup_meminfo() will merge 1 and 2, and remove 8 and 9 because
    their end addresses are over max_pfn, which is a0800000000. But 4 and 5
    are not removed, because their end addresses are less than max_pfn. In
    fact, nodes 4 and 5 don't exist.

    In a word, numa_cleanup_meminfo() is not able to handle holes between nodes.

    Since the memory ranges of nodes 4 and 5 are still in numa_meminfo,
    numa_register_memblks() will mistakenly set nodes 4 and 5 online.

    If you run lscpu, it will show:
    NUMA node0 CPU(s): 0-14,128-142
    NUMA node1 CPU(s): 15-29,143-157
    NUMA node2 CPU(s):
    NUMA node3 CPU(s):
    NUMA node4 CPU(s): 62-76,190-204
    NUMA node5 CPU(s): 78-92,206-220

    In this patch, we use memblock_overlaps_region() to check whether each
    range in numa_meminfo overlaps a range in memblock.memory. Since
    memblock.memory contains all memory available at boot time, an overlap
    means the range really exists; if there is no overlap, the range is
    removed from numa_meminfo.
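
    A hedged sketch of that check (simplified; the wrapper name
    numa_drop_absent_ranges() is made up here, and the real change lives in
    the x86 NUMA initialisation code):

    static void __init numa_drop_absent_ranges(struct numa_meminfo *mi)
    {
        int i;

        for (i = 0; i < mi->nr_blks; i++) {
            struct numa_memblk *mb = &mi->blk[i];

            /* no overlap with memblock.memory => never present at boot */
            if (!memblock_overlaps_region(&memblock.memory,
                                          mb->start, mb->end - mb->start))
                numa_remove_memblk_from(i--, mi);
        }
    }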

    After this patch, lscpu will show:
    NUMA node0 CPU(s): 0-14,128-142
    NUMA node1 CPU(s): 15-29,143-157
    NUMA node4 CPU(s): 62-76,190-204
    NUMA node5 CPU(s): 78-92,206-220

    Signed-off-by: Tang Chen
    Reviewed-by: Yasuaki Ishimatsu
    Cc: Thomas Gleixner
    Cc: Tejun Heo
    Cc: Luiz Capitulino
    Cc: Xishi Qiu
    Cc: Will Deacon
    Cc: Vladimir Murzin
    Cc: Fabian Frederick
    Cc: Alexander Kuleshov
    Cc: Baoquan He
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tang Chen
     
  • memblock_overlaps_region() checks if the given memblock region
    intersects a region in memblock. If so, it returns the index of the
    intersected region.

    But its only caller is memblock_is_region_reserved(), which only uses
    the result as a boolean: 0 if false, non-zero if true.

    Both of these functions should return bool.

    Signed-off-by: Tang Chen
    Cc: Thomas Gleixner
    Cc: Tejun Heo
    Cc: Yasuaki Ishimatsu
    Cc: Luiz Capitulino
    Cc: Xishi Qiu
    Cc: Will Deacon
    Cc: Vladimir Murzin
    Cc: Fabian Frederick
    Cc: Alexander Kuleshov
    Cc: Baoquan He
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tang Chen
     
  • Each memblock_region has a flags field to indicate the type of the
    range. For the overlap case, memblock_add_range() inserts the lower part
    and leaves the upper part as indicated in the overlapped region.

    If the flags of the new range differ from those of the overlapped
    region, the information recorded is not correct.

    This patch adds a WARN_ON when the flags of the new range differ from
    those of the overlapped region.
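
    The added check is plausibly of this form, inside the overlap-handling
    loop of memblock_add_range() (a sketch, not verbatim):

        /* new range and overlapped region should agree on flags */
        WARN_ON(flags != rgn->flags);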

    Signed-off-by: Wei Yang
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wei Yang
     

05 Sep, 2015

1 commit

  • Each memblock_region has an nid field to indicate the node ID of the
    range. For the overlap case, memblock_add_range() inserts the lower part
    and leaves the upper part as indicated in the overlapped region.

    If the nid of the new range differs from that of the overlapped region,
    the information recorded is not correct.

    This patch adds a WARN_ON when the nid of the new range differs from
    that of the overlapped region.
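
    Mirroring the flags check described above, the added check is plausibly
    of this form (a sketch, not verbatim):

        /* new range and overlapped region should agree on nid */
        WARN_ON(nid != memblock_get_region_node(rgn));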

    Signed-off-by: Wei Yang
    Acked-by: David Rientjes
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wei Yang
     

01 Jul, 2015

2 commits

  • __free_pages_bootmem() prepares a page for release to the buddy allocator
    and assumes that the struct page is initialised. Parallel initialisation
    of struct pages defers that initialisation, so __free_pages_bootmem() can
    be called for struct pages whose struct-page-to-PFN mapping does not work
    yet. This patch passes the PFN to __free_pages_bootmem() with no other
    functional change.
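
    Presumably the resulting prototype is along these lines, with the extra
    pfn parameter being the change described (treat as a sketch):

    void __free_pages_bootmem(struct page *page, unsigned long pfn,
                              unsigned int order);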

    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Nate Zimmer
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Struct page initialisation had been identified as one of the reasons why
    large machines take a long time to boot. Patches were posted a long time ago
    to defer initialisation until they were first used. This was rejected on
    the grounds it should not be necessary to hurt the fast paths. This series
    reuses much of the work from that time but defers the initialisation of
    memory to kswapd so that one thread per node initialises memory local to
    that node.

    After applying the series and setting the appropriate Kconfig variable I
    see this in the boot log on a 64G machine

    [ 7.383764] kswapd 0 initialised deferred memory in 188ms
    [ 7.404253] kswapd 1 initialised deferred memory in 208ms
    [ 7.411044] kswapd 3 initialised deferred memory in 216ms
    [ 7.411551] kswapd 2 initialised deferred memory in 216ms

    On a 1TB machine, I see

    [ 8.406511] kswapd 3 initialised deferred memory in 1116ms
    [ 8.428518] kswapd 1 initialised deferred memory in 1140ms
    [ 8.435977] kswapd 0 initialised deferred memory in 1148ms
    [ 8.437416] kswapd 2 initialised deferred memory in 1148ms

    Once booted the machine appears to work as normal. Boot times were measured
    from the time shutdown was called until ssh was available again. In the
    64G case, the boot time savings are negligible. On the 1TB machine, the
    savings were 16 seconds.

    Nate Zimmer said:

    : On an older 8 TB box with lots and lots of cpus the boot time, as
    : measured from grub to login prompt, improved from 1484 seconds to
    : exactly 1000 seconds.

    Waiman Long said:

    : I ran a bootup timing test on a 12-TB 16-socket IvyBridge-EX system. From
    : grub menu to ssh login, the bootup time was 453s before the patch and 265s
    : after the patch - a saving of 188s (42%).

    Daniel Blueman said:

    : On a 7TB, 1728-core NumaConnect system with 108 NUMA nodes, we're seeing
    : stock 4.0 boot in 7136s. This drops to 2159s, or a 70% reduction with
    : this patchset. Non-temporal PMD init (https://lkml.org/lkml/2015/4/23/350)
    : drops this to 1045s.

    This patch (of 13):

    As part of initializing struct page's in 2MiB chunks, we noticed that at
    the end of free_all_bootmem(), there was nothing which had forced the
    reserved/allocated 4KiB pages to be initialized.

    This helper function will be used for that expansion.

    Signed-off-by: Robin Holt
    Signed-off-by: Nate Zimmer
    Signed-off-by: Mel Gorman
    Tested-by: Nate Zimmer
    Tested-by: Waiman Long
    Tested-by: Daniel J Blueman
    Acked-by: Pekka Enberg
    Cc: Robin Holt
    Cc: Dave Hansen
    Cc: Waiman Long
    Cc: Scott Norton
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Holt
     

25 Jun, 2015

2 commits

  • Try to allocate all boot time kernel data structures from mirrored
    memory.

    If we run out of mirrored memory print warnings, but fall back to using
    non-mirrored memory to make sure that we still boot.

    By number of bytes, most of what we allocate at boot time is the page
    structures. 64 bytes per 4K page on x86_64 ... or about 1.5% of total
    system memory. For workloads where the bulk of memory is allocated to
    applications this may represent a useful improvement to system
    availability since 1.5% of total memory might be a third of the memory
    allocated to the kernel.
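
    A hedged sketch of the fallback described (simplified; the flag name is
    assumed to be MEMBLOCK_MIRROR and memblock_find_in_range_node() is
    assumed to take the flag-aware signature this series adds):

    found = memblock_find_in_range_node(size, align, start, end, nid, flags);
    if (!found && (flags & MEMBLOCK_MIRROR)) {
        pr_warn("Could not allocate %llu bytes of mirrored memory\n",
                (unsigned long long)size);
        flags &= ~MEMBLOCK_MIRROR;
        found = memblock_find_in_range_node(size, align, start, end,
                                            nid, flags);
    }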

    Signed-off-by: Tony Luck
    Cc: Xishi Qiu
    Cc: Hanjun Guo
    Cc: Xiexiuqi
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Yinghai Lu
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Luck
     
  • Some high end Intel Xeon systems report uncorrectable memory errors as a
    recoverable machine check. Linux has included code for some time to
    process these and just signal the affected processes (or even recover
    completely if the error was in a read only page that can be replaced by
    reading from disk).

    But we have no recovery path for errors encountered during kernel code
    execution. Except for some very specific cases we are unlikely to ever
    be able to recover.

    Enter memory mirroring. Actually the 3rd generation of memory mirroring.

    Gen1: All memory is mirrored
    Pro: No s/w enabling - h/w just gets good data from the other side of
    the mirror
    Con: Halves effective memory capacity available to OS/applications

    Gen2: Partial memory mirror - just mirror memory behind some memory
    controllers
    Pro: Keep more of the capacity
    Con: Nightmare to enable. Have to choose between allocating from
    mirrored memory for safety vs. NUMA local memory for performance

    Gen3: Address range partial memory mirror - some mirror on each memory
    controller
    Pro: Can tune the amount of mirror and keep NUMA performance
    Con: I have to write memory management code to implement

    The current plan is just to use mirrored memory for kernel allocations.
    This has been broken into two phases:

    1) This patch series - find the mirrored memory, use it for boot time
    allocations

    2) Wade into mm/page_alloc.c and define a ZONE_MIRROR to pick up the
    unused mirrored memory from mm/memblock.c and only give it out to
    select kernel allocations (this is still being scoped because
    page_alloc.c is scary).

    This patch (of 3):

    Add extra "flags" to memblock to allow selection of memory based on
    attribute. No functional changes

    Signed-off-by: Tony Luck
    Cc: Xishi Qiu
    Cc: Hanjun Guo
    Cc: Xiexiuqi
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Yinghai Lu
    Cc: Naoya Horiguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Luck
     

16 Apr, 2015

1 commit

  • memblock_reserve() calls memblock_reserve_region(), which prints
    debugging information if 'memblock=debug' was passed on the command
    line. This patch adds the same behaviour, but for the memblock_add()
    function.

    [akpm@linux-foundation.org: s/memblock_memory/memblock_add/ in message]
    Signed-off-by: Alexander Kuleshov
    Cc: Martin Schwidefsky
    Cc: Philipp Hachtmann
    Cc: Fabian Frederick
    Cc: Catalin Marinas
    Cc: Emil Medve
    Cc: Akinobu Mita
    Cc: Tang Chen
    Cc: Tony Luck
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Kuleshov
     

15 Apr, 2015

1 commit


14 Dec, 2014

1 commit

  • There is a lot of duplication in the rubric around actually setting or
    clearing a mem region flag. Create a new helper function to do this and
    reduce each of memblock_mark_hotplug() and memblock_clear_hotplug() to a
    single line.

    This will be useful if someone were to add a new mem region flag - which
    I hope to be doing some day soon. But it looks like a plausible cleanup
    even without that - so I'd like to get it out of the way now.
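
    A self-contained model of the de-duplication described (the real names
    and structures in mm/memblock.c differ; this only shows the shape of a
    single set/clear helper):

    #include <stdio.h>

    #define REGION_HOTPLUG  0x1     /* stand-in for the real flag bit */

    struct region {
        unsigned long flags;
    };

    /* one helper that either sets or clears a flag... */
    static void setclr_flag(struct region *r, int set, unsigned long flag)
    {
        if (set)
            r->flags |= flag;
        else
            r->flags &= ~flag;
    }

    /* ...so the mark/clear wrappers collapse to one line each */
    static void mark_hotplug(struct region *r)  { setclr_flag(r, 1, REGION_HOTPLUG); }
    static void clear_hotplug(struct region *r) { setclr_flag(r, 0, REGION_HOTPLUG); }

    int main(void)
    {
        struct region r = { 0 };

        mark_hotplug(&r);
        printf("after mark:  %#lx\n", r.flags);
        clear_hotplug(&r);
        printf("after clear: %#lx\n", r.flags);
        return 0;
    }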

    Signed-off-by: Tony Luck
    Cc: Santosh Shilimkar
    Cc: Tang Chen
    Cc: Grygorii Strashko
    Cc: Zhang Yanfei
    Cc: Philipp Hachtmann
    Cc: Yinghai Lu
    Cc: Emil Medve
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Luck
     

11 Sep, 2014

1 commit

  • Let memblock skip hotpluggable memory regions in __next_mem_range();
    this is used to prevent memblock from allocating hotpluggable memory
    for the kernel early during boot. The code is the same as in
    __next_mem_range_rev().

    Clear the hotpluggable flag before releasing free pages to the buddy
    allocator. If we don't clear the hotpluggable flag in
    free_low_memory_core_early(), memory marked with the hotpluggable flag
    will never be freed to the buddy allocator, because __next_mem_range()
    will skip it.

    free_low_memory_core_early
      for_each_free_mem_range
        for_each_mem_range
          __next_mem_range
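
    A hedged sketch of the two pieces described (simplified from the real
    code):

        /* in __next_mem_range(): skip hotpluggable regions while doing
         * early allocations on behalf of the kernel */
        if (movable_node_is_enabled() && memblock_is_hotpluggable(m))
            continue;

        /* in free_low_memory_core_early(): drop the flag so these regions
         * are visible again when memory is handed to the buddy allocator */
        memblock_clear_hotplug(0, -1);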

    [akpm@linux-foundation.org: fix warning]
    Signed-off-by: Xishi Qiu
    Cc: Tejun Heo
    Cc: Tang Chen
    Cc: Zhang Yanfei
    Cc: Wen Congyang
    Cc: "Rafael J. Wysocki"
    Cc: "H. Peter Anvin"
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Xishi Qiu
     

30 Aug, 2014

1 commit

  • In memblock_find_in_range_node(), we defined ret as int. But it should
    be phys_addr_t because it is used to store the return value from
    __memblock_find_range_bottom_up().

    The bug had not been triggered because, when allocating low memory near
    the kernel end, the "int ret" never happened to be negative. Once we
    started to allocate memory on other nodes, the "int ret" could become
    negative, and then the kernel would panic.

    A simple way to reproduce this: comment out the following code in
    numa_init(),

    memblock_set_bottom_up(false);

    and the kernel won't boot.
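
    A self-contained illustration of the bug class (not the kernel code):
    storing a 64-bit physical address in an int truncates it, and an
    address above 4GB can come back mangled or apparently negative.

    #include <stdio.h>
    #include <stdint.h>

    typedef uint64_t phys_addr_t;

    int main(void)
    {
        phys_addr_t found = 0x2080000000ULL;    /* a hit above 4GB */
        int ret = found;                        /* old code: silently truncated */
        phys_addr_t ok = found;                 /* fixed code: full width */

        printf("truncated: %d\n", ret);
        printf("correct:   %#llx\n", (unsigned long long)ok);
        return 0;
    }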

    Reported-by: Xishi Qiu
    Signed-off-by: Tang Chen
    Tested-by: Xishi Qiu
    Cc: [3.13+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tang Chen
     

07 Jun, 2014

1 commit

  • Kmemleak could ignore memory blocks allocated via memblock_alloc()
    leading to false positives during scanning. This patch adds the
    corresponding callbacks and removes kmemleak_free_* calls in
    mm/nobootmem.c to avoid duplication.

    The kmemleak_alloc() in mm/nobootmem.c is kept since
    __alloc_memory_core_early() does not use memblock_alloc() directly.

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     

05 Jun, 2014

2 commits

  • Replace ((x) >> PAGE_SHIFT) with the pfn macro.
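
    For illustration (PFN_DOWN() is presumably the macro meant; this is its
    conventional definition):

    #include <stdio.h>

    #define PAGE_SHIFT  12
    #define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

    int main(void)
    {
        unsigned long addr = 0x12345;

        printf("%lu\n", addr >> PAGE_SHIFT);    /* open-coded form replaced */
        printf("%lu\n", PFN_DOWN(addr));        /* equivalent, more readable */
        return 0;
    }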

    Signed-off-by: Fabian Frederick
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabian Frederick
     
  • This introduces memblock_alloc_range(), which allocates memblock memory
    from the specified range of physical addresses. I would like to use this
    function to specify the location of CMA.
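
    A hedged usage sketch (the argument order is assumed from the
    description: size, alignment, then the allowed physical range):

        /* allocate 'size' bytes, 'align'-aligned, only from [start, end) */
        phys_addr_t base = memblock_alloc_range(size, align, start, end);
        if (!base)
            pr_warn("allocation in [%pa-%pa] failed\n", &start, &end);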

    Signed-off-by: Akinobu Mita
    Cc: Marek Szyprowski
    Cc: Konrad Rzeszutek Wilk
    Cc: David Woodhouse
    Cc: Don Dutile
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Andi Kleen
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

20 May, 2014

2 commits

  • Add the physmem list to the memblock structure. This list only exists
    if HAVE_MEMBLOCK_PHYS_MAP is selected and contains the unmodified
    list of physically available memory. It differs from the memblock
    memory list as it always contains all memory ranges even if the
    memory has been restricted, e.g. by use of the mem= kernel parameter.
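
    In sketch form, the structure change described adds one more list next
    to the existing ones (simplified; other members elided):

    struct memblock {
        ...
        struct memblock_type memory;    /* may be trimmed, e.g. by mem= */
        struct memblock_type reserved;
    #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
        struct memblock_type physmem;   /* always the full physical map */
    #endif
    };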

    Signed-off-by: Philipp Hachtmann
    Signed-off-by: Martin Schwidefsky

    Philipp Hachtmann
     
  • Refactor the memblock code and extend the memblock API to make it
    more flexible. With the extended API it is simple to define and
    work with additional memory lists.

    The static functions memblock_add_region and __memblock_remove are
    renamed to memblock_add_range and memblock_remove_range and added to
    the memblock API.

    The __next_free_mem_range and __next_free_mem_range_rev functions
    are replaced with calls to the more generic list walkers
    __next_mem_range and __next_mem_range_rev.

    To walk an arbitrary memory list two new macros for_each_mem_range
    and for_each_mem_range_rev are added. These new macros are used
    to define for_each_free_mem_range and for_each_free_mem_range_reverse.
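
    As a hedged sketch of how the existing iterator can be expressed with
    the new generic walker (the exact macro arguments may differ):

    #define for_each_free_mem_range(i, nid, p_start, p_end, p_nid)     \
        for_each_mem_range(i, &memblock.memory, &memblock.reserved,    \
                           nid, p_start, p_end, p_nid)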

    Signed-off-by: Philipp Hachtmann
    Signed-off-by: Martin Schwidefsky

    Philipp Hachtmann
     

08 Apr, 2014

2 commits


12 Mar, 2014

1 commit

  • Apart from setting the limit of memblock, it's also useful to be able
    to get the limit to avoid recalculating it every time. Add the function
    to do so.
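
    Presumably the new accessor is the trivial counterpart of
    memblock_set_current_limit() (a sketch):

    phys_addr_t __init_memblock memblock_get_current_limit(void)
    {
        return memblock.current_limit;
    }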

    Acked-by: Catalin Marinas
    Acked-by: Santosh Shilimkar
    Acked-by: Andrew Morton
    Acked-by: Nicolas Pitre
    Signed-off-by: Laura Abbott
    Signed-off-by: Russell King

    Laura Abbott
     

30 Jan, 2014

1 commit

  • In the original bootmem wrapper for memblock, we had limit checking.

    Add it to memblock_virt_alloc to address boot crashes on arm and x86.
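
    The added check is plausibly a clamp of this form in the allocation
    path (a sketch, not verbatim):

        if (max_addr > memblock.current_limit)
            max_addr = memblock.current_limit;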

    Signed-off-by: Yinghai Lu
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Reported-by: Kevin Hilman
    Tested-by: Kevin Hilman
    Reported-by: Olof Johansson
    Tested-by: Olof Johansson
    Reported-by: Konrad Rzeszutek Wilk
    Tested-by: Konrad Rzeszutek Wilk
    Cc: Dave Hansen
    Cc: Santosh Shilimkar
    Cc: "Strashko, Grygorii"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     

28 Jan, 2014

1 commit

  • In the original __alloc_memory_core_early() bootmem wrapper, we did not
    align the size silently.

    We should not do that here either, as a later free with the old size
    will leave part of the range not freed.

    It's obvious that the code was copied from memblock_alloc_base_nid(),
    and that code is wrong for the same reason.

    Also remove that in memblock_alloc_base.

    Signed-off-by: Yinghai Lu
    Acked-by: Santosh Shilimkar
    Cc: Dave Hansen
    Cc: Russell King
    Cc: Konrad Rzeszutek Wilk
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     

24 Jan, 2014

2 commits

  • get_allocated_memblock_reserved_regions_info() should work if it is
    compiled in. Extended the ifdef around
    get_allocated_memblock_memory_regions_info() to include
    get_allocated_memblock_reserved_regions_info() as well. Similar changes
    in nobootmem.c/free_low_memory_core_early() where the two functions are
    called.

    [akpm@linux-foundation.org: cleanup]
    Signed-off-by: Philipp Hachtmann
    Cc: qiuxishi
    Cc: David Howells
    Cc: Daeseok Youn
    Cc: Jiang Liu
    Acked-by: Yinghai Lu
    Cc: Zhang Yanfei
    Cc: Santosh Shilimkar
    Cc: Grygorii Strashko
    Cc: Tang Chen
    Cc: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Philipp Hachtmann
     
  • When calling free_all_bootmem() the free areas under memblock's control
    are released to the buddy allocator. Additionally the reserved list is
    freed if it was reallocated by memblock. The same should apply for the
    memory list.

    Signed-off-by: Philipp Hachtmann
    Reviewed-by: Tejun Heo
    Cc: Joonsoo Kim
    Cc: Johannes Weiner
    Cc: Tang Chen
    Cc: Toshi Kani
    Cc: Jianguo Wu
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Philipp Hachtmann
     

22 Jan, 2014

2 commits

  • Check the nid parameter and produce a warning if it has the deprecated
    MAX_NUMNODES value; also re-assign NUMA_NO_NODE to the nid parameter in
    this case.

    This will help identify wrong API usage (the caller) and makes the code
    simpler.
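
    The check is plausibly of this form (a sketch, not verbatim):

        if (WARN_ONCE(nid == MAX_NUMNODES,
                      "Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead\n"))
            nid = NUMA_NO_NODE;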

    Signed-off-by: Grygorii Strashko
    Signed-off-by: Santosh Shilimkar
    Cc: Yinghai Lu
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Grygorii Strashko
     
  • Introduce memblock memory allocation APIs which allow supporting PAE or
    LPAE extensions on 32-bit archs where the physical memory start address
    can be beyond 4GB. In such cases the existing bootmem APIs, which
    operate on 32-bit addresses, won't work, and we need the memblock layer,
    which operates on 64-bit addresses.

    So we add equivalent APIs so that we can replace usage of bootmem with
    memblock interfaces. Architectures already converted to NO_BOOTMEM use
    these new memblock interfaces. The architectures which are still not
    converted to NO_BOOTMEM continue to function as is, because we still
    maintain the fallback option of the bootmem back-end supporting these
    new interfaces. So no functional change as such.

    In the long run, once all the architectures move to NO_BOOTMEM, we can
    get rid of the bootmem layer completely. This is one step to remove the
    core code dependency on bootmem and also gives a path for architectures
    to move away from bootmem.

    The proposed interface will become active if both CONFIG_HAVE_MEMBLOCK
    and CONFIG_NO_BOOTMEM are specified by the arch. In the
    !CONFIG_NO_BOOTMEM case, the memblock() wrappers will fall back to the
    existing bootmem APIs so that arches not converted to NO_BOOTMEM
    continue to work as is.

    The meaning of MEMBLOCK_ALLOC_ACCESSIBLE and MEMBLOCK_ALLOC_ANYWHERE
    is kept the same.
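
    A hedged example of the replacement pattern these interfaces enable
    (function names assumed to be the memblock_virt_alloc*() family added
    by this series):

        /* bootmem style:  ptr = alloc_bootmem(size);  memblock style: */
        ptr = memblock_virt_alloc(size, SMP_CACHE_BYTES);
        ...
        memblock_free_early(__pa(ptr), size);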

    [akpm@linux-foundation.org: s/depricated/deprecated/]
    Signed-off-by: Grygorii Strashko
    Signed-off-by: Santosh Shilimkar
    Cc: Yinghai Lu
    Cc: Tejun Heo
    Cc: "Rafael J. Wysocki"
    Cc: Arnd Bergmann
    Cc: Christoph Lameter
    Cc: Greg Kroah-Hartman
    Cc: H. Peter Anvin
    Cc: Johannes Weiner
    Cc: KAMEZAWA Hiroyuki
    Cc: Konrad Rzeszutek Wilk
    Cc: Michal Hocko
    Cc: Paul Walmsley
    Cc: Pavel Machek
    Cc: Russell King
    Cc: Tony Lindgren
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Santosh Shilimkar