13 Jan, 2012

1 commit

  • node_to_cpumask() has been replaced by cpumask_of_node(), and wholly
    removed since commit 29c337a0 ("cpumask: remove obsolete node_to_cpumask
    now everyone uses cpumask_of_node").

    So update the comments for setup_node_to_cpumask_map().

    Signed-off-by: Wanlong Gao
    Acked-by: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wanlong Gao
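
    For reference, a minimal sketch of the replacement API in use (an
    illustration, not part of the commit):

      #include <linux/kernel.h>
      #include <linux/cpumask.h>
      #include <linux/topology.h>

      /* cpumask_of_node() returns a pointer to the node's cpumask; the
       * old node_to_cpumask() returned a whole cpumask_t by value. */
      static void print_node_cpus(int node)
      {
              const struct cpumask *mask = cpumask_of_node(node);
              int cpu;

              for_each_cpu(cpu, mask)
                      pr_info("node %d: cpu %d\n", node, cpu);
      }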
     

12 Jan, 2012

1 commit

  • * 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/numa: Add constraints check for nid parameters
    mm, x86: Remove debug_pagealloc_enabled
    x86/mm: Initialize high mem before free_all_bootmem()
    arch/x86/kernel/e820.c: quiet sparse noise about plain integer as NULL pointer
    arch/x86/kernel/e820.c: Eliminate bubble sort from sanitize_e820_map()
    x86: Fix mmap random address range
    x86, mm: Unify zone_sizes_init()
    x86, mm: Prepare zone_sizes_init() for unification
    x86, mm: Use max_low_pfn for ZONE_NORMAL on 64-bit
    x86, mm: Wrap ZONE_DMA32 with CONFIG_ZONE_DMA32
    x86, mm: Use max_pfn instead of highend_pfn
    x86, mm: Move zone init from paging_init() on 64-bit
    x86, mm: Use MAX_DMA_PFN for ZONE_DMA on 32-bit

    Linus Torvalds
     

09 Dec, 2011

1 commit

  • This patch adds constraint checks to the numa_set_distance()
    function.

    When a check triggers (which should not happen normally), it
    emits a warning and avoids a store to a negative index in the
    numa_distance[] array - i.e. it avoids memory corruption.

    Negative ids can be passed when the pxm-to-nid mapping is not
    properly filled in while parsing the SRAT.

    Signed-off-by: Petr Holasek
    Acked-by: David Rientjes
    Cc: Anton Arapov
    Link: http://lkml.kernel.org/r/20111208121640.GA2229@dhcp-27-244.brq.redhat.com
    Signed-off-by: Ingo Molnar

    Petr Holasek
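
    A sketch of the kind of bounds check the patch adds to
    numa_set_distance() (simplified; the exact message and limits may
    differ):

      /* reject out-of-range node ids before they index numa_distance[] */
      if (from >= numa_distance_cnt || to >= numa_distance_cnt ||
          from < 0 || to < 0) {
              printk_once(KERN_WARNING
                  "NUMA: Warning: node ids are out of bound, from=%d to=%d distance=%d\n",
                  from, to, distance);
              return;
      }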
     

15 Jul, 2011

4 commits

  • Other than sanity checks and debug messages, the x86 specific versions
    of the memblock reserve/free functions are simple wrappers around the
    generic versions - memblock_reserve()/memblock_free().

    This patch adds debug messages with caller identification to the
    generic versions, replaces all uses of the x86 specific ones, and
    kills them. arch/x86/include/asm/memblock.h and arch/x86/mm/memblock.c
    are empty after this change and are removed.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310462166-31469-14-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: H. Peter Anvin

    Tejun Heo
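
    Illustrative shape of the generic function after the change, with the
    caller reported via _RET_IP_ (a sketch; memblock_add_region() is the
    internal helper of that era):

      long __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
      {
              struct memblock_type *type = &memblock.reserved;

              /* identify the caller in the debug output */
              memblock_dbg("memblock_reserve: [%#016llx-%#016llx] %pF\n",
                           (unsigned long long)base,
                           (unsigned long long)(base + size),
                           (void *)_RET_IP_);

              return memblock_add_region(type, base, size);
      }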
     
  • memblock_x86_hole_size() calculates the total size of holes in a given
    range according to memblock and is used by the NUMA emulation code and
    numa_meminfo_cover_memory().

    Since the conversion to MEMBLOCK_NODE_MAP, absent_pages_in_range() also
    uses memblock and gives the same result. This patch replaces
    memblock_x86_hole_size() uses with absent_pages_in_range(). After the
    conversion the x86 function doesn't have any users left and is killed.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310462166-31469-12-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: H. Peter Anvin

    Tejun Heo
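
    For illustration, a coverage computation expressed with
    absent_pages_in_range() might look like this (sketch; helper name is
    hypothetical):

      /* pages of [start_pfn, end_pfn) actually backed by memory */
      static unsigned long __init present_pages(unsigned long start_pfn,
                                                unsigned long end_pfn)
      {
              return end_pfn - start_pfn -
                     absent_pages_in_range(start_pfn, end_pfn);
      }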
     
  • Convert x86 to HAVE_MEMBLOCK_NODE_MAP. The only difference in memory
    handling is that allocations can no longer cross node boundaries,
    whether they are node affine or not, which shouldn't matter at all.

    This conversion will enable further simplification of boot memory
    handling.

    -v2: Fix build failure on !NUMA configurations discovered by hpa.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/20110714094423.GG3455@htj.dyndns.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: H. Peter Anvin

    Tejun Heo
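
    A sketch of the two halves of such a conversion - the Kconfig select
    plus tagging each range with its node inside memblock itself
    (signatures as of that era; the loop body is illustrative):

      config X86
              select HAVE_MEMBLOCK_NODE_MAP

      /* in numa_register_memblks(), roughly */
      for (i = 0; i < mi->nr_blks; i++)
              memblock_set_node(mi->blk[i].start,
                                mi->blk[i].end - mi->blk[i].start,
                                mi->blk[i].nid);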
     
  • With the previous changes, generic NUMA aware memblock API has feature
    parity with memblock_x86_find_in_range_node(). There currently are
    two users - x86 setup_node_data() and __alloc_memory_core_early() in
    nobootmem.c.

    This patch converts the former to use memblock_alloc_nid() and the
    latter to use memblock_find_in_range_node(), and kills
    memblock_x86_find_in_range_node() and related functions including
    find_memory_core_early() in page_alloc.c.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310460395-30913-9-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: H. Peter Anvin

    Tejun Heo
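
    A hedged sketch of the node-affine allocation in setup_node_data()
    after the conversion (variable names illustrative):

      const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE);
      u64 nd_pa;
      void *nd;

      /* try to place the node's pglist_data on the node itself */
      nd_pa = memblock_alloc_nid(nd_size, SMP_CACHE_BYTES, nid);
      if (!nd_pa)
              panic("Cannot allocate %zu bytes for node %d data\n",
                    nd_size, nid);
      nd = __va(nd_pa);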
     

14 Jul, 2011

1 commit

  • 25818f0f28 (memblock: Make MEMBLOCK_ERROR be 0) thankfully made
    MEMBLOCK_ERROR 0, and there already is code which expects an error
    return to be 0. There's no point in keeping MEMBLOCK_ERROR around.
    End its misery.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/1310457490-3356-6-git-send-email-tj@kernel.org
    Cc: Yinghai Lu
    Cc: Benjamin Herrenschmidt
    Signed-off-by: H. Peter Anvin

    Tejun Heo
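
    Callers now just test for zero, e.g.:

      phys_addr_t addr = memblock_find_in_range(start, end, size, align);

      if (!addr)      /* was: if (addr == MEMBLOCK_ERROR) */
              return -ENOMEM;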
     

13 Jul, 2011

1 commit

  • SPARSEMEM w/o VMEMMAP and DISCONTIGMEM, both used only on 32bit, use a
    sections array to map pfn to nid, which is limited in granularity. If
    NUMA nodes are laid out such that the mapping cannot be accurate, boot
    will fail, triggering the BUG_ON() in mminit_verify_page_links().

    On 32bit, the granularity is 512MiB w/ PAE and SPARSEMEM. This seems
    to have been granular enough until commit 2706a0bf7b (x86, NUMA:
    Enable CONFIG_AMD_NUMA on 32bit too). Apparently, there is a machine
    which aligns NUMA nodes to 128MiB and has only AMD NUMA but no SRAT.
    This led to the following BUG_ON():

    On node 0 totalpages: 2096615
    DMA zone: 32 pages used for memmap
    DMA zone: 0 pages reserved
    DMA zone: 3927 pages, LIFO batch:0
    Normal zone: 1740 pages used for memmap
    Normal zone: 220978 pages, LIFO batch:31
    HighMem zone: 16405 pages used for memmap
    HighMem zone: 1853533 pages, LIFO batch:31
    BUG: Int 6: CR2 (null)
    EDI (null) ESI 00000002 EBP 00000002 ESP c1543ecc
    EBX f2400000 EDX 00000006 ECX (null) EAX 00000001
    err (null) EIP c16209aa CS 00000060 flg 00010002
    Stack: f2400000 00220000 f7200800 c1620613 00220000 01000000 04400000 00238000
    (null) f7200000 00000002 f7200b58 f7200800 c1620929 000375fe (null)
    f7200b80 c16395f0 00200a02 f7200a80 (null) 000375fe 00000002 (null)
    Pid: 0, comm: swapper Not tainted 2.6.39-rc5-00181-g2706a0b #17
    Call Trace:
    [] ? early_fault+0x2e/0x2e
    [] ? mminit_verify_page_links+0x12/0x42
    [] ? memmap_init_zone+0xaf/0x10c
    [] ? free_area_init_node+0x2b9/0x2e3
    [] ? free_area_init_nodes+0x3f2/0x451
    [] ? paging_init+0x112/0x118
    [] ? setup_arch+0x791/0x82f
    [] ? start_kernel+0x6a/0x257

    This patch implements node_map_pfn_alignment(), which determines the
    maximum internode alignment, and updates numa_register_memblks() to
    reject the NUMA configuration if the alignment exceeds the pfn -> nid
    mapping granularity of the memory model, as determined by
    PAGES_PER_SECTION.

    This makes the problematic machine boot w/ flatmem by rejecting the
    NUMA config and provides protection against crazy NUMA
    configurations.

    Signed-off-by: Tejun Heo
    Link: http://lkml.kernel.org/r/20110712074534.GB2872@htj.dyndns.org
    LKML-Reference:
    Reported-and-Tested-by: Hans Rosenfeld
    Cc: Conny Seidel
    Signed-off-by: H. Peter Anvin

    Tejun Heo
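
    In outline, the new rejection check in numa_register_memblks() (a
    sketch, close to but not necessarily the exact patch):

      unsigned long pfn_align = node_map_pfn_alignment();

      if (pfn_align && pfn_align < PAGES_PER_SECTION) {
              printk(KERN_WARNING
                     "Node alignment %LuMB < min %LuMB, rejecting NUMA config\n",
                     PFN_PHYS(pfn_align) >> 20,
                     PFN_PHYS(PAGES_PER_SECTION) >> 20);
              return -EINVAL;
      }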
     

02 May, 2011

10 commits

  • While testing the 32bit NUMA unification code from tj, one system with
    more than 64g failed to use NUMA. It turns out we do not trim numa
    meminfo correctly against max_pfn in case the start address of a node
    is higher than 64GiB. The bug fix made it to the tip tree.

    This patch moves the checking and trimming to a separate loop, so we
    don't need to compare low/high in the following merge loops. It makes
    the code more readable.

    It also makes the node merge printouts less strange. On a 512GiB NUMA
    system running 32bit,

    before:
    > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
    > NUMA: Node 0 [0,80000000) + [100000000,1080000000) -> [0,1000000000)

    after:
    > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
    > NUMA: Node 0 [0,80000000) + [100000000,1000000000) -> [0,1000000000)

    Signed-off-by: Yinghai Lu
    [Updated patch description and comment slightly.]
    Signed-off-by: Tejun Heo

    Yinghai Lu
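
    The separate trimming pass clamps every memblk against [0, max_pfn) up
    front, roughly (sketch):

      const u64 low = 0;
      const u64 high = PFN_PHYS(max_pfn);
      int i;

      for (i = 0; i < mi->nr_blks; i++) {
              struct numa_memblk *bi = &mi->blk[i];

              bi->start = max(bi->start, low);
              bi->end = min(bi->end, high);

              /* drop blocks that ended up empty */
              if (bi->start >= bi->end)
                      numa_remove_memblk_from(i--, mi);
      }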
     
  • After using memblock to replace bootmem, setup_node_bootmem() only
    sets up node_data now.

    Change the name to setup_node_data() to reflect what it actually does.

    tj: Minor adjustment to the patch description.

    Signed-off-by: Yinghai Lu
    Signed-off-by: Tejun Heo

    Yinghai Lu
     
  • numa_init_array() no longer has users outside of numa.c. Make it
    static.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • With both _numa_init() methods converted and the rest of init code
    adjusted, numa_32.c now can switch from the 32bit only init code to
    the common one in numa.c.

    * Shim get_memcfg_*()'s are dropped and initmem_init() calls
    x86_numa_init(), which is updated to handle NUMAQ.

    * All boilerplate operations including node range limiting, pgdat
    alloc/init are handled by numa_init(). 32bit only implementation is
    removed.

    * 32bit numa_add_memblk(), numa_set_distance() and
    memory_add_physaddr_to_nid() are removed and the common versions in
    numa.c are enabled for 32bit.

    This change causes the following behavior changes.

    * NODE_DATA()->node_start_pfn/node_spanned_pages properly initialized
    for 32bit too.

    * Much more sanity checks and configuration cleanups.

    * Proper handling of node distances.

    * The same NUMA init messages as 64bit.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
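
    After the switch, the 32bit entry point reduces to the shared path
    (sketch):

      void __init initmem_init(void)
      {
              x86_numa_init();
      }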
     
  • setup_node_bootmem() is taken from 64bit and doesn't use the remap
    allocator. It's about to be shared with 32bit, so add support for it.
    If NODE_DATA is remapped, that is noted in the debug message and the
    node locality check is skipped, as the __pa() of the remapped address
    doesn't reflect the actual physical address.

    On 64bit, the remap allocator becomes a noop and doesn't affect the
    behavior.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • Code moved from numa_64.c assumes that long is 64bit in several
    places. This patch removes the assumption by using {s|u}64 explicitly,
    using PFN_PHYS() for page number -> addr conversions, and adjusting
    printf formats.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
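
    The pattern, illustratively:

      /* stays 64bit even when long is 32bit */
      u64 start = PFN_PHYS(start_pfn);
      u64 end = PFN_PHYS(end_pfn);

      printk(KERN_INFO "  [mem %#018Lx-%#018Lx]\n",
             (unsigned long long)start,
             (unsigned long long)(end - 1));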
     
  • Generic NUMA init code was moved to numa.c from numa_64.c but is still
    guarded by CONFIG_X86_64. This patch removes the compile guard and
    enables compiling on 32bit.

    * numa_add_memblk() and numa_set_distance() clash with the shim
    implementation in numa_32.c and are left out.

    * memory_add_physaddr_to_nid() clashes with 32bit implementation and
    is left out.

    * MAX_DMA_PFN definition in dma.h moved out of !CONFIG_X86_32.

    * node_data definition in numa_32.c removed in favor of the one in
    numa.c.

    There are places where ulong is assumed to be 64bit. The next patch
    will fix them up. Note that although the code is compiled it isn't
    used yet and this patch doesn't cause any functional change.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • Move the generic 64bit NUMA init machinery from numa_64.c to numa.c.

    * node_data[], numa_meminfo and numa_distance
    * numa_add_memblk[_to](), numa_remove_memblk[_from]()
    * numa_set_distance() and friends
    * numa_init() and all the numa_meminfo handling helpers called from it
    * dummy_numa_init()
    * memory_add_physaddr_to_nid()

    A new function x86_numa_init() is added and the content of
    numa_64.c::initmem_init() is moved into it. initmem_init() now simply
    calls x86_numa_init().

    Constants and numa_off declaration are moved from numa_{32|64}.h to
    numa.h.

    This is code reorganization and doesn't involve any functional change.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • Move numa_nodes_parsed from numa_64.[hc] to numa.[hc] to prepare for
    NUMA init path unification.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
     
  • Currently, the only meaningful user of apic->x86_32_numa_cpu_node() is
    NUMAQ, which returns a valid mapping only after the CPU is initialized
    during SMP bringup; thus, the previous patch to set apicid -> node in
    setup_local_APIC() makes __apicid_to_node[] always contain the correct
    mapping, whether a custom apic->x86_32_numa_cpu_node() is used or not.

    So there is no reason to keep a separate 32bit implementation. We can
    always consult __apicid_to_node[]. Move the 64bit implementation from
    numa_64.c to numa.c and remove the 32bit implementation from
    numa_32.c.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"

    Tejun Heo
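
    The now-shared lookup just consults the array (a sketch matching the
    description above):

      int numa_cpu_node(int cpu)
      {
              int apicid = early_per_cpu(x86_cpu_to_apicid, cpu);

              if (apicid != BAD_APICID)
                      return __apicid_to_node[apicid];
              return NUMA_NO_NODE;
      }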
     

21 Apr, 2011

1 commit

  • The cpu-to-node mappings under CONFIG_DEBUG_PER_CPU_MAPS=y
    when NUMA emulation is enabled are currently broken because the code
    does not iterate through every emulated node and bind the cpus that
    have affinity to it.

    NUMA emulation should bind each cpu to every local node to
    accurately represent the true NUMA topology of the underlying
    machine.

    debug_cpumask_set_cpu() needs to be fixed at the same time so
    that the debugging information it emits shows the new
    cpumask of the node being assigned when the cpu is being added
    or removed.

    It can now take responsibility for setting or clearing the cpu
    itself, removing the need for duplicate code.

    Also change its last parameter, "enable", to have the correct bool
    type, since it can only be true or false.

    -v2: Fix the return statements, by Kosaki Motohiro

    Acked-and-Tested-by: KOSAKI Motohiro
    Signed-off-by: David Rientjes
    Cc: Andreas Herrmann
    Cc: Tejun Heo
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/alpine.DEB.2.00.1104201918470.12634@chino.kir.corp.google.com
    Signed-off-by: Ingo Molnar

    David Rientjes
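
    A simplified sketch of the fixed helper, which now flips the bit
    itself and prints the node's resulting cpumask (error handling
    trimmed; cpulist_scnprintf() per the API of that era):

      void debug_cpumask_set_cpu(int cpu, int node, bool enable)
      {
              struct cpumask *mask = node_to_cpumask_map[node];
              char buf[64];

              if (enable)
                      cpumask_set_cpu(cpu, mask);
              else
                      cpumask_clear_cpu(cpu, mask);

              cpulist_scnprintf(buf, sizeof(buf), mask);
              printk(KERN_DEBUG "%s cpu %d node %d: mask now %s\n",
                     enable ? "numa_add_cpu" : "numa_remove_cpu",
                     cpu, node, buf);
      }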
     

14 Feb, 2011

1 commit

  • Under CONFIG_DEBUG_PER_CPU_MAPS, early_cpu_to_node() may return
    NUMA_NO_NODE when a mapping hasn't been initialized. In such a
    case, it emits a warning and continues without an issue, but
    callers may try to use the return value to index into an array.

    We can catch those errors and fail silently since a warning has
    already been emitted. No current user of numa_add_cpu()
    requires this error checking to avoid a crash, but it's better
    to be proactive in case a future user happens to have a bug and
    a user tries to diagnose it with CONFIG_DEBUG_PER_CPU_MAPS.

    Reported-by: Jesper Juhl
    Signed-off-by: David Rientjes
    Cc: Tejun Heo
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    David Rientjes
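
    The added check, in essence:

      int node = early_cpu_to_node(cpu);

      /* fail silently: early_cpu_to_node() has already warned under
       * CONFIG_DEBUG_PER_CPU_MAPS */
      if (node == NUMA_NO_NODE)
              return;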
     

28 Jan, 2011

4 commits

  • Now that everything else is unified, NUMA initialization can be
    unified too.

    * numa_init_array() and init_cpu_to_node() are moved from
    numa_64 to numa.

    * numa_32::initmem_init() is updated to call numa_init_array()
    and setup_arch() to call init_cpu_to_node() on 32bit too.

    * x86_cpu_to_node_map is now initialized to NUMA_NO_NODE on
    32bit too. This is safe now as numa_init_array() will initialize
    it early during boot.

    This makes the NUMA mapping fully initialized before
    setup_per_cpu_areas() on 32bit too, and thus the first
    percpu chunk - which contains all the static variables and some of
    the dynamic area - is allocated with its NUMA affinity correctly
    considered.

    Signed-off-by: Tejun Heo
    Cc: yinghai@kernel.org
    Cc: brgerst@gmail.com
    Cc: gorcunov@gmail.com
    Cc: shaohui.zheng@intel.com
    Cc: rientjes@google.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar
    Reported-by: Eric Dumazet
    Reviewed-by: Pekka Enberg

    Tejun Heo
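
    The unified init loop is roughly (sketch):

      void __init init_cpu_to_node(void)
      {
              int cpu;

              for_each_possible_cpu(cpu) {
                      int node = numa_cpu_node(cpu);

                      if (node == NUMA_NO_NODE)
                              continue;       /* leave the default mapping */
                      numa_set_node(cpu, node);
              }
      }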
     
  • x86_32 has been managing node_to_cpumask_map explicitly from
    map_cpu_to_node() and friends in a rather ugly way. With
    previous changes, it's now possible to share the code with
    64bit.

    * When CONFIG_NUMA_EMU is disabled, numa_add/remove_cpu() are
    implemented in numa.c and shared by 32 and 64bit. CONFIG_NUMA_EMU
    versions still live in numa_64.c.

    NUMA_EMU's dependency on 64bit is planned to be removed and the
    above should go away together.

    * identify_cpu() now calls numa_add_cpu() for 32bit too. This
    makes the explicit mask management from map_cpu_to_node() unnecessary.

    * The whole x86_32 specific map_cpu_to_node() chunk is no longer
    necessary. Dropped.

    Signed-off-by: Tejun Heo
    Reviewed-by: Pekka Enberg
    Cc: eric.dumazet@gmail.com
    Cc: yinghai@kernel.org
    Cc: brgerst@gmail.com
    Cc: gorcunov@gmail.com
    Cc: shaohui.zheng@intel.com
    Cc: rientjes@google.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar
    Cc: David Rientjes
    Cc: Shaohui Zheng

    Tejun Heo
     
  • Unlike 64bit, 32bit has been using its own cpu_to_node_map[] for the
    CPU -> NUMA node mapping. Replace it with the early percpu variable
    x86_cpu_to_node_map and share the mapping code with 64bit.

    * USE_PERCPU_NUMA_NODE_ID is now enabled for 32bit too.

    * x86_cpu_to_node_map and numa_set/clear_node() are moved from
    numa_64 to numa. For now, on 32bit, x86_cpu_to_node_map is initialized
    with 0 instead of NUMA_NO_NODE. This is to avoid introducing an
    unexpected behavior change and will be updated once the init path is
    unified.

    * srat_detect_node() is now enabled for x86_32 too. It calls
    numa_set_node() and initializes the mapping, making explicit
    cpu_to_node_map[] updates from map/unmap_cpu_to_node() unnecessary.

    Signed-off-by: Tejun Heo
    Cc: eric.dumazet@gmail.com
    Cc: yinghai@kernel.org
    Cc: brgerst@gmail.com
    Cc: gorcunov@gmail.com
    Cc: penberg@kernel.org
    Cc: shaohui.zheng@intel.com
    Cc: rientjes@google.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar
    Cc: David Rientjes

    Tejun Heo
     
  • The mapping between cpu/apicid and node is done via
    apicid_to_node[] on 64bit and apicid_2_node[] +
    apic->x86_32_numa_cpu_node() on 32bit. This difference makes it
    difficult to further unify 32 and 64bit NUMA handling.

    This patch unifies them by replacing both apicid_to_node[] and
    apicid_2_node[] with a single __apicid_to_node[] array, which is
    accessed by two accessors - set_apicid_to_node() and numa_cpu_node().
    On 64bit, numa_cpu_node() always consults __apicid_to_node[] directly,
    while 32bit goes through the apic->x86_32_numa_cpu_node() method to
    allow apic implementations to override it.

    srat_detect_node() for AMD CPUs contains a workaround for broken NUMA
    configurations which assume a relationship between APIC ID, HT node ID
    and NUMA topology. Leave it accessing __apicid_to_node[] directly, as
    mapping through the CPU might result in an undesirable behavior
    change. The comment is reformatted and updated to note the ugliness.

    Signed-off-by: Tejun Heo
    Reviewed-by: Pekka Enberg
    Cc: eric.dumazet@gmail.com
    Cc: yinghai@kernel.org
    Cc: brgerst@gmail.com
    Cc: gorcunov@gmail.com
    Cc: shaohui.zheng@intel.com
    Cc: rientjes@google.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar
    Cc: David Rientjes

    Tejun Heo
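
    The unified array and its accessor, per the description above
    (sketch):

      /* apicid -> node mapping shared by 32 and 64bit */
      s16 __apicid_to_node[MAX_LOCAL_APIC] = {
              [0 ... MAX_LOCAL_APIC - 1] = NUMA_NO_NODE
      };

      static inline void set_apicid_to_node(int apicid, s16 node)
      {
              __apicid_to_node[apicid] = node;
      }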
     

19 Jan, 2011

1 commit

  • In order to be able to suppress the use of SRAT tables that
    32-bit Linux can't deal with (in one case known to lead to a
    non-bootable system, unless disabling ACPI altogether), move the
    "numa=" option handling to common code.

    Signed-off-by: Jan Beulich
    Reviewed-by: Thomas Renninger
    Cc: Tejun Heo
    Cc: Thomas Renninger
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Jan Beulich
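
    The common handler keeps the usual early_param() shape; a sketch,
    assuming the SRAT suppression hangs off the existing "numa=noacpi"
    style option (details per the kernels of that era):

      static __init int numa_setup(char *opt)
      {
              if (!opt)
                      return -EINVAL;
              if (!strncmp(opt, "off", 3))
                      numa_off = 1;
      #ifdef CONFIG_NUMA_EMU
              if (!strncmp(opt, "fake=", 5))
                      numa_emu_cmdline(opt + 5);
      #endif
      #ifdef CONFIG_ACPI_NUMA
              if (!strncmp(opt, "noacpi", 6))
                      acpi_numa = -1;     /* suppress SRAT parsing */
      #endif
              return 0;
      }
      early_param("numa", numa_setup);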
     

28 May, 2010

1 commit

  • Some workloads that create a large number of small files tend to
    assign too many pages to node 0 (on multi-node systems). Part of the
    reason is that the rotor used to assign nodes (in
    cpuset_mem_spread_node()) starts at node 0 for newly created tasks.

    This patch changes the rotor to be initialized to a random node number of
    the cpuset.

    [akpm@linux-foundation.org: fix layout]
    [Lee.Schermerhorn@hp.com: Define stub numa_random() for !NUMA configuration]
    Signed-off-by: Jack Steiner
    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Paul Menage
    Cc: Jack Steiner
    Cc: Robin Holt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jack Steiner
     

13 Mar, 2009

4 commits

  • Impact: fix (CONFIG_MAXSMP=y only) boot crash

    c032ef60d1aa9af33730b7a35bbea751b131adc1 "cpumask: convert
    node_to_cpumask_map[] to cpumask_var_t" didn't get this one
    conversion. There was a compile warning, but I missed it.

    Reported-by: Ingo Molnar
    Signed-off-by: Rusty Russell
    Cc: Mike Travis
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Rusty Russell
     
  • Impact: cleanup

    We are removing cpumask_t in favour of struct cpumask: mainly as a
    marker of what code is now CONFIG_CPUMASK_OFFSTACK-safe.

    The only non-trivial change here is vector_allocation_domain():
    explicitly clear the mask and set the first word, rather than using
    assignment.

    Signed-off-by: Rusty Russell

    Rusty Russell
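
    The one non-trivial conversion mentioned above, in outline:

      static void vector_allocation_domain(int cpu, struct cpumask *retmask)
      {
              /* was: *retmask = (cpumask_t) { { [0] = APIC_ALL_CPUS } };
               * clear explicitly and set the first word instead of
               * assigning a whole cpumask_t by value */
              cpumask_clear(retmask);
              cpumask_bits(retmask)[0] = APIC_ALL_CPUS;
      }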
     
  • Impact: reduce kernel memory usage when CONFIG_CPUMASK_OFFSTACK=y

    Straightforward conversion: done for 32 and 64 bit kernels.
    node_to_cpumask_map is now a cpumask_var_t array.

    64-bit used to be a dynamic cpumask_t array, and 32-bit used to be a
    static cpumask_t array.

    Signed-off-by: Rusty Russell

    Rusty Russell
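
    Post-conversion shape (sketch):

      /* now an array of cpumask_var_t rather than cpumask_t */
      cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];

      void __init setup_node_to_cpumask_map(void)
      {
              unsigned int node;

              for (node = 0; node < nr_node_ids; node++)
                      alloc_bootmem_cpumask_var(&node_to_cpumask_map[node]);
      }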
     
  • Impact: cleanup

    We take the 64-bit code and use it on 32-bit as well. The new file
    is called mm/numa.c.

    In a minor cleanup, we use cpu_none_mask instead of declaring a local
    cpu_mask_none.

    Signed-off-by: Rusty Russell

    Rusty Russell