16 Aug, 2008

1 commit

  • The idea for this fix comes from Michael Ellerman.

    This function has two loops, but they each interpret the memory_limit
    value differently. The first loop interprets it as a "size limit"
    whereas the second loop interprets it as an "address limit".

    Before the second loop runs, reset memory_limit to lmb_end_of_DRAM()
    so that it all works out.

    Signed-off-by: David S. Miller
    Acked-by: Michael Ellerman

    David S. Miller
     
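The two interpretations described above can be sketched in standalone C. This is an illustrative model of the lmb_enforce_memory_limit() fix, not the kernel source: the region array, sizes, and loop bodies are simplified stand-ins.

```c
/* Sketch of the memory_limit fix: the first loop treats the value as a
 * SIZE budget, the second as an ADDRESS bound, so the value is recomputed
 * via lmb_end_of_DRAM() in between. Simplified stand-in structures. */
#include <assert.h>

#define MAX_REGIONS 4

struct region { unsigned long base, size; };

struct region mem[MAX_REGIONS] = { {0, 0x1000}, {0x1000, 0x1000} };
int nregions = 2;

static unsigned long lmb_end_of_DRAM(void)
{
    struct region *last = &mem[nregions - 1];
    return last->base + last->size;
}

/* Trim total memory down to 'memory_limit' bytes. */
void enforce_memory_limit(unsigned long memory_limit)
{
    unsigned long limit = memory_limit;
    int i;

    /* First loop: 'limit' is a SIZE budget consumed region by region. */
    for (i = 0; i < nregions; i++) {
        if (limit > mem[i].size) {
            limit -= mem[i].size;
            continue;
        }
        mem[i].size = limit;   /* truncate this region */
        nregions = i + 1;      /* drop everything after it */
        break;
    }

    /* The fix: reset memory_limit to the new top-of-memory ADDRESS,
     * because the second loop interprets it as an address limit. */
    memory_limit = lmb_end_of_DRAM();

    /* Second loop (e.g. over reserved regions): 'memory_limit' is now
     * an upper ADDRESS bound; anything based above it would be dropped. */
    for (i = 0; i < nregions; i++)
        assert(mem[i].base < memory_limit);
}
```

Without the reset, the leftover size budget from the first loop would be compared against absolute addresses in the second, discarding regions that are actually below the limit.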

19 May, 2008

1 commit


13 May, 2008

2 commits

  • Having to muck with the build and set DEBUG just to
    get lmb_dump_all() to print things isn't very useful.

    So use pr_info() and an early boot param
    "lmb=debug", so we can simply ask users to reboot
    with this option when we need some debugging from
    them.

    Signed-off-by: David S. Miller

    David S. Miller
     
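A minimal model of the runtime switch described above, as standalone C. The command-line scan below is a simplified stand-in for the kernel's early_param() machinery, and the dump body is elided.

```c
/* Sketch: gate lmb_dump_all() on a boot-time "lmb=debug" parameter
 * instead of a compile-time DEBUG define. Simplified stand-in for
 * early_param(); not the kernel source. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

static int lmb_debug;   /* set once during early boot */

/* Stand-in for: early_param("lmb", ...) parsing the command line. */
void early_lmb(const char *cmdline)
{
    if (cmdline && strstr(cmdline, "lmb=debug"))
        lmb_debug = 1;
}

void lmb_dump_all(void)
{
    if (!lmb_debug)
        return;                      /* no rebuild needed to silence it */
    printf("lmb_dump_all:\n");       /* the kernel uses pr_info() here */
    /* ... print each memory/reserved region ... */
}
```

The point of the change is operational: a user can be asked to reboot with `lmb=debug` rather than being asked to rebuild their kernel with DEBUG set.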
  • When allocating, if we will align up the size when making
    the reservation, we should also align the size for the
    check that the space is actually available.

    The simplest thing is to just align the size up from
    the beginning, so that plain 'size' can be used throughout.

    Signed-off-by: David S. Miller

    David S. Miller
     
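The mismatch this commit fixes can be shown with a small standalone model. The helper name mirrors the lmb code, but the allocation path is a simplified stand-in, not __lmb_alloc_base() itself.

```c
/* Sketch of the "align size once, up front" fix. Before it, the
 * availability check used the raw 'size' while the reservation used
 * the aligned size, so a request could pass the check yet reserve
 * more than was actually free. */
#include <assert.h>

static unsigned long lmb_align_up(unsigned long addr, unsigned long align)
{
    /* 'align' is assumed to be a power of two. */
    return (addr + (align - 1)) & ~(align - 1);
}

unsigned long alloc_base(unsigned long size, unsigned long align,
                         unsigned long region_size)
{
    size = lmb_align_up(size, align);    /* align once, at entry */

    if (size > region_size)              /* availability check */
        return 0;                        /* cannot satisfy */
    /* ... reserve 'size' bytes; check and reservation now agree ... */
    return size;
}
```

With the raw size, `alloc_base(5, 8, 7)` would have passed the check (5 <= 7) but reserved 8 aligned bytes in a 7-byte hole.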

29 Apr, 2008

2 commits

  • Provide walk_memory_resource() for 64-bit powerpc. PowerPC maintains
    logical memory region mapping in the lmb.memory structure. Walk
    through these structures and do the callbacks for the contiguous
    chunks.

    Signed-off-by: Badari Pulavarty
    Cc: Yasunori Goto
    Cc: Benjamin Herrenschmidt
    Signed-off-by: Andrew Morton
    Signed-off-by: Paul Mackerras

    Badari Pulavarty
     
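The walk described above is simple to model in standalone C. The structures below are simplified stand-ins for lmb.memory; the callback signature follows the walk_memory_resource() shape (base, size, opaque argument).

```c
/* Sketch of walk_memory_resource() over lmb.memory: visit each
 * contiguous chunk and invoke the callback, stopping on error.
 * Simplified stand-in structures, not the kernel source. */
#include <assert.h>

struct region { unsigned long base, size; };

struct { int cnt; struct region region[8]; } lmb_memory = {
    2, { {0, 0x1000}, {0x2000, 0x1000} }   /* example layout with a hole */
};

int walk_memory_resource(void *arg,
                         int (*func)(unsigned long base,
                                     unsigned long size, void *arg))
{
    int i, ret = 0;

    for (i = 0; i < lmb_memory.cnt; i++) {
        ret = func(lmb_memory.region[i].base,
                   lmb_memory.region[i].size, arg);
        if (ret)
            break;      /* propagate the callback's error */
    }
    return ret;
}

/* Example callback: accumulate the total walked size. */
static unsigned long total;
static int sum_chunk(unsigned long base, unsigned long size, void *arg)
{
    (void)base; (void)arg;
    total += size;
    return 0;
}
```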
  • The powerpc kernel maintains information about logical memory blocks
    in the lmb.memory structure, which is initialized and updated at boot
    time, but not when memory is added or removed while the kernel is
    running.

    This adds a hotplug memory notifier which updates lmb.memory when
    memory is added or removed. This information is useful for eHEA
    driver to find out the memory layout and holes.

    NOTE: No special locking is needed for lmb_add() and lmb_remove().
    Calls to these are serialized by caller. (pSeries_reconfig_chain).

    Signed-off-by: Badari Pulavarty
    Cc: Yasunori Goto
    Cc: Benjamin Herrenschmidt
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Paul Mackerras

    Badari Pulavarty
     
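The notifier's job can be sketched in standalone C. The action names echo the pSeries hotplug events, but the notifier signature and the lmb_add()/lmb_remove() bodies here are simplified stand-ins.

```c
/* Sketch of a hotplug memory notifier keeping lmb.memory in sync after
 * boot. No locking: as the commit notes, calls are serialized by the
 * caller (the pSeries_reconfig notifier chain). */
#include <assert.h>

enum action { PSERIES_DRCONF_MEM_ADD, PSERIES_DRCONF_MEM_REMOVE };

struct region { unsigned long base, size; };
static struct region mem[8];
static int cnt;

static void lmb_add(unsigned long base, unsigned long size)
{
    mem[cnt].base = base;
    mem[cnt].size = size;
    cnt++;
}

static void lmb_remove(unsigned long base, unsigned long size)
{
    int i;

    for (i = 0; i < cnt; i++)
        if (mem[i].base == base && mem[i].size == size) {
            mem[i] = mem[--cnt];    /* drop the matching region */
            break;
        }
}

/* Notifier: update lmb.memory so consumers like the eHEA driver can
 * discover the current layout (and holes) at any time. */
int memory_notifier(enum action act, unsigned long base, unsigned long size)
{
    switch (act) {
    case PSERIES_DRCONF_MEM_ADD:    lmb_add(base, size);    break;
    case PSERIES_DRCONF_MEM_REMOVE: lmb_remove(base, size); break;
    }
    return 0;
}
```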

24 Apr, 2008

1 commit

  • Changeset d9024df02ffe74d723d97d552f86de3b34beb8cc ("[LMB] Restructure
    allocation loops to avoid unsigned underflow") removed the alignment of
    the 'size' argument that __lmb_alloc_base() applies before calling
    lmb_add_region().

    In doing so it reintroduced the bug fixed by changeset
    eea89e13a9c61d3928223d2f9bf2295e22e0efb6 ("[LMB]: Fix bug in
    __lmb_alloc_base().").

    This puts it back.

    Signed-off-by: David S. Miller

    David S. Miller
     

15 Apr, 2008

3 commits

  • There is a potential bug in __lmb_alloc_base where we subtract `size'
    from the base address of a reserved region without checking whether
    the subtraction could wrap around and produce a very large unsigned
    value. In fact it probably isn't possible to hit the bug in practice
    since it would only occur in the situation where we can't satisfy the
    allocation request and there is a reserved region starting at 0.

    This fixes the potential bug by breaking out of the loop when we get
    to the point where the base of the reserved region is less than the
    size requested. This also restructures the loop to be a bit easier to
    follow.

    The same logic got copied into lmb_alloc_nid_unreserved, so this makes
    a similar change there. Here the bug is more likely to be hit because
    the outer loop (in lmb_alloc_nid) goes through the memory regions in
    increasing order rather than decreasing order as __lmb_alloc_base
    does, and we are therefore more likely to hit the case where we are
    testing against a reserved region with a base address of 0.

    Signed-off-by: Paul Mackerras

    Paul Mackerras
     
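The underflow scenario and the break-out fix can be shown with a single reserved region in standalone C. This is a simplified model of the downward scan in __lmb_alloc_base(), not the restructured kernel loop itself; it assumes a power-of-two 'align' and top >= size.

```c
/* Sketch of the underflow guard: when a candidate collides with a
 * reserved region whose base is smaller than 'size', computing
 * 'res_base - size' would wrap to a huge unsigned value, so give up
 * instead of continuing the scan. */
#include <assert.h>

#define LMB_ALLOC_FAIL ((unsigned long)-1)

static unsigned long lmb_align_down(unsigned long addr, unsigned long align)
{
    return addr & ~(align - 1);
}

unsigned long alloc_below(unsigned long top, unsigned long size,
                          unsigned long align,
                          unsigned long res_base, unsigned long res_size)
{
    unsigned long base = lmb_align_down(top - size, align);

    for (;;) {
        /* Candidate [base, base+size) clear of the reservation? */
        if (base >= res_base + res_size || base + size <= res_base)
            return base;
        /* The fix: break out before 'res_base - size' can wrap. */
        if (res_base < size)
            break;
        base = lmb_align_down(res_base - size, align);
    }
    return LMB_ALLOC_FAIL;
}
```

The failing case is exactly the one the commit describes: a reservation starting at 0 that the request cannot fit below.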
  • This makes no semantic changes. It fixes the whitespace and formatting
    a bit, gets rid of a local DBG macro and uses the equivalent pr_debug
    instead, and restructures one while loop that had a function call and
    assignment in the condition to be a bit more readable. Some comments
    about functions being called with relocation disabled were also removed
    as they would just be confusing to most readers now that the code is
    in lib/.

    Signed-off-by: Paul Mackerras

    Paul Mackerras
     
  • A variant of lmb_alloc() that tries to allocate memory on a specified
    NUMA node 'nid' but falls back to normal lmb_alloc() if that fails.

    The caller provides a 'nid_range' function pointer which assists the
    allocator. It is given args 'start', 'end', and pointer to integer
    'this_nid'.

    It stores at 'this_nid' the NUMA node id that corresponds to 'start',
    and returns the end address, within 'start' to 'end', at which memory
    associated with that node ends.

    This callback allows a platform to use lmb_alloc_nid() in just
    about any context, even ones in which early_pfn_to_nid() might
    not be working yet.

    This function will be used by the NUMA setup code on sparc64, and it
    can also be used by powerpc, replacing its hand-crafted
    "careful_allocation()" function in arch/powerpc/mm/numa.c.

    If x86 ever converts its NUMA support over to using the LMB helpers,
    it can use this too, as it has something entirely similar.

    Signed-off-by: David S. Miller
    Signed-off-by: Paul Mackerras

    David S. Miller
     
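The callback contract described above can be sketched in standalone C. The two-node layout in example_nid_range() is invented for illustration; the allocation walk is a simplified stand-in for lmb_alloc_nid(), which on failure falls back to plain lmb_alloc().

```c
/* Sketch of the 'nid_range' contract: given [start, end), report the
 * node id at 'start' and return the address where that node's memory
 * ends, so the allocator can walk a region one per-node span at a time. */
#include <assert.h>

typedef unsigned long u64;

/* Hypothetical platform callback: node 0 owns [0, 0x1000), node 1 the rest. */
static u64 example_nid_range(u64 start, u64 end, int *this_nid)
{
    if (start < 0x1000) {
        *this_nid = 0;
        return end < 0x1000 ? end : 0x1000;
    }
    *this_nid = 1;
    return end;
}

/* Walk per-node spans looking for one on the wanted node that is large
 * enough; (u64)-1 signals failure (the real code falls back to lmb_alloc()). */
u64 alloc_nid(u64 start, u64 end, u64 size, int nid,
              u64 (*nid_range)(u64, u64, int *))
{
    while (start < end) {
        int this_nid;
        u64 span_end = nid_range(start, end, &this_nid);

        if (this_nid == nid && span_end - start >= size)
            return start;       /* span on the right node, big enough */
        start = span_end;       /* skip to the next node's span */
    }
    return (u64)-1;
}
```

Because all node knowledge comes through the callback, the allocator works even before early_pfn_to_nid() is usable, which is the point the commit makes.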

20 Feb, 2008

1 commit

  • We introduced a bug in fixing lmb_add_region to handle an initial
    region being non-zero. Before that fix it was impossible to insert a
    region at the head of the list since the first region always started
    at zero.

    Now that it's possible for the first region to be non-zero, we need to
    check whether the new region should be added at the head and, if so,
    actually add it.

    Signed-off-by: Kumar Gala
    Signed-off-by: David S. Miller

    Kumar Gala
     
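A standalone model of the head-insertion case. This is a simplified, non-coalescing version of lmb_add_region(): the real function also merges adjacent regions, which is omitted here.

```c
/* Sketch of the lmb_add_region() head-insertion fix: once the first
 * region may start above zero, a new region can sort BEFORE it and must
 * be inserted at the head of the list. */
#include <assert.h>

struct region { unsigned long base, size; };
static struct region rgn[8];
static int cnt;

void add_region(unsigned long base, unsigned long size)
{
    int i, j;

    /* Find the last existing region that starts below the new one;
     * i ends at -1 when none does. */
    for (i = cnt - 1; i >= 0; i--)
        if (rgn[i].base < base)
            break;

    /* Shift everything after position i up by one slot. */
    for (j = cnt; j > i + 1; j--)
        rgn[j] = rgn[j - 1];

    /* The fix: i == -1 means the new region belongs at the head, which
     * was unreachable when the first region always started at zero. */
    rgn[i + 1].base = base;
    rgn[i + 1].size = size;
    cnt++;
}
```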

14 Feb, 2008

4 commits