11 Aug, 2017

1 commit

  • As Tetsuo points out:
    "Commit 385386cff4c6 ("mm: vmstat: move slab statistics from zone to
    node counters") broke "Slab:" field of /proc/meminfo . It shows nearly
    0kB"

    In addition to /proc/meminfo, this problem also affects the slab
    counters in OOM/allocation-failure info dumps, can cause early
    -ENOMEM from overcommit protection, and miscalculates image size
    requirements during suspend-to-disk.

    This is because the patch in question switched the slab counters from
    the zone level to the node level, but forgot to update the global
    accessor functions to read the aggregate node data instead of the
    aggregate zone data.

    Use global_node_page_state() to access the global slab counters.
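    A minimal before/after sketch of the one-line nature of the fix,
    modeled on the /proc/meminfo path (the exact call sites and
    formatting here are assumptions, not the literal diff):

    /* before: sums the zone-level aggregate, which is now always ~0 */
    slab = global_page_state(NR_SLAB_RECLAIMABLE) +
           global_page_state(NR_SLAB_UNRECLAIMABLE);

    /* after: sums the node-level aggregate the counters moved to */
    slab = global_node_page_state(NR_SLAB_RECLAIMABLE) +
           global_node_page_state(NR_SLAB_UNRECLAIMABLE);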

    Fixes: 385386cff4c6 ("mm: vmstat: move slab statistics from zone to node counters")
    Link: http://lkml.kernel.org/r/20170801134256.5400-1-hannes@cmpxchg.org
    Signed-off-by: Johannes Weiner
    Reported-by: Tetsuo Handa
    Acked-by: Michal Hocko
    Cc: Josef Bacik
    Cc: Vladimir Davydov
    Cc: Stefan Agner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

07 Jul, 2017

1 commit

  • This header always exists, so it doesn't require an ifdef around its
    inclusion. When CONFIG_ARCH_HAS_SET_MEMORY=y it includes the asm
    header; otherwise it provides empty versions of the set_memory_xx()
    routines.
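
    A condensed sketch of the pattern (the full header stubs the whole
    set_memory_xx() family the same way):

    /* include/linux/set_memory.h -- condensed sketch */
    #ifdef CONFIG_ARCH_HAS_SET_MEMORY
    #include <asm/set_memory.h>
    #else
    static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
    static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
    static inline int set_memory_x(unsigned long addr, int numpages)  { return 0; }
    static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
    #endif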

    Link: http://lkml.kernel.org/r/1498717781-29151-2-git-send-email-mpe@ellerman.id.au
    Signed-off-by: Michael Ellerman
    Acked-by: Kees Cook
    Acked-by: Laura Abbott
    Cc: Daniel Borkmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Ellerman
     

27 Jun, 2017

2 commits


23 May, 2017

1 commit


14 May, 2017

1 commit


09 May, 2017

1 commit


02 Mar, 2017

1 commit


08 Feb, 2017

1 commit

  • Both of these options are poorly named. The features they provide are
    necessary for system security and should not be considered debug only.
    Change the names to CONFIG_STRICT_KERNEL_RWX and
    CONFIG_STRICT_MODULE_RWX to better describe what these options do.

    Signed-off-by: Laura Abbott
    Acked-by: Jessica Yu
    Signed-off-by: Kees Cook

    Laura Abbott
     

25 Dec, 2016

1 commit


13 Sep, 2016

1 commit

  • PAGE_POISONING_ZERO disables zeroing new pages on alloc; they are
    poisoned (zeroed) as they become available instead.
    In the hibernate use case, free pages will appear in the system without
    being cleared, left there by the loading kernel.

    This patch will make sure free pages are cleared on resume when
    PAGE_POISONING_ZERO is enabled. We free the pages just after resume
    because we can't do it later: going through any device resume code might
    allocate some memory and invalidate the free pages bitmap.

    Thus we don't need to disable hibernation when PAGE_POISONING_ZERO is
    enabled.
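
    A minimal sketch of the resulting helper, closely following the
    commit description (the exact function and symbol names here are
    assumptions):

    static void clear_free_pages(void)
    {
    #ifdef CONFIG_PAGE_POISONING_ZERO
            struct memory_bitmap *bm = free_pages_map;
            unsigned long pfn;

            if (WARN_ON(!bm))
                    return;

            /* Walk the set bits of the free-pages bitmap and zero each
             * page now, before any device resume code can allocate
             * memory and invalidate the bitmap. */
            memory_bm_position_reset(bm);
            pfn = memory_bm_next_pfn(bm);
            while (pfn != BM_END_OF_MAP) {
                    if (pfn_valid(pfn))
                            clear_highpage(pfn_to_page(pfn));
                    pfn = memory_bm_next_pfn(bm);
            }
            memory_bm_position_reset(bm);
            pr_info("PM: free pages cleared after restore\n");
    #endif /* CONFIG_PAGE_POISONING_ZERO */
    }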

    Signed-off-by: Anisse Astier
    Reviewed-by: Kees Cook
    Acked-by: Pavel Machek
    Signed-off-by: Rafael J. Wysocki

    Anisse Astier
     

18 Aug, 2016

1 commit


16 Aug, 2016

1 commit

  • rtree_next_node() walks the linked list of leaf nodes to find the next
    block of pages in the struct memory_bitmap. If it walks off the end of
    the list of nodes, it walks the list of memory zones to find the next
    region of memory. If it walks off the end of the list of zones, it
    returns false.

    This leaves the struct bm_position's node and zone pointers pointing
    at their respective struct list_heads in struct mem_zone_bm_rtree.

    memory_bm_find_bit() uses struct bm_position's node and zone pointers
    to avoid walking lists and trees if the next bit appears in the same
    node/zone. It handles these values being stale.

    Swap rtree_next_node()'s 'step then test' to 'test-next then step';
    this means that if we reach the end of memory, we return false and
    leave the node and zone pointers as they were.
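
    A sketch of the reordered walk, assuming the snapshot.c structures
    named above:

    static bool rtree_next_node(struct memory_bitmap *bm)
    {
            /* test-next then step: only advance if a next node exists */
            if (!list_is_last(&bm->cur.node->list, &bm->cur.zone->leaves)) {
                    bm->cur.node = list_entry(bm->cur.node->list.next,
                                              struct rtree_node, list);
                    bm->cur.node_pfn += BM_BITS_PER_BLOCK;
                    bm->cur.node_bit = 0;
                    touch_softlockup_watchdog();
                    return true;
            }

            /* no more nodes, try the next zone */
            if (!list_is_last(&bm->cur.zone->list, &bm->zones)) {
                    bm->cur.zone = list_entry(bm->cur.zone->list.next,
                                              struct mem_zone_bm_rtree, list);
                    bm->cur.node = list_entry(bm->cur.zone->leaves.next,
                                              struct rtree_node, list);
                    bm->cur.node_pfn = 0;
                    bm->cur.node_bit = 0;
                    return true;
            }

            /* no more zones: the node/zone pointers stay where they were */
            return false;
    }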

    This fixes a panic on resume using AMD Seattle with 64K pages:
    [ 6.868732] Freezing user space processes ... (elapsed 0.000 seconds) done.
    [ 6.875753] Double checking all user space processes after OOM killer disable... (elapsed 0.000 seconds)
    [ 6.896453] PM: Using 3 thread(s) for decompression.
    [ 6.896453] PM: Loading and decompressing image data (5339 pages)...
    [ 7.318890] PM: Image loading progress: 0%
    [ 7.323395] Unable to handle kernel paging request at virtual address 00800040
    [ 7.330611] pgd = ffff000008df0000
    [ 7.334003] [00800040] *pgd=00000083fffe0003, *pud=00000083fffe0003, *pmd=00000083fffd0003, *pte=0000000000000000
    [ 7.344266] Internal error: Oops: 96000005 [#1] PREEMPT SMP
    [ 7.349825] Modules linked in:
    [ 7.352871] CPU: 2 PID: 1 Comm: swapper/0 Tainted: G W I 4.8.0-rc1 #4737
    [ 7.360512] Hardware name: AMD Overdrive/Supercharger/Default string, BIOS ROD1002C 04/08/2016
    [ 7.369109] task: ffff8003c0220000 task.stack: ffff8003c0280000
    [ 7.375020] PC is at set_bit+0x18/0x30
    [ 7.378758] LR is at memory_bm_set_bit+0x24/0x30
    [ 7.383362] pc : [] lr : [] pstate: 60000045
    [ 7.390743] sp : ffff8003c0283b00
    [ 7.473551]
    [ 7.475031] Process swapper/0 (pid: 1, stack limit = 0xffff8003c0280020)
    [ 7.481718] Stack: (0xffff8003c0283b00 to 0xffff8003c0284000)
    [ 7.800075] Call trace:
    [ 7.887097] [] set_bit+0x18/0x30
    [ 7.891876] [] duplicate_memory_bitmap.constprop.38+0x54/0x70
    [ 7.899172] [] snapshot_write_next+0x22c/0x47c
    [ 7.905166] [] load_image_lzo+0x754/0xa88
    [ 7.910725] [] swsusp_read+0x144/0x230
    [ 7.916025] [] load_image_and_restore+0x58/0x90
    [ 7.922105] [] software_resume+0x2f0/0x338
    [ 7.927752] [] do_one_initcall+0x38/0x11c
    [ 7.933314] [] kernel_init_freeable+0x14c/0x1ec
    [ 7.939395] [] kernel_init+0x10/0xfc
    [ 7.944520] [] ret_from_fork+0x10/0x40
    [ 7.949820] Code: d2800022 8b400c21 f9800031 9ac32043 (c85f7c22)
    [ 7.955909] ---[ end trace 0024a5986e6ff323 ]---
    [ 7.960529] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

    Here struct mem_zone_bm_rtree's start_pfn has been returned instead of
    struct rtree_node's addr as the node/zone pointers are corrupt after
    we walked off the end of the lists during mark_unsafe_pages().

    This behaviour was exposed by commit 6dbecfd345a6 ("PM / hibernate:
    Simplify mark_unsafe_pages()"), which caused mark_unsafe_pages() to call
    duplicate_memory_bitmap(), which uses memory_bm_find_bit() after walking
    off the end of the memory bitmap.

    Fixes: 3a20cb177961 (PM / Hibernate: Implement position keeping in radix tree)
    Signed-off-by: James Morse
    [ rjw: Subject ]
    Signed-off-by: Rafael J. Wysocki

    James Morse
     

29 Jul, 2016

1 commit

  • This moves the LRU lists from the zone to the node and related data such
    as counters, tracing, congestion tracking and writeback tracking.

    Unfortunately, due to reclaim and compaction retry logic, it is
    necessary to account for the number of LRU pages at both the zone and
    the node level. Most reclaim logic is based on the node counters, but
    the retry logic uses the zone counters, which do not distinguish
    inactive and active sizes. It would be possible to keep the LRU
    counters on a per-zone basis, but that is a heavier calculation
    across multiple cache lines performed much more frequently than the
    retry checks.

    Other than the LRU counters, this is mostly a mechanical patch, but
    note that it introduces a number of anomalies. For example, the scans
    are per-zone but use per-node counters, and we mark a node as
    congested when a zone is congested. These cause weird problems that
    are fixed later, but doing it this way makes the patch easier to
    review.
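
    A minimal sketch of the node-level accounting the reclaim side now
    uses after the move (the zone-level retry counters are separate):

    /* reclaim logic reads LRU sizes from the node, not the zone */
    struct pglist_data *pgdat = zone->zone_pgdat;
    unsigned long lru_file = node_page_state(pgdat, NR_ACTIVE_FILE) +
                             node_page_state(pgdat, NR_INACTIVE_FILE);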

    In the event that there is excessive overhead on 32-bit systems due
    to the LRU lists being node-based, there are two potential solutions:

    1. Long-term isolation of highmem pages when reclaim is lowmem

    When pages are skipped, they are immediately added back onto the LRU
    list. If lowmem reclaim persisted for long periods of time, the same
    highmem pages get continually scanned. The idea would be that lowmem
    keeps those pages on a separate list until a reclaim for highmem pages
    arrives that splices the highmem pages back onto the LRU. It potentially
    could be implemented similar to the UNEVICTABLE list.

    That would reduce the skip rate, with the potential corner case that
    highmem pages would have to be scanned and reclaimed to free lowmem
    slab pages.

    2. Linear scan lowmem pages if the initial LRU shrink fails

    This will break LRU ordering but may be preferable and faster during
    memory pressure than skipping LRU pages.

    Link: http://lkml.kernel.org/r/1467970510-21195-4-git-send-email-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Johannes Weiner
    Acked-by: Vlastimil Babka
    Cc: Hillf Danton
    Cc: Joonsoo Kim
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

10 Jul, 2016

4 commits

  • Make it possible to protect all pages holding image data during
    hibernate image restoration by setting them read-only (so as to
    catch attempts to write to those pages after image data have been
    stored in them).

    This adds overhead to the image restoration code (it may cause large
    page mappings to be split as a result of page flag changes), and the
    errors it protects against should never happen in theory, so the
    feature is only active after passing hibernate=protect_image on the
    command line of the restore kernel.

    Also, it is only built if CONFIG_DEBUG_RODATA is set.
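
    A hedged sketch of what the protection amounts to per page, built on
    the set_memory_*() API (the helper names follow the upstream pattern
    but are assumptions here):

    static bool hibernate_restore_protection_active;

    static void hibernate_restore_protect_page(void *page_address)
    {
            if (hibernate_restore_protection_active)
                    set_memory_ro((unsigned long)page_address, 1);
    }

    static void hibernate_restore_unprotect_page(void *page_address)
    {
            if (hibernate_restore_protection_active)
                    set_memory_rw((unsigned long)page_address, 1);
    }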

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     
  • One branch of an if/else statement in __register_nosave_region() is
    formatted against the kernel coding style, which causes the code to
    look slightly odd. To fix that, add missing braces to it.

    No functional changes.

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     
  • Many comments in kernel/power/snapshot.c do not follow the general
    comment formatting rules. They look odd; some of them are outdated,
    some are hard to parse and generally difficult to understand.

    Clean them up to make them easier to comprehend.

    No functional changes.

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     
  • The formatting of some function headers in kernel/power/snapshot.c
    is not consistent with the general kernel coding style and with the
    formatting of some other function headers in the same file.

    Make all of them follow the same formatting convention.

    No functional changes.

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     

02 Jul, 2016

3 commits

  • One of the memory bitmaps used by the hibernation image restoration
    code is freed after the image has been loaded.

    That is not quite efficient, though, because the memory pages used
    for building that bitmap are known to be safe (ie. they were not
    used by the image kernel before hibernation) and the arch-specific
    code finalizing the image restoration may need them. In that case
    it needs to allocate those pages again via the memory management
    subsystem, check if they are really safe again by consulting the
    other bitmaps and so on.

    To avoid that, recycle those pages by putting them into the global
    list of known safe pages so that they can be given to the arch code
    right away when necessary.
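
    A minimal sketch of the recycling step, assuming the snapshot.c
    linked_page list:

    static void recycle_safe_page(void *page_address)
    {
            struct linked_page *lp = page_address;

            /* push the known-safe page onto the global list instead of
             * freeing it, so the arch code can take it directly */
            lp->next = safe_pages_list;
            safe_pages_list = lp;
    }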

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     
  • Rework mark_unsafe_pages() to use a simpler method of clearing
    all bits in free_pages_map and to set the bits for the "unsafe"
    pages (ie. pages that were used by the image kernel before
    hibernation) with the help of duplicate_memory_bitmap().

    For this purpose, move the pfn_valid() check from mark_unsafe_pages()
    to unpack_orig_pfns() where the "unsafe" pages are discovered.
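
    A sketch of the reworked helper as described (assuming the bitmap
    primitives named elsewhere in this log):

    static void mark_unsafe_pages(struct memory_bitmap *bm)
    {
            unsigned long pfn;

            /* clear all "unsafe" bits */
            memory_bm_position_reset(free_pages_map);
            pfn = memory_bm_next_pfn(free_pages_map);
            while (pfn != BM_END_OF_MAP) {
                    memory_bm_clear_current(free_pages_map);
                    pfn = memory_bm_next_pfn(free_pages_map);
            }

            /* mark pages used by the image kernel as "unsafe" */
            duplicate_memory_bitmap(free_pages_map, bm);

            allocated_unsafe_pages = 0;
    }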

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     
  • The core image restoration code preallocates some safe pages
    (ie. pages that weren't used by the image kernel before hibernation)
    for future use before allocating the bulk of memory for loading the
    image data. Those safe pages are then freed so they can be allocated
    again (with the memory management subsystem's help). That's done to
    ensure that there will be enough safe pages for temporary data
    structures needed during image restoration.

    However, it is not really necessary to free those pages after they
    have been allocated. They can be added to the (global) list of
    safe pages right away and then picked up from there when needed
    without freeing.

    That reduces the overhead related to using safe pages, especially
    in the arch-specific code, so modify the code accordingly.

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     

07 Nov, 2015

1 commit

  • …d avoiding waking kswapd

    __GFP_WAIT has been used to identify atomic context in callers that
    hold spinlocks or are in interrupts. They are expected to be high
    priority and have access to one of two watermarks lower than "min",
    which can be referred to as the "atomic reserve". __GFP_HIGH users
    get access to the first lower watermark and can be called the "high
    priority reserve".

    Over time, callers had a requirement to not block when fallback
    options were available. Some have abused __GFP_WAIT, leading to a
    situation where an optimistic allocation with a fallback option can
    access atomic reserves.

    This patch uses __GFP_ATOMIC to identify callers that are truly
    atomic, cannot sleep and have no alternative. High priority users
    continue to use __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers
    that can sleep and are willing to enter direct reclaim.
    __GFP_KSWAPD_RECLAIM identifies callers that want to wake kswapd for
    background reclaim. __GFP_WAIT is redefined so that a caller using it
    is willing to enter direct reclaim and wake kswapd for background
    reclaim.

    This patch then converts a number of sites

    o __GFP_ATOMIC is used by callers that are high priority and have memory
    pools for those requests. GFP_ATOMIC uses this flag.

    o Callers that have a limited mempool to guarantee forward progress clear
    __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
    into this category where kswapd will still be woken but atomic reserves
    are not used as there is a one-entry mempool to guarantee progress.

    o Callers that are checking if they are non-blocking should use the
    helper gfpflags_allow_blocking() where possible. This is because
    checking for __GFP_WAIT as was done historically now can trigger false
    positives. Some exceptions like dm-crypt.c exist where the code intent
    is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
    flag manipulations.

    o Callers that built their own GFP flags instead of starting with GFP_KERNEL
    and friends now also need to specify __GFP_KSWAPD_RECLAIM.

    The first key hazard to watch out for is callers that removed
    __GFP_WAIT and were depending on access to atomic reserves for
    inconspicuous reasons. In some cases it may be appropriate for them
    to use __GFP_HIGH.

    The second key hazard is callers that assembled their own combination of
    GFP flags instead of starting with something like GFP_KERNEL. They may
    now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless
    if it's missed in most cases as other activity will wake kswapd.
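
    A hedged sketch of the resulting flag relationships and helper,
    modeled on the 4.4-era <linux/gfp.h> (the exact mask compositions are
    simplified assumptions):

    #define __GFP_RECLAIM   (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

    #define GFP_ATOMIC      (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)
    #define GFP_KERNEL      (__GFP_RECLAIM | __GFP_IO | __GFP_FS)

    /* preferred over open-coded __GFP_WAIT checks */
    static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
    {
            return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
    }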

    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Vitaly Wool <vitalywool@gmail.com>
    Cc: Rik van Riel <riel@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

    Mel Gorman
     

07 Apr, 2015

1 commit


04 Feb, 2015

1 commit


24 Jan, 2015

1 commit


03 Nov, 2014

1 commit

  • This patch migrates swsusp_show_speed and its callers to ktime_t
    instead of 'struct timeval', which suffers from the y2038 problem.

    Changes to swsusp_show_speed:
    - use ktime_t for start and stop times
    - pass start and stop times by value
    Calling functions affected:
    - load_image
    - load_image_lzo
    - save_image
    - save_image_lzo
    - hibernate_preallocate_memory
    Design decisions:
    - use ktime_t to preserve same granularity of reporting as before
    - use centisecs logic as before to avoid 'div by zero' issues caused by
    using seconds and nanoseconds directly
    - use monotonic time (ktime_get()) since we only care about elapsed time.
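
    A minimal sketch of the converted helper under those decisions
    (formatting details are assumptions):

    void swsusp_show_speed(ktime_t start, ktime_t stop,
                           unsigned nr_pages, char *msg)
    {
            u64 elapsed_centisecs64;
            unsigned int centisecs, k, kps;

            elapsed_centisecs64 = ktime_divns(ktime_sub(stop, start),
                                              10 * NSEC_PER_MSEC);
            centisecs = elapsed_centisecs64;
            if (centisecs == 0)
                    centisecs = 1;          /* avoid div-by-zero */
            k = nr_pages * (PAGE_SIZE / 1024);
            kps = (k * 100) / centisecs;
            pr_info("PM: %s %u kbytes in %u.%02u seconds (%u.%02u MB/s)\n",
                    msg, k, centisecs / 100, centisecs % 100,
                    kps / 1000, (kps % 1000) / 10);
    }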

    Signed-off-by: Tina Ruchandani
    Suggested-by: Arnd Bergmann
    Reviewed-by: Arnd Bergmann
    Acked-by: Pavel Machek
    Signed-off-by: Rafael J. Wysocki

    Tina Ruchandani
     

01 Oct, 2014

1 commit

  • The existing implementation of swsusp_free iterates over all
    pfns in the system and checks every bit in the two memory
    bitmaps.

    This doesn't scale very well with large numbers of pfns, especially
    when the bitmaps are not populated very densely.
    Change the algorithm to iterate over the set bits in the
    bitmaps instead to make it scale better in large memory
    configurations.

    Also add a memory_bm_clear_current() helper function that
    clears the bit for the last position returned from the
    memory bitmap.

    This new version adds a !NULL check for the memory bitmaps
    before they are walked. Not doing so causes a kernel crash
    when the bitmaps are NULL.
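
    A condensed sketch of the new walk, including the added !NULL guard
    (structure per the commit text; details are assumptions):

    void swsusp_free(void)
    {
            unsigned long fb_pfn, fr_pfn;

            if (!forbidden_pages_map || !free_pages_map)
                    goto out;

            memory_bm_position_reset(forbidden_pages_map);
            memory_bm_position_reset(free_pages_map);

    loop:
            fr_pfn = memory_bm_next_pfn(free_pages_map);
            fb_pfn = memory_bm_next_pfn(forbidden_pages_map);

            /* advance to the next pfn set in both bitmaps; terminates
             * when both walks reach BM_END_OF_MAP */
            do {
                    if (fb_pfn < fr_pfn)
                            fb_pfn = memory_bm_next_pfn(forbidden_pages_map);
                    if (fr_pfn < fb_pfn)
                            fr_pfn = memory_bm_next_pfn(free_pages_map);
            } while (fb_pfn != fr_pfn);

            if (fb_pfn != BM_END_OF_MAP && pfn_valid(fr_pfn)) {
                    memory_bm_clear_current(forbidden_pages_map);
                    memory_bm_clear_current(free_pages_map);
                    __free_page(pfn_to_page(fr_pfn));
                    goto loop;
            }
    out:
            nr_copy_pages = 0;
            nr_meta_pages = 0;
            restore_pblist = NULL;
            buffer = NULL;
    }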

    Signed-off-by: Joerg Roedel
    Signed-off-by: Rafael J. Wysocki

    Joerg Roedel
     

25 Sep, 2014

1 commit

  • Revert commit 6efde38f0769 (PM / Hibernate: Iterate over set bits
    instead of PFNs in swsusp_free()) that introduced a NULL pointer
    dereference during system resume from hibernation:

    BUG: unable to handle kernel NULL pointer dereference at (null)
    IP: [] swsusp_free+0x21/0x190
    PGD b39c2067 PUD b39c1067 PMD 0
    Oops: 0000 [#1] SMP
    Modules linked in:
    CPU: 1 PID: 4898 Comm: s2disk Tainted: G C 3.17-rc5-amd64 #1 Debian 3.17~rc5-1~exp1
    Hardware name: LENOVO 2776LEG/2776LEG, BIOS 6EET55WW (3.15 ) 12/19/2011
    task: ffff88023155ea40 ti: ffff8800b3b14000 task.ti: ffff8800b3b14000
    RIP: 0010:[] []
    swsusp_free+0x21/0x190
    RSP: 0018:ffff8800b3b17ea8 EFLAGS: 00010246
    RAX: 0000000000000000 RBX: ffff8800b39bab00 RCX: 0000000000000001
    RDX: ffff8800b39bab10 RSI: ffff8800b39bab00 RDI: 0000000000000000
    RBP: 0000000000000010 R08: 0000000000000000 R09: 0000000000000000
    R10: ffff8800b39bab10 R11: 0000000000000246 R12: ffffea0000000000
    R13: ffff880232f485a0 R14: ffff88023ac27cd8 R15: ffff880232927590
    FS: 00007f406d83b700(0000) GS:ffff88023bc80000(0000)
    knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 0000000000000000 CR3: 00000000b3a62000 CR4: 00000000000007e0
    Stack:
    ffff8800b39bab00 0000000000000010 ffff880232927590 ffffffff810acb4a
    ffff8800b39bab00 ffffffff811a955a ffff8800b39bab10 0000000000000000
    ffff88023155f098 ffffffff81a6b8c0 ffff88023155ea40 0000000000000007
    Call Trace:
    [] ? snapshot_release+0x2a/0xb0
    [] ? __fput+0xca/0x1d0
    [] ? task_work_run+0x97/0xd0
    [] ? do_notify_resume+0x69/0xa0
    [] ? int_signal+0x12/0x17
    Code: 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 41 54 48 8b 05 ba 62 9c 00 49 bc 00 00 00 00 00 ea ff ff 48 8b 3d a1 62 9c 00 55 53 8b 10 48 89 50 18 48 8b 52 20 48 c7 40 28 00 00 00 00 c7 40
    RIP [] swsusp_free+0x21/0x190
    RSP
    CR2: 0000000000000000
    ---[ end trace f02be86a1ec0cccb ]---

    due to forbidden_pages_map being NULL in swsusp_free().

    Fixes: 6efde38f0769 "PM / Hibernate: Iterate over set bits instead of PFNs in swsusp_free()"
    Reported-by: Bjørn Mork
    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     

07 Aug, 2014

1 commit

  • When the machine does not keep its e820 map persistent across a
    hibernate resume, a page fault may occur when writing the image to
    the snapshot buffer:

    [ 17.929495] BUG: unable to handle kernel paging request at ffff880069d4f000
    [ 17.933469] IP: [] load_image_lzo+0x810/0xe40
    [ 17.933469] PGD 2194067 PUD 77ffff067 PMD 2197067 PTE 0
    [ 17.933469] Oops: 0002 [#1] SMP
    ...

    The ffff880069d4f000 page is in e820 reserved region of resume boot
    kernel:

    [ 0.000000] BIOS-e820: [mem 0x0000000069d4f000-0x0000000069e12fff] reserved
    ...
    [ 0.000000] PM: Registered nosave memory: [mem 0x69d4f000-0x69e12fff]

    So snapshot.c marks the pfn in the forbidden pages map. But this
    page is also set in the memory bitmap of the snapshot image, because
    it is an original page used by the image kernel, so it is also marked
    as an unsafe (free) page in prepare_image().

    That means the page in e820 is marked as both "forbidden" and "free"
    when resuming, which causes get_buffer() to treat it as an allocated
    unsafe page. Then snapshot_write_next() returns this page to
    load_image, which writes content to this address, but the page was
    never really allocated. So we get a page fault.

    Although the root cause is in the BIOS, an aggressive check and a
    clear message in the kernel are better than a page fault for issue
    tracking, especially when a serial console is unavailable.

    This patch adds code to mark_unsafe_pages() to check whether any free
    pages lie in a nosave region. If so, it prints a message and returns
    a fault to stop the whole S4 resume process:

    [ 8.166004] PM: Image loading progress: 0%
    [ 8.658717] PM: 0x6796c000 in e820 nosave region: [mem 0x6796c000-0x6796cfff]
    [ 8.918737] PM: Read 2511940 kbytes in 1.04 seconds (2415.32 MB/s)
    [ 8.926633] PM: Error -14 resuming
    [ 8.933534] PM: Failed to load hibernation image, recovering.
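
    A hedged sketch of the shape of the added check (the surrounding loop
    and the exact region bounds printed are assumptions; the message
    format matches the log above):

    /* inside mark_unsafe_pages(): a page that is both free and inside a
     * registered nosave region means the e820 map changed across resume */
    if (swsusp_page_is_free(page)) {
            pr_err("PM: %#010llx in e820 nosave region: [mem %#010llx-%#010llx]\n",
                   (u64)pfn << PAGE_SHIFT, (u64)pfn << PAGE_SHIFT,
                   ((u64)pfn << PAGE_SHIFT) + PAGE_SIZE - 1);
            return -EFAULT;
    }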

    Reviewed-by: Takashi Iwai
    Acked-by: Pavel Machek
    Signed-off-by: Lee, Chun-Yi
    [rjw: Subject]
    Signed-off-by: Rafael J. Wysocki

    Lee, Chun-Yi
     

29 Jul, 2014

6 commits

  • When a memory bitmap is fully populated on a large memory machine
    (several TB of RAM), it can take more than a minute to walk through
    all bits. This causes the soft lockup detector on these machines to
    report warnings.

    Avoid this by touching the soft lockup watchdog in the
    memory bitmap walking code.
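
    A minimal sketch of the mitigation inside the walking code (the exact
    placement is an assumption):

    /* stepping to the next leaf node of the memory bitmap */
    bm->cur.node_pfn += BM_BITS_PER_BLOCK;
    bm->cur.node_bit = 0;
    touch_softlockup_watchdog();    /* from <linux/nmi.h>; keeps the
                                     * soft-lockup detector quiet on
                                     * multi-TB walks */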

    Signed-off-by: Joerg Roedel
    Signed-off-by: Rafael J. Wysocki

    Joerg Roedel
     
  • The radix tree implementation has now been shown to work the same as
    the old implementation, so the old implementation can be removed to
    finish the switch to the radix tree for the memory bitmaps.

    Signed-off-by: Joerg Roedel
    Signed-off-by: Rafael J. Wysocki

    Joerg Roedel
     
  • The existing implementation of swsusp_free iterates over all
    pfns in the system and checks every bit in the two memory
    bitmaps.

    This doesn't scale very well with large numbers of pfns, especially
    when the bitmaps are not populated very densely.
    Change the algorithm to iterate over the set bits in the
    bitmaps instead to make it scale better in large memory
    configurations.

    Also add a memory_bm_clear_current() helper function that
    clears the bit for the last position returned from the
    memory bitmap.

    Signed-off-by: Joerg Roedel
    Signed-off-by: Rafael J. Wysocki

    Joerg Roedel
     
  • Add code to remember the last position that was requested in the
    radix tree. Use it as a cache for faster linear walking of the bitmap
    in the memory_bm_rtree_next_pfn() function, which is also added with
    this patch.
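
    A sketch of the cache check inside memory_bm_find_bit() (field names
    per the series; the exact form is an assumption):

    /* if the requested pfn falls into the cached node, skip the walk */
    if (zone == bm->cur.zone &&
        ((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
            node = bm->cur.node;
    else
            node = walk_rtree(zone, pfn);   /* hypothetical name for the
                                             * full radix-tree descent */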

    Signed-off-by: Joerg Roedel
    Signed-off-by: Rafael J. Wysocki

    Joerg Roedel
     
  • Add a function to find a bit in the radix tree for a given
    pfn. Also add code to the memory bitmap wrapper functions to
    use the radix tree together with the existing memory bitmap
    implementation.

    On read accesses compare the results of both bitmaps to make
    sure the radix tree behaves the same way.
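
    A hedged sketch of the transition-time cross-check on reads (the
    rtree-side helper name is an assumption):

    static int memory_bm_test_bit(struct memory_bitmap *bm, unsigned long pfn)
    {
            void *addr;
            unsigned int bit;
            int ret;

            memory_bm_find_bit(bm, pfn, &addr, &bit);       /* old bitmap */
            ret = !!test_bit(bit, addr);

            /* both implementations must agree while they coexist */
            WARN_ON_ONCE(memory_bm_rtree_test_bit(bm, pfn) != ret);
            return ret;
    }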

    Signed-off-by: Joerg Roedel
    Signed-off-by: Rafael J. Wysocki

    Joerg Roedel
     
  • This patch adds the code to allocate and build the radix
    tree to store the memory bitmap. The old data structure is
    left in place until the radix tree implementation is
    finished.
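
    A sketch of the data structures the series introduces alongside the
    old bitmap (comments condensed):

    struct rtree_node {
            struct list_head list;
            unsigned long *data;
    };

    struct mem_zone_bm_rtree {
            struct list_head list;          /* links zones together        */
            struct list_head nodes;         /* radix tree inner nodes      */
            struct list_head leaves;        /* radix tree leaf nodes       */
            unsigned long start_pfn;        /* zone start page frame       */
            unsigned long end_pfn;          /* zone end page frame + 1     */
            struct rtree_node *rtree;       /* radix tree root             */
            int levels;                     /* number of radix tree levels */
    };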

    Signed-off-by: Joerg Roedel
    Signed-off-by: Rafael J. Wysocki

    Joerg Roedel
     

06 May, 2014

1 commit


08 Apr, 2014

1 commit


12 Mar, 2014

1 commit


22 Jan, 2014

1 commit

  • Switch to memblock interfaces for the early memory allocator instead
    of the bootmem allocator. There is no functional change in behavior
    from the bootmem users' point of view.

    Archs already converted to NO_BOOTMEM now use memblock interfaces
    directly instead of bootmem wrappers built on top of memblock. For
    the archs that still use bootmem, these new APIs just fall back to
    the existing bootmem APIs.
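
    A minimal sketch of the conversion pattern (the memblock wrapper
    names follow the series; exact signatures are assumptions):

    /* before: bootmem */
    ptr = alloc_bootmem(size);
    free_bootmem(__pa(ptr), size);

    /* after: memblock-backed wrappers that fall back to bootmem
     * on not-yet-converted architectures */
    ptr = memblock_virt_alloc(size, SMP_CACHE_BYTES);
    memblock_free_early(__pa(ptr), size);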

    Acked-by: "Rafael J. Wysocki"
    Signed-off-by: Santosh Shilimkar
    Cc: Arnd Bergmann
    Cc: Christoph Lameter
    Cc: Greg Kroah-Hartman
    Cc: Grygorii Strashko
    Cc: H. Peter Anvin
    Cc: Johannes Weiner
    Cc: KAMEZAWA Hiroyuki
    Cc: Konrad Rzeszutek Wilk
    Cc: Michal Hocko
    Cc: Paul Walmsley
    Cc: Pavel Machek
    Cc: Russell King
    Cc: Tejun Heo
    Cc: Tony Lindgren
    Cc: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Santosh Shilimkar
     

19 Nov, 2013

1 commit