02 Jul, 2021

1 commit

  • Pull percpu updates from Dennis Zhou:

    - percpu chunk depopulation - depopulate the backing pages of chunks
    with empty pages when the global count of empty populated pages exceeds
    a threshold even without those pages. This lets us reclaim a portion of
    memory that was previously lost until the whole chunk was freed
    (possibly never).

    - memcg accounting cleanup - previously separate chunks were managed
    for normal allocations and __GFP_ACCOUNT allocations. These are now
    consolidated which cleans up the code quite a bit.

    - a few miscellaneous cleanups for clang warnings

    * 'for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu:
    percpu: optimize locking in pcpu_balance_workfn()
    percpu: initialize best_upa variable
    percpu: rework memcg accounting
    mm, memcg: introduce mem_cgroup_kmem_disabled()
    mm, memcg: mark cgroup_memory_nosocket, nokmem and noswap as __ro_after_init
    percpu: make symbol 'pcpu_free_slot' static
    percpu: implement partial chunk depopulation
    percpu: use pcpu_free_slot instead of pcpu_nr_slots - 1
    percpu: factor out pcpu_check_block_hint()
    percpu: split __pcpu_balance_workfn()
    percpu: fix a comment about the chunks ordering

    Linus Torvalds
     

06 Jun, 2021

1 commit

  • The current implementation of memcg accounting for percpu memory is
    based on the idea of having two separate sets of chunks for accounted
    and non-accounted memory. This approach has the advantage of not
    wasting any extra memory on memcg data for non-accounted chunks;
    however, it complicates the code and leads to a higher number of
    chunks due to lower chunk utilization.

    Instead of having two chunk types, it's possible to declare all* chunks
    memcg-aware unless kernel memory accounting is disabled globally by a
    boot option. The size of the objcg_array is usually small in comparison
    to the chunks themselves (it obviously depends on the number of CPUs),
    so even if some chunks end up with no accounted allocations, the memory
    waste isn't significant and will likely be compensated for by higher
    chunk utilization. Also, over time more and more percpu allocations
    will likely become accounted.

    * The first chunk is initialized before the memory cgroup subsystem,
    so we don't know for sure whether we need to allocate obj_cgroups.
    Because it's small, let's make it free for use. Then we don't need
    to allocate obj_cgroups for it.
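
    A minimal userspace sketch of the consolidated scheme (all names below
    are illustrative stand-ins, not the kernel's actual API): every chunk
    carries an obj_cgroups-style array, and the accounting step only runs
    when the caller passes __GFP_ACCOUNT, so a separate set of accounted
    chunks is no longer needed.
    --
    #include <stdbool.h>
    #include <stdlib.h>

    #define GFP_ACCOUNT (1u << 0)           /* stand-in for __GFP_ACCOUNT */

    struct chunk {
            void **obj_cgroups;             /* one slot per allocation unit */
            int nr_units;
    };

    /* Hypothetical helper: charge the owning cgroup, may fail. */
    static bool charge_cgroup(size_t size) { (void)size; return true; }

    static void *chunk_alloc(struct chunk *c, int unit, size_t size,
                             unsigned int gfp)
    {
            void *p = malloc(size);

            if (!p)
                    return NULL;
            /* Accounting is decided per allocation, not per chunk type. */
            if (gfp & GFP_ACCOUNT) {
                    if (!charge_cgroup(size)) {
                            free(p);
                            return NULL;
                    }
                    c->obj_cgroups[unit] = p;   /* record ownership */
            }
            return p;
    }
    --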

    Signed-off-by: Roman Gushchin
    Signed-off-by: Dennis Zhou

    Roman Gushchin
     

07 May, 2021

1 commit

  • Fix ~94 single-word typos in locking code comments, plus a few
    very obvious grammar mistakes.

    Link: https://lkml.kernel.org/r/20210322212624.GA1963421@gmail.com
    Link: https://lore.kernel.org/r/20210322205203.GB1959563@gmail.com
    Signed-off-by: Ingo Molnar
    Reviewed-by: Matthew Wilcox (Oracle)
    Reviewed-by: Randy Dunlap
    Cc: Bhaskar Chowdhury
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

22 Apr, 2021

1 commit

  • From Roman ("percpu: partial chunk depopulation"):
    In our [Facebook] production experience the percpu memory allocator is
    sometimes struggling with returning the memory to the system. A typical
    example is the creation of several thousand memory cgroups (each has
    several chunks of the percpu data used for vmstats, vmevents,
    ref counters etc). Deletion and complete releasing of these cgroups
    doesn't always lead to a shrinkage of the percpu memory, so that
    sometimes there are several GB's of memory wasted.

    The underlying problem is the fragmentation: to release an underlying
    chunk all percpu allocations should be released first. The percpu
    allocator tends to top up chunks to improve the utilization. It means
    new small-ish allocations (e.g. percpu ref counters) are placed onto
    almost filled old-ish chunks, effectively pinning them in memory.

    This patchset solves this problem by implementing a partial depopulation
    of percpu chunks: chunks with many empty pages are being asynchronously
    depopulated and the pages are returned to the system.

    To illustrate the problem the following script can be used:
    --

    cd /sys/fs/cgroup

    mkdir percpu_test
    echo "+memory" > percpu_test/cgroup.subtree_control

    cat /proc/meminfo | grep Percpu

    for i in `seq 1 1000`; do
        mkdir percpu_test/cg_"${i}"
        for j in `seq 1 10`; do
            mkdir percpu_test/cg_"${i}"_"${j}"
        done
    done

    cat /proc/meminfo | grep Percpu

    for i in `seq 1 1000`; do
        for j in `seq 1 10`; do
            rmdir percpu_test/cg_"${i}"_"${j}"
        done
    done

    sleep 10

    cat /proc/meminfo | grep Percpu

    for i in `seq 1 1000`; do
        rmdir percpu_test/cg_"${i}"
    done

    rmdir percpu_test
    --

    It creates 11000 memory cgroups and removes 10 out of every 11.
    It prints the initial size of the percpu memory, the size after
    creating all cgroups and the size after deleting most of them.

    Results:
    vanilla:
    ./percpu_test.sh
    Percpu: 7488 kB
    Percpu: 481152 kB
    Percpu: 481152 kB

    with this patchset applied:
    ./percpu_test.sh
    Percpu: 7488 kB
    Percpu: 481408 kB
    Percpu: 135552 kB

    The total size of the percpu memory was reduced by more than 3.5 times.

    This patch:

    This patch implements partial depopulation of percpu chunks.

    As of now, a chunk can be depopulated only as a part of its final
    destruction, once there are no more outstanding allocations. However,
    to minimize memory waste it might be useful to depopulate a partially
    filled chunk, if a small number of outstanding allocations prevents
    the chunk from being fully reclaimed.

    This patch implements the following depopulation process: it scans
    over the chunk's pages, looks for ranges of empty populated pages,
    and depopulates them. To avoid races with new allocations, the chunk
    is isolated beforehand. After the depopulation the chunk is either
    sidelined to a special list or freed. New allocations prefer active
    chunks over sidelined chunks; if a sidelined chunk is used, it is
    reintegrated into the active lists.

    The depopulation is scheduled on the free path if the chunk meets all
    of the following conditions (see the sketch below):
    1) it has more than 1/4 of its total pages free and populated
    2) the system has enough free percpu pages aside from this chunk
    3) it isn't the reserved chunk
    4) it isn't the first chunk
    If a chunk is already depopulated but gains free populated pages, it's
    a good target too. The chunk is moved to a special slot,
    pcpu_to_depopulate_slot, chunk->isolated is set, and the balance work
    item is scheduled. On isolation, the chunk's empty pages are subtracted
    from pcpu_nr_empty_pop_pages. The chunk is moved back to the
    to_depopulate_slot whenever it meets these qualifications.
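
    A minimal sketch of that scheduling check under the assumptions above
    (the field and helper names are illustrative, not the kernel's exact
    code):
    --
    #include <stdbool.h>

    struct chunk {
            int nr_pages;            /* pages in the served region */
            int nr_empty_pop_pages;  /* empty and populated pages */
            bool is_reserved;
            bool is_first;
            bool depopulated;
    };

    /* Hypothetical globals mirroring the description above. */
    static int nr_empty_pop_pages_total;
    static const int nr_empty_pop_pages_low = 4;

    static bool should_schedule_depopulation(const struct chunk *c)
    {
            if (c->is_reserved || c->is_first)
                    return false;
            /* Already-depopulated chunks with free populated pages qualify. */
            if (c->depopulated && c->nr_empty_pop_pages)
                    return true;
            /* More than 1/4 of the chunk is free and populated ... */
            if (c->nr_empty_pop_pages <= c->nr_pages / 4)
                    return false;
            /* ... and the system keeps enough free pages without this chunk. */
            return nr_empty_pop_pages_total - c->nr_empty_pop_pages >
                   nr_empty_pop_pages_low;
    }
    --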

    pcpu_reclaim_populated() iterates over the to_depopulate_slot until it
    becomes empty. The depopulation is performed in the reverse direction to
    keep populated pages close to the beginning. Depopulated chunks are
    sidelined so that new allocations preferentially avoid them. When no
    active chunk can satisfy a new allocation, sidelined chunks are checked
    first before creating a new chunk.
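
    And an illustrative sketch of the reclaim pass itself (simplified; the
    names and the page-state model are assumptions): it drains an isolated
    list and releases runs of empty populated pages from the end of each
    chunk toward the beginning.
    --
    #include <stdbool.h>
    #include <stddef.h>

    #define NR_PAGES 8

    struct chunk {
            struct chunk *next;            /* linkage in to_depopulate list */
            bool populated[NR_PAGES];      /* page is backed by memory */
            bool empty[NR_PAGES];          /* page holds no allocations */
    };

    /* Hypothetical page release; the kernel would unmap and free the page. */
    static void depopulate_page(struct chunk *c, int page)
    {
            c->populated[page] = false;
    }

    static void reclaim_populated(struct chunk *to_depopulate)
    {
            for (struct chunk *c = to_depopulate; c; c = c->next) {
                    /*
                     * Walk backwards so populated pages stay clustered at
                     * the beginning of the chunk.
                     */
                    for (int page = NR_PAGES - 1; page >= 0; page--) {
                            if (c->populated[page] && c->empty[page])
                                    depopulate_page(c, page);
                    }
            }
    }
    --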

    Signed-off-by: Roman Gushchin
    Co-developed-by: Dennis Zhou
    Signed-off-by: Dennis Zhou
    Tested-by: Pratik Sampat
    Signed-off-by: Dennis Zhou

    Roman Gushchin
     

09 Apr, 2021

1 commit

  • nr_empty_pop_pages is used to guarantee that there are some free
    populated pages to satisfy atomic allocations. Accounted and
    non-accounted allocations use separate sets of chunks, so both need to
    maintain a surplus of empty pages.

    This commit makes pcpu_nr_empty_pop_pages and the corresponding logic
    per chunk type.
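
    A minimal sketch of the per-chunk-type bookkeeping (the type names here
    are simplified stand-ins for the kernel's chunk-type enum):
    --
    /* Simplified model: one empty-page counter per chunk type. */
    enum chunk_type { CHUNK_ROOT, CHUNK_MEMCG, NR_CHUNK_TYPES };

    static int nr_empty_pop_pages[NR_CHUNK_TYPES];

    /* Each type must keep its own surplus for atomic allocations. */
    static int empty_pages_for(enum chunk_type type)
    {
            return nr_empty_pop_pages[type];
    }
    --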

    [Dennis]
    This issue came up as I was reviewing [1] and realized I missed this.
    Simultaneously, it was reported btrfs was seeing failed atomic
    allocations in fsstress tests [2] and [3].

    [1] https://lore.kernel.org/linux-mm/20210324190626.564297-1-guro@fb.com/
    [2] https://lore.kernel.org/linux-mm/20210401185158.3275.409509F4@e16-tech.com/
    [3] https://lore.kernel.org/linux-mm/CAL3q7H5RNBjCi708GH7jnczAOe0BLnacT9C+OBgA-Dx9jhB6SQ@mail.gmail.com/

    Fixes: 3c7be18ac9a0 ("mm: memcg/percpu: account percpu memory to memory cgroups")
    Cc: stable@vger.kernel.org # 5.9+
    Signed-off-by: Roman Gushchin
    Tested-by: Filipe Manana
    Signed-off-by: Dennis Zhou

    Roman Gushchin
     

13 Aug, 2020

1 commit

  • Percpu memory is becoming more and more widely used by various subsystems,
    and the total amount of memory controlled by the percpu allocator can
    make up a significant part of the total memory.

    As an example, bpf maps can consume a lot of percpu memory, and they are
    created by a user. Also, some cgroup internals (e.g. memory controller
    statistics) can be quite large. On a machine with many CPUs and a large
    number of cgroups, they can consume hundreds of megabytes.

    So the lack of memcg accounting is creating a breach in the memory
    isolation. Similar to the slab memory, percpu memory should be accounted
    by default.

    To implement the percpu accounting it's possible to take the slab memory
    accounting as a model to follow. Let's introduce two types of percpu
    chunks: root and memcg. What makes memcg chunks different is the
    additional space allocated to store memcg membership information. If
    __GFP_ACCOUNT is passed on allocation, a memcg chunk should be used.
    If it's possible to charge the corresponding size to the target memory
    cgroup, the allocation is performed and the memcg ownership data is
    recorded. System-wide allocations are performed using root chunks, so
    there is no additional memory overhead.

    To implement a fast reparenting of percpu memory on memcg removal, we
    don't store mem_cgroup pointers directly: instead we use obj_cgroup API,
    introduced for slab accounting.
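
    A rough sketch of the two-chunk-type dispatch described above (names
    are illustrative; the real allocator keys its chunk lists off the gfp
    flags in a similar spirit):
    --
    #include <stdbool.h>
    #include <stddef.h>

    #define GFP_ACCOUNT (1u << 0)            /* stand-in for __GFP_ACCOUNT */

    struct chunk_list { int nr_chunks; };

    static struct chunk_list root_chunks;    /* no memcg metadata */
    static struct chunk_list memcg_chunks;   /* carries obj_cgroup slots */

    /* Hypothetical charge helper; may fail under memory pressure. */
    static bool objcg_charge(size_t size) { (void)size; return true; }

    static struct chunk_list *pick_chunk_list(size_t size, unsigned int gfp)
    {
            /* System-wide allocations go to root chunks: no extra overhead. */
            if (!(gfp & GFP_ACCOUNT))
                    return &root_chunks;
            /* Charge the target memory cgroup first; fail the allocation
             * if the charge cannot be made. */
            if (!objcg_charge(size))
                    return NULL;
            /* Ownership data would be recorded in the memcg chunk's slots. */
            return &memcg_chunks;
    }
    --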

    [akpm@linux-foundation.org: fix CONFIG_MEMCG_KMEM=n build errors and warning]
    [akpm@linux-foundation.org: move unreachable code, per Roman]
    [cuibixuan@huawei.com: mm/percpu: fix 'defined but not used' warning]
    Link: http://lkml.kernel.org/r/6d41b939-a741-b521-a7a2-e7296ec16219@huawei.com

    Signed-off-by: Roman Gushchin
    Signed-off-by: Bixuan Cui
    Signed-off-by: Andrew Morton
    Reviewed-by: Shakeel Butt
    Acked-by: Dennis Zhou
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Johannes Weiner
    Cc: Joonsoo Kim
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Tejun Heo
    Cc: Tobin C. Harding
    Cc: Vlastimil Babka
    Cc: Waiman Long
    Cc: Bixuan Cui
    Cc: Michal Koutný
    Cc: Stephen Rothwell
    Link: http://lkml.kernel.org/r/20200623184515.4132564-3-guro@fb.com
    Signed-off-by: Linus Torvalds

    Roman Gushchin
     

14 Mar, 2019

3 commits

  • As mentioned in the last patch, a chunk's hints are no different from a
    block's, just responsible for more bits. This converts chunk-level hints to
    use a pcpu_block_md to maintain them. This lets us reuse the same hint
    helper functions as a block. The left_free and right_free are unused by
    the chunk's pcpu_block_md.
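
    For illustration, a simplified view of a shared hint record usable for
    either a block or a whole chunk (field names loosely modeled on
    pcpu_block_md; the exact layout is an assumption):
    --
    /* One metadata record, usable for a block or for a whole chunk. */
    struct block_md {
            int contig_hint;        /* size of the largest free area */
            int contig_hint_start;  /* start bit of that area */
            int scan_hint;          /* largest area before the contig hint */
            int scan_hint_start;
            int first_free;         /* first free bit in the region */
            int left_free;          /* unused when describing a chunk */
            int right_free;         /* unused when describing a chunk */
            int nr_bits;            /* block size or chunk size in bits */
    };
    --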

    Signed-off-by: Dennis Zhou
    Reviewed-by: Peng Fan

    Dennis Zhou
     
  • In reality, a chunk is just a block covering a larger number of bits.
    The hints themselves are one and the same. Rather than maintaining the
    hints separately, first introduce nr_bits to genericize
    pcpu_block_update() to correctly maintain block->right_free. The next
    patch will convert chunk hints to be managed as a pcpu_block_md.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Peng Fan

    Dennis Zhou
     
  • Fragmentation can cause both blocks and chunks to have an early
    first_free bit available, but only be able to satisfy allocations much
    later on. This patch introduces a scan_hint to help mitigate some
    unnecessary scanning.

    The scan_hint remembers the largest area prior to the contig_hint. If
    the contig_hint == scan_hint, then scan_hint_start > contig_hint_start.
    This is necessary for scan_hint discovery when refreshing a block.
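
    A hedged sketch of how such a hint can cut down scanning (simplified;
    the kernel's actual fit logic differs in detail):
    --
    /* Illustrative hint state for one block. */
    struct hints {
            int scan_hint;          /* largest free run before the contig hint */
            int scan_hint_start;
            int contig_hint;        /* overall largest known free run */
            int contig_hint_start;
    };

    /* Pick where a search for alloc_bits free bits should begin. */
    static int scan_start(const struct hints *h, int alloc_bits)
    {
            /* Small enough to possibly fit before the contig hint. */
            if (alloc_bits <= h->scan_hint)
                    return h->scan_hint_start;
            /* Otherwise nothing earlier can fit; jump to the contig hint. */
            return h->contig_hint_start;
    }
    --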

    Signed-off-by: Dennis Zhou
    Reviewed-by: Peng Fan

    Dennis Zhou
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.
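
    For example, the single identifier line added at the top of a C source
    file under the kernel's default license looks like this:
    --
    // SPDX-License-Identifier: GPL-2.0
    --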

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information,

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file-by-file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if <2
    lines).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

27 Jul, 2017

4 commits

  • This patch adds chunk->contig_bits_start to keep track of the contig
    hint's offset and the check to skip the chunk if it does not fit. If
    the chunk's contig hint starting offset cannot satisfy an allocation,
    the allocator assumes there is enough memory pressure in this chunk to
    either use a different chunk or create a new one. This accepts less
    tight packing in exchange for a smoother latency curve.
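
    A minimal sketch of that skip check (the names approximate the chunk
    fields described above, and align is assumed to be a power of two):
    --
    #include <stdbool.h>

    struct chunk_md {
            int contig_bits;        /* size of the largest free area, in bits */
            int contig_bits_start;  /* offset of that area */
    };

    /*
     * Return true if the chunk's contig hint cannot possibly serve an
     * allocation of alloc_bits with the given alignment, so the allocator
     * should move on to another chunk instead of scanning this one.
     */
    static bool chunk_cannot_fit(const struct chunk_md *c, int alloc_bits,
                                 int align)
    {
            int start = (c->contig_bits_start + align - 1) & ~(align - 1);
            int waste = start - c->contig_bits_start;

            return alloc_bits + waste > c->contig_bits;
    }
    --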

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • This patch adds first_bit to keep track of the first free bit in the
    bitmap. This hint helps prevent scanning of fully allocated blocks.
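
    For illustration, a tiny sketch of how such a hint avoids rescanning the
    allocated prefix (hypothetical helper, not the kernel's code):
    --
    /* Find the next free bit, starting the search at the cached hint. */
    static int next_free_bit(const unsigned char *bitmap, int nr_bits,
                             int first_bit)
    {
            for (int bit = first_bit; bit < nr_bits; bit++)
                    if (!(bitmap[bit / 8] & (1u << (bit % 8))))
                            return bit;
            return nr_bits;    /* block is fully allocated */
    }
    --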

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • This patch introduces the bitmap metadata blocks and adds the skeleton
    of the code that will be used to maintain these blocks. Each chunk's
    bitmap is made up of full metadata blocks. These blocks maintain basic
    metadata to help prevent scanning unnecessarily to update hints. Full
    scanning methods are used for the skeleton and will be replaced in the
    coming patches. A number of helper functions are added as well to do
    conversion of pages to blocks and manage offsets. Comments will be
    updated as the final version of each function is added.

    There exists a relationship between PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE,
    the region size, and unit_size. Every chunk's region (including offsets)
    is page aligned at the beginning to preserve alignment. The end is
    aligned to LCM(PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE) to ensure that the end
    lines up with the populated page map, which is tracked per page, and
    that every metadata block is fully accounted for. The unit_size is
    already page aligned, but
    must also be aligned with PCPU_BITMAP_BLOCK_SIZE to ensure full metadata
    blocks.
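
    A small standalone sketch of the alignment relationship described above
    (the constants are arbitrary example values, not the kernel's):
    --
    #include <stdio.h>

    #define PAGE_SIZE               4096L
    #define PCPU_BITMAP_BLOCK_SIZE  1024L   /* example value, in bytes */

    static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }
    static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    int main(void)
    {
            long align = lcm(PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE);
            long region_end = 3 * PAGE_SIZE + 512;       /* arbitrary example */
            long aligned_end = (region_end + align - 1) / align * align;

            /* The region end is rounded up so that whole pages and whole
             * metadata blocks cover it with nothing left over. */
            printf("align=%ld aligned_end=%ld\n", align, aligned_end);
            return 0;
    }
    --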

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The percpu memory allocator is experiencing scalability issues when
    allocating and freeing large numbers of counters as in BPF.
    Additionally, there is a corner case where iteration is triggered over
    all chunks if the contig_hint is the right size, but wrong alignment.

    This patch replaces the area map allocator with a basic bitmap allocator
    implementation. Each subsequent patch will introduce new features and
    replace full scanning functions with faster non-scanning options when
    possible.

    Implementation:
    This patchset removes the area map allocator in favor of a bitmap
    allocator backed by metadata blocks. The primary goal is to provide
    consistency in performance and memory footprint with a focus on small
    allocations (< 64 bytes). The bitmap removes the heavy memmove from the
    freeing critical path and provides a consistent memory footprint. The
    metadata blocks provide a bound on the amount of scanning required by
    maintaining a set of hints.

    In an effort to make freeing fast, the metadata is updated on the free
    path if the new free area makes a page free, a block free, or spans
    across blocks. This causes the chunk's contig hint to potentially be
    smaller than what it could allocate by up to the smaller of a page or a
    block. If the chunk's contig hint is contained within a block, a check
    occurs and the hint is kept accurate. Metadata is always kept accurate
    on allocation, so there will not be a situation where a chunk has a
    larger contig hint than available.

    Evaluation:
    I have primarily done testing against a simple workload of allocation of
    1 million objects (2^20) of varying size. Deallocation was done in
    order, alternating, and in reverse. These numbers were collected after
    rebasing on top of a80099a152. I present the worst-case numbers here:

    Area Map Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 310 | 4770
    16B | 557 | 1325
    64B | 436 | 273
    256B | 776 | 131
    1024B | 3280 | 122

    Bitmap Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 490 | 70
    16B | 515 | 75
    64B | 610 | 80
    256B | 950 | 100
    1024B | 3520 | 200

    This data demonstrates the inability of the area map allocator to
    handle less-than-ideal situations. In the best case of reverse
    deallocation, the area map allocator was able to perform within range
    of the bitmap allocator. In the worst case situation, freeing took
    nearly 5 seconds for 1 million 4-byte objects. The bitmap allocator
    dramatically improves the consistency of the free path. The small
    allocations performed nearly identical regardless of the freeing
    pattern.

    While it does add to the allocation latency, the allocation scenario
    here is optimal for the area map allocator. The area map allocator runs
    into trouble when it is allocating in chunks where the latter half is
    full. It is difficult to replicate this, so I present a variant where
    the pages are second half filled. Freeing was done sequentially. Below
    are the numbers for this scenario:

    Area Map Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 4118 | 4892
    16B | 1651 | 1163
    64B | 598 | 285
    256B | 771 | 158
    1024B | 3034 | 160

    Bitmap Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 481 | 67
    16B | 506 | 69
    64B | 636 | 75
    256B | 892 | 90
    1024B | 3262 | 147

    The data shows a parabolic performance curve for the area map
    allocator. This is because at lower object sizes the memmove operation is
    the dominant cost, as more objects are packed into a chunk, while at
    higher object sizes the traversal of the chunk slots becomes the dominant
    cost. The bitmap allocator suffers this problem as well. The above data
    shows the inability to scale for the allocation path with the area map
    allocator and that the bitmap allocator demonstrates consistent
    performance in general.

    The second problem of additional scanning can result in the area map
    allocator completing in 52 minutes when trying to allocate 1 million
    4-byte objects with 8-byte alignment. The same workload takes
    approximately 16 seconds to complete for the bitmap allocator.

    V2:
    Fixed a bug in pcpu_alloc_first_chunk where end_offset was setting the
    bitmap using bytes instead of bits.

    Added a comment to pcpu_cnt_pop_pages to explain bitmap_weight.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     

26 Jul, 2017

5 commits

  • pcpu_nr_empty_pop_pages is used to ensure there are a handful of free
    pages around to serve atomic allocations. A new field, nr_empty_pop_pages,
    is added to the pcpu_chunk struct to keep track of the number of empty
    pages. This field is needed because the number of empty populated pages
    is tracked globally and the bitmap allocator updates it using deltas.
    Pages that contain a hidden area are not considered to be empty. This
    new field is exposed in percpu_stats.
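
    An illustrative sketch of the delta bookkeeping (simplified, with
    assumed names):
    --
    struct chunk {
            int nr_empty_pop_pages;   /* empty populated pages in this chunk */
    };

    static int pcpu_nr_empty_pop_pages;   /* global count, modeled here */

    /* Apply a per-chunk change and propagate the same delta globally. */
    static void account_empty_pop_pages(struct chunk *c, int delta)
    {
            c->nr_empty_pop_pages += delta;
            pcpu_nr_empty_pop_pages += delta;
    }
    --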

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • Originally, the first chunk was served by one or two chunks, each
    given a region it is responsible for. Despite this, the arithmetic was
    based on the true base_addr of the chunk, making it overly inclusive.

    This patch moves the base_addr of the chunks that are responsible for
    the first chunk. The base_addr must remain page aligned to keep the
    address alignment correct, so it is the beginning of the served region
    aligned down to a page boundary. start_offset holds where the served
    region begins relative to this new base_addr.

    The corresponding percpu address checks are modified to be more specific
    as a result. The first chunk considers only the dynamic region and both
    first chunk and reserved chunk checks ignore the static region. The
    static region addresses should never be passed into the allocator. There
    is no impact here besides distinguishing the first chunk and making the
    checks specific.

    The percpu pointer to physical address is left intact as addresses are
    not given out in the non-allocated portion of percpu memory.

    nr_pages is added to pcpu_chunk to keep track of the size of the entire
    region served containing both start_offset and end_offset. This variable
    will be used to manage the bitmap allocator.
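
    A small sketch of the layout bookkeeping described above (field names
    mirror the description; the arithmetic is a simplified assumption):
    --
    #include <stdint.h>

    #define PAGE_SIZE ((uintptr_t)4096)

    struct chunk {
            uintptr_t base_addr;        /* page-aligned start of the region */
            unsigned long start_offset; /* where the served region begins */
            unsigned long end_offset;   /* padding to the aligned end */
            unsigned long nr_pages;     /* pages spanning the whole region */
    };

    static void chunk_init(struct chunk *c, uintptr_t served_start,
                           unsigned long served_size)
    {
            c->base_addr = served_start & ~(PAGE_SIZE - 1);  /* align down */
            c->start_offset = served_start - c->base_addr;
            c->end_offset = 0;      /* ignore end alignment in this sketch */
            c->nr_pages = (c->start_offset + served_size + c->end_offset +
                           PAGE_SIZE - 1) / PAGE_SIZE;
    }
    --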

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The area map allocator manages the first chunk area by hiding all but
    the region it is responsible for serving in the area map. To align this
    with the populated page bitmap, end_offset is introduced to keep track
    of the delta to the page-aligned end. The area map is appended with the
    page-aligned end when necessary, in line with how the bitmap allocator
    requires the ending to be aligned to the LCM of PAGE_SIZE and the size
    of each bitmap block. percpu_stats is updated to ignore
    this region when present.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • Previously, this variable was used to manage statistics when the first
    chunk had a reserved region. The previous patch introduced start_offset
    to keep track of the offset by value rather than as a boolean.
    Therefore, has_reserved can be removed.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The reserved chunk arithmetic uses a global variable
    pcpu_reserved_chunk_limit that is set in the first chunk init code to
    hide a portion of the area map. The bitmap allocator to come will
    eventually move the base_addr up and require both the reserved chunk
    and static chunk to maintain this offset. pcpu_reserved_chunk_limit is
    removed and start_offset is added.

    The first chunk that is circulated, pcpu_first_chunk, serves the
    dynamic region, the region following the reserved region. The reserved
    chunk address check will temporarily use the first chunk to identify its
    address range. A following patch will increase the base_addr and remove
    this. If there is no reserved chunk, this will check the static region
    and return false because those values should never be passed into the
    allocator.

    Lastly, when linking in the first chunk, make sure to count the right
    free region for the number of empty populated pages.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     

22 Jun, 2017

1 commit

  • Commit 30a5b5367ef9 ("percpu: expose statistics about percpu memory via
    debugfs") introduces percpu memory statistics. pcpu_stats_chunk_alloc
    takes the spin lock and disables/enables irqs on creation of a chunk. Irqs
    are not enabled when the first chunk is initialized, and thus kernels
    fail to boot with kernel debugging enabled. Fixed by changing _irq to
    _irqsave and _irqrestore.
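
    For illustration, the change of locking flavor looks roughly like this
    (a standalone sketch using a stand-in lock, not the actual patch):
    --
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);   /* stand-in for pcpu_lock */

    /* Before: _irq unconditionally re-enables irqs on unlock, which is
     * wrong while the first chunk is set up with irqs still disabled. */
    static void stats_update_irq(void)
    {
            spin_lock_irq(&example_lock);
            /* ... update statistics ... */
            spin_unlock_irq(&example_lock);
    }

    /* After: _irqsave/_irqrestore preserve the caller's irq state. */
    static void stats_update_irqsave(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&example_lock, flags);
            /* ... update statistics ... */
            spin_unlock_irqrestore(&example_lock, flags);
    }
    --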

    Fixes: 30a5b5367ef9 ("percpu: expose statistics about percpu memory via debugfs")
    Signed-off-by: Dennis Zhou
    Reported-by: Alexander Levin
    Signed-off-by: Tejun Heo

    Dennis Zhou
     

21 Jun, 2017

2 commits

  • There is limited visibility into the use of percpu memory, leaving us
    unable to reason about the correctness of parameters and the overall use
    of percpu memory. These counters and statistics aim to help us understand
    basic facts about percpu memory usage, such as the number of allocations
    over the lifetime, allocation sizes, and fragmentation.

    New Config: PERCPU_STATS

    Signed-off-by: Dennis Zhou
    Signed-off-by: Tejun Heo

    Dennis Zhou
     
  • Migrates the pcpu_chunk definition and a few percpu static variables
    from mm/percpu.c to an internal header file. These will be used with
    debugfs to expose statistics about percpu memory, improving visibility
    into
    allocations and fragmentation.

    Signed-off-by: Dennis Zhou
    Signed-off-by: Tejun Heo

    Dennis Zhou