08 Oct, 2018

1 commit

  • The commit ca460b3c9627 ("percpu: introduce bitmap metadata blocks")
    introduced bitmap metadata blocks. These metadata blocks are allocated
    whenever a new chunk is created, but they are never freed. Fix it.

    Fixes: ca460b3c9627 ("percpu: introduce bitmap metadata blocks")
    Signed-off-by: Mike Rapoport
    Cc: stable@vger.kernel.org
    Signed-off-by: Dennis Zhou

    Mike Rapoport
     

23 Aug, 2018

1 commit

  • Currently, percpu memory only exposes allocation and utilization
    information via debugfs. This is more or less only useful for
    understanding the fragmentation and allocation information at a
    per-chunk level with a few global counters. This is also gated
    behind a config option.
    BPF and cgroup, for example, have seen an increase in use causing
    increased use of percpu memory. Let's make it easier for someone to
    identify how much memory is being used.

    This patch adds the "Percpu" stat to meminfo to more easily look up how
    much percpu memory is in use. This number includes the cost of all
    allocated backing pages, not just a view at the per-unit, per-chunk
    level. Metadata is excluded. I think excluding metadata is fair because
    the backing memory scales with the number of cpus and can quickly
    outweigh the metadata. It also keeps this calculation light.

    Link: http://lkml.kernel.org/r/20180807184723.74919-1-dennisszhou@gmail.com
    Signed-off-by: Dennis Zhou
    Acked-by: Tejun Heo
    Acked-by: Roman Gushchin
    Reviewed-by: Andrew Morton
    Acked-by: David Rientjes
    Acked-by: Vlastimil Babka
    Cc: Johannes Weiner
    Cc: Christoph Lameter
    Cc: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dennis Zhou (Facebook)
     

03 Apr, 2018

1 commit

  • Pull removal of obsolete architecture ports from Arnd Bergmann:
    "This removes the entire architecture code for blackfin, cris, frv,
    m32r, metag, mn10300, score, and tile, including the associated device
    drivers.

    I have been working with the (former) maintainers for each one to
    ensure that my interpretation was right and the code is definitely
    unused in mainline kernels. Many had fond memories of working on the
    respective ports to start with and getting them included in upstream,
    but also saw no point in keeping the port alive without any users.

    In the end, it seems that while the eight architectures are extremely
    different, they all suffered the same fate: There was one company in
    charge of an SoC line, a CPU microarchitecture and a software
    ecosystem, which was more costly than licensing newer off-the-shelf
    CPU cores from a third party (typically ARM, MIPS, or RISC-V). It
    seems that all the SoC product lines are still around, but have not
    used the custom CPU architectures for several years at this point. In
    contrast, CPU instruction sets that remain popular and have actively
    maintained kernel ports tend to all be used across multiple licensees.

    [ See the new nds32 port merged in the previous commit for the next
    generation of "one company in charge of an SoC line, a CPU
    microarchitecture and a software ecosystem" - Linus ]

    The removal came out of a discussion that is now documented at
    https://lwn.net/Articles/748074/. Unlike the original plans, I'm not
    marking any ports as deprecated but removing them all at once after I
    made sure that they are all unused. Some architectures (notably tile,
    mn10300, and blackfin) are still being shipped in products with old
    kernels, but those products will never be updated to newer kernel
    releases.

    After this series, we still have a few architectures without mainline
    gcc support:

    - unicore32 and hexagon both have very outdated gcc releases, but the
    maintainers promised to work on providing something newer. At least
    in case of hexagon, this will only be llvm, not gcc.

    - openrisc, risc-v and nds32 are still in the process of finishing
    their support or getting it added to mainline gcc in the first
    place. They all have patched gcc-7.3 ports that work to some
    degree, but complete upstream support won't happen before gcc-8.1.
    Csky posted their first kernel patch set last week; their situation
    will be similar.

    [ Palmer Dabbelt points out that RISC-V support is in mainline gcc
    since gcc-7, although gcc-7.3.0 is the recommended minimum - Linus ]"

    This really says it all:

    2498 files changed, 95 insertions(+), 467668 deletions(-)

    * tag 'arch-removal' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic: (74 commits)
    MAINTAINERS: UNICORE32: Change email account
    staging: iio: remove iio-trig-bfin-timer driver
    tty: hvc: remove tile driver
    tty: remove bfin_jtag_comm and hvc_bfin_jtag drivers
    serial: remove tile uart driver
    serial: remove m32r_sio driver
    serial: remove blackfin drivers
    serial: remove cris/etrax uart drivers
    usb: Remove Blackfin references in USB support
    usb: isp1362: remove blackfin arch glue
    usb: musb: remove blackfin port
    usb: host: remove tilegx platform glue
    pwm: remove pwm-bfin driver
    i2c: remove bfin-twi driver
    spi: remove blackfin related host drivers
    watchdog: remove bfin_wdt driver
    can: remove bfin_can driver
    mmc: remove bfin_sdh driver
    input: misc: remove blackfin rotary driver
    input: keyboard: remove bf54x driver
    ...

    Linus Torvalds
     

20 Mar, 2018

2 commits

  • In case of memory deficit and low percpu memory pages,
    pcpu_balance_workfn() takes pcpu_alloc_mutex for a long
    time (as it makes memory allocations itself and waits
    for memory reclaim). If tasks doing pcpu_alloc() are
    chosen by the OOM killer, they can't exit because they
    are waiting for the mutex.

    The patch makes pcpu_alloc() care about fatal signals and
    use mutex_lock_killable() when the GFP flags allow it.
    This guarantees a task does not miss SIGKILL from the
    OOM killer.

    Signed-off-by: Kirill Tkhai
    Signed-off-by: Tejun Heo

    Kirill Tkhai
     
  • The microblaze build broke due to a missing declaration for the
    recently added cond_resched() invocation. Let's include
    linux/sched.h explicitly.

    Signed-off-by: Tejun Heo
    Reported-by: kbuild test robot

    Tejun Heo
     

24 Feb, 2018

1 commit

  • When a large BPF percpu map is destroyed, I have seen
    pcpu_balance_workfn() holding the CPU for hundreds of milliseconds.

    On KASAN config and 112 hyperthreads, average time to destroy a chunk
    is ~4 ms.

    [ 2489.841376] destroy chunk 1 in 4148689 ns
    ...
    [ 2490.093428] destroy chunk 32 in 4072718 ns

    Signed-off-by: Eric Dumazet
    Signed-off-by: Tejun Heo

    Eric Dumazet
     

18 Feb, 2018

3 commits

  • The prior patch added support for passing gfp flags through to the
    underlying allocators. This patch allows users to pass along gfp flags
    (currently only __GFP_NORETRY and __GFP_NOWARN) to the underlying
    allocators. This should allow users to decide if they are ok with
    failing allocations recovering in a more graceful way.

    Additionally, gfp passing was done as additional flags in the previous
    patch. Instead, change this to caller passed semantics. GFP_KERNEL is
    also removed as the default flag. It continues to be used for internally
    caused underlying percpu allocations.

    V2:
    Removed gfp_percpu_mask in favor of doing it inline.
    Removed GFP_KERNEL as a default flag for __alloc_percpu_gfp.

    Signed-off-by: Dennis Zhou
    Suggested-by: Daniel Borkmann
    Acked-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Dennis Zhou
     
  • Percpu memory using the vmalloc area based chunk allocator lazily
    populates chunks by first requesting the full virtual address space
    required for the chunk and subsequently adding pages as allocations come
    through. To ensure atomic allocations can succeed, a workqueue item is
    used to maintain a minimum number of empty pages. In certain scenarios,
    such as reported in [1], it is possible that physical memory becomes
    quite scarce which can result in either a rather long time spent trying
    to find free pages or worse, a kernel panic.

    This patch adds support for __GFP_NORETRY and __GFP_NOWARN passing them
    through to the underlying allocators. This should prevent any
    unnecessary panics potentially caused by the workqueue item. The passing
    of gfp around is as additional flags rather than a full set of flags.
    The next patch will change these to caller passed semantics.

    V2:
    Added const modifier to gfp flags in the balance path.
    Removed an extra whitespace.

    [1] https://lkml.org/lkml/2018/2/12/551

    Signed-off-by: Dennis Zhou
    Suggested-by: Daniel Borkmann
    Reported-by: syzbot+adb03f3f0bb57ce3acda@syzkaller.appspotmail.com
    Acked-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Dennis Zhou
     
  • At some point the function declaration parameters got out of sync with
    the function definitions in percpu-vm.c and percpu-km.c. This patch
    makes them match again.

    Signed-off-by: Dennis Zhou
    Acked-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Dennis Zhou
     

28 Nov, 2017

1 commit

  • Commit 438a506180 ("percpu: don't forget to free the temporary struct
    pcpu_alloc_info") uncovered a problem on the CRIS architecture where
    the bootmem allocator is initialized with virtual addresses. Given it
    has:

    #define __va(x) ((void *)((unsigned long)(x) | 0x80000000))

    then things just work out because the end result is the same whether you
    give this a physical or a virtual address.

    Until you call memblock_free_early(__pa(address)) that is, because
    values from __pa() don't match with the virtual addresses stuffed in the
    bootmem allocator anymore.

    Avoid freeing the temporary pcpu_alloc_info memory on that architecture
    until they fix things up to let the kernel boot like it did before.

    Signed-off-by: Nicolas Pitre
    Signed-off-by: Tejun Heo
    Fixes: 438a506180 ("percpu: don't forget to free the temporary struct pcpu_alloc_info")

    Nicolas Pitre
     

19 Oct, 2017

1 commit

  • Add an option for pcpu_alloc() to support the __GFP_NOWARN flag.
    Currently, we always throw a warning when size or alignment
    is unsupported (and also dump stack on failed allocation
    requests). The warning itself is harmless since we return
    NULL anyway for any failed request, which callers are
    required to handle. However, it becomes harmful when
    panic_on_warn is set.

    The rationale for the WARN() in pcpu_alloc() is that it can
    be tracked when larger than supported allocation requests are
    made such that allocation limits can be tweaked if warranted.
    This makes sense for in-kernel users, however, there are users
    of pcpu allocator where allocation size is derived from user
    space requests, e.g. when creating BPF maps. In these cases,
    the requests should fail gracefully without throwing a splat.

    The current work-around was to check allocation size against
    the upper limit of PCPU_MIN_UNIT_SIZE from call-sites for
    bailing out prior to a call to pcpu_alloc() in order to
    avoid throwing the WARN(). This is bad in multiple ways since
    PCPU_MIN_UNIT_SIZE is an implementation detail, and having
    the checks on call-sites only complicates the code for no
    good reason. Thus, let's fix it generically by supporting the
    __GFP_NOWARN flag, which users can then pass via the
    __alloc_percpu_gfp() helper instead.

    Signed-off-by: Daniel Borkmann
    Cc: Tejun Heo
    Cc: Mark Rutland
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

28 Sep, 2017

1 commit

  • The iterator functions pcpu_next_md_free_region and
    pcpu_next_fit_region use the block offset to determine if they have
    checked the area in the prior iteration. However, this causes an issue
    when the block offset is greater than subsequent block contig hints. If
    within the iterator it moves to check subsequent blocks, it may fail in
    the second predicate due to the block offset not being cleared. Thus,
    this causes the allocator to skip over blocks leading to false failures
    when allocating from the reserved chunk. While this happens in the
    general case as well, it will only fail if it cannot allocate a new
    chunk.

    This patch resets the block offset to 0 to pass the second predicate
    when checking subsequent blocks within the iterator functions.

    Signed-off-by: Dennis Zhou
    Reported-and-tested-by: Luis Henriques
    Signed-off-by: Tejun Heo

    Dennis Zhou
     

27 Jul, 2017

10 commits

  • The other patches contain a lot of information, so adding this
    information in a separate patch. It adds my copyright and a brief
    explanation of how the bitmap allocator works. There is a minor typo as
    well in the prior explanation so that is fixed.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The simple, and expensive, way to find a free area is to iterate over
    the entire bitmap until an area is found that fits the allocation size
    and alignment. This patch makes use of an iterator that finds an area
    to check by using the block level contig hints. It will only return an area
    that can fit the size and alignment request. If the request can fit
    inside a block, it returns the first_free bit to start checking from to
    see if it can be fulfilled prior to the contig hint. The pcpu_alloc_area
    check has a bound of a block size added in case it is wrong.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The largest free region will either be a block level contig hint or an
    aggregate over the left_free and right_free areas of blocks. This is a
    much smaller set of free areas that need to be checked than a full
    traverse.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The bitmap allocator must keep metadata consistent. The easiest way is
    to scan after every allocation for each affected block and the entire
    chunk. This is rather expensive.

    The free path can take advantage of current contig hints to prevent
    scanning within the start and end block. If a scan is needed, it can
    be done by scanning backwards from the start and forwards from the end
    to identify the entire free area this can be combined with. The blocks
    can then be updated by some basic checks rather than complete block
    scans.

    A chunk scan happens when the freed area makes a page free, a block
    free, or spans across blocks. This is necessary as the contig hint at
    this point could span across blocks. The check uses the minimum of page
    size and the block size to allow for variable sized blocks. There is a
    tradeoff here with not updating after every free. It is possible a
    contig hint in one block can be merged with the contig hint in the next
    block. This means the contig hint can be off by up to a page. However,
    if the chunk's contig hint is contained in one block, the contig hint
    will be accurate.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • Metadata is kept per block to keep track of where the contig hints are.
    Scanning can be avoided when the contig hints are not broken. In that
    case, left and right contigs have to be managed manually.

    This patch changes the allocation path hint updating to only scan when
    contig hints are broken.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • This patch makes the contig hint starting offset optimization from the
    previous patch as honest as it can be. For both chunk and block starting
    offsets, make sure it keeps the starting offset with the best alignment.

    The block skip optimization is added in a later patch when the
    pcpu_find_block_fit iterator is swapped in.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • This patch adds chunk->contig_bits_start to keep track of the contig
    hint's offset and the check to skip the chunk if it does not fit. If
    the chunk's contig hint starting offset cannot satisfy an allocation,
    the allocator assumes there is enough memory pressure in this chunk to
    either use a different chunk or create a new one. This accepts a less
    tight packing for a smoother latency curve.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • This patch adds first_bit to keep track of the first free bit in the
    bitmap. This hint helps prevent scanning of fully allocated blocks.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • This patch introduces the bitmap metadata blocks and adds the skeleton
    of the code that will be used to maintain these blocks. Each chunk's
    bitmap is made up of full metadata blocks. These blocks maintain basic
    metadata to help prevent scanning unnecessarily to update hints. Full
    scanning methods are used for the skeleton and will be replaced in the
    coming patches. A number of helper functions are added as well to do
    conversion of pages to blocks and manage offsets. Comments will be
    updated as the final version of each function is added.

    There exists a relationship between PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE,
    the region size, and unit_size. Every chunk's region (including offsets)
    is page aligned at the beginning to preserve alignment. The end is
    aligned to LCM(PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE) to ensure that the end
    can fit with the populated page map which is by page and every metadata
    block is fully accounted for. The unit_size is already page aligned, but
    must also be aligned with PCPU_BITMAP_BLOCK_SIZE to ensure full metadata
    blocks.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The percpu memory allocator is experiencing scalability issues when
    allocating and freeing large numbers of counters as in BPF.
    Additionally, there is a corner case where iteration is triggered over
    all chunks if the contig_hint is the right size, but wrong alignment.

    This patch replaces the area map allocator with a basic bitmap allocator
    implementation. Each subsequent patch will introduce new features and
    replace full scanning functions with faster non-scanning options when
    possible.

    Implementation:
    This patchset removes the area map allocator in favor of a bitmap
    allocator backed by metadata blocks. The primary goal is to provide
    consistency in performance and memory footprint with a focus on small
    allocations (< 64 bytes). The bitmap removes the heavy memmove from the
    freeing critical path and provides a consistent memory footprint. The
    metadata blocks provide a bound on the amount of scanning required by
    maintaining a set of hints.

    In an effort to make freeing fast, the metadata is updated on the free
    path if the new free area makes a page free, a block free, or spans
    across blocks. This causes the chunk's contig hint to potentially be
    smaller than what it could allocate by up to the smaller of a page or a
    block. If the chunk's contig hint is contained within a block, a check
    occurs and the hint is kept accurate. Metadata is always kept accurate
    on allocation, so there will not be a situation where a chunk has a
    later contig hint than available.

    Evaluation:
    I have primarily done testing against a simple workload of allocation of
    1 million objects (2^20) of varying size. Deallocation was done in
    order, alternating, and in reverse. These numbers were collected after
    rebasing on top of a80099a152. I present the worst-case numbers here:

    Area Map Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 310 | 4770
    16B | 557 | 1325
    64B | 436 | 273
    256B | 776 | 131
    1024B | 3280 | 122

    Bitmap Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 490 | 70
    16B | 515 | 75
    64B | 610 | 80
    256B | 950 | 100
    1024B | 3520 | 200

    This data demonstrates the inability of the area map allocator to
    handle less than ideal situations. In the best case of reverse
    deallocation, the area map allocator was able to perform within range
    of the bitmap allocator. In the worst case situation, freeing took
    nearly 5 seconds for 1 million 4-byte objects. The bitmap allocator
    dramatically improves the consistency of the free path. The small
    allocations performed nearly identically regardless of the freeing
    pattern.

    While it does add to the allocation latency, the allocation scenario
    here is optimal for the area map allocator. The area map allocator runs
    into trouble when it is allocating in chunks where the latter half is
    full. It is difficult to replicate this, so I present a variant where
    the pages are second half filled. Freeing was done sequentially. Below
    are the numbers for this scenario:

    Area Map Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 4118 | 4892
    16B | 1651 | 1163
    64B | 598 | 285
    256B | 771 | 158
    1024B | 3034 | 160

    Bitmap Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 481 | 67
    16B | 506 | 69
    64B | 636 | 75
    256B | 892 | 90
    1024B | 3262 | 147

    The data shows a parabolic curve of performance for the area map
    allocator. This is due to the memmove operation being the dominant cost
    with the lower object sizes as more objects are packed in a chunk and at
    higher object sizes, the traversal of the chunk slots is the dominating
    cost. The bitmap allocator suffers this problem as well. The above data
    shows the inability to scale for the allocation path with the area map
    allocator and that the bitmap allocator demonstrates consistent
    performance in general.

    The second problem of additional scanning can result in the area map
    allocator completing in 52 minutes when trying to allocate 1 million
    4-byte objects with 8-byte alignment. The same workload takes
    approximately 16 seconds to complete for the bitmap allocator.

    V2:
    Fixed a bug in pcpu_alloc_first_chunk where end_offset was setting
    the bitmap using bytes instead of bits.

    Added a comment to pcpu_cnt_pop_pages to explain bitmap_weight.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     

26 Jul, 2017

13 commits

  • The area map allocator only used a bitmap for the backing page state.
    The new bitmap allocator will use bitmaps to manage the allocation
    region in addition to this.

    This patch generalizes the bitmap iterators so they can be reused with
    the bitmap allocator.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • This patch increases the minimum allocation size of percpu memory to
    4-bytes. This change will help minimize the metadata overhead
    associated with the bitmap allocator. The assumption is that most
    allocations will be of objects or structs greater than 2 bytes with
    integers or longs being used rather than shorts.

    The first chunk regions are now aligned with the minimum allocation
    size. The reserved region is expected to be set as a multiple of the
    minimum allocation size. The static region is aligned up and the delta
    is removed from the dynamic size. This works because the dynamic size is
    increased to be page aligned. If the static size is not minimum
    allocation size aligned, then there must be a gap that is added to the
    dynamic size. The dynamic size will never be smaller than the set value.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • pcpu_nr_empty_pop_pages is used to ensure there are a handful of free
    pages around to serve atomic allocations. A new field, nr_empty_pop_pages,
    is added to the pcpu_chunk struct to keep track of the number of empty
    pages. This field is needed as the number of empty populated pages is
    globally tracked and deltas are used to update in the bitmap allocator.
    Pages that contain a hidden area are not considered to be empty. This
    new field is exposed in percpu_stats.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The populated bitmap represents the state of the pages the chunk serves.
    Prior, the bitmap was marked completely used as the first chunk was
    allocated and immutable. This is misleading because the first chunk may
    not be completely filled. Additionally, with moving the base_addr up in
    the previous patch, the population check no longer corresponds to what
    was being checked.

    This patch modifies the population map to be only the number of pages
    the region serves and to make what it was checking correspond correctly
    again. The change is to remove any misunderstanding between the size of
    the populated bitmap and the actual size of it. The work function page
    iterators now use nr_pages for the check rather than pcpu_unit_pages
    because nr_populated is now chunk specific. Without this, the work
    function would try to populate the remainder of these chunks despite it
    not serving any more than nr_pages when nr_pages is set less than
    pcpu_unit_pages.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The percpu address checks for the reserved and dynamic region chunks are
    now specific to each region. The address checking logic can be combined
    taking advantage of the global references to the dynamic and static
    region chunks.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • Originally, the first chunk was served by one or two chunks, each
    given a region they are responsible for. Despite this, the arithmetic
    was based off of the true base_addr of the chunk making it be overly
    inclusive.

    This patch moves the base_addr of chunks that are responsible for the
    first chunk. The base_addr must remain page aligned to keep the
    address alignment correct, so it is the beginning of the region served
    page aligned down. start_offset holds where the region served begins
    from this new base_addr.

    The corresponding percpu address checks are modified to be more specific
    as a result. The first chunk considers only the dynamic region and both
    first chunk and reserved chunk checks ignore the static region. The
    static region addresses should never be passed into the allocator. There
    is no impact here besides distinguishing the first chunk and making the
    checks specific.

    The percpu pointer to physical address is left intact as addresses are
    not given out in the non-allocated portion of percpu memory.

    nr_pages is added to pcpu_chunk to keep track of the size of the entire
    region served containing both start_offset and end_offset. This variable
    will be used to manage the bitmap allocator.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • There is no need to have the static chunk and dynamic chunk be named
    separately as the allocations are sequential. This preemptively solves
    the misnomer problem with the base_addrs being moved up in the following
    patch. It also removes a ternary operation deciding the first chunk.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The area map allocator manages the first chunk area by hiding all but
    the region it is responsible for serving in the area map. To align this
    with the populated page bitmap, end_offset is introduced to keep track
    of the delta to end page aligned. The area map is appended with the
    page aligned end when necessary to be in line with how the bitmap
    allocator requires the ending to be aligned with the LCM of PAGE_SIZE
    and the size of each bitmap block. percpu_stats is updated to ignore
    this region when present.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • Create a common allocator for first chunk initialization,
    pcpu_alloc_first_chunk. Comments for this function will be added in a
    later patch once the bitmap allocator is added.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • There is logic for setting variables in the static chunk init code that
    could be consolidated with the dynamic chunk init code. This combines
    this logic to setup for combining the allocation paths. reserved_size is
    used as the conditional as a dynamic region will always exist.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • Prior this variable was used to manage statistics when the first chunk
    had a reserved region. The previous patch introduced start_offset to
    keep track of the offset by value rather than boolean. Therefore,
    has_reserved can be removed.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The reserved chunk arithmetic uses a global variable
    pcpu_reserved_chunk_limit that is set in the first chunk init code to
    hide a portion of the area map. The bitmap allocator to come will
    eventually move the base_addr up and require both the reserved chunk
    and static chunk to maintain this offset. pcpu_reserved_chunk_limit is
    removed and start_offset is added.

    The first chunk that is circulated and is pcpu_first_chunk serves the
    dynamic region, the region following the reserved region. The reserved
    chunk address check will temporarily use the first chunk to identify its
    address range. A following patch will increase the base_addr and remove
    this. If there is no reserved chunk, this will check the static region
    and return false because those values should never be passed into the
    allocator.

    Lastly, when linking in the first chunk, make sure to count the right
    free region for the number of empty populated pages.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • The first chunk is handled as a special case as it is composed of the
    static, reserved, and dynamic regions. The code handles each case
    individually. The next several patches will merge these code paths and
    lay the foundation for the bitmap allocator.

    This patch modifies logic to enforce that a dynamic region exists and
    changes the area map to account for that. This brings the logic closer
    to the dynamic chunk's init logic.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     

17 Jul, 2017

2 commits

  • The header comment for percpu memory is a little hard to parse and is
    not super clear about how the first chunk is managed. This adds a
    little more clarity to the situation.

    There is also quite a bit of tricky logic in pcpu_build_alloc_info.
    This restructures a comment to add a little more information.
    Unfortunately, you will still have to piece together a handful of
    other comments, but this should help direct you to the meaningful ones.

    Signed-off-by: Dennis Zhou
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     
  • Percpu memory holds a minimum threshold of pages that are populated
    in order to serve atomic percpu memory requests. This change makes it
    easier to verify that there are a minimum number of populated pages
    lying around.

    Signed-off-by: Dennis Zhou
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)