15 Sep, 2009

1 commit


01 Sep, 2009

1 commit

  • percpu incorrectly assumed that cpu0 was always there, which led to the
    following warning and eventual oops on sparc machines w/o cpu0.

    WARNING: at mm/percpu.c:651 pcpu_map+0xdc/0x100()
    Modules linked in:
    Call Trace:
    [000000000045eb70] warn_slowpath_common+0x50/0xa0
    [000000000045ebdc] warn_slowpath_null+0x1c/0x40
    [00000000004d493c] pcpu_map+0xdc/0x100
    [00000000004d59a4] pcpu_alloc+0x3e4/0x4e0
    [00000000004d5af8] __alloc_percpu+0x18/0x40
    [00000000005b112c] __percpu_counter_init+0x4c/0xc0
    ...
    Unable to handle kernel NULL pointer dereference
    ...
    I7:
    Disabling lock debugging due to kernel taint
    Caller[000000000053c1b0]: sysfs_new_dirent+0x30/0x120
    Caller[000000000053c7a4]: create_dir+0x24/0xc0
    Caller[000000000053c870]: sysfs_create_dir+0x30/0x80
    Caller[00000000005990e8]: kobject_add_internal+0xc8/0x200
    ...
    Kernel panic - not syncing: Attempted to kill the idle task!

    This patch fixes the problem by backporting parts from the devel branch
    so that the percpu core no longer depends on the existence of cpu0.

    Signed-off-by: Tejun Heo
    Reported-by: Meelis Roos
    Cc: David Miller

    Tejun Heo
     

14 Aug, 2009

18 commits

  • With x86 converted to the embedding allocator, lpage doesn't have any
    users left. Kill it along with the cpa handling code.

    Signed-off-by: Tejun Heo
    Cc: Jan Beulich

    Tejun Heo
     
  • Now that the percpu core can handle very sparse units, given that the
    vmalloc space is large enough, the embedding first chunk allocator can
    use any memory to build the first chunk. This patch teaches
    pcpu_embed_first_chunk() about distances between cpus and makes it use
    alloc/free callbacks to allocate node-specific areas for each group and
    use them for the first chunk.

    This brings the benefits of the embedding allocator to NUMA
    configurations - no extra TLB pressure with the flexibility of the
    unified dynamic allocator, and no need to restructure arch code to
    build a memory layout suitable for percpu. With units put into
    atom_size aligned groups according to cpu distances, using large pages
    for dynamic chunks also becomes straightforward, with a fallback to
    regular pages if a large allocation fails.

    Existing embedding allocator users are converted to pass a NULL
    cpu_distance_fn, so this patch doesn't cause any visible behavior
    difference. Following patches will convert them to supply real
    distance information.

    Signed-off-by: Tejun Heo

    Tejun Heo
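
    A rough sketch of the interface this gives the embedding allocator; the
    parameter types and order are from memory and may differ slightly from
    the actual include/linux/percpu.h of this series:

    typedef int (pcpu_fc_cpu_distance_fn_t)(unsigned int from, unsigned int to);

    /* cpu_distance_fn may be NULL; all cpus are then treated as being at
     * LOCAL_DISTANCE and end up in a single group */
    int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
                                      size_t atom_size,
                                      pcpu_fc_cpu_distance_fn_t cpu_distance_fn,
                                      pcpu_fc_alloc_fn_t alloc_fn,
                                      pcpu_fc_free_fn_t free_fn);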
     
  • ai->groups[] records which units need to be placed consecutively and at
    what offset from the chunk base address. Compile this information
    into pcpu_group_offsets[] and pcpu_group_sizes[] in
    pcpu_setup_first_chunk() and use them to allocate sparse vm areas
    using pcpu_get_vm_areas().

    This will be used to allow directly using sparse NUMA memories as
    percpu areas.

    Signed-off-by: Tejun Heo
    Cc: Nick Piggin

    Tejun Heo
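
    A condensed sketch (not verbatim kernel code) of the compilation step
    described above - the group information from the alloc_info is
    flattened into parallel arrays which chunk creation can later hand to
    pcpu_get_vm_areas():

    /* in pcpu_setup_first_chunk(), roughly */
    int group;

    for (group = 0; group < ai->nr_groups; group++) {
            const struct pcpu_group_info *gi = &ai->groups[group];

            pcpu_group_offsets[group] = gi->base_offset;
            pcpu_group_sizes[group] = gi->nr_units * ai->unit_size;
    }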
     
  • The only thing percpu allocator wants to know about a vmalloc area is
    the base address. Instead of requiring chunk->vm, add
    chunk->base_addr which contains the necessary value. This simplifies
    the code a bit and makes the dummy first_vm unnecessary. This change
    will ease allowing a chunk to be mapped by multiple vms.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • Currently units are mapped sequentially into address space. This
    patch adds pcpu_unit_offsets[] which allows units to be mapped to
    arbitrary offsets from the chunk base address. This is necessary to
    allow sparse embedding, which may need to allocate address ranges and
    memory areas aligned not to the unit size but to the allocation atom
    size (page or large page size). This also simplifies
    things a bit by removing the need to calculate offset from unit
    number.

    With this change, there's no need for the arch code to know
    pcpu_unit_size. Update pcpu_setup_first_chunk() and first chunk
    allocators to return regular 0 or -errno return code instead of unit
    size or -errno.

    Signed-off-by: Tejun Heo
    Cc: David S. Miller

    Tejun Heo
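
    With per-unit offsets the chunk-internal address calculation becomes a
    table lookup instead of unit-number arithmetic; roughly (sketch from
    memory, not a verbatim quote of mm/percpu.c):

    static unsigned long pcpu_chunk_addr(struct pcpu_chunk *chunk,
                                         unsigned int cpu, int page_idx)
    {
            return (unsigned long)chunk->base_addr +
                    pcpu_unit_offsets[cpu] + (page_idx << PAGE_SHIFT);
    }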
     
  • Until now, the non-linear cpu->unit map was expressed using an integer
    array which maps each cpu to a unit and was used only by the lpage
    allocator. Although how many units have been placed in a single
    contiguous area (group) is known while building unit_map, the
    information is lost when the result is recorded into the unit_map
    array. For the lpage allocator this didn't cause any problem, as all
    allocations are done in lpages and whether two adjacent lpages are in
    the same group or not is irrelevant. Non-linear cpu->unit mapping will
    be used for sparse embedding and this grouping information is necessary
    for that.

    This patch introduces pcpu_alloc_info which contains all the
    information necessary for initializing percpu allocator.
    pcpu_alloc_info contains an array of pcpu_group_info entries which
    describe how units are grouped and mapped to cpus. pcpu_group_info
    also has a base_offset field to specify its offset from the chunk's
    base address. pcpu_build_alloc_info() initializes this field as if all
    groups are allocated back-to-back, as is currently done, but it will
    later be used to place groups sparsely.

    pcpu_alloc_info is a rather complex data structure which contains a
    flexible array which in turn points to nested cpu_map arrays.

    * pcpu_alloc_alloc_info() and pcpu_free_alloc_info() are provided to
    help dealing with pcpu_alloc_info.

    * pcpu_lpage_build_unit_map() is updated to build pcpu_alloc_info,
    generalized and renamed to pcpu_build_alloc_info().
    @cpu_distance_fn may be NULL indicating that all cpus are of
    LOCAL_DISTANCE.

    * pcpul_lpage_dump_cfg() is updated to process pcpu_alloc_info,
    generalized and renamed to pcpu_dump_alloc_info(). It now also
    prints which group each alloc unit belongs to.

    * pcpu_setup_first_chunk() now takes pcpu_alloc_info instead of the
    separate parameters. All first chunk allocators are updated to use
    pcpu_build_alloc_info() to build alloc_info and call
    pcpu_setup_first_chunk() with it. This has the side effect of
    packing units for sparse possible cpus, i.e. if cpus 0, 2 and 4 are
    possible, they'll be assigned units 0, 1 and 2 instead of 0, 2 and 4.

    * x86 setup_pcpu_lpage() is updated to deal with alloc_info.

    * sparc64 setup_per_cpu_areas() is updated to build alloc_info.

    Although the changes made by this patch are pretty pervasive, it
    doesn't cause any behavior difference other than packing of sparse
    cpus. It mostly changes how information is passed among
    initialization functions and makes room for more flexibility.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: David Miller

    Tejun Heo
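
    For reference, the shape of the new structures is approximately the
    following (reconstructed from memory; exact fields and comments may
    differ from the real include/linux/percpu.h):

    struct pcpu_group_info {
            int             nr_units;      /* aligned # of units in group */
            unsigned long   base_offset;   /* offset from chunk base address */
            unsigned int    *cpu_map;      /* unit -> cpu map, unused entries
                                              hold NR_CPUS */
    };

    struct pcpu_alloc_info {
            size_t          static_size;
            size_t          reserved_size;
            size_t          dyn_size;
            size_t          unit_size;
            size_t          atom_size;
            size_t          alloc_size;
            int             nr_groups;     /* 0 if grouping is unnecessary */
            struct pcpu_group_info  groups[];
    };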
     
  • Unit map handling will be generalized and extended and used for
    embedding sparse first chunk and other purposes. Relocate two
    unit_map related functions upward in preparation. This patch just
    moves the code without any actual change.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • pcpu_fc_alloc_fn_t is about to see more interesting usage, add @align
    parameter.

    Signed-off-by: Tejun Heo

    Tejun Heo
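
    The callback type after this change, together with a hypothetical arch
    callback honoring the new argument (the callback name and the bootmem
    goal below are made up for illustration):

    typedef void * (*pcpu_fc_alloc_fn_t)(unsigned int cpu, size_t size,
                                         size_t align);

    static void * __init example_pcpu_fc_alloc(unsigned int cpu, size_t size,
                                               size_t align)
    {
            return __alloc_bootmem(size, align, __pa(MAX_DMA_ADDRESS));
    }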
     
  • Now that all actual first chunk allocation and copying happen in the
    first chunk allocators and helpers, there's no reason for
    pcpu_setup_first_chunk() to try to determine @dyn_size automatically.
    The only remaining user is the page first chunk allocator. Make it
    determine dyn_size like the other allocators and make @dyn_size
    mandatory for
    pcpu_setup_first_chunk().

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • First chunk allocators assume percpu areas have been linked using one
    of PERCPU_*() macros and depend on __per_cpu_load symbol defined by
    those macros, so there isn't much point in passing in static area size
    explicitly when it can be easily calculated from __per_cpu_start and
    __per_cpu_end. Drop @static_size from all percpu first chunk
    allocators and helpers.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • Now that all first chunk allocators are in mm/percpu.c, it makes sense
    to generalize the percpu_alloc kernel parameter. Define PCPU_FC_* and
    set pcpu_chosen_fc using early_param() in mm/percpu.c. Arch code can
    use the set value to determine which first chunk allocator to use.

    Signed-off-by: Tejun Heo

    Tejun Heo
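
    A sketch of the generalized selection machinery described above (names
    as recalled; the real version also carries an lpage entry while that
    allocator still exists and guards each case with the corresponding
    CONFIG_NEED_PER_CPU_*_FIRST_CHUNK option):

    enum pcpu_fc {
            PCPU_FC_AUTO,
            PCPU_FC_EMBED,
            PCPU_FC_PAGE,
            PCPU_FC_NR,
    };

    enum pcpu_fc pcpu_chosen_fc __initdata = PCPU_FC_AUTO;

    static int __init percpu_alloc_setup(char *str)
    {
            if (!strcmp(str, "embed"))
                    pcpu_chosen_fc = PCPU_FC_EMBED;
            else if (!strcmp(str, "page"))
                    pcpu_chosen_fc = PCPU_FC_PAGE;
            else
                    pr_warning("PERCPU: unknown allocator %s specified\n", str);
            return 0;
    }
    early_param("percpu_alloc", percpu_alloc_setup);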
     
  • There's no need to build unused first chunk allocators in. Define
    CONFIG_NEED_PER_CPU_*_FIRST_CHUNK and let archs enable them
    selectively.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • The page size isn't always 4k; it depends on arch and configuration.
    Rename the 4k first chunk allocator to page.

    Signed-off-by: Tejun Heo
    Cc: David Howells

    Tejun Heo
     
  • Improve percpu boot messages such that they're uniform and contain
    more information.

    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter

    Tejun Heo
     
  • pcpu_reclaim() calls pcpu_depopulate_chunk(), which makes use of the
    pages array and bitmap returned by pcpu_get_pages_and_bitmap() and thus
    should be called under pcpu_alloc_mutex. pcpu_reclaim() released the
    mutex before calling depopulate, leading to double frees and other
    strange problems caused by unexpected concurrent use of the pages array
    and bitmap. Fix it.

    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter

    Tejun Heo
     
  • Conflicts:
    arch/sparc/kernel/smp_64.c
    arch/x86/kernel/cpu/perf_counter.c
    arch/x86/kernel/setup_percpu.c
    drivers/cpufreq/cpufreq_ondemand.c
    mm/percpu.c

    Conflicts in core and arch percpu codes are mostly from commit
    ed78e1e078dd44249f88b1dd8c76dafb39567161 which substituted many
    num_possible_cpus() with nr_cpu_ids. As for-next branch has moved all
    the first chunk allocators into mm/percpu.c, the changes are moved
    from arch code to mm/percpu.c.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • get_vm_area() only accepts VM_* flags, not GFP_* flags.

    According to the documentation of get_vm_area(), VM_ALLOC should be
    used here.

    Signed-off-by: WANG Cong
    Acked-by: Tejun Heo
    Cc: Ingo Molnar

    Amerigo Wang
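
    The call in question was roughly the following (sketch); GFP_KERNEL is
    an allocation flag, so passing it here handed get_vm_area() a bogus set
    of VM_* bits:

    /* before */
    chunk->vm = get_vm_area(pcpu_chunk_size, GFP_KERNEL);

    /* after: get_vm_area() takes VM_* flags and VM_ALLOC is the right one */
    chunk->vm = get_vm_area(pcpu_chunk_size, VM_ALLOC);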
     
  • percpu code has been assuming num_possible_cpus() == nr_cpu_ids, which
    is incorrect if cpu_possible_map contains holes. This causes percpu
    code to access beyond allocated memory and vmalloc areas. On a sparc64
    machine with cpus 0 and 2 (u60), this triggers the following warning or
    fails boot.

    WARNING: at /devel/tj/os/work/mm/vmalloc.c:106 vmap_page_range_noflush+0x1f0/0x240()
    Modules linked in:
    Call Trace:
    [00000000004b17d0] vmap_page_range_noflush+0x1f0/0x240
    [00000000004b1840] map_vm_area+0x20/0x60
    [00000000004b1950] __vmalloc_area_node+0xd0/0x160
    [0000000000593434] deflate_init+0x14/0xe0
    [0000000000583b94] __crypto_alloc_tfm+0xd4/0x1e0
    [00000000005844f0] crypto_alloc_base+0x50/0xa0
    [000000000058b898] alg_test_comp+0x18/0x80
    [000000000058dad4] alg_test+0x54/0x180
    [000000000058af00] cryptomgr_test+0x40/0x60
    [0000000000473098] kthread+0x58/0x80
    [000000000042b590] kernel_thread+0x30/0x60
    [0000000000472fd0] kthreadd+0xf0/0x160
    ---[ end trace 429b268a213317ba ]---

    This patch fixes generic percpu functions and sparc64
    setup_per_cpu_areas() so that they handle sparse cpu_possible_map
    properly.

    Please note that on x86, cpu_possible_map doesn't contain holes and
    thus num_possible_cpus() == nr_cpu_ids, so this patch doesn't cause any
    behavior difference there.

    Signed-off-by: Tejun Heo
    Acked-by: David S. Miller
    Cc: Ingo Molnar

    Tejun Heo
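
    An illustrative (non-kernel-source) snippet of the distinction: with
    possible cpus 0 and 2, num_possible_cpus() is 2 but valid cpu ids run
    up to nr_cpu_ids - 1 == 2, so anything indexed by cpu id has to be
    sized by nr_cpu_ids (alloc_for_cpu() below is a hypothetical helper):

    void **ptrs;
    unsigned int cpu;

    /* sized by the highest possible cpu id + 1, not by the count of cpus */
    ptrs = kcalloc(nr_cpu_ids, sizeof(ptrs[0]), GFP_KERNEL);

    for_each_possible_cpu(cpu)              /* visits 0 and 2, skips 1 */
            ptrs[cpu] = alloc_for_cpu(cpu);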
     

04 Jul, 2009

10 commits

  • The large page first chunk allocator is primarily used for NUMA
    machines; however, its NUMA handling is extremely simplistic.
    Regardless of their proximity, each cpu is put into a separate large
    page, only for most of the allocated space to be returned, wasting a
    large amount of vmalloc space and increasing the cache footprint.

    This patch teaches NUMA details to the large page allocator. Given
    processor proximity information, pcpu_lpage_build_unit_map() will find
    a fitting cpu -> unit mapping in which cpus within LOCAL_DISTANCE share
    the same large page and not too much virtual address space is wasted.

    This greatly reduces the unit and thus chunk size and wastes much less
    address space for the first chunk. For example, on 4/4 NUMA machine,
    the original code occupied 16MB of virtual space for the first chunk
    while the new code only uses 4MB - one 2MB page for each node.

    [ Impact: much better space efficiency on NUMA machines ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Jan Beulich
    Cc: Andi Kleen
    Cc: David Miller

    Tejun Heo
     
  • Currently cpu and unit are always identity mapped. To allow more
    efficient large page support on NUMA and lazy allocation for possible
    but offline cpus, cpu -> unit mapping needs to be non-linear and/or
    sparse. This can be easily implemented by adding a cpu -> unit
    mapping array and using it whenever looking up the matching unit for a
    cpu.

    The only unusual conversion is in pcpu_chunk_addr_search(). The passed
    in address is unit0 based and unit0 might not be in use, so it needs to
    be converted to the address of an in-use unit. This is easily done by
    adding the unit offset for the current processor.

    [ Impact: allows non-linear/sparse cpu -> unit mapping, no visible change yet ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: David Miller

    Tejun Heo
     
  • The percpu core doesn't need to track all the allocated pages. It
    needs to know whether certain pages are populated and a way to reverse
    map an address to its page when freeing. This patch drops
    pcpu_chunk->page[] and uses a populated bitmap and vmalloc_to_page()
    lookup instead. Using vmalloc_to_page() exclusively is also possible
    but complicates first chunk handling, inflates cache footprint and
    prevents non-standard memory allocation for percpu memory.

    pcpu_chunk->page[] was used to track each page's allocation and
    allowed asymmetric population which happens during failure path;
    however, with single bitmap for all units, this is no longer possible.
    Bite the bullet and rewrite (de)populate functions so that things are
    done in clearly separated steps such that asymmetric population
    doesn't happen. This makes the (de)population process much more
    modular and will also ease implementing non-standard memory usage in
    the future (e.g. large pages).

    This makes @get_page_fn parameter to pcpu_setup_first_chunk()
    unnecessary. The parameter is dropped and all first chunk helpers are
    updated accordingly. Please note that despite the volume most changes
    to first chunk helpers are symbol renames for variables which don't
    need to be referenced outside of the helper anymore.

    This change reduces memory usage and cache footprint of pcpu_chunk.
    Now only #unit_pages bits are necessary per chunk.

    [ Impact: reduced memory usage and cache footprint for bookkeeping ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: David Miller

    Tejun Heo
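
    The reverse mapping that replaces the page array boils down to a helper
    along these lines (sketch from memory), backed by a per-chunk populated
    bitmap of unit_pages bits:

    /* look up the page backing a given unit page of a chunk; must not be
     * used on pre-mapped (immutable) first chunks */
    static struct page *pcpu_chunk_page(struct pcpu_chunk *chunk,
                                        unsigned int cpu, int page_idx)
    {
            WARN_ON(chunk->immutable);

            return vmalloc_to_page((void *)pcpu_chunk_addr(chunk, cpu, page_idx));
    }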
     
  • (de)populate functions are about to be reimplemented to drop
    pcpu_chunk->page array. Move a few functions so that the rewrite
    patch doesn't have code movement making it more difficult to read.

    [ Impact: code movement ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar

    Tejun Heo
     
  • Now that all first chunk allocator helpers allocate and map the first
    chunk themselves, there's no need to have optional default alloc/map
    in pcpu_setup_first_chunk(). Drop @populate_pte_fn and only leave
    @dyn_size optional and make all other params mandatory.

    This makes it much easier to follow what pcpu_setup_first_chunk() is
    doing and what actual differences tweaking each parameter results in.

    [ Impact: drop unused code path ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar

    Tejun Heo
     
  • Generalize and move x86 setup_pcpu_lpage() into
    pcpu_lpage_first_chunk(). setup_pcpu_lpage() now is a simple wrapper
    around the generalized version. Other than taking size parameters and
    using arch supplied callbacks to allocate/free/map memory,
    pcpu_lpage_first_chunk() is identical to the original implementation.

    This simplifies arch code and will help converting more archs to
    dynamic percpu allocator.

    While at it, factor out pcpu_calc_fc_sizes() which is common to
    pcpu_embed_first_chunk() and pcpu_lpage_first_chunk().

    [ Impact: code reorganization and generalization ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar

    Tejun Heo
     
  • At first, the percpu first chunk was always set up page-by-page by the
    generic code. To add other allocators, different parts of the generic
    initialization were made optional. Now we have three allocators -
    embed, remap and 4k. embed and remap fully handle allocation and
    mapping of the first chunk while 4k still depends on generic code for
    those. This makes the generic alloc/map paths specific to 4k and makes
    the code unnecessarily complicated with optional generic behaviors.

    This patch makes the 4k allocator allocate and map memory directly
    instead of depending on the generic code. The only outside visible
    change is that the dynamic area in the first chunk is now allocated
    up-front instead of on-demand. This doesn't make any meaningful
    difference as the area is minimal (usually less than a page, just
    enough to fill the alignment) with the 4k allocator. Plus, the dynamic
    area in the first chunk usually gets fully used anyway.

    This will allow simplification of pcpu_setup_first_chunk() and removal
    of the chunk->page array.

    [ Impact: no outside visible change other than up-front allocation of dyn area ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar

    Tejun Heo
     
  • Generalize and move x86 setup_pcpu_4k() into pcpu_4k_first_chunk().
    setup_pcpu_4k() now is a simple wrapper around the generalized
    version. Other than taking size parameters and using arch supplied
    callbacks to allocate/free memory, pcpu_4k_first_chunk() is identical
    to the original implementation.

    This simplifies arch code and will help converting more archs to
    dynamic percpu allocator.

    While at it, s/pcpu_populate_pte_fn_t/pcpu_fc_populate_pte_fn_t/ for
    consistency.

    [ Impact: code reorganization and generalization ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar

    Tejun Heo
     
  • The only extra feature @unit_size provides is making dead space at the
    end of the first chunk, which doesn't have any valid use case. Drop
    the parameter. This will increase consistency with the generalized 4k
    allocator.

    James Bottomley spotted a missing conversion for the default
    setup_per_cpu_areas(), which caused build breakage on all archs that
    use it.

    [ Impact: drop unused code path ]

    Signed-off-by: Tejun Heo
    Cc: James Bottomley
    Cc: Ingo Molnar

    Tejun Heo
     
  • The @addr passed into pcpu_chunk_addr_search() is a unit0 based address
    and thus should be matched inside the unit0 area. Currently, it uses
    the chunk size when determining whether the address falls in the first
    chunk. Addresses in unitN where N>0 shouldn't be passed in anyway, so
    this doesn't cause any malfunction, but fix it for consistency.

    [ Impact: mostly cleanup ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar

    Tejun Heo
     

24 Jun, 2009

1 commit

  • This patch makes most !CONFIG_HAVE_SETUP_PER_CPU_AREA archs use the
    dynamic percpu allocator. The first chunk is allocated using the
    embedding helper and 8k is reserved for modules. This ensures that the
    new allocator behaves almost identically to the original allocator as
    far as static percpu variables are concerned, so it shouldn't introduce
    much breakage.

    s390 and alpha use a custom SHIFT_PERCPU_PTR() to work around the
    addressing range limit their addressing model imposes. Unfortunately,
    this breaks if the address is specified using a variable, so for now
    the two archs aren't converted.

    The following architectures are affected by this change.

    * sh
    * arm
    * cris
    * mips
    * sparc(32)
    * blackfin
    * avr32
    * parisc (broken, under investigation)
    * m32r
    * powerpc(32)

    As this change makes the dynamic allocator the default one,
    CONFIG_HAVE_DYNAMIC_PER_CPU_AREA is replaced with its inverse -
    CONFIG_HAVE_LEGACY_PER_CPU_AREA, which is added to the yet-to-be
    converted archs. These archs implement their own setup_per_cpu_areas()
    and the conversion is not trivial.

    * powerpc(64)
    * sparc(64)
    * ia64
    * alpha
    * s390

    Boot and batch alloc/free tested on x86_32 with debug code (x86_32
    doesn't use the default first chunk initialization). Compile tested on
    sparc(32), powerpc(32), arm and alpha.

    Kyle McMartin reported that this change breaks parisc. The problem is
    still under investigation and he is okay with pushing this patch
    forward and fixing parisc later.

    [ Impact: use dynamic allocator for most archs w/o custom percpu setup ]

    Signed-off-by: Tejun Heo
    Acked-by: Rusty Russell
    Acked-by: David S. Miller
    Acked-by: Benjamin Herrenschmidt
    Acked-by: Martin Schwidefsky
    Reviewed-by: Christoph Lameter
    Cc: Paul Mundt
    Cc: Russell King
    Cc: Mikael Starvik
    Cc: Ralf Baechle
    Cc: Bryan Wu
    Cc: Kyle McMartin
    Cc: Matthew Wilcox
    Cc: Grant Grundler
    Cc: Hirokazu Takata
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Heiko Carstens
    Cc: Ingo Molnar

    Tejun Heo
     

22 Jun, 2009

2 commits

  • According to Andi, it isn't clear whether the lpage allocator is worth
    the trouble as there are many processors where the PMD TLB is far
    scarcer than the PTE TLB. The advantage or disadvantage probably
    depends on the actual size of the percpu area and the specific
    processor. As performance degradation due to TLB pressure tends to be
    highly workload specific and subtle, it is difficult to decide which
    way to go without more data.

    This patch implements the percpu_alloc kernel parameter to allow
    selecting which first chunk allocator to use, to ease debugging and
    testing.

    While at it, make sure all the failure paths report why something
    failed, to help determine why a certain allocator isn't working. Also,
    kill the "Great future plan" comment which had already been realized
    quite some time ago.

    [ Impact: allow explicit percpu first chunk allocator selection ]

    Signed-off-by: Tejun Heo
    Reported-by: Jan Beulich
    Cc: Andi Kleen
    Cc: Ingo Molnar

    Tejun Heo
     
  • In pcpu_unmap(), flushing the virtual cache on vunmap can't be delayed
    as the pages are going to be returned to the page allocator. Only TLB
    flushing can be put off so that the vmalloc code can handle it lazily.
    Fix it.

    [ Impact: fix subtle virtual cache flush bug ]

    Signed-off-by: Tejun Heo
    Cc: Nick Piggin
    Cc: Ingo Molnar

    Tejun Heo
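
    A condensed sketch of the corrected ordering (variable names are
    illustrative, not the verbatim pcpu_unmap() code):

    /* the virtual cache must be flushed before the pages go back to the
     * page allocator - this part cannot be deferred */
    flush_cache_vunmap(unmap_start, unmap_end);

    unmap_kernel_range_noflush(unmap_start, unmap_end - unmap_start);

    /* only the TLB flush may be postponed; the vmalloc code flushes
     * lazily, or flush_tlb_kernel_range(unmap_start, unmap_end) can be
     * issued right away when required */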
     

09 Apr, 2009

2 commits

  • Impact: use page->index for addr to chunk mapping instead of dedicated rbtree

    The rbtree is used to determine the chunk from the virtual address.
    However, we can already determine the page struct from a virtual
    address, and there are several unused fields in the page struct of
    pages used by vmalloc. Use the index field to store a pointer to the
    chunk; then there is no longer any need for an rbtree.

    tj: * s/(set|get)_chunk/pcpu_\1_page_chunk/

    * Dropped inline from the above two functions and moved them upwards
    so that they are with the other simple helpers.

    * Initial pages might not (actually most of the time don't) live
    in the vmalloc area. With the previous patch to manually
    reverse-map both first chunks, this is no longer an issue.
    Removed pcpu_set_chunk() call on initial pages.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Tejun Heo
    Cc: Martin Schwidefsky
    Cc: rusty@rustcorp.com.au
    Cc: Paul Mundt
    Cc: rmk@arm.linux.org.uk
    Cc: starvik@axis.com
    Cc: ralf@linux-mips.org
    Cc: davem@davemloft.net
    Cc: cooloney@kernel.org
    Cc: kyle@mcmartin.ca
    Cc: matthew@wil.cx
    Cc: grundler@parisc-linux.org
    Cc: takata@linux-m32r.org
    Cc: benh@kernel.crashing.org
    Cc: rth@twiddle.net
    Cc: ink@jurassic.park.msu.ru
    Cc: heiko.carstens@de.ibm.com
    Cc: Linus Torvalds
    Cc: Nick Piggin
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Christoph Lameter
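
    The resulting helpers are tiny; modulo the rename noted above they look
    roughly like this:

    /* set the pointer to a chunk in a page struct */
    static void pcpu_set_page_chunk(struct page *page, struct pcpu_chunk *pcpu)
    {
            page->index = (unsigned long)pcpu;
    }

    /* obtain the pointer to a chunk from a page struct */
    static struct pcpu_chunk *pcpu_get_page_chunk(struct page *page)
    {
            return (struct pcpu_chunk *)page->index;
    }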
     
  • Impact: both first chunks don't use rbtree, no functional change

    There can be two first chunks - reserved and dynamic, with the former
    being optional. The dynamic first chunk was linked into the
    reverse-mapping rbtree while the reserved one was mapped manually using
    the start address and reserved offset limit.

    This patch makes both first chunks be looked up manually without using
    the rbtree. This is to help getting rid of the rbtree.

    Signed-off-by: Tejun Heo
    Cc: Martin Schwidefsky
    Cc: rusty@rustcorp.com.au
    Cc: Paul Mundt
    Cc: rmk@arm.linux.org.uk
    Cc: starvik@axis.com
    Cc: ralf@linux-mips.org
    Cc: davem@davemloft.net
    Cc: cooloney@kernel.org
    Cc: kyle@mcmartin.ca
    Cc: matthew@wil.cx
    Cc: grundler@parisc-linux.org
    Cc: takata@linux-m32r.org
    Cc: benh@kernel.crashing.org
    Cc: rth@twiddle.net
    Cc: ink@jurassic.park.msu.ru
    Cc: heiko.carstens@de.ibm.com
    Cc: Linus Torvalds
    Cc: Nick Piggin
    Cc: Christoph Lameter
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Tejun Heo
     

10 Mar, 2009

3 commits

  • Impact: code reorganization

    Separate out embedding first chunk setup helper from x86 embedding
    first chunk allocator and put it in mm/percpu.c. This will be used by
    the default percpu first chunk allocator and possibly by other archs.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • Impact: cleanup, more flexibility for first chunk init

    Non-negative @dyn_size used to be allowed iff @unit_size wasn't auto.
    This restriction stemmed from an implementation detail and made things
    a bit less intuitive. This patch allows @dyn_size to be specified
    regardless of @unit_size and swaps the positions of @dyn_size and
    @unit_size so that the parameter order makes more sense (static,
    reserved and dyn sizes followed by the enclosing unit_size).

    While at it, add @unit_size >= PCPU_MIN_UNIT_SIZE sanity check.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • Impact: generic addr pcpu ptr conversion macros

    There's nothing arch specific about x86's __addr_to_pcpu_ptr() and
    __pcpu_ptr_to_addr(). With proper __per_cpu_load and __per_cpu_start
    defined, they'll do the right thing regardless of the actual layout.

    Move these macros from arch/x86/include/asm/percpu.h to mm/percpu.c
    and allow archs to override them as necessary.

    Signed-off-by: Tejun Heo

    Tejun Heo
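
    The default definitions in mm/percpu.c look approximately like this
    (quoted from memory; archs that need a different mapping provide their
    own in asm/percpu.h):

    /* default addr <-> pcpu_ptr mapping, override in asm/percpu.h if necessary */
    #ifndef __addr_to_pcpu_ptr
    #define __addr_to_pcpu_ptr(addr)                                        \
            (void *)((unsigned long)(addr) - (unsigned long)pcpu_base_addr  \
                     + (unsigned long)__per_cpu_start)
    #define __pcpu_ptr_to_addr(ptr)                                         \
            (void *)((unsigned long)(ptr) + (unsigned long)pcpu_base_addr   \
                     - (unsigned long)__per_cpu_start)
    #endif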
     

07 Mar, 2009

1 commit

  • Impact: fix deadlock and allow atomic free

    Percpu allocation always uses GFP_KERNEL and the whole alloc/free path
    was protected by a single mutex. All percpu allocations have been from
    GFP_KERNEL-safe context and the original allocator had this assumption
    too. However, by protecting both the alloc and free paths with the
    same mutex, the new allocator creates a free -> alloc -> GFP_KERNEL
    dependency which the original allocator didn't have. This can lead to
    deadlock if free is called from FS or IO paths. Also, in general,
    allocators are expected to allow free to be called from atomic
    context.

    This patch implements finer grained locking to break the deadlock and
    allow atomic free. For details, please read the "Synchronization
    rules" comment.

    While at it, also add CONTEXT: to function comments to describe which
    context they expect to be called from and what they do to it.

    This problem was reported by Thomas Gleixner and Peter Zijlstra.

    http://thread.gmane.org/gmane.linux.kernel/802384

    Signed-off-by: Tejun Heo
    Reported-by: Thomas Gleixner
    Reported-by: Peter Zijlstra

    Tejun Heo
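
    A condensed sketch of the resulting two-level locking; the real
    "Synchronization rules" comment in mm/percpu.c is the authoritative
    description and the function body below is abbreviated:

    static DEFINE_MUTEX(pcpu_alloc_mutex);  /* protects alloc and reclaim paths */
    static DEFINE_SPINLOCK(pcpu_lock);      /* protects index data structures */

    /* the free side only needs the spinlock, so it no longer nests inside
     * the mutex that guards GFP_KERNEL allocations and may be called from
     * atomic context */
    void free_percpu(void *ptr)
    {
            unsigned long flags;

            spin_lock_irqsave(&pcpu_lock, flags);
            /* chunk lookup and area map update elided in this sketch */
            spin_unlock_irqrestore(&pcpu_lock, flags);
    }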
     

06 Mar, 2009

1 commit