17 Feb, 2010

1 commit

  • Add __percpu sparse annotations to core subsystems.

    These annotations are to make sparse consider percpu variables to be
    in a different address space and warn if accessed without going
    through percpu accessors. This patch doesn't affect normal builds.

    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter
    Acked-by: Paul E. McKenney
    Cc: Jens Axboe
    Cc: linux-mm@kvack.org
    Cc: Rusty Russell
    Cc: Dipankar Sarma
    Cc: Peter Zijlstra
    Cc: Andrew Morton
    Cc: Eric Biederman

    Tejun Heo
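
    For illustration, a minimal sketch (not the patch itself) of how such an
    address-space annotation is typically defined for sparse and why the
    accessors are required; the address-space number and the variable names
    are assumptions:

        /* No-op for normal builds; only sparse (__CHECKER__) sees it. */
        #ifdef __CHECKER__
        # define __percpu  __attribute__((noderef, address_space(3)))
        #else
        # define __percpu
        #endif

        /* needs <linux/percpu.h> */
        static int example_update(unsigned int cpu)
        {
                int __percpu *counters = alloc_percpu(int);

                if (!counters)
                        return -ENOMEM;

                *per_cpu_ptr(counters, cpu) += 1;  /* ok: accessor casts away __percpu */
                /* *counters += 1; */              /* sparse would warn: wrong address space */

                free_percpu(counters);
                return 0;
        }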
     

12 Jan, 2010

1 commit

  • __pcpu_ptr_to_addr() can be overridden by the architecture and might not
    behave well if passed a NULL pointer. So avoid calling it until we have
    verified that its arg is not NULL.

    Cc: Rusty Russell
    Cc: Kamalesh Babulal
    Acked-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
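
    For illustration, a sketch of the shape of the fix described above
    (paraphrased from the description, not the exact diff):

        void free_percpu(void *ptr)
        {
                void *addr;

                if (!ptr)
                        return;         /* never hand NULL to the arch-overridable hook */

                addr = __pcpu_ptr_to_addr(ptr);

                /* ... look up the chunk containing addr and free the area ... */
        }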
     

25 Nov, 2009

1 commit

  • o kdump functionality reserves a per cpu area at boot time and exports the
    physical address of that area to user space through the sysfs interface.
    This area stores some dump related information, like cpu register states,
    at the time of crash.

    o We were assuming that the per cpu area always comes from the linearly
    mapped memory region and were using __pa() to determine the physical
    address. With percpu_alloc=page, the per cpu area can also come from the
    vmalloc region, where __pa() breaks.

    o This patch implements a new function to convert a per cpu address to
    a physical address.

    Before the patch, crash_notes addresses looked as follows.

    cpu0 60fffff49800
    cpu1 60fffff60800
    cpu2 60fffff77800

    These are bogus physical addresses.

    After the patch, the addresses are as follows.

    cpu0 13eb44000
    cpu1 13eb43000
    cpu2 13eb42000
    cpu3 13eb41000

    These look fine. I have 4G of memory and /proc/iomem tells me the following.

    100000000-13fffffff : System RAM

    tj: * added missing asm/io.h include reported by Stephen Rothwell
    * repositioned per_cpu_ptr_phys() in percpu.c and added comment.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Tejun Heo
    Cc: Stephen Rothwell

    Vivek Goyal
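
    For illustration, a simplified sketch of such a conversion; the helper
    name here is hypothetical and the real function in mm/percpu.c may
    differ in detail:

        #include <linux/mm.h>           /* is_vmalloc_addr(), offset_in_page() */
        #include <linux/vmalloc.h>      /* vmalloc_to_page() */
        #include <asm/io.h>             /* page_to_phys() */

        static phys_addr_t pcpu_addr_to_phys(void *addr)
        {
                /* percpu_alloc=page puts the per cpu area in the vmalloc
                 * range, where __pa() is not valid */
                if (is_vmalloc_addr(addr))
                        return page_to_phys(vmalloc_to_page(addr)) +
                               offset_in_page(addr);

                /* linearly mapped memory: plain __pa() works */
                return __pa(addr);
        }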
     

12 Nov, 2009

1 commit

  • pcpu_extend_area_map() had the following two bugs.

    * It should return 1 if pcpu_lock was dropped and reacquired but it
    returned 0. This could lead to oops if free_percpu() races with
    area map extension.

    * pcpu_mem_free() was called under pcpu_lock. pcpu_mem_free() might
    end up calling vfree() which isn't IRQ safe. This could lead to
    deadlock through lock order inversion via IRQ.

    In addition, Linus pointed out that the temporary lock dropping and
    subtle three-way return value of pcpu_extend_area_map() was very ugly
    and suggested to split the function into two - pcpu_need_to_extend()
    and pcpu_extend_area_map().

    This patch restructures pcpu_extend_area_map() as suggested and fixes
    the two bugs.

    Signed-off-by: Tejun Heo
    Acked-by: Linus Torvalds
    Cc: Ingo Molnar

    Tejun Heo
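
    For illustration, a sketch of the resulting calling pattern in the
    allocation path (simplified; the surrounding identifiers and error
    handling are assumptions):

        spin_lock_irqsave(&pcpu_lock, flags);

        while ((new_alloc = pcpu_need_to_extend(chunk))) {
                /* extension may allocate/vfree, so drop the spinlock */
                spin_unlock_irqrestore(&pcpu_lock, flags);
                if (pcpu_extend_area_map(chunk, new_alloc) < 0)
                        return -ENOMEM;                 /* lock not held here */
                spin_lock_irqsave(&pcpu_lock, flags);
                /* re-check: the chunk may have changed while unlocked */
        }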
     

29 Oct, 2009

1 commit

  • Make the following changes to remove some sparse warnings.

    * Make DEFINE_PER_CPU_SECTION() declare __pcpu_unique_* before
    defining it.

    * Annotate pcpu_extend_area_map() that it is entered with pcpu_lock
    held, releases it and then reacquires it.

    * Make percpu related macros use unique nested variable names.

    * While at it, add pcpu prefix to __size_call[_return]() macros as
    to-be-implemented sparse annotations will add percpu specific stuff
    to these macros.

    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter
    Cc: Rusty Russell

    Tejun Heo
     

28 Oct, 2009

1 commit

  • pcpu_alloc() and pcpu_extend_area_map() perform a series of
    spin_lock_irq()/spin_unlock_irq() calls, which make them unsafe
    with respect to being called from contexts which have IRQs off.

    This patch converts the code to save and restore the IRQ flags instead,
    allowing pcpu_alloc() (and therefore __alloc_percpu()) to be called
    from the early kernel startup stage, where IRQs are off.

    This is needed for proper initialization of per-cpu rq_weight data from
    sched_init().

    tj: added comment explaining why irqsave/restore is used in alloc path.

    Signed-off-by: Jiri Kosina
    Acked-by: Ingo Molnar
    Signed-off-by: Tejun Heo

    Jiri Kosina
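
    The conversion boils down to the usual pattern below (sketch);
    irqsave/irqrestore preserves the caller's IRQ state instead of
    unconditionally re-enabling interrupts on unlock:

        /* before: only safe if IRQs are known to be enabled on entry */
        spin_lock_irq(&pcpu_lock);
        /* ... critical section ... */
        spin_unlock_irq(&pcpu_lock);            /* always re-enables IRQs */

        /* after: also safe when the caller already has IRQs disabled */
        unsigned long flags;

        spin_lock_irqsave(&pcpu_lock, flags);
        /* ... critical section ... */
        spin_unlock_irqrestore(&pcpu_lock, flags);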
     

29 Sep, 2009

5 commits

  • Warn and dump stack when percpu allocation fails. The percpu allocator is
    still young and unchecked use of a NULL percpu pointer can result in
    random memory corruption when combined with the pointer shifting in the
    access macros. Allocation failures should be rare, and the warning
    message is disabled after it has been printed a certain number of times.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • The parameters to pcpu_setup_first_chunk() come from different sources
    depending on architecture and can be quite complex. The function runs
    various sanity checks on the parameters and triggers BUG() if
    something isn't right. However, this is very early during the boot
    and not reporting exactly what the problem is makes debugging even
    harder.

    Add a PCPU_SETUP_BUG() macro which prints out enough information about
    the parameters. As the macro still issues a separate BUG() for each
    check, no information is lost even in situations where only
    the program counter can be retrieved.

    While at it, also bump pcpu_dump_alloc_info() message to KERN_INFO so
    that it's visible on the console if boot fails to complete.

    Signed-off-by: Tejun Heo

    Tejun Heo
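
    One plausible shape for such a macro (a sketch under assumptions; the
    exact name, messages and checks may differ):

        #define PCPU_SETUP_BUG_ON(cond) do {                                    \
                if (unlikely(cond)) {                                           \
                        pr_emerg("PERCPU: failed to initialize, %s\n", #cond);  \
                        pcpu_dump_alloc_info(KERN_EMERG, ai);                   \
                        BUG();                                                  \
                }                                                               \
        } while (0)

        /* used inside pcpu_setup_first_chunk() */
        PCPU_SETUP_BUG_ON(ai->nr_groups <= 0);
        PCPU_SETUP_BUG_ON(!ai->static_size);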
     
  • The embedding first chunk allocator maintains the distances between units
    in the vmalloc area and thus needs the vmalloc space to be larger than the
    maximum distance between units; otherwise, it wouldn't be able to
    create any dynamic chunks. This patch makes the embedding first chunk
    allocator check the vmalloc space size; if the maximum distance between
    units is larger than 75% of it, a warning is printed and, if the page
    mapping allocator is available, initialization fails so that the system
    falls back onto it.

    This should work around percpu allocation failure problems on certain
    sparc64 configurations where distances between NUMA nodes are larger
    than the vmalloc area and makes percpu allocator more robust for
    future configurations.

    Signed-off-by: Tejun Heo

    Tejun Heo
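
    For illustration, a sketch of the check (messages and the exact cut-off
    handling are assumptions):

        /* max_distance: largest offset spanned by the unit groups */
        if (max_distance > (VMALLOC_END - VMALLOC_START) * 3 / 4) {
                pr_warning("PERCPU: max_distance=0x%zx too large for vmalloc "
                           "space 0x%lx\n", max_distance,
                           (unsigned long)(VMALLOC_END - VMALLOC_START));
        #ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
                /* fail so the caller falls back to the page allocator */
                return -EINVAL;
        #endif
        }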
     
  • pcpu_build_alloc_info() may be called multiple times when percpu is
    falling back to a different first chunk allocator. Make it clear its
    static buffers so that they don't contain values from previous runs.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • pcpu_setup_first_chunk() incorrectly used NR_CPUS as the impossible
    unit number, while the unit number can equal or exceed NR_CPUS with a
    sparse unit map. This triggers BUG_ON() spuriously on machines which
    have a non-power-of-two number of cpus. Use UINT_MAX instead.

    Signed-off-by: Tejun Heo
    Reported-and-tested-by: Tony Vroon

    Tejun Heo
     

01 Sep, 2009

1 commit

  • The percpu code incorrectly assumed that cpu0 was always present, which
    led to the following warning and eventual oops on sparc machines
    without cpu0.

    WARNING: at mm/percpu.c:651 pcpu_map+0xdc/0x100()
    Modules linked in:
    Call Trace:
    [000000000045eb70] warn_slowpath_common+0x50/0xa0
    [000000000045ebdc] warn_slowpath_null+0x1c/0x40
    [00000000004d493c] pcpu_map+0xdc/0x100
    [00000000004d59a4] pcpu_alloc+0x3e4/0x4e0
    [00000000004d5af8] __alloc_percpu+0x18/0x40
    [00000000005b112c] __percpu_counter_init+0x4c/0xc0
    ...
    Unable to handle kernel NULL pointer dereference
    ...
    I7:
    Disabling lock debugging due to kernel taint
    Caller[000000000053c1b0]: sysfs_new_dirent+0x30/0x120
    Caller[000000000053c7a4]: create_dir+0x24/0xc0
    Caller[000000000053c870]: sysfs_create_dir+0x30/0x80
    Caller[00000000005990e8]: kobject_add_internal+0xc8/0x200
    ...
    Kernel panic - not syncing: Attempted to kill the idle task!

    This patch fixes the problem by backporting parts from devel branch to
    make percpu core not depend on the existence of cpu0.

    Signed-off-by: Tejun Heo
    Reported-by: Meelis Roos
    Cc: David Miller

    Tejun Heo
     

14 Aug, 2009

18 commits

  • With x86 converted to embedding allocator, lpage doesn't have any user
    left. Kill it along with cpa handling code.

    Signed-off-by: Tejun Heo
    Cc: Jan Beulich

    Tejun Heo
     
  • Now that percpu core can handle very sparse units, given that vmalloc
    space is large enough, embedding first chunk allocator can use any
    memory to build the first chunk. This patch teaches
    pcpu_embed_first_chunk() about distances between cpus and to use
    alloc/free callbacks to allocate node specific areas for each group
    and use them for the first chunk.

    This brings the benefits of the embedding allocator to NUMA
    configurations - no extra TLB pressure with the flexibility of the
    unified dynamic allocator and no need to restructure arch code to build
    a memory layout suitable for percpu. With units put into atom_size
    aligned groups according to cpu distances, using large pages for
    dynamic chunks also becomes easy, with a fallback to regular pages if
    the large allocation fails.

    Embedding allocator users are converted to specify NULL
    cpu_distance_fn, so this patch doesn't cause any visible behavior
    difference. Following patches will convert them.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • ai->groups[] contains which units need to be put consecutively and at
    what offset from the chunk base address. Compile this information
    into pcpu_group_offsets[] and pcpu_group_sizes[] in
    pcpu_setup_first_chunk() and use them to allocate sparse vm areas
    using pcpu_get_vm_areas().

    This will be used to allow directly using sparse NUMA memories as
    percpu areas.

    Signed-off-by: Tejun Heo
    Cc: Nick Piggin

    Tejun Heo
     
  • The only thing percpu allocator wants to know about a vmalloc area is
    the base address. Instead of requiring chunk->vm, add
    chunk->base_addr which contains the necessary value. This simplifies
    the code a bit and makes the dummy first_vm unnecessary. This change
    will ease allowing a chunk to be mapped by multiple vms.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • Currently units are mapped sequentially into address space. This
    patch adds pcpu_unit_offsets[] which allows units to be mapped to
    arbitrary offsets from the chunk base address. This is necessary to
    allow sparse embedding, which might need to allocate address
    ranges and memory areas which aren't aligned to the unit size but
    to the allocation atom size (page or large page size). This also
    simplifies things a bit by removing the need to calculate the offset
    from the unit number.

    With this change, there's no need for the arch code to know
    pcpu_unit_size. Update pcpu_setup_first_chunk() and first chunk
    allocators to return regular 0 or -errno return code instead of unit
    size or -errno.

    Signed-off-by: Tejun Heo
    Cc: David S. Miller

    Tejun Heo
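
    The change to the addressing scheme can be summarised as (sketch):

        /* before: units sit at fixed strides from the chunk base */
        addr = chunk_base + unit * pcpu_unit_size + off;

        /* after: each unit carries its own offset, so groups can sit at
         * arbitrary, atom_size aligned positions */
        addr = chunk_base + pcpu_unit_offsets[unit] + off;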
     
  • Until now, the non-linear cpu->unit map was expressed using an integer
    array which maps each cpu to a unit and was used only by the lpage
    allocator. Although how many units have been placed in a single
    contiguous area (group) is known while building unit_map, the
    information is lost when the result is recorded into the unit_map
    array. For the lpage allocator, as all allocations are done by lpages
    and whether two adjacent lpages are in the same group or not is
    irrelevant, this didn't cause any problem. Non-linear cpu->unit
    mapping will be used for sparse embedding and this grouping
    information is necessary for that.

    This patch introduces pcpu_alloc_info which contains all the
    information necessary for initializing percpu allocator.
    pcpu_alloc_info contains array of pcpu_group_info which describes how
    units are grouped and mapped to cpus. pcpu_group_info also has
    base_offset field to specify its offset from the chunk's base address.
    pcpu_build_alloc_info() initializes this field as if all groups are
    allocated back-to-back as is currently done but this will be used to
    sparsely place groups.

    pcpu_alloc_info is a rather complex data structure which contains a
    flexible array which in turn points to nested cpu_map arrays.

    * pcpu_alloc_alloc_info() and pcpu_free_alloc_info() are provided to
    help dealing with pcpu_alloc_info.

    * pcpu_lpage_build_unit_map() is updated to build pcpu_alloc_info,
    generalized and renamed to pcpu_build_alloc_info().
    @cpu_distance_fn may be NULL indicating that all cpus are of
    LOCAL_DISTANCE.

    * pcpul_lpage_dump_cfg() is updated to process pcpu_alloc_info,
    generalized and renamed to pcpu_dump_alloc_info(). It now also
    prints which group each alloc unit belongs to.

    * pcpu_setup_first_chunk() now takes pcpu_alloc_info instead of the
    separate parameters. All first chunk allocators are updated to use
    pcpu_build_alloc_info() to build alloc_info and call
    pcpu_setup_first_chunk() with it. This has the side effect of
    packing units for sparse possible cpus, i.e. if cpus 0, 2 and 4 are
    possible, they'll be assigned units 0, 1 and 2 instead of 0, 2 and 4.

    * x86 setup_pcpu_lpage() is updated to deal with alloc_info.

    * sparc64 setup_per_cpu_areas() is updated to build alloc_info.

    Although the changes made by this patch are pretty pervasive, it
    doesn't cause any behavior difference other than packing of sparse
    cpus. It mostly changes how information is passed among
    initialization functions and makes room for more flexibility.

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: David Miller

    Tejun Heo
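
    For reference, a rough sketch of the new descriptors (the field list is
    a best-effort reconstruction, not authoritative):

        struct pcpu_group_info {
                int             nr_units;      /* aligned # of units in the group */
                unsigned long   base_offset;   /* offset from the chunk base address */
                unsigned int    *cpu_map;      /* unit -> cpu map; unused entries
                                                 * hold an impossible cpu number */
        };

        struct pcpu_alloc_info {
                size_t          static_size;
                size_t          reserved_size;
                size_t          dyn_size;
                size_t          unit_size;
                size_t          atom_size;
                size_t          alloc_size;
                int             nr_groups;
                struct pcpu_group_info groups[];        /* flexible array */
        };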
     
  • Unit map handling will be generalized and extended and used for
    embedding sparse first chunk and other purposes. Relocate two
    unit_map related functions upward in preparation. This patch just
    moves the code without any actual change.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • pcpu_fc_alloc_fn_t is about to see more interesting usage, add @align
    parameter.

    Signed-off-by: Tejun Heo

    Tejun Heo
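
    The callback type then looks roughly like this (sketch):

        typedef void * (*pcpu_fc_alloc_fn_t)(unsigned int cpu, size_t size,
                                             size_t align);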
     
  • Now that all actual first chunk allocation and copying happen in the
    first chunk allocators and helpers, there's no reason for
    pcpu_setup_first_chunk() to try to determine @dyn_size automatically.
    The only left user is page first chunk allocator. Make it determine
    dyn_size like other allocators and make @dyn_size mandatory for
    pcpu_setup_first_chunk().

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • First chunk allocators assume percpu areas have been linked using one
    of PERCPU_*() macros and depend on __per_cpu_load symbol defined by
    those macros, so there isn't much point in passing in static area size
    explicitly when it can be easily calculated from __per_cpu_start and
    __per_cpu_end. Drop @static_size from all percpu first chunk
    allocators and helpers.

    Signed-off-by: Tejun Heo

    Tejun Heo
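
    The static size can instead be derived from the linker-provided symbols,
    along these lines:

        #include <asm/sections.h>       /* __per_cpu_start, __per_cpu_end */

        const size_t static_size = __per_cpu_end - __per_cpu_start;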
     
  • Now that all first chunk allocators are in mm/percpu.c, it makes sense
    to generalize the percpu_alloc kernel parameter. Define PCPU_FC_*
    and set pcpu_chosen_fc using early_param() in mm/percpu.c. Arch code
    can use the set value to determine which first chunk allocator to use.

    Signed-off-by: Tejun Heo

    Tejun Heo
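
    For illustration, a sketch of how such an early parameter is typically
    wired up (the enum values and fallback behaviour are assumptions):

        enum pcpu_fc {
                PCPU_FC_AUTO,
                PCPU_FC_EMBED,
                PCPU_FC_PAGE,
                PCPU_FC_NR,
        };

        enum pcpu_fc pcpu_chosen_fc __initdata = PCPU_FC_AUTO;

        static int __init percpu_alloc_setup(char *str)
        {
                if (!str)
                        return 0;
                if (!strcmp(str, "embed"))
                        pcpu_chosen_fc = PCPU_FC_EMBED;
                else if (!strcmp(str, "page"))
                        pcpu_chosen_fc = PCPU_FC_PAGE;
                /* anything else keeps the automatic selection */
                return 0;
        }
        early_param("percpu_alloc", percpu_alloc_setup);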
     
  • There's no need to build unused first chunk allocators in. Define
    CONFIG_NEED_PER_CPU_*_FIRST_CHUNK and let archs enable them
    selectively.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • Page size isn't always 4k; it depends on the arch and configuration.
    Rename the 4k first chunk allocator to page.

    Signed-off-by: Tejun Heo
    Cc: David Howells

    Tejun Heo
     
  • Improve percpu boot messages such that they're uniform and contain
    more information.

    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter

    Tejun Heo
     
  • pcpu_reclaim() calls pcpu_depopulate_chunk() which makes use of the
    pages array and bitmap returned by pcpu_get_pages_and_bitmap() and thus
    should be called under pcpu_alloc_mutex. pcpu_reclaim() released the
    mutex before calling depopulate, leading to double frees and other
    strange problems caused by unexpected concurrent use of the pages
    array and bitmap. Fix it.

    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter

    Tejun Heo
     
  • Conflicts:
    arch/sparc/kernel/smp_64.c
    arch/x86/kernel/cpu/perf_counter.c
    arch/x86/kernel/setup_percpu.c
    drivers/cpufreq/cpufreq_ondemand.c
    mm/percpu.c

    Conflicts in core and arch percpu codes are mostly from commit
    ed78e1e078dd44249f88b1dd8c76dafb39567161 which substituted many
    num_possible_cpus() with nr_cpu_ids. As for-next branch has moved all
    the first chunk allocators into mm/percpu.c, the changes are moved
    from arch code to mm/percpu.c.

    Signed-off-by: Tejun Heo

    Tejun Heo
     
  • get_vm_area() only accepts VM_* flags, not GFP_* flags.

    And according to the documentation of get_vm_area(), VM_ALLOC should
    be used here.

    Signed-off-by: WANG Cong
    Acked-by: Tejun Heo
    Cc: Ingo Molnar

    Amerigo Wang
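
    In other words (sketch):

        struct vm_struct *area;

        /* wrong: GFP_KERNEL is an allocation flag, not a vm area flag */
        /* area = get_vm_area(size, GFP_KERNEL); */

        /* right: get_vm_area() expects VM_* flags */
        area = get_vm_area(size, VM_ALLOC);
        if (!area)
                return NULL;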
     
  • The percpu code has been assuming num_possible_cpus() == nr_cpu_ids,
    which is incorrect if cpu_possible_map contains holes. This causes the
    percpu code to access beyond the allocated memory and vmalloc areas. On
    a sparc64 machine with cpus 0 and 2 (u60), this triggers the following
    warning or fails boot.

    WARNING: at /devel/tj/os/work/mm/vmalloc.c:106 vmap_page_range_noflush+0x1f0/0x240()
    Modules linked in:
    Call Trace:
    [00000000004b17d0] vmap_page_range_noflush+0x1f0/0x240
    [00000000004b1840] map_vm_area+0x20/0x60
    [00000000004b1950] __vmalloc_area_node+0xd0/0x160
    [0000000000593434] deflate_init+0x14/0xe0
    [0000000000583b94] __crypto_alloc_tfm+0xd4/0x1e0
    [00000000005844f0] crypto_alloc_base+0x50/0xa0
    [000000000058b898] alg_test_comp+0x18/0x80
    [000000000058dad4] alg_test+0x54/0x180
    [000000000058af00] cryptomgr_test+0x40/0x60
    [0000000000473098] kthread+0x58/0x80
    [000000000042b590] kernel_thread+0x30/0x60
    [0000000000472fd0] kthreadd+0xf0/0x160
    ---[ end trace 429b268a213317ba ]---

    This patch fixes generic percpu functions and sparc64
    setup_per_cpu_areas() so that they handle sparse cpu_possible_map
    properly.

    Please note that on x86, cpu_possible_map doesn't contain holes and
    thus num_possible_cpus() == nr_cpu_ids, so this patch doesn't cause
    any behavior difference.

    Signed-off-by: Tejun Heo
    Acked-by: David S. Miller
    Cc: Ingo Molnar

    Tejun Heo
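
    The distinction matters for sizing and iteration; a sketch (the helper
    name is hypothetical):

        /* With cpus 0 and 2 possible, num_possible_cpus() == 2 but valid
         * cpu numbers go up to nr_cpu_ids - 1 == 2, so per-cpu arrays
         * indexed by cpu number need nr_cpu_ids entries. */
        void **ptrs = kcalloc(nr_cpu_ids, sizeof(*ptrs), GFP_KERNEL);
        unsigned int cpu;

        if (!ptrs)
                return -ENOMEM;

        for_each_possible_cpu(cpu)
                ptrs[cpu] = pcpu_setup_one(cpu);        /* hypothetical helper */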
     

04 Jul, 2009

2 commits

  • The large page first chunk allocator is primarily used for NUMA machines;
    however, its NUMA handling is extremely simplistic. Regardless of
    their proximity, each cpu is put into a separate large page just to
    return most of the allocated space back, wasting a large amount of
    vmalloc space and increasing the cache footprint.

    This patch teaches NUMA details to the large page allocator. Given
    processor proximity information, pcpu_lpage_build_unit_map() will find
    a fitting cpu -> unit mapping in which cpus within LOCAL_DISTANCE share
    the same large page and not too much virtual address space is wasted.

    This greatly reduces the unit and thus chunk size and wastes much less
    address space for the first chunk. For example, on 4/4 NUMA machine,
    the original code occupied 16MB of virtual space for the first chunk
    while the new code only uses 4MB - one 2MB page for each node.

    [ Impact: much better space efficiency on NUMA machines ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: Jan Beulich
    Cc: Andi Kleen
    Cc: David Miller

    Tejun Heo
     
  • Currently cpu and unit are always identity mapped. To allow more
    efficient large page support on NUMA and lazy allocation for possible
    but offline cpus, cpu -> unit mapping needs to be non-linear and/or
    sparse. This can be easily implemented by adding a cpu -> unit
    mapping array and using it whenever looking up the matching unit for a
    cpu.

    The only unusual conversion is in pcpu_chunk_addr_search(). The
    passed-in address is unit0 based and unit0 might not be in use, so it
    needs to be converted to the address of an in-use unit. This is easily
    done by adding the unit offset for the current processor.

    [ Impact: allows non-linear/sparse cpu -> unit mapping, no visible change yet ]

    Signed-off-by: Tejun Heo
    Cc: Ingo Molnar
    Cc: David Miller

    Tejun Heo