24 Feb, 2013

1 commit

  • Make a sweep through mm/ and convert code that uses -1 directly to using
    the more appropriate NUMA_NO_NODE.
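
    A minimal sketch of the conversion (hypothetical call site;
    vmalloc_node() is just one of the affected APIs):

        #include <linux/numa.h>         /* NUMA_NO_NODE */
        #include <linux/vmalloc.h>

        /* before: the magic constant -1 meant "no node preference" */
        buf = vmalloc_node(size, -1);

        /* after: the intent is explicit and greppable */
        buf = vmalloc_node(size, NUMA_NO_NODE);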

    Signed-off-by: David Rientjes
    Reviewed-by: Yasuaki Ishimatsu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

09 Oct, 2012

2 commits

  • In the paranoid case of sysctl kernel.kptr_restrict=2, mask the kernel
    virtual addresses in /proc/vmallocinfo too.
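
    A sketch of the mechanism, assuming the fix uses the %pK vsprintf
    extension, which prints zeros in place of the pointer when
    kptr_restrict forbids exposing it:

        /* in s_show() (mm/vmalloc.c): %pK masks the addresses for
         * readers that kptr_restrict says may not see them */
        seq_printf(m, "0x%pK-0x%pK %7ld",
                   v->addr, v->addr + v->size, v->size);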

    Signed-off-by: Kees Cook
    Reported-by: Brad Spengler
    Acked-by: KOSAKI Motohiro
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook
     
  • A long time ago, in v2.4, VM_RESERVED kept swapout process off VMA,
    currently it lost original meaning but still has some effects:

     | effect                 | alternative flags
    -+------------------------+---------------------------------------------
    1| account as reserved_vm | VM_IO
    2| skip in core dump      | VM_IO, VM_DONTDUMP
    3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
    4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP

    This patch removes the reserved_vm counter from mm_struct. Nobody seems
    to care about it: it is not exported to userspace directly, it only
    reduces the total_vm shown in /proc.

    Thus VM_RESERVED can be replaced with VM_IO or the pair
    VM_DONTEXPAND | VM_DONTDUMP.

    remap_pfn_range() and io_remap_pfn_range() set
    VM_IO | VM_DONTEXPAND | VM_DONTDUMP.
    remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.
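
    A minimal sketch of the driver-side conversion (hypothetical mmap
    handler that only relied on "don't expand, don't dump" semantics):

        vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;   /* was: VM_RESERVED */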

    [akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
    Signed-off-by: Konstantin Khlebnikov
    Cc: Alexander Viro
    Cc: Carsten Otte
    Cc: Chris Metcalf
    Cc: Cyrill Gorcunov
    Cc: Eric Paris
    Cc: H. Peter Anvin
    Cc: Hugh Dickins
    Cc: Ingo Molnar
    Cc: James Morris
    Cc: Jason Baron
    Cc: Kentaro Takeda
    Cc: Matt Helsley
    Cc: Nick Piggin
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Robert Richter
    Cc: Suresh Siddha
    Cc: Tetsuo Handa
    Cc: Venkatesh Pallipadi
    Acked-by: Linus Torvalds
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konstantin Khlebnikov
     

01 Aug, 2012

2 commits

  • If someone calls vb_alloc() (or vm_map_ram() for that matter) to allocate
    0 bytes (0 pages), get_order() returns BITS_PER_LONG - PAGE_CACHE_SHIFT
    and interesting stuff happens. So make debugging such problems easier and
    warn about 0-size allocation.
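
    A sketch of the added check, using the WARN_ON() return value (see the
    akpm note below); the exact placement is an assumption:

        /* in vb_alloc(): bail out early rather than feed 0 to get_order() */
        if (WARN_ON(size == 0))
                return NULL;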

    [akpm@linux-foundation.org: use WARN_ON-return-value feature]
    Signed-off-by: Jan Kara
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • The free-hole search walks the tree by repeatedly calling rb_next().
    It can simply be replaced by a walk over the sorted vmap_area_list,
    which is both simpler and more efficient.

    Mutation of the list and tree only happens in pair within
    __insert_vmap_area and __free_vmap_area, under protection of
    vmap_area_lock. The patch code is also under vmap_area_lock, so the list
    walk is safe, and consistent with the tree walk.
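
    A sketch of the replacement walk (hole-finding logic simplified; list
    and lock names as in mm/vmalloc.c):

        struct vmap_area *va;

        /* vmap_area_list is sorted by address, so candidate holes are
         * visited in order; the caller holds vmap_area_lock */
        list_for_each_entry(va, &vmap_area_list, list) {
                if (addr + size <= va->va_start)
                        break;          /* the hole before this area fits */
                addr = va->va_end;      /* otherwise try after this area */
        }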

    Tested on SMP by repeating batches of vmalloc and vfree with random
    sizes and rounds for hours.

    Signed-off-by: Hong Zhiguo
    Cc: Nick Piggin
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hong zhi guo
     

31 Jul, 2012

1 commit

  • Pull DMA-mapping updates from Marek Szyprowski:
    "Those patches are continuation of my earlier work.

    They contains extensions to DMA-mapping framework to remove limitation
    of the current ARM implementation (like limited total size of DMA
    coherent/write combine buffers), improve performance of buffer sharing
    between devices (attributes to skip cpu cache operations or creation
    of additional kernel mapping for some specific use cases) as well as
    some unification of the common code for dma_mmap_attrs() and
    dma_mmap_coherent() functions. All extensions have been implemented
    and tested for ARM architecture."

    * 'for-linus-for-3.6-rc1' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
    ARM: dma-mapping: add support for DMA_ATTR_SKIP_CPU_SYNC attribute
    common: DMA-mapping: add DMA_ATTR_SKIP_CPU_SYNC attribute
    ARM: dma-mapping: add support for dma_get_sgtable()
    common: dma-mapping: introduce dma_get_sgtable() function
    ARM: dma-mapping: add support for DMA_ATTR_NO_KERNEL_MAPPING attribute
    common: DMA-mapping: add DMA_ATTR_NO_KERNEL_MAPPING attribute
    common: dma-mapping: add support for generic dma_mmap_* calls
    ARM: dma-mapping: fix error path for memory allocation failure
    ARM: dma-mapping: add more sanity checks in arm_dma_mmap()
    ARM: dma-mapping: remove custom consistent dma region
    mm: vmalloc: use const void * for caller argument
    scatterlist: add sg_alloc_table_from_pages function

    Linus Torvalds
     

30 Jul, 2012

2 commits

  • This patch changes the dma-mapping subsystem to use generic vmalloc
    areas for all consistent dma allocations. This increases the total size
    limit of consistent allocations and removes platform hacks and a lot of
    duplicated code.

    Atomic allocations are served from a special pool preallocated at boot,
    because vmalloc areas cannot be reliably created in atomic context.

    Signed-off-by: Marek Szyprowski
    Reviewed-by: Kyungmin Park
    Reviewed-by: Minchan Kim

    Marek Szyprowski
     
  • 'const void *' is a safer type for the caller argument. This patch
    updates all references to the caller function type accordingly.
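
    A sketch of the type change on one affected prototype (the others
    follow the same pattern):

        /* before */
        struct vm_struct *get_vm_area_caller(unsigned long size,
                                             unsigned long flags,
                                             void *caller);
        /* after */
        struct vm_struct *get_vm_area_caller(unsigned long size,
                                             unsigned long flags,
                                             const void *caller);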

    Signed-off-by: Marek Szyprowski
    Reviewed-by: Kyungmin Park
    Reviewed-by: Minchan Kim

    Marek Szyprowski
     

30 May, 2012

2 commits

  • The transfer of ->flags causes some of the static mapping virtual
    addresses to be prematurely freed (before the mapping is removed) because
    VM_LAZY_FREE gets "set" if tmp->flags has VM_IOREMAP set. This might
    cause subsequent vmalloc/ioremap calls to fail because it might allocate
    one of the freed virtual address ranges that aren't unmapped.

    va->flags has different types of flags from tmp->flags. If a region with
    VM_IOREMAP set is registered with vm_area_add_early(), it will be removed
    by __purge_vmap_area_lazy().

    Fix vmalloc_init() to correctly initialize vmap_area for the given
    vm_struct.

    Also initialise va->vm. If it is not set, find_vm_area() for the early
    vm regions will always fail.
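
    A sketch of the corrected import loop in vmalloc_init(), close to what
    the fix does:

        struct vm_struct *tmp;
        struct vmap_area *va;

        for (tmp = vmlist; tmp; tmp = tmp->next) {
                va = kzalloc(sizeof(struct vmap_area), GFP_NOWAIT);
                va->flags = VM_VM_AREA;         /* not tmp->flags, which may
                                                   carry VM_IOREMAP etc. */
                va->va_start = (unsigned long)tmp->addr;
                va->va_end = va->va_start + tmp->size;
                va->vm = tmp;                   /* so find_vm_area() works */
                __insert_vmap_area(va);
        }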

    Signed-off-by: KyongHo Cho
    Cc: "Olav Haugan"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KyongHo
     
  • The advantage of kcalloc() is that it prevents integer overflows that
    could result from the multiplication of the number of elements by the
    element size, and it is also a bit nicer to read.
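
    A minimal sketch of the transformation (hypothetical allocation site):

        /* before: the multiplication can silently overflow */
        vas = kzalloc(sizeof(struct vm_struct *) * nr_vms, GFP_KERNEL);

        /* after: kcalloc() fails cleanly if the product overflows */
        vas = kcalloc(nr_vms, sizeof(struct vm_struct *), GFP_KERNEL);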

    The semantic patch that makes this change is available in
    https://lkml.org/lkml/2011/11/25/107

    Signed-off-by: Thomas Meyer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Meyer
     

13 Jan, 2012

1 commit

  • If either the vas or the vms array is not properly kzalloced, the code
    jumps to the err_free label.

    The err_free label runs a loop to check and free each member of the vas
    and vms arrays, which is not required in this situation, as none of the
    array members has been allocated by that point.

    Eliminate the extra loop we have to go through by introducing a new label
    err_free2 and then jumping to it.
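
    A sketch of the resulting error-path structure in pcpu_get_vm_areas()
    (simplified):

        vas = kcalloc(nr_vms, sizeof(vas[0]), GFP_KERNEL);
        vms = kcalloc(nr_vms, sizeof(vms[0]), GFP_KERNEL);
        if (!vas || !vms)
                goto err_free2;         /* nothing per-element allocated yet */

        /* ... later per-element failures jump to err_free ... */

        err_free:                       /* per-element cleanup */
                for (area = 0; area < nr_vms; area++) {
                        kfree(vas[area]);
                        kfree(vms[area]);
                }
        err_free2:                      /* array cleanup only */
                kfree(vas);
                kfree(vms);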

    [akpm@linux-foundation.org: remove now-unneeded tests]
    Signed-off-by: Kautuk Consul
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kautuk Consul
     

11 Jan, 2012

1 commit

  • vmap_area->private is void* but we don't use the field for various purpose
    but use only for vm_struct. So change it to a vm_struct* with naming to
    improve for readability and type checking.
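
    The resulting field, roughly:

        struct vmap_area {
                unsigned long va_start;
                unsigned long va_end;
                unsigned long flags;
                /* ... rbtree/list linkage ... */
                struct vm_struct *vm;   /* was: void *private */
        };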

    Signed-off-by: Minchan Kim
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     

09 Dec, 2011

1 commit

  • Commit f5252e00 ("mm: avoid null pointer access in vm_struct via
    /proc/vmallocinfo") adds newly allocated vm_structs to the vmlist after
    they are fully initialised. Unfortunately, it did not check that
    __vmalloc_area_node() successfully populated the area. In the event of
    allocation failure, the vmalloc area is freed but the pointer to the
    freed memory is inserted into the vmlist, leading to a crash later in
    get_vmalloc_info().

    This patch adds a check for __vmalloc_area_node() failure within
    __vmalloc_node_range. It does not use "goto fail" as in the previous
    error path, because a warning was already displayed by
    __vmalloc_area_node() before it called vfree() in its failure path.
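
    A sketch of the added check, assuming this shape of
    __vmalloc_node_range():

        addr = __vmalloc_area_node(area, gfp_mask, prot, node, caller);
        if (!addr)
                return NULL;    /* area already warned about and vfreed;
                                   do not insert it into vmlist */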

    Credit goes to Luciano Chavez for doing all the real work of identifying
    exactly where the problem was.

    Signed-off-by: Mel Gorman
    Reported-by: Luciano Chavez
    Tested-by: Luciano Chavez
    Reviewed-by: Rik van Riel
    Acked-by: David Rientjes
    Cc: [3.1.x+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

19 Nov, 2011

1 commit

  • The existing vm_area_register_early() allows for early vmalloc space
    allocation. However, upcoming cleanups in the ARM architecture require
    that some fixed locations in the vmalloc area also be reserved very
    early.

    The name "vm_area_register_early" would have been a good name for the
    reservation part without the allocation. Since it is already in use with
    different semantics, let's create vm_area_add_early() instead.

    Both vm_area_register_early() and vm_area_add_early() can be used
    together, meaning that the former is now implemented using the latter.
    It is ensured that no conflicting areas are added, but no attempt is
    made to make the allocation scheme in vm_area_register_early() more
    sophisticated. After all, you must know what you're doing when using
    those functions.

    Signed-off-by: Nicolas Pitre
    Acked-by: Andrew Morton
    Cc: linux-mm@kvack.org

    Nicolas Pitre
     

17 Nov, 2011

1 commit

  • When mapping a foreign page with xenbus_map_ring_valloc() with the
    GNTTABOP_map_grant_ref hypercall, set the GNTMAP_contains_pte flag and
    pass a pointer to the PTE (in init_mm).

    After the page is mapped, the usual fault mechanism can be used to
    update additional MMs. This allows vmalloc_sync_all() to be removed
    from alloc_vm_area().

    Signed-off-by: David Vrabel
    Acked-by: Andrew Morton
    [v1: Squashed fix by Michal for no-mmu case]
    Signed-off-by: Konrad Rzeszutek Wilk
    Signed-off-by: Michal Simek

    David Vrabel
     

01 Nov, 2011

3 commits

  • Some vmalloc failure paths do not report OOM conditions.

    Add warn_alloc_failed(), which also does a dump_stack(), to those
    failure paths.

    This allows several site-specific vmalloc failure logging printks to
    be removed.
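
    A sketch of one converted failure path (hypothetical site):

        area = get_vm_area_caller(size, VM_ALLOC, caller);
        if (!area) {
                warn_alloc_failed(gfp_mask, 0,
                                  "vmalloc: allocation failure: %lu bytes\n",
                                  size);
                return NULL;
        }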

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Add __attribute__((format(printf, ...))) to warn_alloc_failed() to
    validate the format and arguments. Use the vsprintf extension %pV to
    avoid any possible message interleaving. Coalesce the format string.
    Convert printks/pr_warning to pr_warn.

    [akpm@linux-foundation.org: use the __printf() macro]
    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • The /proc/vmallocinfo file shows information about vmalloc allocations
    in vmlist, which is a linked list of vm_struct. It may, however, access
    the pages field of a vm_struct whose pages have not been allocated yet.
    This results in a null pointer access and leads to a kernel panic.

    Why this happens: in __vmalloc_node_range() called from vmalloc(), a
    newly allocated vm_struct is added to vmlist at __get_vm_area_node(),
    and only afterwards are fields of the vm_struct such as nr_pages and
    pages set at __vmalloc_area_node(). In other words, it is added to
    vmlist before it is fully initialized. If /proc/vmallocinfo is read at
    the same time, show_numa_info() accesses the pages field according to
    the nr_pages field, and a null pointer access happens.

    The patch adds the newly allocated vm_struct to the vmlist *after* it
    is fully initialized, so the pages field can no longer be accessed
    before the pages are allocated when show_numa_info() is called.

    Signed-off-by: Mitsuo Hayasaka
    Cc: Andrew Morton
    Cc: David Rientjes
    Cc: Namhyung Kim
    Cc: "Paul E. McKenney"
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mitsuo Hayasaka
     

15 Sep, 2011

1 commit

  • Xen backend drivers (e.g., blkback and netback) would sometimes fail to
    map grant pages into the vmalloc address space allocated with
    alloc_vm_area(). The GNTTABOP_map_grant_ref would fail because Xen could
    not find the page (in the L2 table) containing the PTEs it needed to
    update.

    (XEN) mm.c:3846:d0 Could not find L1 PTE for address fbb42000

    netback and blkback were making the hypercall from a kernel thread where
    task->active_mm != &init_mm and alloc_vm_area() was only updating the page
    tables for init_mm. The usual method of deferring the update to the page
    tables of other processes (i.e., after taking a fault) doesn't work as a
    fault cannot occur during the hypercall.

    This would work on some systems depending on what else was using vmalloc.

    Fix this by reverting ef691947d8a3 ("vmalloc: remove vmalloc_sync_all()
    from alloc_vm_area()") and adding a comment to explain why it's needed.

    Signed-off-by: David Vrabel
    Cc: Jeremy Fitzhardinge
    Cc: Konrad Rzeszutek Wilk
    Cc: Ian Campbell
    Cc: Keir Fraser
    Cc: [3.0.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Vrabel
     

15 Aug, 2011

1 commit

  • Commit db64fe02258f ("mm: rewrite vmap layer") introduced code that does
    address calculations under the assumption that VMAP_BLOCK_SIZE is a
    power of two. However, this might not be true if CONFIG_NR_CPUS is not
    set to a power of two.

    Wrong vmap_block index/offset values could lead to memory corruption.
    However, this has never been observed in practice (or never been
    diagnosed correctly); what caught this was the BUG_ON in vb_alloc() that
    checks for inconsistent vmap_block indices.

    To fix this, ensure that VMAP_BLOCK_SIZE always is a power of two.
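
    Why the power-of-two property matters: vmap block offsets are computed
    with mask arithmetic, which only equals a true modulo for power-of-two
    sizes. A standalone illustration:

        #include <stdio.h>

        int main(void)
        {
                unsigned long addr = 1000;
                unsigned long size = 96;        /* not a power of two */

                /* mask arithmetic vs. true modulo: prints 72 vs. 40 */
                printf("mask: %lu, mod: %lu\n",
                       addr & (size - 1), addr % size);
                return 0;
        }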

    BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=31572
    Reported-by: Pavel Kysilka
    Reported-by: Matias A. Fonzo
    Signed-off-by: Clemens Ladisch
    Signed-off-by: Stefan Richter
    Cc: Nick Piggin
    Cc: Jeremy Fitzhardinge
    Cc: Krzysztof Helt
    Cc: Andrew Morton
    Cc: 2.6.28+
    Signed-off-by: Linus Torvalds

    Clemens Ladisch
     

27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

27 May, 2011

1 commit

  • Merge branch 'upstream/tidy-xen-mmu-2.6.39' of
    git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen

    * 'upstream/tidy-xen-mmu-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen:
    xen: fix compile without CONFIG_XEN_DEBUG_FS
    Use arbitrary_virt_to_machine() to deal with ioremapped pud updates.
    Use arbitrary_virt_to_machine() to deal with ioremapped pmd updates.
    xen/mmu: remove all ad-hoc stats stuff
    xen: use normal virt_to_machine for ptes
    xen: make a pile of mmu pvop functions static
    vmalloc: remove vmalloc_sync_all() from alloc_vm_area()
    xen: condense everything onto xen_set_pte
    xen: use mmu_update for xen_set_pte_at()
    xen: drop all the special iomap pte paths.

    Linus Torvalds
     

25 May, 2011

2 commits

  • I was tracking down a page allocation failure that ended up in vmalloc().
    Since vmalloc() uses 0-order pages, if somebody asks for an insane amount
    of memory, we'll still get a warning with "order:0" in it. That's not
    very useful.

    During recovery, vmalloc() also nicely frees all of the memory that it got
    up to the point of the failure. That is wonderful, but it also quickly
    hides any issues. We have a much different situation if vmalloc()
    repeatedly fails 10GB into:

    vmalloc(100 * 1<] warn_alloc_failed+0x146/0x170
    [ 68.126464] [] ? printk+0x6c/0x70
    [ 68.126791] [] ? alloc_pages_current+0x94/0xe0
    [ 68.127661] [] __vmalloc_node_range+0x237/0x290
    ...

    The 'order' variable is added for clarity when calling warn_alloc_failed()
    to avoid having an unexplained '0' as an argument.

    The 'tmp_mask' is because adding an open-coded '| __GFP_NOWARN' would take
    us over 80 columns for the alloc_pages_node() call. If we are going to
    add a line, it might as well be one that makes the sucker easier to read.
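
    A hypothetical fragment following the description above:

        int order = 0;                  /* vmalloc() uses 0-order pages */
        gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;   /* keeps the call
                                                       under 80 columns */

        page = alloc_pages_node(node, tmp_mask, order);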

    As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
    solely on an unverified value passed in from userspace. Granted, it's
    under CAP_SYS_ADMIN, but it still frightens me a bit.

    Signed-off-by: Dave Hansen
    Cc: Johannes Weiner
    Cc: David Rientjes
    Cc: Michal Nazarewicz
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen
     
  • The vmap allocator is used to, among other things, allocate per-cpu vmap
    blocks, where each vmap block is naturally aligned to its own size.
    Obviously, leaving a guard page after each vmap area forbids packing vmap
    blocks efficiently and can make the kernel run out of possible vmap blocks
    long before overall vmap space is exhausted.

    The new interface to map a user-supplied page array into linear vmalloc
    space (vm_map_ram) insists on allocating from a vmap block (instead of
    falling back to a custom area) when the area size is below a certain
    threshold. With heavy users of this interface (e.g. XFS) and limited
    vmalloc space on 32-bit, vmap block exhaustion is a real problem.

    Remove the guard page from the core vmap allocator. vmalloc and the old
    vmap interface enforce a guard page on their own at a higher level.

    Note that without this patch, we had accidental guard pages after those
    vm_map_ram areas that happened to be at the end of a vmap block, but not
    between every area. This patch removes this accidental guard page only.

    If we want guard pages after every vm_map_ram area, this should be done
    separately. And just like with vmalloc and the old interface on a
    different level, not in the core allocator.

    Mel pointed out: "If necessary, the guard page could be reintroduced as a
    debugging-only option (CONFIG_DEBUG_PAGEALLOC?). Otherwise it seems
    reasonable."

    Signed-off-by: Johannes Weiner
    Cc: Nick Piggin
    Cc: Dave Chinner
    Acked-by: Mel Gorman
    Cc: Hugh Dickins
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

23 Mar, 2011

2 commits

  • KM_USER1 is never used in the vwrite() path, so the caller doesn't need
    to guarantee that it is unused. The only slot the caller should
    guarantee is KM_USER0, and that is commented already.

    Signed-off-by: Namhyung Kim
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Namhyung Kim
     
  • Provide a free area cache for the vmalloc virtual address allocator, based
    on the algorithm used by the user virtual memory allocator.

    This reduces the number of rbtree operations and linear traversals over
    the vmap extents in order to find a free area, by starting off at the last
    point that a free area was found.

    The free area cache is reset if areas are freed behind it, or if we are
    searching for a smaller area or alignment than last time. So allocation
    patterns are not changed (verified by corner-case and random test cases in
    userspace testing).

    This solves a regression caused by lazy vunmap TLB purging introduced in
    db64fe02 (mm: rewrite vmap layer). That patch will leave extents in the
    vmap allocator after they are vunmapped, and until a significant number
    accumulate that can be flushed in a single batch. So in a workload that
    vmalloc/vfree frequently, a chain of extents will build up from
    VMALLOC_START address, which have to be iterated over each time (giving an
    O(n) type of behaviour).

    After this patch, the search will start from where it left off, giving
    closer to an amortized O(1).

    This is verified to solve the regressions reported by Steven in GFS2,
    and by Avi in KVM.

    Hugh's update:

    : I tried out the recent mmotm, and on one machine was fortunate to hit
    : the BUG_ON(first->va_start < addr) which seems to have been stalling
    : your vmap area cache patch ever since May.

    : I can get you addresses etc, I did dump a few out; but once I stared
    : at them, it was easier just to look at the code: and I cannot see how
    : you would be so sure that first->va_start < addr, once you've done
    : that addr = ALIGN(max(...), align) above, if align is over 0x1000
    : (align was 0x8000 or 0x4000 in the cases I hit: ioremaps like Steve).

    : I originally got around it by just changing the
    : if (first->va_start < addr) {
    : to
    : while (first->va_start < addr) {
    : without thinking about it any further; but that seemed unsatisfactory,
    : why would we want to loop here when we've got another very similar
    : loop just below it?

    : I am never going to admit how long I've spent trying to grasp your
    : "while (n)" rbtree loop just above this, the one with the peculiar
    : if (!first && tmp->va_start < addr + size)
    : in. That's unfamiliar to me, I'm guessing it's designed to save a
    : subsequent rb_next() in a few circumstances (at risk of then setting
    : a wrong cached_hole_size?); but they did appear few to me, and I didn't
    : feel I could sign off something with that in when I don't grasp it,
    : and it seems responsible for extra code and mistaken BUG_ON below it.

    : I've reverted to the familiar rbtree loop that find_vma() does (but
    : with va_end >= addr as you had, to respect the additional guard page):
    : and then (given that cached_hole_size starts out 0) I don't see the
    : need for any complications below it. If you do want to keep that loop
    : as you had it, please add a comment to explain what it's trying to do,
    : and where addr is relative to first when you emerge from it.

    : Aren't your tests "size <= cached_hole_size" and "addr + size >
    : first->va_start" forgetting the guard page we want before the next
    : area? I've changed those.

    : I have not changed your many "addr + size - 1 < addr" overflow tests,
    : but have since come to wonder, shouldn't they be "addr + size < addr"
    : tests - won't the vend checks go wrong if addr + size is 0?

    : I have added a few comments - Wolfgang Wander's 2.6.13 description of
    : 1363c3cd8603a913a27e2995dccbd70d5312d8e6 Avoiding mmap fragmentation
    : helped me a lot, perhaps a pointer to that would be good too. And I found
    : it easier to understand when I renamed cached_start slightly and moved the
    : overflow label down.

    : This patch would go after your mm-vmap-area-cache.patch in mmotm.
    : Trivially, nobody is going to get that BUG_ON with this patch, and it
    : appears to work fine on my machines; but I have not given it anything like
    : the testing you did on your original, and may have broken all the
    : performance you were aiming for. Please take a look and test it out
    : integrate with yours if you're satisfied - thanks.

    [akpm@linux-foundation.org: add locking comment]
    Signed-off-by: Nick Piggin
    Signed-off-by: Hugh Dickins
    Reviewed-by: Minchan Kim
    Reported-and-tested-by: Steven Whitehouse
    Reported-and-tested-by: Avi Kivity
    Tested-by: "Barry J. Marson"
    Cc: Prarit Bhargava
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

14 Jan, 2011

5 commits

  • * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (59 commits)
    ACPI / PM: Fix build problems for !CONFIG_ACPI related to NVS rework
    ACPI: fix resource check message
    ACPI / Battery: Update information on info notification and resume
    ACPI: Drop device flag wake_capable
    ACPI: Always check if _PRW is present before trying to evaluate it
    ACPI / PM: Check status of power resources under mutexes
    ACPI / PM: Rename acpi_power_off_device()
    ACPI / PM: Drop acpi_power_nocheck
    ACPI / PM: Drop acpi_bus_get_power()
    Platform / x86: Make fujitsu_laptop use acpi_bus_update_power()
    ACPI / Fan: Rework the handling of power resources
    ACPI / PM: Register power resource devices as soon as they are needed
    ACPI / PM: Register acpi_power_driver early
    ACPI / PM: Add function for updating device power state consistently
    ACPI / PM: Add function for device power state initialization
    ACPI / PM: Introduce __acpi_bus_get_power()
    ACPI / PM: Introduce function for refcounting device power resources
    ACPI / PM: Add functions for manipulating lists of power resources
    ACPI / PM: Prevent acpi_power_get_inferred_state() from making changes
    ACPICA: Update version to 20101209
    ...

    Linus Torvalds
     
  • IS_ERR() already implies unlikely(), so it can be omitted here.
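
    The pattern being cleaned up, at a hypothetical call site:

        /* IS_ERR() already wraps its test in unlikely() internally */
        if (IS_ERR(p))          /* was: if (unlikely(IS_ERR(p))) */
                return PTR_ERR(p);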

    Signed-off-by: Tobias Klauser
    Reviewed-by: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tobias Klauser
     
  • Four architectures (arm, mips, sparc, x86) use __vmalloc_area() for
    module_init(). Much of the code is duplicated and can be generalized in a
    globally accessible function, __vmalloc_node_range().

    __vmalloc_node() now calls into __vmalloc_node_range() with a range of
    [VMALLOC_START, VMALLOC_END) for functionally equivalent behavior.

    Each architecture may then use __vmalloc_node_range() directly to remove
    the duplication of code.
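
    The generalized entry point, roughly as introduced:

        void *__vmalloc_node_range(unsigned long size, unsigned long align,
                                   unsigned long start, unsigned long end,
                                   gfp_t gfp_mask, pgprot_t prot,
                                   int node, void *caller);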

    Signed-off-by: David Rientjes
    Cc: Christoph Lameter
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: "David S. Miller"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • pcpu_get_vm_areas() only uses GFP_KERNEL allocations, so remove the gfp_t
    formal and use the mask internally.

    Signed-off-by: David Rientjes
    Cc: Christoph Lameter
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • get_vm_area_node() is unused in the kernel and can thus be removed.

    Signed-off-by: David Rientjes
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes