13 Jan, 2012

1 commit

  • If either the vas or the vms array is not successfully allocated with
    kzalloc(), the code jumps to the err_free label.

    The err_free label runs a loop that checks and frees each member of the
    vas and vms arrays, which is unnecessary here because no array members
    have been allocated at that point.

    Eliminate the extra loop by introducing a new label, err_free2, and
    jumping to it instead.
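    The control flow can be sketched in plain C. This is a hypothetical
    userspace analogue, not the kernel's actual function; setup_areas and
    the sizes are invented for illustration:

```c
#include <stdlib.h>

/* Hypothetical sketch of the error-path split described above.
 * When the initial array allocations fail, nothing has been stored
 * in vas/vms yet, so jumping to err_free2 skips the pointless
 * per-member cleanup loop. */
int setup_areas(int nr, void ***vas_out, void ***vms_out)
{
    void **vas = calloc(nr, sizeof(*vas));
    void **vms = calloc(nr, sizeof(*vms));
    int i;

    if (!vas || !vms)
        goto err_free2;          /* no members allocated yet */

    for (i = 0; i < nr; i++) {
        vas[i] = malloc(16);
        vms[i] = malloc(16);
        if (!vas[i] || !vms[i])
            goto err_free;       /* some members may need freeing */
    }
    *vas_out = vas;
    *vms_out = vms;
    return 0;

err_free:
    for (i = 0; i < nr; i++) {   /* calloc zeroed unallocated slots */
        free(vas[i]);
        free(vms[i]);
    }
err_free2:
    free(vas);                   /* free(NULL) is a no-op */
    free(vms);
    return -1;
}
```

    The two labels keep a single exit path while letting early failures
    bypass the member-freeing loop entirely.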

    [akpm@linux-foundation.org: remove now-unneeded tests]
    Signed-off-by: Kautuk Consul
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kautuk Consul
     

11 Jan, 2012

1 commit

  • vmap_area->private is a void *, but the field is not used for various
    purposes; it only ever stores a vm_struct pointer. So change it to a
    properly named vm_struct * to improve readability and type checking.

    Signed-off-by: Minchan Kim
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     

05 Jan, 2012

1 commit


21 Dec, 2011

1 commit


09 Dec, 2011

1 commit

  • Commit f5252e00 ("mm: avoid null pointer access in vm_struct via
    /proc/vmallocinfo") adds newly allocated vm_structs to the vmlist after
    it is fully initialised. Unfortunately, it did not check that
    __vmalloc_area_node() successfully populated the area. In the event of
    allocation failure, the vmalloc area is freed but the pointer to freed
    memory is inserted into the vmlist, leading to a crash later in
    get_vmalloc_info().

    This patch adds a check for __vmalloc_area_node() failure within
    __vmalloc_node_range. It does not use "goto fail" as in the previous
    error path as a warning was already displayed by __vmalloc_area_node()
    before it called vfree in its failure path.

    Credit goes to Luciano Chavez for doing all the real work of identifying
    exactly where the problem was.

    Signed-off-by: Mel Gorman
    Reported-by: Luciano Chavez
    Tested-by: Luciano Chavez
    Reviewed-by: Rik van Riel
    Acked-by: David Rientjes
    Cc: [3.1.x+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

06 Dec, 2011

1 commit


19 Nov, 2011

1 commit

  • The existing vm_area_register_early() allows for early vmalloc space
    allocation. However, upcoming cleanups in the ARM architecture also
    require that some fixed locations in the vmalloc area be reserved very
    early.

    The name "vm_area_register_early" would have been a good name for the
    reservation part without the allocation. Since it is already in use with
    different semantics, let's create vm_area_add_early() instead.

    Both vm_area_register_early() and vm_area_add_early() can be used
    together: the former is now implemented using the latter, which ensures
    that no conflicting areas are added. No attempt is made, however, to
    make the allocation scheme in vm_area_register_early() more
    sophisticated; after all, you must know what you're doing when using
    these functions.
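    The add-with-conflict-check behaviour can be sketched roughly as below.
    This is a simplified userspace model with invented names
    (vm_area_add_early_sketch, struct vm_area); the kernel version operates
    on vm_struct and simply refuses overlapping reservations:

```c
#include <stddef.h>

/* Simplified analogue of the early vm area list: a singly linked,
 * address-sorted list of [addr, addr+size) reservations. */
struct vm_area {
    unsigned long addr;
    unsigned long size;
    struct vm_area *next;
};

static struct vm_area *vmlist;

/* Sketch of vm_area_add_early(): insert a caller-chosen area, keeping
 * the list sorted and rejecting overlapping reservations.
 * Returns 0 on success, -1 on conflict. */
int vm_area_add_early_sketch(struct vm_area *vm)
{
    struct vm_area **p, *cur;

    for (p = &vmlist; (cur = *p) != NULL; p = &cur->next) {
        if (vm->addr + vm->size <= cur->addr)
            break;                       /* fits before cur */
        if (vm->addr >= cur->addr + cur->size)
            continue;                    /* entirely after cur */
        return -1;                       /* overlap: reject */
    }
    vm->next = *p;
    *p = vm;
    return 0;
}
```

    vm_area_register_early() can then pick a free address itself and call
    this to insert it, which is the layering the commit describes.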

    Signed-off-by: Nicolas Pitre
    Acked-by: Andrew Morton
    Cc: linux-mm@kvack.org

    Nicolas Pitre
     

17 Nov, 2011

1 commit

  • When mapping a foreign page with xenbus_map_ring_valloc() with the
    GNTTABOP_map_grant_ref hypercall, set the GNTMAP_contains_pte flag and
    pass a pointer to the PTE (in init_mm).

    After the page is mapped, the usual fault mechanism can be used to
    update additional MMs. This allows the vmalloc_sync_all() to be
    removed from alloc_vm_area().

    Signed-off-by: David Vrabel
    Acked-by: Andrew Morton
    [v1: Squashed fix by Michal for no-mmu case]
    Signed-off-by: Konrad Rzeszutek Wilk
    Signed-off-by: Michal Simek

    David Vrabel
     

01 Nov, 2011

3 commits

  • Some vmalloc failure paths do not report OOM conditions.

    Add warn_alloc_failed, which also does a dump_stack, to those failure
    paths.

    This allows several site-specific vmalloc failure-logging printks to be
    removed.

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Add __attribute__((format(printf, ...))) to the function to validate the
    format string and arguments. Use the vsprintf extension %pV to avoid any
    possible message interleaving. Coalesce the format string. Convert
    printks/pr_warning to pr_warn.
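    The attribute lets the compiler type-check printf-style calls at build
    time. A minimal userspace illustration (warn_sketch is an invented
    name; the kernel wraps the attribute in a __printf(a, b) macro, which
    is what the akpm note below refers to):

```c
#include <stdarg.h>
#include <stdio.h>

/* __printf(a, b): argument a is the format string, checking starts at
 * argument b. GCC/Clang will now warn at compile time when the format
 * specifiers don't match the arguments. */
#define __printf(a, b) __attribute__((format(printf, a, b)))

static __printf(2, 3) int warn_sketch(char *buf, const char *fmt, ...)
{
    va_list args;
    int n;

    va_start(args, fmt);
    n = vsprintf(buf, fmt, args);   /* pass fmt+args straight through */
    va_end(args);
    return n;
}
```

    With the attribute in place, a call like warn_sketch(buf, "order:%d",
    "zero") draws a -Wformat warning instead of silently misbehaving.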

    [akpm@linux-foundation.org: use the __printf() macro]
    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • The /proc/vmallocinfo shows information about vmalloc allocations in
    vmlist that is a linklist of vm_struct. It, however, may access pages
    field of vm_struct where a page was not allocated. This results in a null
    pointer access and leads to a kernel panic.

    Why this happens: In __vmalloc_node_range() called from vmalloc(), newly
    allocated vm_struct is added to vmlist at __get_vm_area_node() and then,
    some fields of vm_struct such as nr_pages and pages are set at
    __vmalloc_area_node(). In other words, it is added to vmlist before it is
    fully initialized. At the same time, when the /proc/vmallocinfo is read,
    it accesses the pages field of vm_struct according to the nr_pages field
    at show_numa_info(). Thus, a null pointer access happens.

    The patch adds the newly allocated vm_struct to the vmlist *after* it is
    fully initialized. So, it can avoid accessing the pages field with
    unallocated page when show_numa_info() is called.
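    The fix is the classic publish-after-initialization pattern. A
    hypothetical single-threaded userspace sketch (the struct layout and
    the name insert_vmalloc_vm_sketch are invented; the kernel additionally
    takes vmlist_lock around the insertion):

```c
#include <stddef.h>

struct vm_struct_sketch {
    struct vm_struct_sketch *next;
    unsigned int nr_pages;   /* set during setup */
    void **pages;            /* allocated during setup */
};

static struct vm_struct_sketch *vmlist_sketch;

/* Buggy order (before the patch): link the node into the shared list,
 * then fill in nr_pages/pages -- a concurrent reader can observe
 * nr_pages > 0 while pages is still NULL. Fixed order (after the
 * patch): initialize every field first, publish as the last step. */
void insert_vmalloc_vm_sketch(struct vm_struct_sketch *vm,
                              void **pages, unsigned int nr_pages)
{
    vm->pages = pages;        /* 1. fully initialize ...        */
    vm->nr_pages = nr_pages;
    vm->next = vmlist_sketch; /* 2. ... then link into the list */
    vmlist_sketch = vm;
}
```

    Any walker of the list (show_numa_info() in the commit) then only ever
    sees fully initialized nodes.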

    Signed-off-by: Mitsuo Hayasaka
    Cc: Andrew Morton
    Cc: David Rientjes
    Cc: Namhyung Kim
    Cc: "Paul E. McKenney"
    Cc: Jeremy Fitzhardinge
    Cc:
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mitsuo Hayasaka
     

15 Sep, 2011

1 commit

  • Xen backend drivers (e.g., blkback and netback) would sometimes fail to
    map grant pages into the vmalloc address space allocated with
    alloc_vm_area(). The GNTTABOP_map_grant_ref would fail because Xen could
    not find the page (in the L2 table) containing the PTEs it needed to
    update.

    (XEN) mm.c:3846:d0 Could not find L1 PTE for address fbb42000

    netback and blkback were making the hypercall from a kernel thread where
    task->active_mm != &init_mm and alloc_vm_area() was only updating the page
    tables for init_mm. The usual method of deferring the update to the page
    tables of other processes (i.e., after taking a fault) doesn't work as a
    fault cannot occur during the hypercall.

    This would work on some systems depending on what else was using vmalloc.

    Fix this by reverting ef691947d8a3 ("vmalloc: remove vmalloc_sync_all()
    from alloc_vm_area()") and adding a comment to explain why it's needed.

    Signed-off-by: David Vrabel
    Cc: Jeremy Fitzhardinge
    Cc: Konrad Rzeszutek Wilk
    Cc: Ian Campbell
    Cc: Keir Fraser
    Cc: [3.0.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Vrabel
     

15 Aug, 2011

1 commit

  • Commit db64fe02258f ("mm: rewrite vmap layer") introduced code that does
    address calculations under the assumption that VMAP_BLOCK_SIZE is a
    power of two. However, this might not be true if CONFIG_NR_CPUS is not
    set to a power of two.

    Wrong vmap_block index/offset values could lead to memory corruption.
    However, this has never been observed in practice (or never been
    diagnosed correctly); what caught this was the BUG_ON in vb_alloc() that
    checks for inconsistent vmap_block indices.

    To fix this, ensure that VMAP_BLOCK_SIZE is always a power of two.
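    The power-of-two property matters because index/offset math of this
    kind uses masks, and addr & (size - 1) only equals addr % size when
    size is 2^n. An illustrative userspace sketch (the helper names mirror
    kernel idioms but are reimplemented here; not the kernel's macros):

```c
/* Mask-based offset extraction is only equivalent to addr % size when
 * size is a power of two; rounding the block size to a power of two
 * restores that invariant for any CONFIG_NR_CPUS value. */

int is_power_of_2(unsigned long n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

/* n must be non-zero: repeatedly clear the lowest set bit until only
 * the highest set bit remains. */
unsigned long rounddown_pow_of_two_sketch(unsigned long n)
{
    while (!is_power_of_2(n))
        n &= n - 1;
    return n;
}

/* Correct only for power-of-two size -- exactly the assumption the
 * vmap_block address calculations make. */
unsigned long offset_in_block(unsigned long addr, unsigned long size)
{
    return addr & (size - 1);
}
```

    With a non-power-of-two size, the mask and the true remainder diverge
    (e.g. 7 & (6 - 1) is 5, while 7 % 6 is 1), which is how the bad
    vmap_block indices arose.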

    BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=31572
    Reported-by: Pavel Kysilka
    Reported-by: Matias A. Fonzo
    Signed-off-by: Clemens Ladisch
    Signed-off-by: Stefan Richter
    Cc: Nick Piggin
    Cc: Jeremy Fitzhardinge
    Cc: Krzysztof Helt
    Cc: Andrew Morton
    Cc: 2.6.28+
    Signed-off-by: Linus Torvalds

    Clemens Ladisch
     

27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.
    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

21 Jul, 2011

2 commits


27 May, 2011

1 commit

  • …x/kernel/git/jeremy/xen

    * 'upstream/tidy-xen-mmu-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen:
    xen: fix compile without CONFIG_XEN_DEBUG_FS
    Use arbitrary_virt_to_machine() to deal with ioremapped pud updates.
    Use arbitrary_virt_to_machine() to deal with ioremapped pmd updates.
    xen/mmu: remove all ad-hoc stats stuff
    xen: use normal virt_to_machine for ptes
    xen: make a pile of mmu pvop functions static
    vmalloc: remove vmalloc_sync_all() from alloc_vm_area()
    xen: condense everything onto xen_set_pte
    xen: use mmu_update for xen_set_pte_at()
    xen: drop all the special iomap pte paths.

    Linus Torvalds
     

25 May, 2011

2 commits

  • I was tracking down a page allocation failure that ended up in vmalloc().
    Since vmalloc() uses 0-order pages, if somebody asks for an insane amount
    of memory, we'll still get a warning with "order:0" in it. That's not
    very useful.

    During recovery, vmalloc() also nicely frees all of the memory that it got
    up to the point of the failure. That is wonderful, but it also quickly
    hides any issues. We have a much different situation if vmalloc()
    repeatedly fails 10GB into:

    vmalloc(100 * 1<<30);
    [] warn_alloc_failed+0x146/0x170
    [ 68.126464] [] ? printk+0x6c/0x70
    [ 68.126791] [] ? alloc_pages_current+0x94/0xe0
    [ 68.127661] [] __vmalloc_node_range+0x237/0x290
    ...

    The 'order' variable is added for clarity when calling warn_alloc_failed()
    to avoid having an unexplained '0' as an argument.

    The 'tmp_mask' is because adding an open-coded '| __GFP_NOWARN' would take
    us over 80 columns for the alloc_pages_node() call. If we are going to
    add a line, it might as well be one that makes the sucker easier to read.

    As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
    solely on an unverified value passed in from userspace. Granted, it's
    under CAP_SYS_ADMIN, but it still frightens me a bit.

    Signed-off-by: Dave Hansen
    Cc: Johannes Weiner
    Cc: David Rientjes
    Cc: Michal Nazarewicz
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen
     
  • The vmap allocator is used to, among other things, allocate per-cpu vmap
    blocks, where each vmap block is naturally aligned to its own size.
    Obviously, leaving a guard page after each vmap area forbids packing vmap
    blocks efficiently and can make the kernel run out of possible vmap blocks
    long before overall vmap space is exhausted.

    The new interface to map a user-supplied page array into linear vmalloc
    space (vm_map_ram) insists on allocating from a vmap block (instead of
    falling back to a custom area) when the area size is below a certain
    threshold. With heavy users of this interface (e.g. XFS) and limited
    vmalloc space on 32-bit, vmap block exhaustion is a real problem.

    Remove the guard page from the core vmap allocator. vmalloc and the old
    vmap interface enforce a guard page on their own at a higher level.

    Note that without this patch, we had accidental guard pages after those
    vm_map_ram areas that happened to be at the end of a vmap block, but not
    between every area. This patch removes this accidental guard page only.

    If we want guard pages after every vm_map_ram area, this should be done
    separately. And just like with vmalloc and the old interface on a
    different level, not in the core allocator.

    Mel pointed out: "If necessary, the guard page could be reintroduced as a
    debugging-only option (CONFIG_DEBUG_PAGEALLOC?). Otherwise it seems
    reasonable."

    Signed-off-by: Johannes Weiner
    Cc: Nick Piggin
    Cc: Dave Chinner
    Acked-by: Mel Gorman
    Cc: Hugh Dickins
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

21 May, 2011

1 commit


23 Mar, 2011

2 commits

  • KM_USER1 is never used in the vwrite() path, so the caller does not need
    to guarantee that it is unused. The only slot the caller must guarantee
    is KM_USER0, and that is already documented in a comment.

    Signed-off-by: Namhyung Kim
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Namhyung Kim
     
  • Provide a free area cache for the vmalloc virtual address allocator, based
    on the algorithm used by the user virtual memory allocator.

    This reduces the number of rbtree operations and linear traversals over
    the vmap extents in order to find a free area, by starting off at the last
    point that a free area was found.

    The free area cache is reset if areas are freed behind it, or if we are
    searching for a smaller area or alignment than last time. So allocation
    patterns are not changed (verified by corner-case and random test cases in
    userspace testing).

    This solves a regression caused by lazy vunmap TLB purging introduced in
    db64fe02 (mm: rewrite vmap layer). That patch will leave extents in the
    vmap allocator after they are vunmapped, and until a significant number
    accumulate that can be flushed in a single batch. So in a workload that
    performs vmalloc/vfree frequently, a chain of extents will build up from
    the VMALLOC_START address and has to be iterated over each time (giving
    O(n)-type behaviour).

    After this patch, the search will start from where it left off, giving
    closer to an amortized O(1).

    This is verified to solve the regressions reported by Steven in GFS2 and
    by Avi in KVM.
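    The cache idea can be modeled in a small array-based sketch. This is a
    deliberately simplified userspace model (area_alloc and the constants
    are invented; the kernel uses an rbtree with a free_vmap_cache pointer
    and a cached_hole_size), but it shows the resume-from-last-position
    search and the reset rule that keeps results identical to a full scan:

```c
#define MAX_AREAS   64
#define RANGE_START 0x1000UL
#define RANGE_END   0x100000UL

struct area { unsigned long start, end; };

static struct area areas[MAX_AREAS];    /* allocated extents, sorted */
static int nr_areas;
static unsigned long cached_hole_start = RANGE_START;
static unsigned long cached_req_size;   /* size that set the cache */

/* First-fit allocation that resumes from the cached position instead
 * of rescanning from RANGE_START. A free behind the cache would also
 * reset cached_hole_start (not modeled here). Returns 0 on failure. */
unsigned long area_alloc(unsigned long size)
{
    unsigned long addr;
    int i;

    if (nr_areas == MAX_AREAS)
        return 0;
    if (size < cached_req_size)
        cached_hole_start = RANGE_START; /* smaller request: rescan,
                                            a skipped hole may now fit */

    addr = cached_hole_start;
    for (i = 0; i < nr_areas; i++) {
        if (areas[i].end <= addr)
            continue;                    /* behind the resume point */
        if (addr + size <= areas[i].start)
            break;                       /* hole found before areas[i] */
        addr = areas[i].end;             /* skip past this extent */
    }
    if (addr + size > RANGE_END)
        return 0;                        /* out of space */

    for (int j = nr_areas; j > i; j--)   /* insert sorted at slot i */
        areas[j] = areas[j - 1];
    areas[i] = (struct area){ addr, addr + size };
    nr_areas++;

    cached_hole_start = addr + size;     /* next search resumes here */
    cached_req_size = size;
    return addr;
}
```

    In a vmalloc/vfree-heavy workload, the chain of lazily-freed extents
    near the start of the range is skipped in O(1) instead of being walked
    on every allocation.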

    Hugh's update:

    : I tried out the recent mmotm, and on one machine was fortunate to hit
    : the BUG_ON(first->va_start < addr) which seems to have been stalling
    : your vmap area cache patch ever since May.

    : I can get you addresses etc, I did dump a few out; but once I stared
    : at them, it was easier just to look at the code: and I cannot see how
    : you would be so sure that first->va_start < addr, once you've done
    : that addr = ALIGN(max(...), align) above, if align is over 0x1000
    : (align was 0x8000 or 0x4000 in the cases I hit: ioremaps like Steve).

    : I originally got around it by just changing the
    : if (first->va_start < addr) {
    : to
    : while (first->va_start < addr) {
    : without thinking about it any further; but that seemed unsatisfactory,
    : why would we want to loop here when we've got another very similar
    : loop just below it?

    : I am never going to admit how long I've spent trying to grasp your
    : "while (n)" rbtree loop just above this, the one with the peculiar
    : if (!first && tmp->va_start < addr + size)
    : in. That's unfamiliar to me, I'm guessing it's designed to save a
    : subsequent rb_next() in a few circumstances (at risk of then setting
    : a wrong cached_hole_size?); but they did appear few to me, and I didn't
    : feel I could sign off something with that in when I don't grasp it,
    : and it seems responsible for extra code and mistaken BUG_ON below it.

    : I've reverted to the familiar rbtree loop that find_vma() does (but
    : with va_end >= addr as you had, to respect the additional guard page):
    : and then (given that cached_hole_size starts out 0) I don't see the
    : need for any complications below it. If you do want to keep that loop
    : as you had it, please add a comment to explain what it's trying to do,
    : and where addr is relative to first when you emerge from it.

    : Aren't your tests "addr + size <= first->va_start" forgetting the
    : guard page we want before the next area? I've changed those.

    : I have not changed your many "addr + size - 1 < addr" overflow tests,
    : but have since come to wonder, shouldn't they be "addr + size < addr"
    : tests - won't the vend checks go wrong if addr + size is 0?

    : I have added a few comments - Wolfgang Wander's 2.6.13 description of
    : 1363c3cd8603a913a27e2995dccbd70d5312d8e6 Avoiding mmap fragmentation
    : helped me a lot, perhaps a pointer to that would be good too. And I found
    : it easier to understand when I renamed cached_start slightly and moved the
    : overflow label down.

    : This patch would go after your mm-vmap-area-cache.patch in mmotm.
    : Trivially, nobody is going to get that BUG_ON with this patch, and it
    : appears to work fine on my machines; but I have not given it anything like
    : the testing you did on your original, and may have broken all the
    : performance you were aiming for. Please take a look and test it out
    : integrate with yours if you're satisfied - thanks.

    [akpm@linux-foundation.org: add locking comment]
    Signed-off-by: Nick Piggin
    Signed-off-by: Hugh Dickins
    Reviewed-by: Minchan Kim
    Reported-and-tested-by: Steven Whitehouse
    Reported-and-tested-by: Avi Kivity
    Tested-by: "Barry J. Marson"
    Cc: Prarit Bhargava
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

14 Jan, 2011

6 commits

  • * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (59 commits)
    ACPI / PM: Fix build problems for !CONFIG_ACPI related to NVS rework
    ACPI: fix resource check message
    ACPI / Battery: Update information on info notification and resume
    ACPI: Drop device flag wake_capable
    ACPI: Always check if _PRW is present before trying to evaluate it
    ACPI / PM: Check status of power resources under mutexes
    ACPI / PM: Rename acpi_power_off_device()
    ACPI / PM: Drop acpi_power_nocheck
    ACPI / PM: Drop acpi_bus_get_power()
    Platform / x86: Make fujitsu_laptop use acpi_bus_update_power()
    ACPI / Fan: Rework the handling of power resources
    ACPI / PM: Register power resource devices as soon as they are needed
    ACPI / PM: Register acpi_power_driver early
    ACPI / PM: Add function for updating device power state consistently
    ACPI / PM: Add function for device power state initialization
    ACPI / PM: Introduce __acpi_bus_get_power()
    ACPI / PM: Introduce function for refcounting device power resources
    ACPI / PM: Add functions for manipulating lists of power resources
    ACPI / PM: Prevent acpi_power_get_inferred_state() from making changes
    ACPICA: Update version to 20101209
    ...

    Linus Torvalds
     
  • IS_ERR() already implies unlikely(), so it can be omitted here.

    Signed-off-by: Tobias Klauser
    Reviewed-by: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tobias Klauser
     
  • Four architectures (arm, mips, sparc, x86) use __vmalloc_area() for
    module_init(). Much of the code is duplicated and can be generalized in a
    globally accessible function, __vmalloc_node_range().

    __vmalloc_node() now calls into __vmalloc_node_range() with a range of
    [VMALLOC_START, VMALLOC_END) for functionally equivalent behavior.

    Each architecture may then use __vmalloc_node_range() directly to remove
    the duplication of code.

    Signed-off-by: David Rientjes
    Cc: Christoph Lameter
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: "David S. Miller"
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • pcpu_get_vm_areas() only uses GFP_KERNEL allocations, so remove the
    gfp_t formal parameter and use the mask internally.

    Signed-off-by: David Rientjes
    Cc: Christoph Lameter
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • get_vm_area_node() is unused in the kernel and can thus be removed.

    Signed-off-by: David Rientjes
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Signed-off-by: Joe Perches
    Acked-by: Pekka Enberg
    Cc: Jiri Kosina
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

12 Jan, 2011

1 commit

  • Generic Hardware Error Source provides a way to report platform
    hardware errors (such as those from the chipset). It works in so-called
    "Firmware First" mode: hardware errors are reported to firmware first,
    and then reported to Linux by the firmware. This way, some non-standard
    hardware error registers or non-standard hardware links can be checked
    by firmware to produce more valuable hardware error information for
    Linux.

    This patch adds POLL/IRQ/NMI notification types support.

    Because the memory area used to transfer hardware error information
    from the BIOS to Linux can be determined only in the NMI, IRQ or timer
    handler, and the general ioremap cannot be used in atomic context, a
    special atomic version of ioremap is implemented for that.

    Known issue:

    - Error information cannot be printed for recoverable errors notified
    via NMI, because printk is not NMI-safe. This will be fixed by deferring
    printing to IRQ context via irq_work, or by making printk NMI-safe.

    v2:

    - adjust printk format per comments.

    Signed-off-by: Huang Ying
    Reviewed-by: Andi Kleen
    Signed-off-by: Len Brown

    Huang Ying
     

03 Dec, 2010

1 commit

  • On stock 2.6.37-rc4, running:

    # mount lilith:/export /mnt/lilith
    # find /mnt/lilith/ -type f -print0 | xargs -0 file

    crashes the machine fairly quickly under Xen. Often it results in oops
    messages, but the couple of times I tried just now, it just hung quietly
    and made Xen print some rude messages:

    (XEN) mm.c:2389:d80 Bad type (saw 7400000000000001 != exp
    3000000000000000) for mfn 1d7058 (pfn 18fa7)
    (XEN) mm.c:964:d80 Attempt to create linear p.t. with write perms
    (XEN) mm.c:2389:d80 Bad type (saw 7400000000000010 != exp
    1000000000000000) for mfn 1d2e04 (pfn 1d1fb)
    (XEN) mm.c:2965:d80 Error while pinning mfn 1d2e04

    Which means the domain tried to map a pagetable page RW, which would
    allow it to map arbitrary memory, so Xen stopped it. This is because
    vm_unmap_ram() left some pages mapped in the vmalloc area after NFS had
    finished with them, and those pages got recycled as pagetable pages
    while still having these RW aliases.

    Removing those mappings immediately removes the Xen-visible aliases, and
    so it has no problem with those pages being reused as pagetable pages.
    Deferring the TLB flush doesn't upset Xen because it can flush the TLB
    itself as needed to maintain its invariants.

    When unmapping a region in the vmalloc space, clear the ptes
    immediately. There's no point in deferring this because there's no
    amortization benefit.

    The TLBs are left dirty, and they are flushed lazily to amortize the
    cost of the IPIs.

    This specific motivation for this patch is an oops-causing regression
    since 2.6.36 when using NFS under Xen, triggered by the NFS client's use
    of vm_map_ram() introduced in 56e4ebf877b60 ("NFS: readdir with vmapped
    pages"). XFS also uses vm_map_ram() and could cause similar problems.

    Signed-off-by: Jeremy Fitzhardinge
    Cc: Nick Piggin
    Cc: Bryan Schumaker
    Cc: Trond Myklebust
    Cc: Alex Elder
    Cc: Dave Chinner
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jeremy Fitzhardinge
     

27 Oct, 2010

3 commits


23 Oct, 2010

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: update comments to reflect that percpu allocations are always zero-filled
    percpu: Optimize __get_cpu_var()
    x86, percpu: Optimize this_cpu_ptr
    percpu: clear memory allocated with the km allocator
    percpu: fix build breakage on s390 and cleanup build configuration tests
    percpu: use percpu allocator on UP too
    percpu: reduce PCPU_MIN_UNIT_SIZE to 32k
    vmalloc: pcpu_get/free_vm_areas() aren't needed on UP

    Fixed up trivial conflicts in include/linux/percpu.h

    Linus Torvalds
     

17 Sep, 2010

1 commit

  • During the reading of /proc/vmcore the kernel is doing
    ioremap()/iounmap() repeatedly, and the buildup of un-flushed
    vm_area_structs is causing a great deal of overhead (rb_next() is
    chewing up most of that time).

    The solution is to provide the function set_iounmap_nonlazy(), which
    causes a subsequent call to iounmap() to purge the vma area immediately
    (with try_purge_vmap_area_lazy()).

    With this patch we have seen the time for writing a 250MB
    compressed dump drop from 71 seconds to 44 seconds.

    Signed-off-by: Cliff Wickman
    Cc: Andrew Morton
    Cc: kexec@lists.infradead.org
    Cc:
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cliff Wickman
     

08 Sep, 2010

1 commit


13 Aug, 2010

1 commit

  • * 'stable/xen-swiotlb-0.8.6' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen:
    x86: Detect whether we should use Xen SWIOTLB.
    pci-swiotlb-xen: Add glue code to setup dma_ops utilizing xen_swiotlb_* functions.
    swiotlb-xen: SWIOTLB library for Xen PV guest with PCI passthrough.
    xen/mmu: inhibit vmap aliases rather than trying to clear them out
    vmap: add flag to allow lazy unmap to be disabled at runtime
    xen: Add xen_create_contiguous_region
    xen: Rename the balloon lock
    xen: Allow unprivileged Xen domains to create iomap pages
    xen: use _PAGE_IOMAP in ioremap to do machine mappings

    Fix up trivial conflicts (adding both xen swiotlb and xen pci platform
    driver setup close to each other) in drivers/xen/{Kconfig,Makefile} and
    include/xen/xen-ops.h

    Linus Torvalds
     

10 Aug, 2010

2 commits

  • kmalloc() may fail, if so return -ENOMEM.

    Signed-off-by: Kulikov Vasiliy
    Acked-by: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kulikov Vasiliy
     
  • Use ERR_CAST(x) rather than ERR_PTR(PTR_ERR(x)). The former makes the
    purpose of the operation clearer; the latter otherwise looks like a
    no-op.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    // <smpl>
    @@
    type T;
    T x;
    identifier f;
    @@

    T f (...) { }

    @@
    expression x;
    @@

    - ERR_PTR(PTR_ERR(x))
    + ERR_CAST(x)
    // </smpl>
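    What ERR_CAST() boils down to can be seen in a simplified userspace
    rendering of the kernel's error-pointer helpers (the real definitions
    live in <linux/err.h>; this sketch keeps only their essence):

```c
/* Small negative errno values are stuffed into the top of the pointer
 * space, so a single return value can carry either a valid pointer or
 * an error code. ERR_CAST() re-types an error pointer directly,
 * without the ERR_PTR(PTR_ERR(x)) round trip. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
    return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static inline void *ERR_CAST(const void *ptr)
{
    /* cast away the const to re-type the error value as-is */
    return (void *)ptr;
}
```

    Since both forms produce the same bits, the change is purely about
    readability, which is exactly what the commit message argues.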

    Signed-off-by: Julia Lawall
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Julia Lawall
     

27 Jul, 2010

1 commit