02 Jul, 2009

1 commit

  • This fixes kernel.org bug #13584. The IOVA code attempted to optimise
    the insertion of new ranges into the rbtree, with the unfortunate result
    that some ranges just didn't get inserted into the tree at all. Then
    those ranges would be handed out more than once, and things kind of go
    downhill from there.

    Introduced after 2.6.25 by ddf02886cbe665d67ca750750196ea5bf524b10b
    ("PCI: iova RB tree setup tweak").

    Signed-off-by: David Woodhouse
    Cc: mark gross
    Cc: Andrew Morton
    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    David Woodhouse
     

15 Oct, 2008

1 commit

  • This patch extends the VT-d driver to support KVM

    [Ben: fixed memory pinning]
    [avi: move dma_remapping.h as well]

    Signed-off-by: Kay, Allen M
    Signed-off-by: Weidong Han
    Signed-off-by: Ben-Ami Yassour
    Signed-off-by: Amit Shah
    Acked-by: Mark Gross
    Signed-off-by: Avi Kivity

    Kay, Allen M
     

21 Apr, 2008

1 commit

  • The following patch merges two functions into one allowing for a 3%
    reduction in overhead in locating, allocating and inserting pages for
    use in IOMMU operations.

    It's a bit of an eye-crosser, so I welcome any RB-tree / MM experts to
    take a look. It works by re-using some of the information gathered in
    the search for the pages (used in setting up the IOTLBs) when inserting
    the iova structure into the RB tree.
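    A minimal sketch of the merge described above, using a plain BST rather
    than the kernel's rb-tree API: the single downward search both finds
    where the new node belongs and keeps the link needed to splice it in,
    so no second walk over the tree is required.

```c
#include <stddef.h>

struct node {
    unsigned long lo, hi;          /* inclusive pfn range */
    struct node *left, *right;
};

/* Walk down once: find where [n->lo, n->hi] belongs and splice the
 * node in at the link the search ended on, instead of searching a
 * second time just to insert. */
static void find_and_insert(struct node **root, struct node *n)
{
    struct node **link = root;     /* link remembered during the search */

    while (*link) {
        if (n->hi < (*link)->lo)
            link = &(*link)->left;
        else
            link = &(*link)->right;
    }
    n->left = n->right = NULL;
    *link = n;                     /* single splice, no re-search */
}
```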

    Signed-off-by:
    Signed-off-by: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman

    mark gross
     

24 Feb, 2008

1 commit


07 Feb, 2008

1 commit

  • I would like to potentially move the sparc64 IOMMU code over to using
    the nice new drivers/pci/iova.[ch] code for free area management.

    In order to do that we have to detach the IOMMU page size assumptions
    which only really need to exist in the intel-iommu.[ch] code.

    This patch attempts to implement that.
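    A hypothetical sketch of that decoupling: the allocator takes a
    caller-supplied page shift instead of a hard-coded Intel IOMMU page
    size, so sparc64 could pass its own. The names below are illustrative,
    not the actual kernel interface.

```c
#include <stddef.h>

/* Hypothetical per-domain configuration carrying the page size as
 * data rather than as an intel-iommu compile-time constant. */
struct iova_domain_cfg {
    unsigned int page_shift;       /* 12 for 4K pages, 13 for 8K, ... */
};

/* Round a byte length up to whole IOMMU pages of the domain's size. */
static unsigned long size_to_pages(const struct iova_domain_cfg *cfg,
                                   unsigned long bytes)
{
    unsigned long page = 1UL << cfg->page_shift;

    return (bytes + page - 1) >> cfg->page_shift;
}
```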

    [akpm@linux-foundation.org: build fix]
    Signed-off-by: David S. Miller
    Acked-by: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Miller
     

22 Oct, 2007

2 commits

  • This patch adds PageSelectiveInvalidation support, replacing the existing
    DomainSelectiveInvalidation for intel_{map/unmap}_sg() calls, and also
    enables mapping one big contiguous DMA virtual address range onto
    discontiguous physical addresses for SG map/unmap calls.

    "Domain selective invalidations" wipe out the IOMMU address translation
    cache based on domain ID, whereas "Page selective invalidations" wipe out
    the IOMMU address translation cache only for the given address mask range,
    which is more cache friendly compared to Domain selective invalidations.

    Here is how it is done.
    1) Changes to iova.c
    alloc_iova() now takes a bool size_aligned argument, which,
    when set, returns an io virtual address that is
    naturally aligned to 2 ^ x, where x is the order
    of the size requested.

    Returning this naturally aligned io virtual
    address helps the iommu do the "page selective
    invalidations", which are IOMMU cache friendly
    compared to "domain selective invalidations".

    2) Changes to drivers/pci/intel-iommu.c
    Clean up the intel_{map/unmap}_{single/sg}() calls so that the
    S/G map/unmap calls are no longer dependent on
    intel_{map/unmap}_single().

    intel_map_sg() now computes the total DMA virtual address
    required and allocates the size aligned total DMA virtual address
    and maps the discontiguous physical address to the allocated
    contiguous DMA virtual address.

    In the intel_unmap_sg() case, since the DMA virtual address
    is contiguous and size_aligned, PageSelectiveInvalidation
    is used, replacing the earlier DomainSelectiveInvalidations.
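    The map_sg flow in (2) can be sketched like this (hypothetical names,
    not the driver's API): sum the scatterlist chunk lengths to get the
    total contiguous DMA virtual address space needed, so that one
    size-aligned IOVA allocation can back every chunk at consecutive
    virtual offsets.

```c
/* Hypothetical scatterlist entry: one physically contiguous chunk. */
struct sg_chunk {
    unsigned long phys;     /* physical address of the chunk */
    unsigned long len;      /* chunk length in bytes */
};

/* Total contiguous DMA virtual address space the list needs; a
 * single size-aligned IOVA allocation of this size then maps the
 * discontiguous chunks at consecutive virtual offsets. */
static unsigned long total_dma_len(const struct sg_chunk *sg, int n)
{
    unsigned long total = 0;

    for (int i = 0; i < n; i++)
        total += sg[i].len;
    return total;
}
```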

    Signed-off-by: Anil S Keshavamurthy
    Cc: Greg KH
    Cc: Ashok Raj
    Cc: Suresh B
    Cc: Andi Kleen
    Cc: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Keshavamurthy, Anil S
     
  • This code implements generic IOVA allocation and management. Per Dave's
    suggestion, we now allocate IO virtual addresses from the higher end of
    the DMA limit rather than from the lower end, and this eliminated the
    need to preserve the IO virtual address across multiple devices sharing
    the same domain's virtual address space.

    This code also uses red-black trees to store the allocated and reserved
    iova nodes. This showed good performance improvements over the previous
    linear linked list.
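    The top-down policy described above can be sketched with a trivial bump
    allocator (illustrative only; the real code tracks ranges in the
    rbtree): IOVAs are handed out downward from each device's DMA limit, so
    devices with different limits sharing a domain naturally start in
    different regions.

```c
/* Hypothetical domain state: the pfn just above the next free range. */
struct iova_dom {
    unsigned long next_free_hi;
};

/* Hand out npages just below the current high-water mark, growing
 * downward from the DMA limit rather than upward from zero. */
static unsigned long alloc_top_down(struct iova_dom *d,
                                    unsigned long npages)
{
    d->next_free_hi -= npages;    /* grow downward */
    return d->next_free_hi;       /* first pfn of the new range */
}
```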

    [akpm@linux-foundation.org: remove inlines]
    [akpm@linux-foundation.org: coding style fixes]
    Signed-off-by: Anil S Keshavamurthy
    Cc: Andi Kleen
    Cc: Peter Zijlstra
    Cc: Muli Ben-Yehuda
    Cc: "Siddha, Suresh B"
    Cc: Arjan van de Ven
    Cc: Ashok Raj
    Cc: "David S. Miller"
    Cc: Christoph Lameter
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Keshavamurthy, Anil S