16 Sep, 2016

2 commits

  • With our DMA ops enabled for PCI devices, we should avoid allocating
    IOVAs which a host bridge might misinterpret as peer-to-peer DMA,
    leading to faults, corruption or other badness. To be safe, punch out
    holes for all of the relevant host bridge's windows when initialising
    a DMA domain for a PCI device (a sketch of this reservation follows
    the entry).

    CC: Marek Szyprowski
    CC: Inki Dae
    Reported-by: Lorenzo Pieralisi
    Signed-off-by: Robin Murphy
    Signed-off-by: Will Deacon

    Robin Murphy
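
    As a rough, hedged sketch of the reservation described in this entry,
    the logic amounts to walking the host bridge's windows and punching
    matching holes in the IOVA space. The helper names below
    (pci_find_host_bridge(), resource_list_for_each_entry(), iova_pfn(),
    reserve_iova()) are existing kernel interfaces, but the function itself
    is illustrative rather than the exact code of the commit:

    #include <linux/pci.h>
    #include <linux/iova.h>
    #include <linux/resource_ext.h>

    /* Illustrative only: reserve every memory window of the device's
     * host bridge so the IOVA allocator never hands those ranges out. */
    static void sketch_reserve_pci_windows(struct pci_dev *dev,
                                           struct iova_domain *iovad)
    {
            struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
            struct resource_entry *window;
            unsigned long lo, hi;

            resource_list_for_each_entry(window, &bridge->windows) {
                    /* Only memory windows can clash with DMA addresses */
                    if (resource_type(window->res) != IORESOURCE_MEM)
                            continue;

                    /* Convert to IOVA page frames and punch out the hole */
                    lo = iova_pfn(iovad, window->res->start - window->offset);
                    hi = iova_pfn(iovad, window->res->end - window->offset);
                    reserve_iova(iovad, lo, hi);
            }
    }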
     
  • When an MSI doorbell is located downstream of an IOMMU, attaching
    devices to a DMA ops domain and switching on translation leads to a rude
    shock when their attempt to write to the physical address returned by
    the irqchip driver faults (or worse, writes into some already-mapped
    buffer) and no interrupt is forthcoming.

    Address this by adding a hook for relevant irqchip drivers to call from
    their compose_msi_msg() callback, to swizzle the physical address with
    an appropriately-mapped IOVA for any device attached to one of our DMA
    ops domains (a sketch of such a callback follows this entry).

    Acked-by: Thomas Gleixner
    Acked-by: Marc Zyngier
    Signed-off-by: Robin Murphy
    Signed-off-by: Will Deacon

    Robin Murphy
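
    A hedged sketch of how an irqchip driver might use the hook described
    in this entry. The driver and its my_doorbell_phys() helper are
    hypothetical; the final call is the dma-iommu hook this commit
    introduces (iommu_dma_map_msi_msg() in mainline around this time),
    which rewrites the doorbell address with a mapped IOVA when the device
    is attached to a DMA ops domain:

    #include <linux/kernel.h>
    #include <linux/irq.h>
    #include <linux/msi.h>
    #include <linux/dma-iommu.h>

    static void my_irqchip_compose_msi_msg(struct irq_data *data,
                                           struct msi_msg *msg)
    {
            /* Hypothetical helper returning the doorbell's physical address */
            phys_addr_t doorbell = my_doorbell_phys(data);

            msg->address_hi = upper_32_bits(doorbell);
            msg->address_lo = lower_32_bits(doorbell);
            msg->data = data->hwirq;

            /* Swizzle the address with an IOVA if translation is in effect */
            iommu_dma_map_msi_msg(data->irq, msg);
    }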
     

04 Aug, 2016

1 commit

  • The dma-mapping core and the implementations do not change the DMA
    attributes passed by pointer, so the pointer could simply point to
    const data. However, the attributes do not need to be a bitfield at
    all; a plain unsigned long will do fine:

    1. It is just simpler, both in terms of reading the code and of
    setting attributes. Instead of initializing local attributes on the
    stack and passing a pointer to them to dma_set_attr(), callers just
    set the bits (a before/after sketch follows this entry).

    2. It brings safety and const-correctness checking, because the
    attributes are passed by value.

    Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;

    @@
    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

    and

    // Options: --all-includes
    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;

    @@
    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

    Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
    Signed-off-by: Krzysztof Kozlowski
    Acked-by: Vineet Gupta
    Acked-by: Robin Murphy
    Acked-by: Hans-Christian Noren Egtvedt
    Acked-by: Mark Salter [c6x]
    Acked-by: Jesper Nilsson [cris]
    Acked-by: Daniel Vetter [drm]
    Reviewed-by: Bart Van Assche
    Acked-by: Joerg Roedel [iommu]
    Acked-by: Fabien Dessenne [bdisp]
    Reviewed-by: Marek Szyprowski [vb2-core]
    Acked-by: David Vrabel [xen]
    Acked-by: Konrad Rzeszutek Wilk [xen swiotlb]
    Acked-by: Richard Kuo [hexagon]
    Acked-by: Geert Uytterhoeven [m68k]
    Acked-by: Gerald Schaefer [s390]
    Acked-by: Bjorn Andersson
    Acked-by: Hans-Christian Noren Egtvedt [avr32]
    Acked-by: Vineet Gupta [arc]
    Acked-by: Robin Murphy [arm64 and dma-iommu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Krzysztof Kozlowski
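
    As a hedged before/after sketch of what the conversion means for a
    caller, consider allocating a write-combined buffer. The first helper
    targets the tree before this commit (struct dma_attrs, dma_set_attr()),
    the second the tree after it, so the two would not build side by side;
    the function names are illustrative:

    #include <linux/dma-mapping.h>

    /* Old style: build a struct dma_attrs on the stack, pass a pointer */
    static void *alloc_wc_old(struct device *dev, size_t size,
                              dma_addr_t *handle)
    {
            DEFINE_DMA_ATTRS(attrs);

            dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
    }

    /* New style: the attributes are just bits in an unsigned long */
    static void *alloc_wc_new(struct device *dev, size_t size,
                              dma_addr_t *handle)
    {
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                   DMA_ATTR_WRITE_COMBINE);
    }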
     

09 May, 2016

1 commit

  • Now that we know exactly which page sizes our caller wants to use in the
    given domain, we can restrict higher-order allocation attempts to just
    those sizes, if any, and avoid wasting time or effort on sizes which
    offer no benefit (a rough sketch follows this entry). In the same vein,
    this also lets us accommodate a minimum order greater than 0 for
    special cases.

    Signed-off-by: Robin Murphy
    Acked-by: Will Deacon
    Tested-by: Yong Wu
    Signed-off-by: Joerg Roedel

    Robin Murphy
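
    A rough, hedged sketch of the order-restriction idea in this entry:
    turn the domain's supported page sizes into a mask of buddy-allocator
    orders and only try those, largest first. The function name and
    parameters are illustrative, not the real interface; __fls() and
    alloc_pages() are standard kernel helpers:

    #include <linux/gfp.h>
    #include <linux/bitops.h>
    #include <linux/mm.h>

    static struct page *sketch_alloc_one(unsigned long pgsize_bitmap,
                                         unsigned int count, gfp_t gfp,
                                         unsigned int *orderp)
    {
            /* Express supported page sizes >= PAGE_SIZE as orders */
            unsigned long order_mask = pgsize_bitmap >> PAGE_SHIFT;

            if (!count || !order_mask)
                    return NULL;

            /* Never ask for more pages than the caller still needs */
            order_mask &= (2UL << __fls(count)) - 1;

            while (order_mask) {
                    unsigned int order = __fls(order_mask);
                    struct page *page = alloc_pages(gfp, order);

                    if (page) {
                            *orderp = order; /* caller maps 1 << order pages */
                            return page;
                    }
                    /* That size failed; fall back to the next smaller one */
                    order_mask &= ~(1UL << order);
            }
            return NULL;
    }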
     

15 Oct, 2015

1 commit

  • Taking inspiration from the existing arch/arm code, break out some
    generic functions to interface the DMA-API to the IOMMU-API. This will
    do the bulk of the heavy lifting for IOMMU-backed dma-mapping.

    Since associating an IOVA allocator with an IOMMU domain is a fairly
    common need, rather than introduce yet another private structure just
    to do this for ourselves, extend the top-level struct iommu_domain
    with the notion. A simple opaque cookie allows reuse by other IOMMU
    API users with their various incompatible allocator types (a sketch
    follows this entry).

    Signed-off-by: Robin Murphy
    Signed-off-by: Joerg Roedel

    Robin Murphy
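
    A hedged sketch of the opaque-cookie idea from this entry: the domain
    carries a pointer it never interprets, and the DMA layer hangs its IOVA
    allocator off it. The structure and helper names below are illustrative
    stand-ins (mainline uses an iova_cookie field with
    iommu_get_dma_cookie()/iommu_put_dma_cookie()); iova_domain,
    put_iova_domain() and the slab helpers are existing kernel interfaces:

    #include <linux/iova.h>
    #include <linux/slab.h>
    #include <linux/errno.h>

    struct sketch_iommu_domain {
            /* ... existing iommu_domain members ... */
            void *iova_cookie;      /* opaque: owned by whoever attached it */
    };

    static int sketch_get_dma_cookie(struct sketch_iommu_domain *domain)
    {
            struct iova_domain *iovad;

            if (domain->iova_cookie)
                    return -EEXIST;

            iovad = kzalloc(sizeof(*iovad), GFP_KERNEL);
            if (!iovad)
                    return -ENOMEM;

            /* The DMA layer calls init_iova_domain() on this later */
            domain->iova_cookie = iovad;
            return 0;
    }

    static void sketch_put_dma_cookie(struct sketch_iommu_domain *domain)
    {
            struct iova_domain *iovad = domain->iova_cookie;

            if (!iovad)
                    return;

            if (iovad->granule)     /* only if init_iova_domain() ever ran */
                    put_iova_domain(iovad);
            kfree(iovad);
            domain->iova_cookie = NULL;
    }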