09 Sep, 2020
2 commits
-
Using the best-fit algorithm, instead of the first-fit
algorithm, may reduce fragmentation when allocating
IOVAs.

Bug: 149544392
Link: https://lore.kernel.org/lkml/7239ddd532e94a4371289f3be23c66a3@codeaurora.org/
Change-Id: Icfbac0cc7be972a092915335508cbc73c47471cf
Signed-off-by: Patrick Daly
Signed-off-by: Isaac J. Manjarres
Signed-off-by: Chris Goldsworthy
-
Some devices have a memory map which contains gaps or holes.
In order for the device to have as much IOVA space as possible,
allow its driver to inform the DMA-IOMMU layer that it should
not allocate addresses from these holes.

Bug: 149544395
Link: https://lore.kernel.org/lkml/20200220185728.GA32137@pratikp-lnx/
Change-Id: I15bd1d313d889c2572d0eb2adecf6bebde3267f7
Signed-off-by: Patrick Daly
Signed-off-by: Isaac J. Manjarres
Signed-off-by: Chris Goldsworthy
24 Jun, 2019
1 commit
-
Linux 5.2-rc6
19 Jun, 2019
1 commit
-
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not see http www gnu org
licenses

extracted by the scancode license scanner, the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner
Reviewed-by: Alexios Zavras
Reviewed-by: Allison Randal
Reviewed-by: Enrico Weigelt
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman
27 May, 2019
4 commits
-
Signed-off-by: Christoph Hellwig
Acked-by: Robin Murphy
Signed-off-by: Joerg Roedel
-
There is nothing really arm64 specific in the iommu_dma_ops
implementation, so move it to dma-iommu.c and keep a lot of symbols
self-contained. Note the implementation does depend on the
DMA_DIRECT_REMAP infrastructure for now, so we'll have to make the
DMA_IOMMU support depend on it, but this will be relaxed soon.

Signed-off-by: Christoph Hellwig
Acked-by: Robin Murphy
Signed-off-by: Joerg Roedel
-
We now have an arch_dma_prep_coherent architecture hook that is used
for the generic DMA remap allocator, and we should use the same
interface for the dma-iommu code.

Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
Acked-by: Catalin Marinas
Signed-off-by: Joerg Roedel
-
No need for a __KERNEL__ guard outside uapi, and add a missing comment
describing the #else cpp statement. Last but not least, include the
linux/ version of the header instead of the asm version, which is
frowned upon.

Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
Signed-off-by: Joerg Roedel
03 May, 2019
2 commits
-
A recent change split iommu_dma_map_msi_msg() in two new functions. The
function was still implemented to avoid modifying all the callers at
once.

Now that all the callers have been reworked, iommu_dma_map_msi_msg()
can be removed.

Signed-off-by: Julien Grall
Reviewed-by: Robin Murphy
Reviewed-by: Eric Auger
Acked-by: Joerg Roedel
Signed-off-by: Marc Zyngier
-
On RT, iommu_dma_map_msi_msg() may be called from non-preemptible
context. This will lead to a splat with CONFIG_DEBUG_ATOMIC_SLEEP, as
the function is using spin_lock (they can sleep on RT).

iommu_dma_map_msi_msg() is used to map the MSI page in the IOMMU PT
and update the MSI message with the IOVA.

Only the part that looks up the MSI page needs to be called in
preemptible context. As the MSI page cannot change over the lifecycle
of the MSI interrupt, the lookup can be cached and re-used later on.

iommu_dma_map_msi_msg() is now split in two functions:
- iommu_dma_prepare_msi(): This function will prepare the mapping
in the IOMMU and store the cookie in the structure msi_desc. This
function should be called in preemptible context.
- iommu_dma_compose_msi_msg(): This function will update the MSI
message with the IOVA when the device is behind an IOMMU.

Signed-off-by: Julien Grall
Reviewed-by: Robin Murphy
Reviewed-by: Eric Auger
Acked-by: Joerg Roedel
Signed-off-by: Marc Zyngier
06 Dec, 2018
1 commit
-
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.

Signed-off-by: Christoph Hellwig
Acked-by: Linus Torvalds
13 May, 2018
1 commit
-
Inclusion of include/dma-iommu.h when CONFIG_IOMMU_DMA is not selected
results in the following splat:

In file included from drivers/irqchip/irq-gic-v3-mbi.c:20:0:
./include/linux/dma-iommu.h:95:69: error: unknown type name ‘dma_addr_t’
static inline int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
^~~~~~~~~~
./include/linux/dma-iommu.h:108:74: warning: ‘struct list_head’ declared inside parameter list will not be visible outside of this definition or declaration
static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
^~~~~~~~~
scripts/Makefile.build:312: recipe for target 'drivers/irqchip/irq-gic-v3-mbi.o' failed

Fix it by including linux/types.h.
Signed-off-by: Marc Zyngier
Signed-off-by: Thomas Gleixner
Cc: Rob Herring
Cc: Jason Cooper
Cc: Ard Biesheuvel
Cc: Srinivas Kandagatla
Cc: Thomas Petazzoni
Cc: Miquel Raynal
Link: https://lkml.kernel.org/r/20180508121438.11301-5-marc.zyngier@arm.com
30 May, 2017
1 commit
-
Newly added code in the ipmmu-vmsa driver showed a small mistake
in a header file that can't be included by itself without CONFIG_IOMMU_DMA
enabled:

In file included from drivers/iommu/ipmmu-vmsa.c:13:0:
include/linux/dma-iommu.h:105:94: error: 'struct device' declared inside parameter list will not be visible outside of this definition or declaration [-Werror]

This adds a forward declaration for 'struct device', similar to how
we treat the other struct types in this case.

Fixes: 3ae47292024f ("iommu/ipmmu-vmsa: Add new IOMMU_DOMAIN_DMA ops")
Fixes: 273df9635385 ("iommu/dma: Make PCI window reservation generic")
Signed-off-by: Arnd Bergmann
Acked-by: Robin Murphy
Signed-off-by: Joerg Roedel
05 May, 2017
1 commit
-
…mu', 'arm/core', 'x86/vt-d', 'x86/amd' and 'core' into next
29 Apr, 2017
1 commit
-
The include file does not need any PCI specifics, so remove
that include. Also fix the places that relied on it.

Signed-off-by: Joerg Roedel
22 Mar, 2017
1 commit
-
Now that we're applying the IOMMU API reserved regions to our IOVA
domains, we shouldn't need to privately special-case PCI windows, or
indeed anything else which isn't specific to our iommu-dma layer.
However, since those aren't IOMMU-specific either, rather than start
duplicating code into IOMMU drivers let's transform the existing
function into an iommu_get_resv_regions() helper that they can share.

Signed-off-by: Robin Murphy
Signed-off-by: Joerg Roedel
06 Feb, 2017
1 commit
-
Back when this was first written, dma_supported() was somewhat of a
murky mess, with subtly different interpretations being relied upon in
various places. The "does device X support DMA to address range Y?"
uses assuming Y to be physical addresses, which motivated the current
iommu_dma_supported() implementation and are alluded to in the comment
therein, have since been cleaned up, leaving only the far less ambiguous
"can device X drive address bits Y" usage internal to DMA API mask
setting. As such, there is no reason to keep a slightly misleading
callback which does nothing but duplicate the current default behaviour;
we already constrain IOVA allocations to the iommu_domain aperture where
necessary, so let's leave DMA mask business to architecture-specific
code where it belongs.

Signed-off-by: Robin Murphy
Signed-off-by: Joerg Roedel
30 Jan, 2017
1 commit
-
…/will/linux into arm/core
23 Jan, 2017
1 commit
-
IOMMU domain users such as VFIO face a similar problem to DMA API ops
with regard to mapping MSI messages in systems where the MSI write is
subject to IOMMU translation. With the relevant infrastructure now in
place for managed DMA domains, it's actually really simple for other
users to piggyback off that and reap the benefits without giving up
their own IOVA management, and without having to reinvent their own
wheel in the MSI layer.

Allow such users to opt into automatic MSI remapping by dedicating a
region of their IOVA space to a managed cookie, and extend the mapping
routine to implement a trivial linear allocator in such cases, to avoid
the needless overhead of a full-blown IOVA domain.

Signed-off-by: Robin Murphy
Reviewed-by: Tomasz Nowicki
Reviewed-by: Eric Auger
Tested-by: Eric Auger
Tested-by: Tomasz Nowicki
Tested-by: Bharat Bhushan
Signed-off-by: Will Deacon
19 Jan, 2017
1 commit
-
The newly added DMA_ATTR_PRIVILEGED is useful for creating mappings that
are only accessible to privileged DMA engines. Implement it in
dma-iommu.c so that the ARM64 DMA IOMMU mapper can make use of it.

Reviewed-by: Robin Murphy
Tested-by: Robin Murphy
Acked-by: Will Deacon
Signed-off-by: Mitchel Humpherys
Signed-off-by: Will Deacon
14 Nov, 2016
1 commit
-
With the new dma_{map,unmap}_resource() functions added to the DMA API
for the benefit of cases like slave DMA, add suitable implementations to
the arsenal of our generic layer. Since cache maintenance should not be
a concern, these can both be standalone callback implementations without
the need for arch code wrappers.

CC: Joerg Roedel
Signed-off-by: Robin Murphy
Reviewed-by: Catalin Marinas
Signed-off-by: Joerg Roedel
16 Sep, 2016
2 commits
-
With our DMA ops enabled for PCI devices, we should avoid allocating
IOVAs which a host bridge might misinterpret as peer-to-peer DMA and
lead to faults, corruption or other badness. To be safe, punch out holes
for all of the relevant host bridge's windows when initialising a DMA
domain for a PCI device.

CC: Marek Szyprowski
CC: Inki Dae
Reported-by: Lorenzo Pieralisi
Signed-off-by: Robin Murphy
Signed-off-by: Will Deacon
-
When an MSI doorbell is located downstream of an IOMMU, attaching
devices to a DMA ops domain and switching on translation leads to a rude
shock when their attempt to write to the physical address returned by
the irqchip driver faults (or worse, writes into some already-mapped
buffer) and no interrupt is forthcoming.

Address this by adding a hook for relevant irqchip drivers to call from
their compose_msi_msg() callback, to swizzle the physical address with
an appropriately-mapped IOVA for any device attached to one of our DMA
ops domains.

Acked-by: Thomas Gleixner
Acked-by: Marc Zyngier
Signed-off-by: Robin Murphy
Signed-off-by: Will Deacon
04 Aug, 2016
1 commit
-
The dma-mapping core and the implementations do not change the DMA
attributes passed by pointer. Thus the pointer can point to const data.
However the attributes do not have to be a bitfield. Instead unsigned
long will do fine:

1. This is just simpler. Both in terms of reading the code and setting
attributes. Instead of initializing local attributes on the stack
and passing a pointer to it to dma_set_attr(), just set the bits.

2. It brings safety and checking for const correctness because the
attributes are passed by value.

Semantic patches for this change (at least most of them):
virtual patch
virtual context

@r@
identifier f, attrs;
@@
f(...,
- struct dma_attrs *attrs
+ unsigned long attrs
, ...)
{
...
}

@@
identifier r.f;
@@
f(...,
- NULL
+ 0
)

and
// Options: --all-includes
virtual patch
virtual context

@r@
identifier f, attrs;
type t;
@@
t f(..., struct dma_attrs *attrs);
identifier r.f;
@@
f(...,
- NULL
+ 0
)

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski
Acked-by: Vineet Gupta
Acked-by: Robin Murphy
Acked-by: Hans-Christian Noren Egtvedt
Acked-by: Mark Salter [c6x]
Acked-by: Jesper Nilsson [cris]
Acked-by: Daniel Vetter [drm]
Reviewed-by: Bart Van Assche
Acked-by: Joerg Roedel [iommu]
Acked-by: Fabien Dessenne [bdisp]
Reviewed-by: Marek Szyprowski [vb2-core]
Acked-by: David Vrabel [xen]
Acked-by: Konrad Rzeszutek Wilk [xen swiotlb]
Acked-by: Joerg Roedel [iommu]
Acked-by: Richard Kuo [hexagon]
Acked-by: Geert Uytterhoeven [m68k]
Acked-by: Gerald Schaefer [s390]
Acked-by: Bjorn Andersson
Acked-by: Hans-Christian Noren Egtvedt [avr32]
Acked-by: Vineet Gupta [arc]
Acked-by: Robin Murphy [arm64 and dma-iommu]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
09 May, 2016
1 commit
-
Now that we know exactly which page sizes our caller wants to use in the
given domain, we can restrict higher-order allocation attempts to just
those sizes, if any, and avoid wasting any time or effort on other sizes
which offer no benefit. In the same vein, this also lets us accommodate
a minimum order greater than 0 for special cases.

Signed-off-by: Robin Murphy
Acked-by: Will Deacon
Tested-by: Yong Wu
Signed-off-by: Joerg Roedel
15 Oct, 2015
1 commit
-
Taking inspiration from the existing arch/arm code, break out some
generic functions to interface the DMA-API to the IOMMU-API. This will
do the bulk of the heavy lifting for IOMMU-backed dma-mapping.

Since associating an IOVA allocator with an IOMMU domain is a fairly
common need, rather than introduce yet another private structure just to
do this for ourselves, extend the top-level struct iommu_domain with the
notion. A simple opaque cookie allows reuse by other IOMMU API users
with their various different incompatible allocator types.

Signed-off-by: Robin Murphy
Signed-off-by: Joerg Roedel