04 Aug, 2016
1 commit
-
The dma-mapping core and the implementations do not change the DMA
attributes passed by pointer. Thus the pointer can point to const data.
However the attributes do not have to be a bitfield. Instead unsigned
long will do fine:

1. This is just simpler, both in terms of reading the code and setting
attributes. Instead of initializing local attributes on the stack and
passing a pointer to them to dma_set_attr(), just set the bits.

2. It brings safety and const-correctness checking, because the
attributes are passed by value.
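For illustration, a hedged sketch of what the change means at a call
site: the dma-mapping calls are the real API before and after this
commit, while the surrounding helper is hypothetical.

#include <linux/dma-mapping.h>

/* Before this change (struct-based attributes built on the stack):
 *
 *    DEFINE_DMA_ATTRS(attrs);
 *    dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
 *    buf = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL, &attrs);
 */

/* After this change: attributes are a plain bitmask passed by value. */
static void *alloc_wc(struct device *dev, size_t size, dma_addr_t *handle)
{
    unsigned long attrs = DMA_ATTR_WRITE_COMBINE;

    return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, attrs);
}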
Semantic patches for this change (at least most of them):

virtual patch
virtual context

@r@
identifier f, attrs;

@@
f(...,
- struct dma_attrs *attrs
+ unsigned long attrs
, ...)
{
...
}

@@
identifier r.f;
@@
f(...,
- NULL
+ 0
)

and
// Options: --all-includes
virtual patch
virtual context

@r@
identifier f, attrs;
type t;

@@
t f(..., struct dma_attrs *attrs);

@@
identifier r.f;
@@
f(...,
- NULL
+ 0
)

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski
Acked-by: Vineet Gupta
Acked-by: Robin Murphy
Acked-by: Hans-Christian Noren Egtvedt
Acked-by: Mark Salter [c6x]
Acked-by: Jesper Nilsson [cris]
Acked-by: Daniel Vetter [drm]
Reviewed-by: Bart Van Assche
Acked-by: Joerg Roedel [iommu]
Acked-by: Fabien Dessenne [bdisp]
Reviewed-by: Marek Szyprowski [vb2-core]
Acked-by: David Vrabel [xen]
Acked-by: Konrad Rzeszutek Wilk [xen swiotlb]
Acked-by: Joerg Roedel [iommu]
Acked-by: Richard Kuo [hexagon]
Acked-by: Geert Uytterhoeven [m68k]
Acked-by: Gerald Schaefer [s390]
Acked-by: Bjorn Andersson
Acked-by: Hans-Christian Noren Egtvedt [avr32]
Acked-by: Vineet Gupta [arc]
Acked-by: Robin Murphy [arm64 and dma-iommu]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
11 Feb, 2016
1 commit
-
This patch adds the DMA_ATTR_ALLOC_SINGLE_PAGES attribute to the
DMA-mapping subsystem.

This attribute can be used as a hint to the DMA-mapping subsystem that
it's likely not worth it to try to allocate large pages behind the
scenes. Large pages are likely to make an IOMMU TLB work more
efficiently, but may not be worth the effort of allocating them. See the
Documentation contained in
this patch for more details about this attribute and when to use it.

Note that the name of the hint (DMA_ATTR_ALLOC_SINGLE_PAGES) is loosely
based on the name MADV_NOHUGEPAGE. Just as there is MADV_NOHUGEPAGE
vs. MADV_HUGEPAGE, we could also add an "opposite" attribute to
DMA_ATTR_ALLOC_SINGLE_PAGES. Without such an "opposite" attribute, the
absence of DMA_ATTR_ALLOC_SINGLE_PAGES means "use your best judgement
about whether to use small pages or large pages".

Signed-off-by: Douglas Anderson
Acked-by: Marek Szyprowski
Reviewed-by: Robin Murphy
Tested-by: Javier Martinez Canillas
Signed-off-by: Russell King
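A minimal sketch of how a driver might pass this hint, written against
the later unsigned long attrs convention; the helper name and use case
are hypothetical, while dma_alloc_attrs() and the attribute are the
real API.

#include <linux/dma-mapping.h>

/* Hypothetical: a large, rarely-touched buffer where saving IOMMU TLB
 * entries is not worth the cost of hunting for huge pages. */
static void *alloc_plain(struct device *dev, size_t size, dma_addr_t *handle)
{
    return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                           DMA_ATTR_ALLOC_SINGLE_PAGES);
}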
05 May, 2014
1 commit
-
Fixed multiple spelling errors.
Acked-by: Randy Dunlap
Signed-off-by: Carlos E. Garcia
Signed-off-by: Jiri Kosina
14 Oct, 2013
1 commit
-
Fix some typos in Documentation/DMA-attributes.txt.
Signed-off-by: Xishi Qiu
Acked-by: Rob Landley
Signed-off-by: Jiri Kosina
29 Nov, 2012
1 commit
-
This patch adds the DMA_ATTR_FORCE_CONTIGUOUS attribute to the DMA-mapping
subsystem.

By default the DMA-mapping subsystem is allowed to assemble a buffer
allocated by the dma_alloc_attrs() function from individual pages, as long
as it can be mapped as a contiguous chunk into the device's DMA address
space. Specifying this attribute forces the allocated buffer to be
contiguous in physical memory as well.

Signed-off-by: Marek Szyprowski
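A hedged sketch of requesting a physically contiguous buffer, using the
current unsigned long attrs convention; the helper and the use case are
hypothetical.

#include <linux/dma-mapping.h>

/* Hypothetical: hardware with a second, IOMMU-less master that scans
 * the buffer, so the backing memory itself must be contiguous. */
static void *alloc_contig(struct device *dev, size_t size, dma_addr_t *handle)
{
    return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                           DMA_ATTR_FORCE_CONTIGUOUS);
}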
30 Jul, 2012
2 commits
-
This patch adds the DMA_ATTR_SKIP_CPU_SYNC attribute to the DMA-mapping
subsystem.

By default, the dma_map_{single,page,sg} family of functions transfers a
given buffer from the CPU domain to the device domain. Some advanced use
cases might require sharing a buffer between more than one device. This
requires a mapping to be created separately for each device, which is
usually done by calling a dma_map_{single,page,sg} function more than
once for the given buffer, with a device pointer for each device taking
part in the sharing. The first call transfers the buffer from the 'CPU'
domain to the 'device' domain, which synchronizes the CPU caches for the
given region (usually meaning that the cache has been flushed or
invalidated, depending on the DMA direction). However, subsequent calls
to dma_map_{single,page,sg}() for the other devices will perform exactly
the same synchronization operation on the CPU cache. CPU cache
synchronization can be time consuming, especially for large buffers, so
it is highly recommended to avoid it when possible.

DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of
the CPU cache for a given buffer, on the assumption that it has already
been transferred to the 'device' domain. This attribute can also be used
with the dma_unmap_{single,page,sg} family of functions to force a
buffer to stay in the device domain after its mapping is released. Use
this attribute with care!

Signed-off-by: Marek Szyprowski
Reviewed-by: Kyungmin Park
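A hedged sketch of the multi-device sharing pattern described above; the
helper and both devices are hypothetical, dma_map_page_attrs() is the
real API (shown with the current unsigned long attrs convention), and
error handling via dma_mapping_error() is omitted for brevity.

#include <linux/dma-mapping.h>

/* Hypothetical: one page shared by two DMA masters. Only the first
 * mapping pays for the CPU cache maintenance; the second skips it
 * because the buffer is already in the 'device' domain. */
static void map_shared(struct device *dev_a, struct device *dev_b,
                       struct page *page, size_t size,
                       dma_addr_t *addr_a, dma_addr_t *addr_b)
{
    *addr_a = dma_map_page_attrs(dev_a, page, 0, size,
                                 DMA_TO_DEVICE, 0);
    *addr_b = dma_map_page_attrs(dev_b, page, 0, size,
                                 DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
}

-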
This patch adds the DMA_ATTR_NO_KERNEL_MAPPING attribute, which lets the
platform avoid creating a kernel virtual mapping for the allocated
buffer. On some architectures creating such a mapping is a non-trivial
task and consumes scarce resources (such as kernel virtual address space
or DMA consistent address space). Buffers allocated with this attribute
can be passed to user space only by calling dma_mmap_attrs().

Signed-off-by: Marek Szyprowski
Reviewed-by: Kyungmin Park
Reviewed-by: Daniel Vetter
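A hedged sketch of the allocate-then-mmap flow; the context struct and
helpers are hypothetical, while dma_alloc_attrs() and dma_mmap_attrs()
are the real API (current unsigned long attrs convention).

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Hypothetical driver context. With DMA_ATTR_NO_KERNEL_MAPPING the
 * returned pointer is an opaque cookie, not a kernel virtual address. */
struct my_buf {
    void *cookie;
    dma_addr_t handle;
    size_t size;
};

static int my_buf_alloc(struct device *dev, struct my_buf *b)
{
    b->cookie = dma_alloc_attrs(dev, b->size, &b->handle, GFP_KERNEL,
                                DMA_ATTR_NO_KERNEL_MAPPING);
    return b->cookie ? 0 : -ENOMEM;
}

/* The only supported way to touch the data: map it into user space. */
static int my_buf_mmap(struct device *dev, struct my_buf *b,
                       struct vm_area_struct *vma)
{
    return dma_mmap_attrs(dev, vma, b->cookie, b->handle, b->size,
                          DMA_ATTR_NO_KERNEL_MAPPING);
}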
28 Mar, 2012
2 commits
-
DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
consistent or non-consistent memory as it sees fit. By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.

Signed-off-by: Marek Szyprowski
Acked-by: Kyungmin Park
Reviewed-by: Arnd Bergmann
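A hedged sketch of providing those sync points by hand, assuming buf and
handle came from dma_alloc_attrs(..., DMA_ATTR_NON_CONSISTENT); the
helper is hypothetical, and dma_sync_single_for_device() is the real API
used as the promised sync point.

#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Hypothetical: fill a possibly non-consistent buffer on the CPU, then
 * provide the sync point the driver promised before device access. */
static void fill_and_sync(struct device *dev, void *buf,
                          dma_addr_t handle, size_t size)
{
    memset(buf, 0, size);    /* CPU writes... */

    /* ...followed by the explicit CPU -> device sync point. */
    dma_sync_single_for_device(dev, handle, size, DMA_TO_DEVICE);

    /* device DMA may start now (omitted) */
}

-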
DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
buffered to improve performance. It will be used by the replacement for
the ARM/AVR32-specific dma_alloc_writecombine() function.

Signed-off-by: Marek Szyprowski
Acked-by: Kyungmin Park
Reviewed-by: Arnd Bergmann
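A hedged sketch of how the attrs-based call maps onto the old one
(current unsigned long attrs convention; the frame-buffer use case is
hypothetical).

#include <linux/dma-mapping.h>

/* Old, ARM-specific:
 *    fb = dma_alloc_writecombine(dev, size, &handle, GFP_KERNEL);
 * Generic attrs-based equivalent: */
static void *alloc_fb(struct device *dev, size_t size, dma_addr_t *handle)
{
    return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                           DMA_ATTR_WRITE_COMBINE);
}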
22 Jul, 2008
1 commit
-
Introduce a new DMA attribute, DMA_ATTR_WEAK_ORDERING, to use weak
ordering on DMA mappings in the Cell processor, and add code to the
Cell's IOMMU implementation to use it.

Dynamic mappings can be weakly or strongly ordered on an individual
basis, but the fixed mapping has to be either completely strong or
completely weak. This is currently decided by a kernel boot option (pass
iommu_fixed=weak for a weakly ordered fixed linear mapping; strongly
ordered is the default).

Signed-off-by: Mark Nelson
Signed-off-by: Arnd Bergmann
Signed-off-by: Benjamin Herrenschmidt
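A hedged sketch of requesting weak ordering on a single dynamic mapping
(current unsigned long attrs convention); the helper is hypothetical,
and the attribute is honored only on platforms, such as Cell, whose
IOMMU supports it.

#include <linux/dma-mapping.h>

/* Hypothetical: map one buffer with relaxed ordering where the platform
 * IOMMU supports it; other platforms simply ignore the attribute. */
static dma_addr_t map_weak(struct device *dev, void *buf, size_t size)
{
    return dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE,
                                DMA_ATTR_WEAK_ORDERING);
}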
29 Apr, 2008
1 commit
-
Document the new dma_*map*_attrs() functions.
[markn@au1.ibm.com: fix up for dma-add-dma_map_attrs-interfaces and update docs]
Signed-off-by: Arthur Kepner
Acked-by: David S. Miller
Cc: Tony Luck
Cc: Jesse Barnes
Cc: Jes Sorensen
Cc: Randy Dunlap
Cc: Roland Dreier
Cc: James Bottomley
Cc: Benjamin Herrenschmidt
Cc: Grant Grundler
Cc: Michael Ellerman
Signed-off-by: Mark Nelson
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds