04 Jul, 2011
9 commits
-
Pass the device-type-specific needs_bounce function in at dmabounce
register time, avoiding the need for a platform-specific global
function to do this.
Signed-off-by: Russell King
-
DMA addresses should not be cast to void * for printing. Fix
that to be consistent with the rest of the file.
Signed-off-by: Russell King
-
Pointers should be checked against NULL rather than 0, otherwise we
get sparse warnings.
Signed-off-by: Russell King
-
We already check that dev != NULL, so this won't be reached.
Signed-off-by: Russell King
-
Move the decision whether to bounce into __dma_map_page(), before
the check for high pages. This avoids triggering the high page
check for devices which aren't using dmabounce. Fix the unmap path
to cope too.
Signed-off-by: Russell King
-
Move the decision to perform DMA bouncing out of map_single() into its
own stand-alone function.
Signed-off-by: Russell King
-
This check is done at the DMA API level, so there's no point repeating
it here.
Signed-off-by: Russell King
-
Use dma_map_page()/dma_unmap_page() internals to handle dma_map_single()
and dma_unmap_single().
Signed-off-by: Russell King
-
When map_single() is unable to obtain a safe buffer, we must return
the dma_addr_t error value, which is ~0 rather than 0.
Signed-off-by: Russell King
07 Jan, 2011
1 commit
-
Add ARM support for the DMA debug infrastructure, which allows
DMA API usage to be debugged.
Signed-off-by: Russell King
26 Feb, 2010
1 commit
-
…treaming', 'u300' and 'umc' into devel
15 Feb, 2010
1 commit
-
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API. This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache-coherent DMA.
Signed-off-by: Russell King
Tested-By: Santosh Shilimkar
Tested-By: Jamie Iles
22 Dec, 2009
1 commit
-
Commit f74f7e57ae9fa12b2951ae62ce3557799b318399 (ARM: use
flush_kernel_dcache_area() for dmabounce) has broken the dmabounce build:

  CC      arch/arm/common/dmabounce.o
arch/arm/common/dmabounce.c: In function 'unmap_single':
arch/arm/common/dmabounce.c:315: error: implicit declaration of function '__cpuc_flush_kernel_dcache_area'
make[2]: *** [arch/arm/common/dmabounce.o] Error 1

Fix it.
Signed-off-by: Mike Rapoport
Signed-off-by: Russell King
14 Dec, 2009
1 commit
-
After copying data from the bounce buffer to the real buffer, use
flush_kernel_dcache_page() to ensure that data is written back in a
manner coherent with future userspace mappings.
Signed-off-by: Russell King
23 Nov, 2009
1 commit
-
We will need to treat dma_unmap_page() differently from dma_unmap_single().
Signed-off-by: Russell King
Tested-By: Jamie Iles
16 Mar, 2009
1 commit
-
If a machine class has a custom __virt_to_bus() implementation then it
must provide a __arch_page_to_dma() implementation as well which is
_not_ based on page_address() to support highmem.

This patch fixes existing __arch_page_to_dma() implementations and
provides a default implementation otherwise. The default implementation
for highmem is based on __pfn_to_bus() which is defined only when no
custom __virt_to_bus() is provided by the machine class.

That leaves only ebsa110 and footbridge which cannot support highmem
until they provide their own __arch_page_to_dma() implementation.
But highmem support on those legacy platforms with limited memory is
certainly not a priority.
Signed-off-by: Nicolas Pitre
29 Sep, 2008
5 commits
-
Validate the direction argument like x86 does. In addition,
validate the dma_unmap_* parameters against those passed to
dma_map_* when using the DMA bounce code.
Signed-off-by: Russell King
-
Signed-off-by: Russell King
-
The dmabounce dma_sync_xxx() implementations have been broken for
quite some time; they all copy data between the DMA buffer and
the CPU-visible buffer irrespective of the direction of the mapping.
(IOW, a DMA_FROM_DEVICE mapping copies data from the DMA buffer
to the CPU buffer during a call to dma_sync_single_for_device().)

Fix it by getting rid of sync_single(), moving the contents into
the recently created dmabounce_sync_for_xxx() functions and adjusting
appropriately.

This also makes it possible to properly support the DMA range sync
functions.
Signed-off-by: Russell King
-
Signed-off-by: Russell King
-
Signed-off-by: Russell King
26 Sep, 2008
3 commits
-
No point having two of these; dma_map_page() can do all the work
for us.
Signed-off-by: Russell King
-
We can translate a struct page directly to a DMA address using
page_to_dma(). No need to use page_address() followed by
virt_to_dma().
Signed-off-by: Russell King
-
Update the ARM DMA scatter gather APIs for the scatterlist changes.
Signed-off-by: Russell King
10 Aug, 2008
2 commits
-
Convert the existing dma_sync_single_for_* APIs to the new range-based
APIs, and make the dma_sync_single_for_* API a superset of it.
Signed-off-by: Russell King
-
OMAP at least gets the return type(s) for the DMA translation functions
wrong, which can lead to subtle errors. Avoid this by moving the DMA
translation functions to asm/dma-mapping.h, and converting them to
inline functions.

Fix the OMAP DMA translation macros to use the correct argument and
result types.

Also, remove the unnecessary casts in dmabounce.c.
Signed-off-by: Russell King
27 Jul, 2008
1 commit
-
Add per-device dma_mapping_ops support for CONFIG_X86_64 as the POWER
architecture does.

This enables us to cleanly fix the Calgary IOMMU issue that some devices
are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

I think that per-device dma_mapping_ops support would also be helpful for
KVM people to support PCI passthrough but Andi thinks that this makes it
difficult to support the PCI passthrough (see the above thread). So I
CC'ed this to the KVM camp. Comments are appreciated.

A pointer to dma_mapping_ops is added to struct dev_archdata. If the
pointer is non-NULL, DMA operations in asm/dma-mapping.h use it. If it's
NULL, the system-wide dma_ops pointer is used as before.

If it's useful for KVM people, I plan to implement a mechanism to register
a hook called when a new pci (or dma capable) device is created (it works
with hot plugging). It enables IOMMUs to set up an appropriate
dma_mapping_ops per device.

The major obstacle is that dma_mapping_error doesn't take a pointer to the
device unlike other DMA operations, so x86 can't have dma_mapping_ops per
device. Note that all the POWER IOMMUs use the same dma_mapping_error
function, so this is not a problem for POWER, but x86 IOMMUs use different
dma_mapping_error functions.

The first patch adds the device argument to dma_mapping_error. The patch
is trivial but large since it touches lots of drivers and dma-mapping.h in
all the architectures.

This patch:
dma_mapping_error() doesn't take a pointer to the device unlike other DMA
operations, so we can't have dma_mapping_ops per device.

Note that POWER already has dma_mapping_ops per device but all the POWER
IOMMUs use the same dma_mapping_error function. x86 IOMMUs use the device
argument.

[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori
Cc: Muli Ben-Yehuda
Cc: Andi Kleen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Avi Kivity
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
22 Jul, 2008
1 commit
-
We have the dev_printk() variants for this kind of thing; use them
instead of directly trying to access the bus_id field of struct device.

This is done in order to remove bus_id entirely.
Cc: Kay Sievers
Signed-off-by: Greg Kroah-Hartman
22 Jun, 2008
1 commit
-
Noticed by Martin Michlmayr, this missing export prevents IEEE1394
from building with:

ERROR: "dma_sync_sg_for_device" [drivers/ieee1394/ieee1394.ko] undefined!
Signed-off-by: Russell King
23 Oct, 2007
2 commits
-
arch/arm/common/dmabounce.c: In function 'dma_map_sg':
arch/arm/common/dmabounce.c:445: error: implicit declaration of function 'sg_page'

Signed-off-by: FUJITA Tomonori
Signed-off-by: Jens Axboe
-
Signed-off-by: Jens Axboe
13 Oct, 2007
1 commit
-
consistent_sync() is used to handle the cache maintenance issues with
DMA operations. Since we've now removed the misuse of this function
from the two MTD drivers, rename it to prevent future misuse.
Signed-off-by: Russell King
21 May, 2007
1 commit
-
Spelling fixes in arch/arm/.
Signed-off-by: Simon Arlott
Signed-off-by: Russell King
18 Feb, 2007
1 commit
12 Feb, 2007
2 commits
-
Rather than printk'ing the dmabounce statistics occasionally to
the kernel log, provide a sysfs file to allow this information
to be periodically read.
Signed-off-by: Russell King
-
dmabounce keeps a per-device structure, and finds the correct
structure by walking a list. Since architectures can now add
fields to struct device, we can attach this structure directly to
the struct device, thereby eliminating the code to search the
list.
Signed-off-by: Russell King
08 Feb, 2007
3 commits
-
The DMA cache handling functions take virtual addresses, but in the
form of unsigned long arguments. This leads to a little confusion
about what exactly they take. So, convert them to take const void *
instead.
Signed-off-by: Russell King
-
The outer cache can be L2 as on the RealView/EB MPCore platform or even L3
or further on ARMv7 cores. This patch adds the generic support for
flushing the outer cache in the DMA operations.
Signed-off-by: Catalin Marinas
Signed-off-by: Russell King
-
Memory allocated by the coherent memory allocators will be marked
uncacheable, which means it's pointless calling consistent_sync()
to perform cache maintenance on this memory; it's just a waste of
CPU cycles.

Moreover, with the (subsequent) merge of outer cache support, it
actually breaks things to call consistent_sync() on anything but
direct-mapped memory.
Signed-off-by: Russell King
03 Nov, 2006
1 commit
-
dma_sync_single is no more (and to be removed in 2.7) so this export should be dma_sync_single_for_cpu.
Also export dma_sync_single_for_device.
Signed-off-by: Kevin Hilman
Signed-off-by: Russell King