04 Sep, 2019
1 commit
-
This function is entirely unused given that declared memory is
generally provided by platform setup code.

Signed-off-by: Christoph Hellwig
03 Sep, 2019
1 commit
-
This patch adds a new DMA API "dma_get_merge_boundary". This function
returns the DMA merge boundary if the DMA layer can merge the segments.
This patch also adds the implementation for a new dma_map_ops pointer.

Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Simon Horman
Signed-off-by: Christoph Hellwig
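
A minimal sketch of how a driver might consume this; the helper name and the decision logic are illustrative and not taken from the commit:

#include <linux/dma-mapping.h>

/* Illustrative helper: ask whether the DMA layer can merge segments. */
static bool can_merge_segments(struct device *dev)
{
        unsigned long boundary = dma_get_merge_boundary(dev);

        /* 0 means the DMA layer cannot merge segments for this device. */
        if (!boundary)
                return false;

        /*
         * A block or MMC host driver would typically feed this boundary
         * into its queue limits so requests are built in a way the DMA
         * layer (e.g. an IOMMU) can actually merge.
         */
        return true;
}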
08 Jun, 2019
1 commit
-
The exported function name is dma_max_mapping_size(), not
dma_direct_max_mapping_size(), so fix the function name in the
documentation.

Fixes: 133d624b1cee ("dma: Introduce dma_max_mapping_size()")
Signed-off-by: Yoshihiro Shimoda
Signed-off-by: Jonathan Corbet
11 Mar, 2019
2 commits
-
Pull virtio updates from Michael Tsirkin:
"Several fixes, most notably fix for virtio on swiotlb systems"* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
vhost: silence an unused-variable warning
virtio: hint if callbacks surprisingly might sleep
virtio-ccw: wire up ->bus_name callback
s390/virtio: handle find on invalid queue gracefully
virtio-ccw: diag 500 may return a negative cookie
virtio_balloon: remove the unnecessary 0-initialization
virtio-balloon: improve update_balloon_size_func
virtio-blk: Consider virtio_max_dma_size() for maximum segment size
virtio: Introduce virtio_max_dma_size()
dma: Introduce dma_max_mapping_size()
swiotlb: Add is_swiotlb_active() function
swiotlb: Introduce swiotlb_max_mapping_size()
-
Pull DMA mapping updates from Christoph Hellwig:
- add debugfs support for dumping dma-debug information (Corentin
  Labbe)
- Kconfig cleanups (Andy Shevchenko and me)
- debugfs cleanups (Greg Kroah-Hartman)
- improve dma_map_resource and use it in the media code
- arch_setup_dma_ops / arch_teardown_dma_ops cleanups
- various small cleanups and improvements for the per-device coherent
  allocator
- make the DMA mask an upper bound and don't fail "too large" dma masks
  in the remaining two architectures - this will allow big driver
  cleanups in the following merge windows

* tag 'dma-mapping-5.1' of git://git.infradead.org/users/hch/dma-mapping: (21 commits)
Documentation/DMA-API-HOWTO: update dma_mask sections
sparc64/pci_sun4v: allow large DMA masks
sparc64/iommu: allow large DMA masks
sparc64: refactor the ali DMA quirk
ccio: allow large DMA masks
dma-mapping: remove the DMA_MEMORY_EXCLUSIVE flag
dma-mapping: remove dma_mark_declared_memory_occupied
dma-mapping: move CONFIG_DMA_CMA to kernel/dma/Kconfig
dma-mapping: improve selection of dma_declare_coherent availability
dma-mapping: remove an incorrect __iommem annotation
of: select OF_RESERVED_MEM automatically
device.h: dma_mem is only needed for HAVE_GENERIC_DMA_COHERENT
mfd/sm501: depend on HAS_DMA
dma-mapping: add a kconfig symbol for arch_teardown_dma_ops availability
dma-mapping: add a kconfig symbol for arch_setup_dma_ops availability
dma-mapping: move debug configuration options to kernel/dma
dma-debug: add dumping facility via debugfs
dma: debug: no need to check return value of debugfs_create functions
videobuf2: replace a layering violation with dma_map_resource
dma-mapping: don't BUG when calling dma_map_resource on RAM
...
07 Mar, 2019
1 commit
-
dma_max_mapping_size() returns the maximum size that can be mapped
using DMA-API functions. The patch also adds the
implementation for direct DMA and a new dma_map_ops pointer
so that other implementations can expose their limit.

Cc: stable@vger.kernel.org
Reviewed-by: Konrad Rzeszutek Wilk
Reviewed-by: Christoph Hellwig
Signed-off-by: Joerg Roedel
Signed-off-by: Michael S. Tsirkin
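
A hedged sketch of the intended use: a driver clamps its transfer size to what the DMA layer (for example swiotlb bounce buffering) can map; the helper name is illustrative:

#include <linux/kernel.h>
#include <linux/dma-mapping.h>

/* Illustrative: limit a driver-chosen transfer size to the DMA layer's cap. */
static size_t clamp_transfer_size(struct device *dev, size_t wanted)
{
        return min(wanted, dma_max_mapping_size(dev));
}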
20 Feb, 2019
2 commits
-
All users of dma_declare_coherent want their allocations to be
exclusive, so default to exclusive allocations.

Signed-off-by: Christoph Hellwig
Reviewed-by: Greg Kroah-Hartman
-
The dma_mark_declared_memory_occupied() API is not used anywhere, so remove it.
Signed-off-by: Christoph Hellwig
01 Feb, 2019
1 commit
-
While debugging a DMA mapping leak, I needed to access
debug_dma_dump_mappings(), but more easily, from user space.

This patch adds a /sys/kernel/debug/dma-api/dump file which contains all
current DMA mappings.

Signed-off-by: Corentin Labbe
Signed-off-by: Christoph Hellwig
21 Jan, 2019
1 commit
-
This patch fixes two typos: a missing "e", and dma-api/driver_filter
incorrectly typed as dma-api/driver-filter.

Signed-off-by: Corentin Labbe
[jc: fixed obvious language typos on the way in]
Signed-off-by: Jonathan Corbet
20 Dec, 2018
1 commit
-
We now always return zeroed memory from dma_alloc_coherent. Note that
simply passing GFP_ZERO to dma_alloc_coherent wasn't always doing the
right thing to start with given that various allocators are not backed
by the page allocator and thus would ignore GFP_ZERO.

Signed-off-by: Christoph Hellwig
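
A hedged sketch of what this means for drivers; the ring/descriptor names are illustrative:

#include <linux/dma-mapping.h>

static void *alloc_desc_ring(struct device *dev, size_t bytes, dma_addr_t *dma)
{
        /* Returned memory is already zeroed; no memset(ring, 0, bytes) needed. */
        return dma_alloc_coherent(dev, bytes, dma, GFP_KERNEL);
}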
11 Dec, 2018
4 commits
-
DMA debug entries are one of those things which aren't that useful
individually - we will always want some larger quantity of them - and
which we don't really need to manage the exact number of - we only care
about having 'enough'. In that regard, the current behaviour of creating
them one-by-one leads to a lot of unwarranted function call overhead and
memory wasted on alignment padding.

Now that we don't have to worry about freeing anything via
dma_debug_resize_entries(), we can optimise the allocation behaviour by
grabbing whole pages at once, which will save considerably on the
aforementioned overheads, and probably offer a little more cache/TLB
locality benefit for traversing the lists under normal operation. This
should also give even less reason for an architecture-level override of
the preallocation size, so make the definition unconditional - if there
is still any desire to change the compile-time value for some platforms
it would be better off as a Kconfig option anyway.

Since freeing a whole page of entries at once becomes enough of a
challenge that it's not really worth complicating dma_debug_init(), we
may as well tweak the preallocation behaviour such that as long as we
manage to allocate *some* pages, we can leave debugging enabled on a
best-effort basis rather than otherwise wasting them.

Signed-off-by: Robin Murphy
Tested-by: Qian Cai
Signed-off-by: Christoph Hellwig
-
Now that we can dynamically allocate DMA debug entries to cope with
drivers maintaining excessively large numbers of live mappings, a driver
which *does* actually have a bug leaking mappings (and is not unloaded)
will no longer trigger the "DMA-API: debugging out of memory - disabling"
message until it gets to actual kernel OOM conditions, which means it
could go unnoticed for a while. To that end, let's inform the user each
time the pool has grown to a multiple of its initial size, which should
make it apparent that they either have a leak or might want to increase
the preallocation size.

Signed-off-by: Robin Murphy
Tested-by: Qian Cai
Signed-off-by: Christoph Hellwig
-
Certain drivers such as large multi-queue network adapters can use pools
of mapped DMA buffers larger than the default dma_debug_entry pool of
65536 entries, with the result that merely probing such a device can
cause DMA debug to disable itself during boot unless explicitly given an
appropriate "dma_debug_entries=..." option.Developers trying to debug some other driver on such a system may not be
immediately aware of this, and at worst it can hide bugs if they fail to
realise that dma-debug has already disabled itself unexpectedly by the
time their code of interest gets to run. Even once they do realise, it
can be a bit of a pain to empirically determine a suitable number of
preallocated entries to configure, short of massively over-allocating.

There's really no need for such a static limit, though, since we can
quite easily expand the pool at runtime in those rare cases that the
preallocated entries are insufficient, which is arguably the least
surprising and most useful behaviour. To that end, refactor the
prealloc_memory() logic a little bit to generalise it for runtime
reallocations as well.

Signed-off-by: Robin Murphy
Tested-by: Qian Cai
Signed-off-by: Christoph Hellwig
-
Expose nr_total_entries in debugfs, so that {num,min}_free_entries
become even more meaningful to users interested in current/maximum
utilisation. This becomes even more relevant once nr_total_entries
may change at runtime beyond just the existing AMD GART debug code.

Suggested-by: John Garry
Signed-off-by: Robin Murphy
Tested-by: Qian Cai
Signed-off-by: Christoph Hellwig
01 Sep, 2017
2 commits
-
DMA_MEMORY_IO was never used in the tree, so remove it. That means there is
no need for the DMA_MEMORY_MAP flag either now, so remove it as well and
change dma_declare_coherent_memory to return a normal errno value.

Signed-off-by: Christoph Hellwig
Reviewed-by: Marek Szyprowski
-
This flag was never implemented or used.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
29 Aug, 2017
1 commit
-
No users left, everyone switched to the _attrs versions.
Signed-off-by: Christoph Hellwig
15 Jul, 2017
1 commit
-
Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Fix some title marks to match ReST;
- use :Author: for author name;
- foo_ is a hyperlink. Get rid of it;
- Mark literal blocks as such;
- Use tables in some places that are almost using the
  table format.

Signed-off-by: Mauro Carvalho Chehab
Signed-off-by: Jonathan Corbet
06 Jun, 2017
1 commit
-
Correct the typo; the mistyped function does not exist.
Fixes: 6c9c6d6301287e ("dma-debug: New interfaces to debug dma mapping errors")
Signed-off-by: Wolfram Sang
Signed-off-by: Jonathan Corbet
27 Sep, 2016
1 commit
-
Map/Unmap a device MMIO resource from a physical address. If no dma_map_ops
method is available the operation is a no-op.

Signed-off-by: Niklas Söderlund
Acked-by: Arnd Bergmann
Signed-off-by: Vinod Koul
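
A hedged sketch of the intended use, e.g. a dmaengine client mapping a peer device's MMIO FIFO so the DMA engine can target it; names and the error-handling convention are illustrative:

#include <linux/dma-mapping.h>

static dma_addr_t map_peer_fifo(struct device *dma_dev, phys_addr_t fifo_phys,
                                size_t fifo_size)
{
        dma_addr_t addr = dma_map_resource(dma_dev, fifo_phys, fifo_size,
                                           DMA_BIDIRECTIONAL, 0);

        if (dma_mapping_error(dma_dev, addr))
                return 0;       /* illustrative error convention */

        /*
         * Released later with dma_unmap_resource(dma_dev, addr, fifo_size,
         * DMA_BIDIRECTIONAL, 0).
         */
        return addr;
}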
04 Aug, 2016
1 commit
-
The dma-mapping core and the implementations do not change the DMA
attributes passed by pointer. Thus the pointer can point to const data.
However the attributes do not have to be a bitfield. Instead unsigned
long will do fine:

1. This is just simpler. Both in terms of reading the code and setting
   attributes. Instead of initializing local attributes on the stack
   and passing pointer to it to dma_set_attr(), just set the bits.

2. It brings safeness and checking for const correctness because the
   attributes are passed by value.

Semantic patches for this change (at least most of them):

virtual patch
virtual context

@r@
identifier f, attrs;
@@

f(...,
- struct dma_attrs *attrs
+ unsigned long attrs
, ...)
{
...
}

@@
identifier r.f;
@@

f(...,
- NULL
+ 0
)

and

// Options: --all-includes
virtual patch
virtual context

@r@
identifier f, attrs;
type t;
@@

t f(..., struct dma_attrs *attrs);

@@
identifier r.f;
@@

f(...,
- NULL
+ 0
)

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski
Acked-by: Vineet Gupta
Acked-by: Robin Murphy
Acked-by: Hans-Christian Noren Egtvedt
Acked-by: Mark Salter [c6x]
Acked-by: Jesper Nilsson [cris]
Acked-by: Daniel Vetter [drm]
Reviewed-by: Bart Van Assche
Acked-by: Joerg Roedel [iommu]
Acked-by: Fabien Dessenne [bdisp]
Reviewed-by: Marek Szyprowski [vb2-core]
Acked-by: David Vrabel [xen]
Acked-by: Konrad Rzeszutek Wilk [xen swiotlb]
Acked-by: Joerg Roedel [iommu]
Acked-by: Richard Kuo [hexagon]
Acked-by: Geert Uytterhoeven [m68k]
Acked-by: Gerald Schaefer [s390]
Acked-by: Bjorn Andersson
Acked-by: Hans-Christian Noren Egtvedt [avr32]
Acked-by: Vineet Gupta [arc]
Acked-by: Robin Murphy [arm64 and dma-iommu]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
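
A hedged sketch of the resulting calling convention; the buffer purpose and names are illustrative. Attributes become plain bit flags passed by value:

#include <linux/dma-mapping.h>

static void *alloc_wc_buffer(struct device *dev, size_t size, dma_addr_t *dma)
{
        /*
         * Previously: a struct dma_attrs on the stack plus
         * dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs); now just pass the bit.
         */
        return dma_alloc_attrs(dev, size, dma, GFP_KERNEL,
                               DMA_ATTR_WRITE_COMBINE);
}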
12 Jan, 2016
1 commit
-
Change the phrase "handed off to the driver" to "handed off to the
device" as in the paragraph below.Signed-off-by: Masahiro Yamada
Signed-off-by: Jonathan Corbet
10 Nov, 2015
1 commit
-
Signed-off-by: Christoph Hellwig
Acked-by: Greg Kroah-Hartman
Cc: Alan Stern
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
25 Sep, 2015
1 commit
-
The nents argument to the DMA API functions operating on scatterlists is
always the same. The documentation used different argument names and the
matter was not mentioned in Documentation/DMA-API-HOWTO.txt at all. Fix
these.

Signed-off-by: Sakari Ailus
Signed-off-by: Jonathan Corbet
09 Sep, 2015
1 commit
-
Add a wrapper function for dma_pool_alloc() to get zeroed memory.
Signed-off-by: Sean O. Stalley
Cc: Vinod Koul
Cc: Bjorn Helgaas
Cc: Gilles Muller
Cc: Nicolas Palix
Cc: Michal Marek
Cc: Sebastian Andrzej Siewior
Cc: Jonathan Corbet
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
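
A hedged sketch of the new helper in use; the pool is assumed to have been created earlier with dma_pool_create(), and the names are illustrative:

#include <linux/dmapool.h>

static void *alloc_zeroed_desc(struct dma_pool *pool, dma_addr_t *dma)
{
        /* Same as dma_pool_alloc() followed by a memset(), in one call. */
        return dma_pool_zalloc(pool, GFP_ATOMIC, dma);
}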
30 May, 2015
1 commit
-
David Ahern reported that d63e2e1f3df9 ("sparc/PCI: Clip bridge windows
to fit in upstream windows") fails to boot on sparc/T5-8:

pci 0000:06:00.0: reg 0x184: can't handle BAR above 4GB (bus address 0x110204000)
The problem is that sparc64 assumed that dma_addr_t only needed to hold DMA
addresses, i.e., bus addresses returned via the DMA API (dma_map_single(),
etc.), while the PCI core assumed dma_addr_t could hold *any* bus address,
including raw BAR values. On sparc64, all DMA addresses fit in 32 bits, so
dma_addr_t is a 32-bit type. However, BAR values can be 64 bits wide, so
they don't fit in a dma_addr_t. d63e2e1f3df9 added new checking that
tripped over this mismatch.

Add pci_bus_addr_t, which is wide enough to hold any PCI bus address,
including both raw BAR values and DMA addresses. This will be 64 bits
on 64-bit platforms and on platforms with a 64-bit dma_addr_t. Then
dma_addr_t only needs to be wide enough to hold addresses from the DMA API.

[bhelgaas: changelog, bugzilla, Kconfig to ensure pci_bus_addr_t is at
least as wide as dma_addr_t, documentation]
Fixes: d63e2e1f3df9 ("sparc/PCI: Clip bridge windows to fit in upstream windows")
Fixes: 23b13bc76f35 ("PCI: Fail safely if we can't handle BARs larger than 4GB")
Link: http://lkml.kernel.org/r/CAE9FiQU1gJY1LYrxs+ma5LCTEEe4xmtjRG0aXJ9K_Tsu+m9Wuw@mail.gmail.com
Link: http://lkml.kernel.org/r/1427857069-6789-1-git-send-email-yinghai@kernel.org
Link: https://bugzilla.kernel.org/show_bug.cgi?id=96231
Reported-by: David Ahern
Tested-by: David Ahern
Signed-off-by: Yinghai Lu
Signed-off-by: Bjorn Helgaas
Acked-by: David S. Miller
CC: stable@vger.kernel.org # v3.19+
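
A hedged sketch of the distinction this draws; the helper name is illustrative. Raw BAR bus addresses go in pci_bus_addr_t, while DMA-API results fit in dma_addr_t:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static dma_addr_t map_buf_and_note_bar(struct pci_dev *pdev, void *buf,
                                       size_t len, pci_bus_addr_t *bar0)
{
        /* Bus address of BAR 0: may exceed 32 bits even if dma_addr_t is 32-bit. */
        *bar0 = pci_bus_address(pdev, 0);

        /* DMA-API address: guaranteed to fit in dma_addr_t. */
        return dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
}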
27 May, 2014
1 commit
-
Sometimes we used "cpu," other times "CPU." Use "CPU" consistently.
Suggested-by: Randy Dunlap
Signed-off-by: Bjorn Helgaas
21 May, 2014
2 commits
-
dma_declare_coherent_memory() takes two addresses for a region of memory: a
"bus_addr" and a "device_addr". I think the intent is that "bus_addr" is
the physical address a *CPU* would use to access the region, and
"device_addr" is the bus address the *device* would use to address the
region.

Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t.
Most callers already supply a phys_addr_t for this argument. The others
supply a 32-bit integer (a constant, unsigned int, or __u32) and need no
change.

Use "unsigned long", not phys_addr_t, to hold PFNs.
No functional change (this could theoretically fix a truncation in a config
with 32-bit dma_addr_t and 64-bit phys_addr_t, but I don't think there are
any such cases involving this code).

Signed-off-by: Bjorn Helgaas
Acked-by: Arnd Bergmann
Acked-by: Greg Kroah-Hartman
Acked-by: James Bottomley
Acked-by: Randy Dunlap
-
The DMA-API documentation sometimes refers to "physical addresses" when it
really means "bus addresses." Sometimes these are identical, but they may
be different if the bridge leading to the bus performs address translation.
Update the documentation to use "bus address" when appropriate.

Also, consistently capitalize "DMA", use parens with function names, use
dev_printk() in examples, and reword a few sections for clarity.

No functional change; documentation changes only.
Signed-off-by: Bjorn Helgaas
Acked-by: Greg Kroah-Hartman
Acked-by: Arnd Bergmann
Acked-by: James Bottomley
Acked-by: Randy Dunlap
17 Sep, 2013
1 commit
-
Provide a helper to set both the DMA and coherent DMA masks to the
same value - this avoids duplicated code in a number of drivers,
sometimes with buggy error handling, and also allows us to identify
which drivers do things differently.

Signed-off-by: Russell King
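
A hedged sketch of the pattern this helper replaces, with the common 64-bit-then-32-bit fallback; the function name is illustrative:

#include <linux/dma-mapping.h>

static int init_dma_masks(struct device *dev)
{
        /* One call instead of separate dma_set_mask() + dma_set_coherent_mask(). */
        if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
                return 0;
        return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
}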
24 Oct, 2012
1 commit
-
Add dma-debug interface debug_dma_mapping_error() to debug
drivers that fail to check dma mapping errors on addresses
returned by dma_map_single() and dma_map_page() interfaces.
This interface clears a flag set by debug_dma_map_page() to
indicate that dma_mapping_error() has been called by the
driver. When the driver does unmap, debug_dma_unmap() checks the
flag and, if it is still set, prints a warning message that
includes the call trace leading up to the unmap. This
interface can be called from dma_mapping_error() routines to
enable dma mapping error check debugging.

Tested: Intel iommu and swiotlb (iommu=soft) on x86-64 with
CONFIG_DMA_API_DEBUG enabled and disabled.

Signed-off-by: Shuah Khan
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Joerg Roedel
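
A hedged sketch of the driver-side pattern dma-debug now checks for; names are illustrative. The dma_mapping_error() call clears the flag, so an unchecked mapping is reported at unmap time:

#include <linux/dma-mapping.h>

static int map_tx_buffer(struct device *dev, void *buf, size_t len,
                         dma_addr_t *dma)
{
        *dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, *dma))       /* must be checked before use */
                return -ENOMEM;
        return 0;
}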
03 Nov, 2011
1 commit
-
Lots of driver code does a dma_alloc_coherent() and then zeroes out the
memory with a memset. Make it easy for them.

Cc: Alexandre Bounine
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
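
A hedged sketch of the helper added here; names are illustrative. Note that later kernels dropped it again once dma_alloc_coherent() itself returned zeroed memory, as described in the 20 Dec, 2018 entry above:

#include <linux/dma-mapping.h>

static void *alloc_zeroed_ring(struct device *dev, size_t bytes, dma_addr_t *dma)
{
        /* Equivalent to dma_alloc_coherent() followed by memset(..., 0, bytes). */
        return dma_zalloc_coherent(dev, bytes, dma, GFP_KERNEL);
}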
11 Aug, 2010
1 commit
-
Architectures implement dma_is_consistent() in different ways (some
misinterpret the definition of API in DMA-API.txt). So it hasn't been so
useful for drivers. We have only one user of the API in tree, and it's
unlikely that out-of-tree drivers use it.

Even if we fixed dma_is_consistent() in some architectures, it doesn't look
useful at all. It was invented long ago for some old systems that can't
allocate coherent memory at all. It's better to export only APIs that are
definitely necessary for drivers.

Let's remove this API.
Signed-off-by: FUJITA Tomonori
Cc: James Bottomley
Reviewed-by: Konrad Rzeszutek Wilk
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
13 Mar, 2010
6 commits
-
Signed-off-by: FUJITA Tomonori
Cc: James Bottomley
Cc: "David S. Miller"
Cc: Randy Dunlap
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
- remove the PCI DMA API description in DMA-API.txt
- remove the descriptions of dma_unmap macros since
PCI-DMA-mapping.txt has the same description.

Signed-off-by: FUJITA Tomonori
Cc: James Bottomley
Cc: "David S. Miller"
Reviewed-by: Randy Dunlap
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
dma_set_coherent_mask corresponds to pci_set_consistent_dma_mask. This is
necessary to move to the generic device model DMA API from the PCI bus
specific API in the long term.

dma_set_coherent_mask works in the exact same way that
pci_set_consistent_dma_mask does. So this patch also changes
pci_set_consistent_dma_mask to call dma_set_coherent_mask.

Signed-off-by: FUJITA Tomonori
Cc: James Bottomley
Cc: David S. Miller
Cc: Jesse Barnes
Cc: Benjamin Herrenschmidt
Cc: Russell King
Cc: Greg KH
Cc: Kay Sievers
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
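
A hedged sketch of a PCI driver converting to the generic call; names are illustrative. Both calls set the same coherent mask on the device:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int set_coherent_mask(struct pci_dev *pdev)
{
        /* Old, PCI-only: pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); */
        return dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
}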
-
Adds the following macros:
DECLARE_DMA_UNMAP_ADDR(ADDR_NAME)
DECLARE_DMA_UNMAP_LEN(LEN_NAME)
dma_unmap_addr(PTR, ADDR_NAME)
dma_unmap_addr_set(PTR, ADDR_NAME, VAL)
dma_unmap_len(PTR, LEN_NAME)
dma_unmap_len_set(PTR, LEN_NAME, VAL)

The API corresponds to the pci_unmap state API. We'll move to this new
generic API from the PCI specific API in the long term. As
include/asm-generic/pci-dma-compat.h does, the pci_unmap API simply calls
the new generic API for some time.

Signed-off-by: FUJITA Tomonori
Cc: James Bottomley
Cc: David S. Miller
Cc: Jesse Barnes
Cc: Arnd Bergmann
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
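
A hedged sketch of the intended use in a driver ring structure; struct and field names are illustrative, and the definition macros are assumed to carry the DEFINE_DMA_UNMAP_ADDR/LEN spelling used by mainline headers:

#include <linux/dma-mapping.h>

struct tx_slot {
        DEFINE_DMA_UNMAP_ADDR(mapping); /* compiles away when unmap state is unneeded */
        DEFINE_DMA_UNMAP_LEN(len);
};

static void slot_fill(struct tx_slot *slot, dma_addr_t addr, u32 size)
{
        dma_unmap_addr_set(slot, mapping, addr);
        dma_unmap_len_set(slot, len, size);
}

static void slot_unmap(struct device *dev, struct tx_slot *slot)
{
        dma_unmap_single(dev, dma_unmap_addr(slot, mapping),
                         dma_unmap_len(slot, len), DMA_TO_DEVICE);
}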
-
dma_sync_single_for_cpu/for_device supports a partial sync so there is no
point to have dma_sync_single_range (also dma_sync_single was obsoleted
long ago, replaced with dma_sync_single_for_cpu/for_device).

There is no user of dma_sync_single_range() in mainline and only Alpha
architecture supports dma_sync_single_range(). So it's unlikely that
someone out of the tree uses it.

Signed-off-by: FUJITA Tomonori
Acked-by: David Miller
Acked-by: James Bottomley
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
This adds the description of the following eight functions:
dma_sync_single_for_cpu
pci_dma_sync_single_for_cpu
dma_sync_single_for_device
pci_dma_sync_single_for_device
dma_sync_sg_for_cpu
pci_dma_sync_sg_for_cpu
dma_sync_sg_for_device
pci_dma_sync_sg_for_device

It was unclear that the API permits a partial sync (some network drivers
already do though). I made it clear that the sync_single API can do a
partial sync but the sync_sg API can't.

We could do a partial sync with the sync_sg API too, however, it's
difficult for driver writers to correctly use the sync_sg API for a
partial sync since the scatterlists passed in to the mapping API can't be
modified. It's unlikely that driver writers want to do a partial sync
with the sync_sg API (because the sync_sg API are usually used for block
drivers). So I think that it's better to forbid a partial sync with the
sync_sg API.

Signed-off-by: FUJITA Tomonori
Acked-by: David Miller
Acked-by: James Bottomley
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
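
A hedged sketch of a partial sync with the sync_single API, as the updated text allows: a receive path that only syncs the bytes the device actually wrote. The names are illustrative:

#include <linux/dma-mapping.h>

static void rx_complete(struct device *dev, dma_addr_t buf_dma, size_t pkt_len)
{
        /* Sync only the received bytes, not the whole mapped buffer. */
        dma_sync_single_for_cpu(dev, buf_dma, pkt_len, DMA_FROM_DEVICE);

        /* ... CPU inspects the packet here ... */

        /* Give the buffer back to the device for the next packet. */
        dma_sync_single_for_device(dev, buf_dma, pkt_len, DMA_FROM_DEVICE);
}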