11 Sep, 2019
8 commits
-
xen_dma_map_page uses a different and more complicated check for foreign
pages than the other three cache maintenance helpers. Switch it to the
simpler pfn_valid method as well, and document the scheme with a single
improved comment in xen_dma_map_page.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
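In sketch form, the pfn_valid scheme amounts to treating a DMA address that
resolves to a locally owned pfn as a local page and everything else as
foreign (the helper name below is illustrative, not the actual function):

    #include <linux/mm.h>
    #include <linux/pfn.h>
    #include <linux/types.h>

    /*
     * Sketch: a DMA address whose pfn is valid for this domain is "local";
     * anything else is assumed to be a foreign (grant-mapped) page, whose
     * cache maintenance has to go through Xen instead.
     */
    static bool xen_dma_page_is_local(dma_addr_t dev_addr)
    {
            return pfn_valid(PFN_DOWN(dev_addr));
    }
-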
There is no need to wrap the common versions, just wire them up directly.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
-
These routines are only used by swiotlb-xen, which cannot be modular.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
-
arm and arm64 can just use xen_swiotlb_dma_ops directly like x86, no
need for a pointer indirection.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
Reviewed-by: Stefano Stabellini
-
Calculate the required operation in the caller, and pass it directly
instead of recalculating it for each page, and use simple arithmetic
to get from the physical address to Xen page size aligned chunks.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
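Roughly, the resulting walk looks like this (a sketch with an assumed
callback type, not the exact code):

    #include <linux/types.h>
    #include <xen/page.h>           /* XEN_PAGE_SIZE */

    /*
     * Sketch: apply one pre-computed cache operation to a physical range,
     * split into chunks that never cross a Xen page boundary.
     */
    static void cache_maint_range(phys_addr_t paddr, size_t size,
                                  void (*op)(phys_addr_t paddr, size_t len))
    {
            size_t left = size;

            do {
                    size_t offset = paddr & (XEN_PAGE_SIZE - 1);
                    size_t len = left;

                    if (len + offset > XEN_PAGE_SIZE)
                            len = XEN_PAGE_SIZE - offset;

                    op(paddr, len);
                    paddr += len;
                    left -= len;
            } while (left);
    }
-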
Use the dma-noncoherent dev_is_dma_coherent helper instead of the
home-grown variant. Note that both are always initialized to the same
value in arch_setup_dma_ops.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
Reviewed-by: Stefano Stabellini
-
Share the duplicate arm/arm64 code in include/xen/arm/page-coherent.h.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
-
Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA
non-coherent devices.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
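In sketch form, the non-coherent path then forwards to the generic arch
helpers used by dma-direct/swiotlb (the function name here is illustrative):

    #include <linux/device.h>
    #include <linux/dma-direction.h>
    #include <linux/dma-noncoherent.h>
    #include <linux/types.h>

    /*
     * Sketch: local pages on a non-coherent device get the same cache
     * maintenance that dma-direct/swiotlb would perform.
     */
    static void xen_sync_local_for_device(struct device *dev,
                                          phys_addr_t paddr, size_t size,
                                          enum dma_data_direction dir)
    {
            if (dev_is_dma_coherent(dev))
                    return;
            arch_sync_dma_for_device(dev, paddr, size, dir);
    }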
04 Sep, 2019
17 commits
-
Remove a few tiny wrappers around the generic dma remap code.
Signed-off-by: Christoph Hellwig
-
Add a helper to find the backing page array based on a virtual address.
This also ensures we do the same vm_flags check everywhere instead
of slightly different or missing ones in a few places.
Signed-off-by: Christoph Hellwig
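A sketch of what such a lookup boils down to, assuming the vmalloc-level
flag for DMA remaps is called VM_DMA_COHERENT (the function name is
illustrative):

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /*
     * Sketch: return the page array backing a DMA remap, or NULL if the
     * address is not a vmalloc area flagged as a DMA-coherent mapping
     * (VM_DMA_COHERENT is the flag assumed here).
     */
    static struct page **find_dma_remap_pages(void *cpu_addr)
    {
            struct vm_struct *area = find_vm_area(cpu_addr);

            if (!area || !(area->flags & VM_DMA_COHERENT))
                    return NULL;
            return area->pages;
    }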
-
Currently the generic dma remap allocator gets a vm_flags value passed
by the caller, which is a little confusing. We just introduced a generic
vmalloc-level flag to identify the dma coherent allocations, so use
that everywhere and remove the now pointless argument.
Signed-off-by: Christoph Hellwig
-
The arm architecture had a VM_ARM_DMA_CONSISTENT flag to mark DMA
coherent remapping for a while. Lift this flag to common code so
that we can use it generically. We also check it in the only place
VM_USERMAP is directly checked so that we can entirely replace that
flag as well (although I'm not even sure why we'd want to allow
remapping DMA mappings, but I'd rather not change behavior).
Signed-off-by: Christoph Hellwig
-
Most dma_map_ops instances are IOMMUs that work perfectly fine in 32 bits
of IOVA space, and the generic direct mapping code already provides its
own routine that is intelligent based on the amount of memory actually
present. Wire up the dma-direct routine for the ARM direct mapping code
as well, and otherwise default to the constant 32-bit mask. This way
we only need to override it for the occasional odd IOMMU that requires
64-bit IOVA support, or IOMMU drivers that are more efficient if they
can fall back to the direct mapping.
Signed-off-by: Christoph Hellwig
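In outline, the resulting default looks like this (a sketch; helper names
follow the generic DMA code of this era and the function name is
illustrative):

    #include <linux/dma-direct.h>
    #include <linux/dma-mapping.h>

    /*
     * Sketch: dma-direct computes the mask from the memory actually
     * present; everything else defaults to a plain 32-bit mask unless
     * the dma_map_ops instance overrides it.
     */
    static u64 required_mask_sketch(struct device *dev)
    {
            const struct dma_map_ops *ops = get_dma_ops(dev);

            if (dma_is_direct(ops))
                    return dma_direct_get_required_mask(dev);
            if (ops->get_required_mask)
                    return ops->get_required_mask(dev);
            return DMA_BIT_MASK(32);
    }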
-
dma_declare_coherent_memory is something that the platform setup code
(which pretty much means the device tree these days) needs to do so that
drivers can use the memory as declared by the platform. Drivers
themselves have no business calling this function.
Signed-off-by: Christoph Hellwig
-
Remoteproc started using dma_declare_coherent_memory recently, which is
a bad idea from drivers, and the maintainers agreed to fix that. But
until that is fixed, only allow the driver to be built in so that we
can remove the dma_declare_coherent_memory export and prevent other
drivers from "accidentally" using it like remoteproc. Note that the
driver would also leak the declared coherent memory on unload if it
actually was built as a module at the moment.
Signed-off-by: Christoph Hellwig
Reviewed-by: Bjorn Andersson
-
dma_mmap_from_dev_coherent is only used by dma_map_ops instances,
none of which is modular.
Signed-off-by: Christoph Hellwig
-
This function is entirely unused given that declared memory is
generally provided by platform setup code.
Signed-off-by: Christoph Hellwig
-
We can already use DMA_ATTR_WRITE_COMBINE or the _wc prefixed version,
so remove the third way of doing things.
Signed-off-by: Christoph Hellwig
Reviewed-by: Laurent Pinchart
Reviewed-by: Tomi Valkeinen
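For reference, the two remaining spellings for write-combined DMA memory
look like this (a sketch; the wrapper function and its arguments are
placeholders):

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    /* Sketch: both calls request the same write-combined memory. */
    static void *alloc_wc_example(struct device *dev, size_t size,
                                  dma_addr_t *handle)
    {
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                   DMA_ATTR_WRITE_COMBINE);
            /* equivalently: dma_alloc_wc(dev, size, handle, GFP_KERNEL); */
    }
-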
CONFIG_ARCH_NO_COHERENT_DMA_MMAP is now functionally identical to
!CONFIG_MMU, so remove the separate symbol. The only difference is that
arm did not set it for !CONFIG_MMU, but arm uses a separate dma mapping
implementation including its own mmap method, which is handled by moving
the CONFIG_MMU check in dma_can_mmap so that it only applies to the
dma-direct case, just as the other ifdefs for it.
Signed-off-by: Christoph Hellwig
Acked-by: Geert Uytterhoeven # m68k
-
parisc is the only architecture that sets ARCH_NO_COHERENT_DMA_MMAP
when an MMU is enabled. AFAIK this is because parisc CPUs use VIVT
caches, which means exporting normally cacheable memory to userspace is
relatively dangerous due to cache aliasing.

But normally cacheable memory is only allocated by dma_alloc_coherent
on parisc when using the sba_iommu or ccio_iommu drivers, so just
remove the .mmap implementation for them so that we don't have to set
ARCH_NO_COHERENT_DMA_MMAP, which I plan to get rid of.
Signed-off-by: Christoph Hellwig
-
There is no need to go through dma_common_mmap for the arm-nommu
dma mmap implementation as the only possible memory not handled above
could be that from the per-device coherent pool.
Signed-off-by: Christoph Hellwig
-
Replace the local hack with the dma_can_mmap helper to check if
a given device supports mapping DMA allocations to userspace.
Signed-off-by: Christoph Hellwig
Reviewed-by: Takashi Iwai
-
Add a helper to check if DMA allocations for a specific device can be
mapped to userspace using dma_mmap_*.
Signed-off-by: Christoph Hellwig
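A sketch of what such a helper reduces to (the dma-direct case is
simplified here; the function name is illustrative):

    #include <linux/dma-mapping.h>

    /*
     * Sketch: userspace mapping works if the dma_map_ops provide ->mmap,
     * or, for the direct mapping, if there is an MMU at all (the real
     * check is a bit more involved for non-coherent direct mappings).
     */
    static bool can_mmap_sketch(struct device *dev)
    {
            const struct dma_map_ops *ops = get_dma_ops(dev);

            if (dma_is_direct(ops))
                    return IS_ENABLED(CONFIG_MMU);
            return ops->mmap != NULL;
    }

A caller such as the sound core then simply gates its mmap support on this
check, as in the previous entry.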
-
While the default ->mmap and ->get_sgtable implementations work for the
majority of our dma_map_ops implementations, they are inherently unsafe
for others that don't use the page allocator or CMA and/or use their
own way of remapping not covered by the common code. So remove the
defaults if these methods are not wired up, but instead wire up the
default implementations for all safe instances.
Fixes: e1c7e324539a ("dma-mapping: always provide the dma_map_ops based implementation")
Signed-off-by: Christoph Hellwig
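Concretely, an instance that is safe for the common code now opts in
explicitly, along these lines (the ops name is illustrative):

    #include <linux/dma-mapping.h>

    /*
     * Sketch: explicit opt-in to the shared implementations instead of
     * relying on an implicit fallback in the core.
     */
    static const struct dma_map_ops example_dma_ops = {
            /* ... the instance's map/unmap/alloc methods ... */
            .mmap           = dma_common_mmap,
            .get_sgtable    = dma_common_get_sgtable,
    };
-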
The comments are spot on and should be near the central API, not just
near a single implementation.
Signed-off-by: Christoph Hellwig
03 Sep, 2019
5 commits
-
After commit cf65a0f6f6ff ("dma-mapping: move all DMA mapping code to
kernel/dma") some of the files are referring to outdated information,
i.e. old file names of DMA mapping sources. Fix it here.

Note, the lines with "Glue code for..." have been removed completely.
Signed-off-by: Andy Shevchenko
Signed-off-by: Christoph Hellwig
-
This patch adds a new dma_map_ops callback, get_merge_boundary(), to
expose the DMA merge boundary if the domain type is IOMMU_DOMAIN_DMA.
Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Simon Horman
Acked-by: Joerg Roedel
Signed-off-by: Christoph Hellwig
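The IOMMU-side callback boils down to exposing the smallest IOMMU page
size as the boundary mask, roughly (a sketch; the in-tree code uses the
internal iommu-dma domain lookup):

    #include <linux/bitops.h>
    #include <linux/device.h>
    #include <linux/iommu.h>

    /*
     * Sketch: segments can be merged up to the smallest IOMMU page
     * granule, so report that granule minus one as the boundary mask.
     */
    static unsigned long iommu_merge_boundary_sketch(struct device *dev)
    {
            struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

            if (!domain || domain->type != IOMMU_DOMAIN_DMA)
                    return 0;       /* no merging help available */
            return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
    }
-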
This patch adds a new DMA API "dma_get_merge_boundary". This function
returns the DMA merge boundary if the DMA layer can merge the segments.
This patch also adds the implementation for a new dma_map_ops pointer.
Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Simon Horman
Signed-off-by: Christoph Hellwig
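The API side is a thin wrapper over the new dma_map_ops pointer, returning
0 when the DMA layer cannot merge segments (a sketch; the function name is
illustrative):

    #include <linux/dma-mapping.h>

    /* Sketch: 0 means "no merge boundary", i.e. no merging possible. */
    static unsigned long merge_boundary_sketch(struct device *dev)
    {
            const struct dma_map_ops *ops = get_dma_ops(dev);

            if (!ops || !ops->get_merge_boundary)
                    return 0;
            return ops->get_merge_boundary(dev);
    }
-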
When the max_segs of an mmc host is smaller than 512, the mmc
subsystem tries to use 512 segments if the DMA MAP layer can merge
the segments, and then the mmc subsystem exposes such information
to the block layer by using blk_queue_can_use_dma_map_merging().
Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Ulf Hansson
Reviewed-by: Simon Horman
Signed-off-by: Christoph Hellwig
-
This patch adds a helper function to check whether a queue can have its
segments merged by the DMA MAP layer (e.g. via an IOMMU).
Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Christoph Hellwig
Reviewed-by: Simon Horman
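A block driver would then only raise its segment limit when the check
passes, roughly (a sketch; NR_MERGED_SEGS and the setup function are
placeholders):

    #include <linux/blkdev.h>
    #include <linux/device.h>

    #define NR_MERGED_SEGS  512     /* placeholder limit */

    /*
     * Sketch: only advertise the larger segment count if the DMA MAP
     * layer (e.g. an IOMMU) can actually merge segments for this device.
     */
    static void setup_queue_segments(struct request_queue *q,
                                     struct device *dev,
                                     unsigned short hw_max_segs)
    {
            if (blk_queue_can_use_dma_map_merging(q, dev))
                    blk_queue_max_segments(q, NR_MERGED_SEGS);
            else
                    blk_queue_max_segments(q, hw_max_segs);
    }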
29 Aug, 2019
4 commits
-
Based on an email from Paul Burton, quoting section 4.8 "Cacheability and
Coherency Attributes and Access Types" of "MIPS Architecture Volume 1:
Introduction to the MIPS32 Architecture" (MD00080, revision 6.01).
Signed-off-by: Christoph Hellwig
Acked-by: Paul Burton
-
Based on an email from Will Deacon.
Signed-off-by: Christoph Hellwig
Acked-by: Will Deacon
Acked-by: Mark Rutland
-
The memory allocated for the atomic pool needs to have the same
mapping attributes that we use for remapping, so use
pgprot_dmacoherent instead of open coding it. Also deduce a
suitable zone to allocate the memory from based on the presence
of the DMA zones.
Signed-off-by: Christoph Hellwig
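The zone selection described here amounts to something like the following
(a sketch, not the exact code):

    #include <linux/gfp.h>

    /*
     * Sketch: prefer the most restrictive DMA zone that is configured,
     * so the atomic pool is usable by the widest range of devices.
     */
    static gfp_t atomic_pool_gfp_sketch(void)
    {
            if (IS_ENABLED(CONFIG_ZONE_DMA))
                    return GFP_DMA;
            if (IS_ENABLED(CONFIG_ZONE_DMA32))
                    return GFP_DMA32;
            return GFP_KERNEL;
    }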
-
arch_dma_mmap_pgprot is used for two things:
1) to override the "normal" uncached page attributes for mapping
memory coherent to devices that can't snoop the CPU caches
2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older
arm systems and some mips platforms.

Replace the first with the pgprot_dmacoherent macro that is already provided
by arm and much simpler to use, and lift the DMA_ATTR_WRITE_COMBINE
handling to common code with an explicit arch opt-in.
Signed-off-by: Christoph Hellwig
Acked-by: Geert Uytterhoeven # m68k
Acked-by: Paul Burton # mips
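Put together, the common pgprot decision ends up along these lines (a
sketch, assuming the arch opt-in symbol is called
ARCH_HAS_DMA_WRITE_COMBINE):

    #include <linux/dma-mapping.h>
    #include <linux/dma-noncoherent.h>
    #include <asm/pgtable.h>

    /*
     * Sketch: coherent devices keep normal attributes; WRITE_COMBINE is
     * honoured only where the architecture explicitly opts in; everything
     * else gets the uncached "dmacoherent" attributes.
     */
    static pgprot_t dma_pgprot_sketch(struct device *dev, pgprot_t prot,
                                      unsigned long attrs)
    {
            if (dev_is_dma_coherent(dev))
                    return prot;
            if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_WRITE_COMBINE) &&
                (attrs & DMA_ATTR_WRITE_COMBINE))
                    return pgprot_writecombine(prot);
            return pgprot_dmacoherent(prot);
    }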
26 Aug, 2019
6 commits
-
Signed-off-by: Christoph Hellwig
-
Signed-off-by: Christoph Hellwig
-
Pull auxdisplay cleanup from Miguel Ojeda:
"Make ht16k33_fb_fix and ht16k33_fb_var constant (Nishka Dasgupta)"* tag 'auxdisplay-for-linus-v5.3-rc7' of git://github.com/ojeda/linux:
auxdisplay: ht16k33: Make ht16k33_fb_fix and ht16k33_fb_var constant -
Pull UML fix from Richard Weinberger:
"Fix time travel mode"* tag 'for-linus-5.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
um: fix time travel mode -
Pull UBIFS and JFFS2 fixes from Richard Weinberger:
"UBIFS:
- Don't block too long in writeback_inodes_sb()
- Fix for a possible overrun of the log head
- Fix double unlock in orphan_delete()

JFFS2:
- Remove C++ style comments from UAPI header and unbreak picky toolchains"

* tag 'for-linus-5.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
ubifs: Limit the number of pages in shrink_liability
ubifs: Correctly initialize c->min_log_bytes
ubifs: Fix double unlock around orphan_delete()
jffs2: Remove C++ style comments from uapi header