14 Jan, 2021

2 commits

  • IOMMU_SYS_CACHE_ONLY_NWA allows buffers for non-coherent devices
    to be mapped with the correct memory attributes so that the buffers
    can be cached in the system cache, with a no-write-allocate cache policy.
    However, this property is only usable by drivers that invoke the
    IOMMU API directly; it is not usable by drivers that use the DMA API.

    Thus, introduce DMA_ATTR_SYS_CACHE_ONLY_NWA, so that drivers for
    non-coherent devices that use the DMA API can use it to specify if
    they want a buffer to be cached in the system cache.

    Bug: 176778547
    Change-Id: Ic812a1fb144a58deb4279c2bf121fc6cc4c3b208
    Signed-off-by: Isaac J. Manjarres

    Isaac J. Manjarres
     
  • IOMMU_SYS_CACHE_ONLY allows buffers for non-coherent devices
    to be mapped with the correct memory attributes so that the buffers
    can be cached in the system cache. However, this property
    is only usable by drivers that invoke the IOMMU API directly;
    it is not usable by drivers that use the DMA API.

    Thus, introduce DMA_ATTR_SYS_CACHE_ONLY, so that drivers for
    non-coherent devices that use the DMA API can use it to specify if
    they want a buffer to be cached in the system cache.

    Bug: 176778547
    Change-Id: I849d7a3f36b689afd2f6ee400507223fd6395158
    Signed-off-by: Isaac J. Manjarres

    Isaac J. Manjarres
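
The two patches above boil down to a flag-translation step: an attribute a driver passes through the DMA API is folded into the IOMMU prot flags used for the mapping. Below is a minimal user-space sketch of that idea, not kernel code; the flag values and the helper name dma_attrs_to_iommu_prot are hypothetical stand-ins for the Android common-kernel definitions.

```c
#include <assert.h>

/* Hypothetical flag values for illustration only -- the real DMA_ATTR_*
 * and IOMMU_* constants live in the kernel headers and in the Android
 * common-kernel patches described above. */
#define DMA_ATTR_SYS_CACHE_ONLY      (1UL << 14)
#define DMA_ATTR_SYS_CACHE_ONLY_NWA  (1UL << 15)

#define IOMMU_SYS_CACHE_ONLY         (1 << 6)
#define IOMMU_SYS_CACHE_ONLY_NWA     (1 << 7)

/* Sketch of the translation step: the DMA layer inspects the attrs a
 * driver passed to dma_alloc_attrs()/dma_map_*() and folds them into
 * the IOMMU prot flags used when creating the mapping. */
static int dma_attrs_to_iommu_prot(unsigned long attrs)
{
    int prot = 0;

    if (attrs & DMA_ATTR_SYS_CACHE_ONLY)
        prot |= IOMMU_SYS_CACHE_ONLY;
    if (attrs & DMA_ATTR_SYS_CACHE_ONLY_NWA)
        prot |= IOMMU_SYS_CACHE_ONLY_NWA;
    return prot;
}
```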
     

06 Oct, 2020

3 commits


25 Sep, 2020

3 commits


18 Sep, 2020

1 commit

  • When booting kernel v5.9-rc4 in a VM, the kernel panicked while
    printing a warning message in swiotlb_map(), because dev->dma_mask
    was NULL. dev->dma_mask must not be a NULL pointer when calling into
    the dma mapping layer; a NULL pointer check avoids the panic.

    Signed-off-by: Thomas Tai
    Reviewed-by: Konrad Rzeszutek Wilk
    Signed-off-by: Christoph Hellwig

    Thomas Tai
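
The fix amounts to a guard of the following shape. This is a user-space sketch: struct device is reduced to the single field involved, and the helper name is invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for struct device; only the field relevant to the
 * fix is modeled here. */
struct device {
    uint64_t *dma_mask;
};

/* Sketch of the guard the fix adds: check dev->dma_mask before
 * dereferencing it, instead of panicking inside swiotlb_map()'s
 * warning path. */
static int dma_mask_is_valid(const struct device *dev)
{
    return dev != NULL && dev->dma_mask != NULL;
}
```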
     

11 Sep, 2020

1 commit


19 Jul, 2020

1 commit


16 Jul, 2020

1 commit

  • For a long time the DMA API has been implemented inline in dma-mapping.h,
    but the function bodies can be quite large. Move them all out of line.

    This also removes all the dma_direct_* exports as those are just
    implementation details and should never be used by drivers directly.

    Signed-off-by: Christoph Hellwig
    Tested-by: Alexey Kardashevskiy
    Reviewed-by: Alexey Kardashevskiy

    Christoph Hellwig
     

30 Jun, 2020

1 commit


17 Mar, 2020

1 commit

  • When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory
    under SEV encryption, and sometimes under SME encryption, it will actually
    set up an encrypted mapping rather than an unencrypted one, causing devices
    that DMA from that memory to read encrypted contents. Fix this.

    When force_dma_unencrypted() returns true, the linear kernel map of the
    coherent pages has had the encryption bit explicitly cleared and the
    page content is unencrypted. Make sure that any additional PTEs we set
    up to these pages also have the encryption bit cleared by having
    dma_pgprot() return a protection with the encryption bit cleared in this
    case.

    Signed-off-by: Thomas Hellstrom
    Signed-off-by: Borislav Petkov
    Reviewed-by: Christoph Hellwig
    Acked-by: Tom Lendacky
    Link: https://lkml.kernel.org/r/20200304114527.3636-3-thomas_os@shipmail.org

    Thomas Hellstrom
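
The core of the fix can be sketched as below, with the page protection modeled as a plain integer and a hypothetical _PAGE_ENC bit position; the real encryption bit is x86-specific and set by the architecture.

```c
#include <assert.h>

/* Hypothetical bit position for illustration; the real _PAGE_ENC bit
 * is architecture-specific (x86 SME/SEV). */
#define _PAGE_ENC (1ULL << 51)

typedef unsigned long long pgprot_t;

/* Sketch of the fix: when force_dma_unencrypted() is true for the
 * device, any PTE we create for the coherent buffer must have the
 * encryption bit cleared, matching the already-decrypted linear map. */
static pgprot_t dma_pgprot_sketch(pgprot_t prot, int force_unencrypted)
{
    if (force_unencrypted)
        prot &= ~_PAGE_ENC;
    return prot;
}
```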
     

11 Nov, 2019

1 commit

  • For dma-direct we know that the DMA address is an encoding of the
    physical address that we can trivially decode. Use that fact to
    provide implementations that do not need the arch_dma_coherent_to_pfn
    architecture hook. Note that we can still support mmap of
    non-coherent memory only if the architecture provides a way to set an
    uncached bit in the page tables. This must be true for architectures
    that use the generic remap helpers, but other architectures can also
    manually select it.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Max Filippov

    Christoph Hellwig
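
The trivial decode the commit relies on can be modeled as follows; the per-device DMA offset is simplified here to a plain parameter, whereas the kernel keeps it in per-device state.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* For dma-direct, the DMA address is the physical address plus a
 * constant per-device offset, so decoding it is pure arithmetic. */
static uint64_t dma_to_phys(uint64_t dma_addr, uint64_t dma_offset)
{
    return dma_addr - dma_offset;
}

/* The pfn needed for mmap/get_sgtable falls out trivially, with no
 * arch_dma_coherent_to_pfn hook required. */
static uint64_t dma_direct_to_pfn(uint64_t dma_addr, uint64_t dma_offset)
{
    return dma_to_phys(dma_addr, dma_offset) >> PAGE_SHIFT;
}
```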
     

21 Sep, 2019

1 commit

  • Pull powerpc updates from Michael Ellerman:
    "This is a bit late, partly due to me travelling, and partly due to a
    power outage knocking out some of my test systems *while* I was
    travelling.

    - Initial support for running on a system with an Ultravisor, which
    is software that runs below the hypervisor and protects guests
    against some attacks by the hypervisor.

    - Support for building the kernel to run as a "Secure Virtual
    Machine", ie. as a guest capable of running on a system with an
    Ultravisor.

    - Some changes to our DMA code on bare metal, to allow devices with
    medium sized DMA masks (> 32 && < 59 bits) to use more than 2GB of
    DMA space.

    - Support for firmware assisted crash dumps on bare metal (powernv).

    - Two series fixing bugs in and refactoring our PCI EEH code.

    - A large series refactoring our exception entry code to use gas
    macros, both to make it more readable and also enable some future
    optimisations.

    As well as many cleanups and other minor features & fixups.

    Thanks to: Adam Zerella, Alexey Kardashevskiy, Alistair Popple, Andrew
    Donnellan, Aneesh Kumar K.V, Anju T Sudhakar, Anshuman Khandual,
    Balbir Singh, Benjamin Herrenschmidt, Cédric Le Goater, Christophe
    JAILLET, Christophe Leroy, Christopher M. Riedl, Christoph Hellwig,
    Claudio Carvalho, Daniel Axtens, David Gibson, David Hildenbrand,
    Desnes A. Nunes do Rosario, Ganesh Goudar, Gautham R. Shenoy, Greg
    Kurz, Guerney Hunt, Gustavo Romero, Halil Pasic, Hari Bathini, Joakim
    Tjernlund, Jonathan Neuschafer, Jordan Niethe, Leonardo Bras, Lianbo
    Jiang, Madhavan Srinivasan, Mahesh Salgaonkar, Mahesh Salgaonkar,
    Masahiro Yamada, Maxiwell S. Garcia, Michael Anderson, Nathan
    Chancellor, Nathan Lynch, Naveen N. Rao, Nicholas Piggin, Oliver
    O'Halloran, Qian Cai, Ram Pai, Ravi Bangoria, Reza Arbab, Ryan Grimm,
    Sam Bobroff, Santosh Sivaraj, Segher Boessenkool, Sukadev Bhattiprolu,
    Thiago Bauermann, Thiago Jung Bauermann, Thomas Gleixner, Tom
    Lendacky, Vasant Hegde"

    * tag 'powerpc-5.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (264 commits)
    powerpc/mm/mce: Keep irqs disabled during lockless page table walk
    powerpc: Use ftrace_graph_ret_addr() when unwinding
    powerpc/ftrace: Enable HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
    ftrace: Look up the address of return_to_handler() using helpers
    powerpc: dump kernel log before carrying out fadump or kdump
    docs: powerpc: Add missing documentation reference
    powerpc/xmon: Fix output of XIVE IPI
    powerpc/xmon: Improve output of XIVE interrupts
    powerpc/mm/radix: remove useless kernel messages
    powerpc/fadump: support holes in kernel boot memory area
    powerpc/fadump: remove RMA_START and RMA_END macros
    powerpc/fadump: update documentation about option to release opalcore
    powerpc/fadump: consider f/w load area
    powerpc/opalcore: provide an option to invalidate /sys/firmware/opal/core file
    powerpc/opalcore: export /sys/firmware/opal/core for analysing opal crashes
    powerpc/fadump: update documentation about CONFIG_PRESERVE_FA_DUMP
    powerpc/fadump: add support to preserve crash data on FADUMP disabled kernel
    powerpc/fadump: improve how crashed kernel's memory is reserved
    powerpc/fadump: consider reserved ranges while releasing memory
    powerpc/fadump: make crash memory ranges array allocation generic
    ...

    Linus Torvalds
     

04 Sep, 2019

5 commits

  • Most dma_map_ops instances are IOMMUs that work perfectly fine in 32 bits
    of IOVA space, and the generic direct mapping code already provides its
    own routines that are intelligent based on the amount of memory actually
    present. Wire up the dma-direct routine for the ARM direct mapping code
    as well, and otherwise default to the constant 32-bit mask. This way
    we only need to override it for the occasional odd IOMMU that requires
    64-bit IOVA support, or IOMMU drivers that are more efficient if they
    can fall back to the direct mapping.

    Signed-off-by: Christoph Hellwig

    Christoph Hellwig
     
  • CONFIG_ARCH_NO_COHERENT_DMA_MMAP is now functionally identical to
    !CONFIG_MMU, so remove the separate symbol. The only difference is that
    arm did not set it for !CONFIG_MMU, but arm uses a separate dma mapping
    implementation including its own mmap method, which is handled by moving
    the CONFIG_MMU check in dma_can_mmap so that it only applies to the
    dma-direct case, just as the other ifdefs for it.

    Signed-off-by: Christoph Hellwig
    Acked-by: Geert Uytterhoeven # m68k

    Christoph Hellwig
     
  • Add a helper to check if DMA allocations for a specific device can be
    mapped to userspace using dma_mmap_*.

    Signed-off-by: Christoph Hellwig

    Christoph Hellwig
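
A reduced model of the helper's core check; the real dma_can_mmap also covers the dma-direct path, which this sketch omits.

```c
#include <assert.h>
#include <stddef.h>

/* Reduced model of dma_map_ops: only the mmap method matters here. */
struct dma_map_ops {
    int (*mmap)(void);
};

/* Stub standing in for a real ->mmap implementation. */
static int dummy_mmap(void) { return 0; }

/* Sketch of the helper: an allocation can be mapped to userspace only
 * if the ops in use actually provide an mmap method. */
static int dma_can_mmap_sketch(const struct dma_map_ops *ops)
{
    return ops != NULL && ops->mmap != NULL;
}
```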
     
  • While the default ->mmap and ->get_sgtable implementations work for the
    majority of our dma_map_ops implementations, they are inherently unsafe
    for others that don't use the page allocator or CMA and/or use their
    own way of remapping not covered by the common code. So remove the
    defaults if these methods are not wired up, but instead wire up the
    default implementations for all safe instances.

    Fixes: e1c7e324539a ("dma-mapping: always provide the dma_map_ops based implementation")
    Signed-off-by: Christoph Hellwig

    Christoph Hellwig
     
  • The comments are spot on and should be near the central API, not just
    near a single implementation.

    Signed-off-by: Christoph Hellwig

    Christoph Hellwig
     

03 Sep, 2019

1 commit

  • This patch adds a new DMA API "dma_get_merge_boundary". This function
    returns the DMA merge boundary if the DMA layer can merge the segments.
    This patch also adds the implementation for a new dma_map_ops pointer.

    Signed-off-by: Yoshihiro Shimoda
    Reviewed-by: Simon Horman
    Signed-off-by: Christoph Hellwig

    Yoshihiro Shimoda
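
The new API's shape can be modeled as an optional ops callback with a "cannot merge" fallback of 0; the boundary value below is only illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reduced model: a dma_map_ops may expose a merge boundary; a return
 * of 0 means the DMA layer cannot merge segments at all. */
struct dma_map_ops {
    uint64_t (*get_merge_boundary)(void);
};

/* Illustrative callback: an IOMMU that can merge segments up to its
 * 4 KiB granule boundary. */
static uint64_t iommu_merge_boundary(void)
{
    return 0xfffULL;
}

static uint64_t dma_get_merge_boundary_sketch(const struct dma_map_ops *ops)
{
    if (ops == NULL || ops->get_merge_boundary == NULL)
        return 0;       /* can't merge segments */
    return ops->get_merge_boundary();
}
```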
     

29 Aug, 2019

1 commit

  • arch_dma_mmap_pgprot is used for two things:

    1) to override the "normal" uncached page attributes for mapping
    memory coherent to devices that can't snoop the CPU caches
    2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older
    arm systems and some mips platforms

    Replace one with the pgprot_dmacoherent macro that is already provided
    by arm and much simpler to use, and lift the DMA_ATTR_WRITE_COMBINE
    handling to common code with an explicit arch opt-in.

    Signed-off-by: Christoph Hellwig
    Acked-by: Geert Uytterhoeven # m68k
    Acked-by: Paul Burton # mips

    Christoph Hellwig
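
The common-code decision this lifts out of the arch hook might look like the sketch below; the constant values and the helper name are hypothetical, chosen only to show the opt-in logic.

```c
#include <assert.h>

/* Hypothetical attribute/pgprot encodings for illustration only. */
#define DMA_ATTR_WRITE_COMBINE  (1UL << 2)
#define PGPROT_WRITECOMBINE     1
#define PGPROT_DMACOHERENT      2

/* Sketch of the lifted decision: honour DMA_ATTR_WRITE_COMBINE when
 * the architecture explicitly opted in, otherwise fall back to the
 * arch's pgprot_dmacoherent() attributes. */
static int noncoherent_pgprot(unsigned long attrs, int arch_has_wc)
{
    if (arch_has_wc && (attrs & DMA_ATTR_WRITE_COMBINE))
        return PGPROT_WRITECOMBINE;
    return PGPROT_DMACOHERENT;
}
```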
     

11 Aug, 2019

1 commit

  • All the way back to introducing dma_common_mmap we've defaulted to marking
    the pages as uncached. But this is wrong for DMA coherent devices.
    Later on, DMA_ATTR_WRITE_COMBINE also got incorrect treatment, as that
    flag is only treated specially on the alloc side for non-coherent devices.

    Introduce a new dma_pgprot helper that deals with the check for coherent
    devices so that only the remapping cases ever reach arch_dma_mmap_pgprot
    and we thus ensure no aliasing of page attributes happens, which makes
    the powerpc version of arch_dma_mmap_pgprot obsolete and simplifies the
    remaining ones.

    Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but
    we'll phase it out soon.

    Fixes: 64ccc9c033c6 ("common: dma-mapping: add support for generic dma_mmap_* calls")
    Reported-by: Shawn Anastasio
    Reported-by: Gavin Li
    Signed-off-by: Christoph Hellwig
    Acked-by: Catalin Marinas # arm64

    Christoph Hellwig
     

09 Aug, 2019

1 commit

  • sme_active() is an x86-specific function so it's better not to call it from
    generic code. Christoph Hellwig mentioned that "There is no reason why we
    should have a special debug printk just for one specific reason why there
    is a requirement for a large DMA mask.", so just remove dma_check_mask().

    Signed-off-by: Thiago Jung Bauermann
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Tom Lendacky
    Signed-off-by: Michael Ellerman
    Link: https://lore.kernel.org/r/20190806044919.10622-4-bauerman@linux.ibm.com

    Thiago Jung Bauermann
     

24 Jul, 2019

1 commit


25 Jun, 2019

1 commit

  • The dma masks in struct device are always 64-bits wide. But for builds
    using a 32-bit dma_addr_t we need to ensure we don't store an
    unsupportable value. Before Linux 5.0 this was handled at least by
    the ARM dma mapping code by never allowing to set a larger dma_mask,
    but these days we allow the driver to just set the largest supported
    value and never fall back to a smaller one. Ensure this always works
    by truncating the value.

    Fixes: 9eb9e96e97b3 ("Documentation/DMA-API-HOWTO: update dma_mask sections")
    Signed-off-by: Christoph Hellwig

    Christoph Hellwig
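
The truncation itself is a one-liner; here it is modeled for a build where dma_addr_t is 32 bits wide.

```c
#include <assert.h>
#include <stdint.h>

/* Model a 32-bit dma_addr_t configuration; on such builds a 64-bit
 * mask stored untruncated would be unrepresentable by the mapping
 * code. */
typedef uint32_t dma_addr_t;

/* Sketch of the fix: clamp the driver-supplied mask to what
 * dma_addr_t can hold before storing it in dev->dma_mask. */
static uint64_t truncate_dma_mask(uint64_t mask)
{
    return (dma_addr_t)mask;
}
```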
     

24 Apr, 2019

1 commit


08 Apr, 2019

1 commit


11 Mar, 2019

1 commit

  • Pull virtio updates from Michael Tsirkin:
    "Several fixes, most notably fix for virtio on swiotlb systems"

    * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
    vhost: silence an unused-variable warning
    virtio: hint if callbacks surprisingly might sleep
    virtio-ccw: wire up ->bus_name callback
    s390/virtio: handle find on invalid queue gracefully
    virtio-ccw: diag 500 may return a negative cookie
    virtio_balloon: remove the unnecessary 0-initialization
    virtio-balloon: improve update_balloon_size_func
    virtio-blk: Consider virtio_max_dma_size() for maximum segment size
    virtio: Introduce virtio_max_dma_size()
    dma: Introduce dma_max_mapping_size()
    swiotlb: Add is_swiotlb_active() function
    swiotlb: Introduce swiotlb_max_mapping_size()

    Linus Torvalds
     

07 Mar, 2019

1 commit

  • The function returns the maximum size that can be mapped
    using DMA-API functions. The patch also adds the
    implementation for direct DMA and a new dma_map_ops pointer
    so that other implementations can expose their limit.

    Cc: stable@vger.kernel.org
    Reviewed-by: Konrad Rzeszutek Wilk
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Joerg Roedel
    Signed-off-by: Michael S. Tsirkin

    Joerg Roedel
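
A reduced model of the new API: consult the ops callback when present, otherwise report no limit. The 256 KiB figure below is only illustrative of a swiotlb-style bounce-buffer bound.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reduced model: an implementation may expose a mapping-size limit;
 * without one, the limit is "unlimited" (SIZE_MAX). */
struct dma_map_ops {
    size_t (*max_mapping_size)(void);
};

/* Illustrative constrained implementation. */
static size_t swiotlb_limit(void)
{
    return 256 << 10;   /* 256 KiB; value is illustrative */
}

static size_t dma_max_mapping_size_sketch(const struct dma_map_ops *ops)
{
    if (ops != NULL && ops->max_mapping_size != NULL)
        return ops->max_mapping_size();
    return SIZE_MAX;    /* no limit reported */
}
```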
     

18 Feb, 2019

2 commits


04 Jan, 2019

2 commits


21 Dec, 2018

1 commit

  • The cleanup in commit 356da6d0cde3 ("dma-mapping: bypass indirect calls
    for dma-direct") accidentally inverted the logic in the check for the
    presence of a ->dma_supported() callback. Switch this back to the way it
    was to prevent a crash on boot.

    Fixes: 356da6d0cde3 ("dma-mapping: bypass indirect calls for dma-direct")
    Signed-off-by: Thierry Reding
    Signed-off-by: Christoph Hellwig

    Thierry Reding
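
The corrected check can be modeled as follows; the names are simplified and the reject_all callback is a test stub. The broken version tested the presence of the callback the wrong way round, so every device that provided one skipped it.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct dma_map_ops {
    int (*dma_supported)(uint64_t mask);
};

/* Test stub standing in for a driver-provided callback that rejects
 * every mask. */
static int reject_all(uint64_t mask)
{
    (void)mask;
    return 0;
}

/* Sketch of the corrected logic: use the callback when it exists, and
 * only fall back to the default answer when it does not. */
static int dma_supported_sketch(const struct dma_map_ops *ops, uint64_t mask)
{
    if (ops->dma_supported == NULL)
        return 1;       /* no callback: assume supported */
    return ops->dma_supported(mask);
}
```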
     

14 Dec, 2018

4 commits