14 Jan, 2021

2 commits

  • IOMMU_SYS_CACHE_ONLY_NWA allows buffers for non-coherent devices
    to be mapped with the correct memory attributes so that the buffers
    can be cached in the system cache with a no-write-allocate cache
    policy. However, this property is only usable by drivers that invoke
    the IOMMU API directly; it is not usable by drivers that use the
    DMA API.

    Thus, introduce DMA_ATTR_SYS_CACHE_ONLY_NWA, so that drivers for
    non-coherent devices that use the DMA API can use it to specify
    whether they want a buffer to be cached in the system cache.

    Bug: 176778547
    Change-Id: Ic812a1fb144a58deb4279c2bf121fc6cc4c3b208
    Signed-off-by: Isaac J. Manjarres

  • IOMMU_SYS_CACHE_ONLY allows buffers for non-coherent devices
    to be mapped with the correct memory attributes so that the buffers
    can be cached in the system cache. However, this property
    is only usable by drivers that invoke the IOMMU API directly;
    it is not usable by drivers that use the DMA API.

    Thus, introduce DMA_ATTR_SYS_CACHE_ONLY, so that drivers for
    non-coherent devices that use the DMA API can use it to specify
    whether they want a buffer to be cached in the system cache.

    Bug: 176778547
    Change-Id: I849d7a3f36b689afd2f6ee400507223fd6395158
    Signed-off-by: Isaac J. Manjarres

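
    The two attributes above can be pictured as plain bitmask flags that
    steer the memory attribute chosen for a non-coherent buffer. The
    sketch below is a toy userspace model, not the kernel implementation:
    the bit values, the sys_cache_attr() helper, and the mem_attr enum
    are all hypothetical stand-ins for the plumbing dma_alloc_attrs()
    would do on behalf of the driver.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical bit positions, for illustration only. */
    #define DMA_ATTR_SYS_CACHE_ONLY     (1UL << 14)
    #define DMA_ATTR_SYS_CACHE_ONLY_NWA (1UL << 15)

    enum mem_attr {
        MEM_ATTR_NON_CACHED,     /* default for a non-coherent device */
        MEM_ATTR_SYS_CACHED,     /* cached in the system cache only */
        MEM_ATTR_SYS_CACHED_NWA, /* same, with no-write-allocate policy */
    };

    /* Pick the memory attribute implied by the DMA attrs bitmask. */
    static enum mem_attr sys_cache_attr(unsigned long attrs)
    {
        if (attrs & DMA_ATTR_SYS_CACHE_ONLY_NWA)
            return MEM_ATTR_SYS_CACHED_NWA;
        if (attrs & DMA_ATTR_SYS_CACHE_ONLY)
            return MEM_ATTR_SYS_CACHED;
        return MEM_ATTR_NON_CACHED;
    }

    int main(void)
    {
        assert(sys_cache_attr(0) == MEM_ATTR_NON_CACHED);
        assert(sys_cache_attr(DMA_ATTR_SYS_CACHE_ONLY) == MEM_ATTR_SYS_CACHED);
        assert(sys_cache_attr(DMA_ATTR_SYS_CACHE_ONLY_NWA) == MEM_ATTR_SYS_CACHED_NWA);
        printf("ok\n");
        return 0;
    }
    ```

    The point of the flag pair is visible even in this toy: the NWA
    variant is a strict refinement of the base attribute, so a driver
    opts into one or the other per buffer rather than per device.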

04 Nov, 2020

1 commit

  • Export dma_contiguous_default_area so dev_get_cma_area() can be called
    from a module.

    This allows the CMA dma-buf heap to be configured and built as a module.

    Signed-off-by: John Stultz
    Change-Id: I8ae944c147ff83dcd8d42a39efa6769dae4039b7
    Bug: 155218010
    Link: https://lore.kernel.org/lkml/20191025234834.28214-2-john.stultz@linaro.org/
    Signed-off-by: Hridya Valsaraju


02 Nov, 2020

2 commits

  • The tbl_dma_addr argument is used to check the DMA boundary for
    allocations, and thus needs to be a dma_addr_t. swiotlb-xen instead
    passed a physical address, which could lead to incorrect results for
    strange offsets. Fix this by removing the parameter entirely and
    hard-coding the DMA address for io_tlb_start instead.

    Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
    Signed-off-by: Christoph Hellwig
    Reviewed-by: Stefano Stabellini
    Signed-off-by: Konrad Rzeszutek Wilk

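
    The bug above can be reproduced in miniature: the same buffer may or
    may not cross a segment boundary depending on whether the physical or
    the DMA address is checked. The sketch below is a toy userspace model
    with illustrative names; BOUNDARY stands in for the device's segment
    boundary and the offset for a bus-level phys-to-DMA translation such
    as Xen's.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BOUNDARY (1ULL << 32) /* 4 GiB segment boundary */

    /* Does [addr, addr + size) cross a BOUNDARY-aligned line? */
    static int crosses_boundary(uint64_t addr, uint64_t size)
    {
        return (addr / BOUNDARY) != ((addr + size - 1) / BOUNDARY);
    }

    int main(void)
    {
        uint64_t phys = 0xFFFFF000;       /* just below 4 GiB */
        uint64_t dma_offset = 0x10000;    /* bus applies an offset */
        uint64_t dma = phys + dma_offset; /* what the device sees */
        uint64_t size = 0x2000;           /* 8 KiB mapping */

        /* The physical range crosses 4 GiB, but the DMA range does not:
         * checking the wrong address space gives the wrong answer. */
        assert(crosses_boundary(phys, size));
        assert(!crosses_boundary(dma, size));
        printf("ok\n");
        return 0;
    }
    ```

    This is why the check must be performed on a dma_addr_t: once an
    offset is in play, boundary alignment in the two address spaces no
    longer coincides.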
  • kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
    allocate a buffer for the swiotlb. It does so by calling

    memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);

    If the allocation fails, no_iotlb_memory is set.

    Later during initialization, swiotlb-xen comes in
    (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and, seeing that
    io_tlb_start != 0, assumes the memory is ready to use when it
    actually is not.

    When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
    and since no_iotlb_memory is set the kernel panics.

    Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
    initialized, it would do the initialization itself, which might still
    succeed.

    Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
    failure, and also by setting no_iotlb_memory to false on swiotlb
    initialization success.

    Fixes: ac2cbab21f31 ("x86: Don't panic if can not alloc buffer for swiotlb")

    Reported-by: Elliott Mitchell
    Tested-by: Elliott Mitchell
    Signed-off-by: Stefano Stabellini
    Reviewed-by: Christoph Hellwig
    Cc: stable@vger.kernel.org
    Signed-off-by: Konrad Rzeszutek Wilk

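
    The failure-handling logic of the fix can be sketched as a toy state
    machine. This is a simplified userspace stand-in, not kernel code:
    the globals mirror swiotlb's io_tlb_start/no_iotlb_memory, and
    'alloc_ok' stands in for the memblock_alloc_low() result.

    ```c
    #include <assert.h>
    #include <stdio.h>

    static unsigned long io_tlb_start;
    static int no_iotlb_memory;

    /* First-chance init; 'alloc_ok' models memblock_alloc_low(). */
    static void swiotlb_init(int alloc_ok)
    {
        if (!alloc_ok) {
            io_tlb_start = 0;  /* the fix: make the failure observable */
            no_iotlb_memory = 1;
            return;
        }
        io_tlb_start = 0x1000; /* buffer allocated */
        no_iotlb_memory = 0;   /* the fix: clear any stale failure flag */
    }

    /* Later init: retries only if the first attempt visibly failed.
     * Before the fix, io_tlb_start stayed nonzero after a failure, so
     * this returned 0 and the first swiotlb_tbl_map_single() panicked. */
    static int xen_swiotlb_init(void)
    {
        if (io_tlb_start != 0)
            return 0;   /* swiotlb looks ready; nothing to do */
        swiotlb_init(1); /* do our own (successful) initialization */
        return 1;
    }

    int main(void)
    {
        swiotlb_init(0);                 /* early init: allocation fails */
        assert(io_tlb_start == 0);       /* failure is visible ... */
        assert(xen_swiotlb_init() == 1); /* ... so Xen redoes the init */
        assert(io_tlb_start != 0 && !no_iotlb_memory);
        printf("ok\n");
        return 0;
    }
    ```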

17 Oct, 2020

1 commit

  • Fix multiple occurrences of duplicated words in kernel/.

    Fix one typo/spello on the same line as a duplicate word. Change one
    instance of "the the" to "that the". Otherwise just drop one of the
    repeated words.

    Signed-off-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Link: https://lkml.kernel.org/r/98202fa6-8919-ef63-9efe-c0fad5ca7af1@infradead.org
    Signed-off-by: Linus Torvalds


16 Oct, 2020

1 commit

  • Pull dma-mapping updates from Christoph Hellwig:

    - rework the non-coherent DMA allocator

    - move private definitions out of <linux/dma-mapping.h>

    - lower CMA_ALIGNMENT (Paul Cercueil)

    - remove the omap1 dma address translation in favor of the common code

    - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)

    - support per-node DMA CMA areas (Barry Song)

    - increase the default seg boundary limit (Nicolin Chen)

    - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)

    - various cleanups

    * tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
    ARM/ixp4xx: add a missing include of dma-map-ops.h
    dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
    dma-direct: factor out a dma_direct_alloc_from_pool helper
    dma-direct: check for highmem pages in dma_direct_alloc_pages
    dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
    dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
    dma-mapping: move dma-debug.h to kernel/dma/
    dma-mapping: remove <asm/dma-contiguous.h>
    dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
    dma-contiguous: remove dma_contiguous_set_default
    dma-contiguous: remove dev_set_cma_area
    dma-contiguous: remove dma_declare_contiguous
    dma-mapping: split <linux/dma-mapping.h>
    cma: decrease CMA_ALIGNMENT lower limit to 2
    firewire-ohci: use dma_alloc_pages
    dma-iommu: implement ->alloc_noncoherent
    dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
    dma-mapping: add a new dma_alloc_pages API
    dma-mapping: remove dma_cache_sync
    53c700: convert to dma_alloc_noncoherent
    ...


14 Oct, 2020

1 commit

  • The memory size calculation in cma_early_percent_memory() traverses
    memblock.memory rather than simply calling memblock_phys_mem_size().
    The comment in that function suggests that at some point there should
    have been a call to memblock_analyze() before memblock_phys_mem_size()
    could be used. As of now, there is no memblock_analyze() at all and
    memblock_phys_mem_size() can be used as soon as cold-plug memory is
    registered with memblock.

    Replace loop over memblock.memory with a call to memblock_phys_mem_size().

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Baoquan He
    Cc: Andy Lutomirski
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Catalin Marinas
    Cc: Daniel Axtens
    Cc: Dave Hansen
    Cc: Emil Renner Berthing
    Cc: Hari Bathini
    Cc: Ingo Molnar
    Cc: Ingo Molnar
    Cc: Jonathan Cameron
    Cc: Marek Szyprowski
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Miguel Ojeda
    Cc: Palmer Dabbelt
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Thomas Bogendoerfer
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: https://lkml.kernel.org/r/20200818151634.14343-3-rppt@kernel.org
    Signed-off-by: Linus Torvalds

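
    The equivalence the cleanup relies on can be shown with a toy model:
    an open-coded loop over memory regions computes the same total that a
    maintained size counter already holds. The struct, region values and
    helper names below are simplified stand-ins for memblock's.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    struct region { uint64_t base, size; };

    /* Stand-in for memblock.memory: two cold-plug regions. */
    static const struct region memory[] = {
        { 0x00000000, 0x20000000 },  /* 512 MiB */
        { 0x40000000, 0x10000000 },  /* 256 MiB */
    };

    /* Open-coded traversal, as cma_early_percent_memory() used to do. */
    static uint64_t sum_regions(void)
    {
        uint64_t total = 0;
        for (unsigned i = 0; i < sizeof(memory) / sizeof(memory[0]); i++)
            total += memory[i].size;
        return total;
    }

    /* Stand-in for memblock_phys_mem_size(): a running total that is
     * kept up to date as each region is registered. */
    static uint64_t phys_mem_size(void)
    {
        return 0x20000000ULL + 0x10000000ULL;
    }

    int main(void)
    {
        /* The loop and the maintained counter agree, so the loop can go. */
        assert(sum_regions() == phys_mem_size());
        printf("ok\n");
        return 0;
    }
    ```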

18 Sep, 2020

2 commits

  • The new field 'dma_range_map' in struct device is used to facilitate the
    use of single or multiple offsets between mapping regions of cpu addrs and
    dma addrs. It subsumes the role of "dev->dma_pfn_offset" which was only
    capable of holding a single uniform offset and had no region bounds
    checking.

    The function of_dma_get_range() has been modified so that it takes a single
    argument -- the device node -- and returns a map, NULL, or an error code.
    The map is an array that holds the information regarding the DMA regions.
    Each range entry contains the address offset, the cpu_start address, the
    dma_start address, and the size of the region.

    of_dma_configure() is the typical way to set range offsets, but there
    are a number of ad hoc assignments to "dev->dma_pfn_offset" in kernel
    driver code. These cases now invoke the function
    dma_direct_set_offset(dev, cpu_addr, dma_addr, size).

    Signed-off-by: Jim Quinlan
    [hch: various interface cleanups]
    Signed-off-by: Christoph Hellwig
    Reviewed-by: Mathieu Poirier
    Tested-by: Mathieu Poirier
    Tested-by: Nathan Chancellor

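
    The lookup the new map enables can be sketched as follows. This is a
    toy userspace model: the struct layout follows the commit text (cpu
    start, dma start, size per region), but the field names, region
    values and the translate() helper are illustrative, not the kernel's
    exact interface.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    struct bus_dma_region {
        uint64_t cpu_start;
        uint64_t dma_start;
        uint64_t size;       /* size == 0 terminates the map */
    };

    /* Two regions with different offsets: impossible to describe with a
     * single uniform dma_pfn_offset. */
    static const struct bus_dma_region dma_range_map[] = {
        { .cpu_start = 0x00000000, .dma_start = 0x80000000, .size = 0x40000000 },
        { .cpu_start = 0x80000000, .dma_start = 0x00000000, .size = 0x40000000 },
        { 0 },
    };

    /* Translate a cpu address to a dma address; ~0 if no region matches.
     * Unlike the old global offset, out-of-range addresses are caught. */
    static uint64_t translate(uint64_t cpu_addr)
    {
        const struct bus_dma_region *r;

        for (r = dma_range_map; r->size; r++) {
            if (cpu_addr >= r->cpu_start && cpu_addr < r->cpu_start + r->size)
                return cpu_addr - r->cpu_start + r->dma_start;
        }
        return ~0ULL;
    }

    int main(void)
    {
        assert(translate(0x00001000) == 0x80001000); /* first region */
        assert(translate(0x80001000) == 0x00001000); /* second region */
        assert(translate(0xC0000000) == ~0ULL);      /* bounds-checked */
        printf("ok\n");
        return 0;
    }
    ```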
  • When booting kernel v5.9-rc4 on a VM, the kernel would panic while
    printing a warning message in swiotlb_map(): dev->dma_mask must not
    be a NULL pointer when calling into the DMA mapping layer. Adding a
    NULL pointer check avoids the panic.

    Signed-off-by: Thomas Tai
    Reviewed-by: Konrad Rzeszutek Wilk
    Signed-off-by: Christoph Hellwig

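
    The guard pattern the fix adds can be sketched in isolation. This is
    a toy model: the two-field struct and the check_addr() helper are
    illustrative, standing in for the addressability check that
    dereferences dev->dma_mask inside the mapping layer.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    struct device {
        uint64_t *dma_mask;  /* may legitimately be NULL on broken setups */
    };

    /* Returns 0 on success, -1 if the device cannot address 'addr'.
     * Without the NULL check, a device with no dma_mask would crash
     * here instead of failing the mapping cleanly. */
    static int check_addr(const struct device *dev, uint64_t addr)
    {
        if (!dev->dma_mask)
            return -1;                       /* the added guard */
        return (addr & ~*dev->dma_mask) ? -1 : 0;
    }

    int main(void)
    {
        uint64_t mask32 = 0xFFFFFFFFULL;     /* 32-bit-capable device */
        struct device good = { .dma_mask = &mask32 };
        struct device bad  = { .dma_mask = NULL };

        assert(check_addr(&good, 0x1000) == 0);
        assert(check_addr(&good, 1ULL << 33) == -1); /* above the mask */
        assert(check_addr(&bad, 0x1000) == -1);      /* fails, no crash */
        printf("ok\n");
        return 0;
    }
    ```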
