03 Sep, 2019

1 commit

  • This patch adds a new DMA API, dma_get_merge_boundary(), which returns
    the DMA merge boundary if the DMA layer can merge the segments. It also
    adds the implementation for a new dma_map_ops pointer.

    Signed-off-by: Yoshihiro Shimoda
    Reviewed-by: Simon Horman
    Signed-off-by: Christoph Hellwig

    Yoshihiro Shimoda
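
    A minimal usage sketch (illustrative only; it assumes a valid
    struct device *dev and that a return value of 0 means the DMA layer
    cannot merge segments):

        #include <linux/dma-mapping.h>

        unsigned long boundary = dma_get_merge_boundary(dev);

        if (boundary)
                /* e.g. feed this into the request queue's virt boundary limit */
                pr_info("segments mergeable up to boundary mask %#lx\n", boundary);
        else
                pr_info("DMA layer cannot merge segments\n");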
     


11 Mar, 2019

2 commits

  • Pull virtio updates from Michael Tsirkin:
    "Several fixes, most notably fix for virtio on swiotlb systems"

    * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
    vhost: silence an unused-variable warning
    virtio: hint if callbacks surprisingly might sleep
    virtio-ccw: wire up ->bus_name callback
    s390/virtio: handle find on invalid queue gracefully
    virtio-ccw: diag 500 may return a negative cookie
    virtio_balloon: remove the unnecessary 0-initialization
    virtio-balloon: improve update_balloon_size_func
    virtio-blk: Consider virtio_max_dma_size() for maximum segment size
    virtio: Introduce virtio_max_dma_size()
    dma: Introduce dma_max_mapping_size()
    swiotlb: Add is_swiotlb_active() function
    swiotlb: Introduce swiotlb_max_mapping_size()

    Linus Torvalds
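
    A rough sketch of the virtio-blk clamping referenced above (illustrative;
    assumes vdev is the virtio_device, q is the driver's request queue, and
    virtio_max_dma_size() returns a byte limit):

        #include <linux/virtio.h>
        #include <linux/blkdev.h>

        /* Never advertise segments larger than the DMA layer (e.g. a
         * swiotlb bounce buffer) can actually map for this device.
         */
        u32 max_size = virtio_max_dma_size(vdev);

        blk_queue_max_segment_size(q, max_size);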
     
  • Pull DMA mapping updates from Christoph Hellwig:

    - add debugfs support for dumping dma-debug information (Corentin
    Labbe)

    - Kconfig cleanups (Andy Shevchenko and me)

    - debugfs cleanups (Greg Kroah-Hartman)

    - improve dma_map_resource and use it in the media code

    - arch_setup_dma_ops / arch_teardown_dma_ops cleanups

    - various small cleanups and improvements for the per-device coherent
    allocator

    - make the DMA mask an upper bound and don't fail "too large" DMA masks
    in the remaining two architectures - this will allow big driver
    cleanups in the following merge windows

    * tag 'dma-mapping-5.1' of git://git.infradead.org/users/hch/dma-mapping: (21 commits)
    Documentation/DMA-API-HOWTO: update dma_mask sections
    sparc64/pci_sun4v: allow large DMA masks
    sparc64/iommu: allow large DMA masks
    sparc64: refactor the ali DMA quirk
    ccio: allow large DMA masks
    dma-mapping: remove the DMA_MEMORY_EXCLUSIVE flag
    dma-mapping: remove dma_mark_declared_memory_occupied
    dma-mapping: move CONFIG_DMA_CMA to kernel/dma/Kconfig
    dma-mapping: improve selection of dma_declare_coherent availability
    dma-mapping: remove an incorrect __iommem annotation
    of: select OF_RESERVED_MEM automatically
    device.h: dma_mem is only needed for HAVE_GENERIC_DMA_COHERENT
    mfd/sm501: depend on HAS_DMA
    dma-mapping: add a kconfig symbol for arch_teardown_dma_ops availability
    dma-mapping: add a kconfig symbol for arch_setup_dma_ops availability
    dma-mapping: move debug configuration options to kernel/dma
    dma-debug: add dumping facility via debugfs
    dma: debug: no need to check return value of debugfs_create functions
    videobuf2: replace a layering violation with dma_map_resource
    dma-mapping: don't BUG when calling dma_map_resource on RAM
    ...

    Linus Torvalds
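
    A hedged sketch of the dma_map_resource() usage pattern this series
    touches (illustrative; assumes res is a struct resource describing a peer
    device's MMIO window that will be the target of DMA):

        #include <linux/dma-mapping.h>
        #include <linux/ioport.h>

        dma_addr_t addr = dma_map_resource(dev, res->start, resource_size(res),
                                           DMA_BIDIRECTIONAL, 0);
        if (dma_mapping_error(dev, addr))
                return -ENOMEM;

        /* ... perform the transfer to/from the mapped resource ... */

        dma_unmap_resource(dev, addr, resource_size(res), DMA_BIDIRECTIONAL, 0);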
     

07 Mar, 2019

1 commit

  • dma_max_mapping_size() returns the maximum size that can be mapped
    using DMA-API functions. The patch also adds the implementation for
    direct DMA and a new dma_map_ops pointer so that other implementations
    can expose their limit.

    Cc: stable@vger.kernel.org
    Reviewed-by: Konrad Rzeszutek Wilk
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Joerg Roedel
    Signed-off-by: Michael S. Tsirkin

    Joerg Roedel
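
    A minimal sketch of how a driver might use the new helper (the name
    dma_max_mapping_size() is taken from the pull request above; the wrapper
    below is hypothetical):

        #include <linux/kernel.h>
        #include <linux/dma-mapping.h>

        /* Never build a single DMA mapping larger than the DMA layer
         * (e.g. swiotlb bounce buffering) can handle for this device.
         */
        static size_t clamp_to_dma_limit(struct device *dev, size_t len)
        {
                return min(len, dma_max_mapping_size(dev));
        }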
     


01 Feb, 2019

1 commit

  • While debugging a DMA mapping leak, I needed a way to access
    debug_dma_dump_mappings() easily from user space.

    This patch adds a /sys/kernel/debug/dma-api/dump file which contains all
    current DMA mappings.

    Signed-off-by: Corentin Labbe
    Signed-off-by: Christoph Hellwig

    Corentin Labbe
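
    From user space the new file can simply be read, e.g. with
    "cat /sys/kernel/debug/dma-api/dump" (assuming debugfs is mounted at
    /sys/kernel/debug). The kernel-side equivalent that previously required
    patching code is roughly the following (sketch; passing NULL dumps
    mappings for all devices):

        #include <linux/dma-debug.h>

        /* Dump all live DMA mappings to the kernel log */
        debug_dma_dump_mappings(NULL);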
     


20 Dec, 2018

1 commit

  • We now always return zeroed memory from dma_alloc_coherent. Note that
    simply passing __GFP_ZERO to dma_alloc_coherent wasn't always doing the
    right thing to start with, given that various allocators are not backed
    by the page allocator and thus would ignore __GFP_ZERO.

    Signed-off-by: Christoph Hellwig

    Christoph Hellwig
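
    A minimal sketch of what the guarantee means for callers (function and
    variable names illustrative):

        #include <linux/dma-mapping.h>

        /* The returned buffer is now guaranteed to be zeroed, so callers
         * need neither __GFP_ZERO nor a memset() of their own.
         */
        static void *alloc_ring(struct device *dev, size_t size, dma_addr_t *handle)
        {
                return dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
        }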
     

11 Dec, 2018

4 commits

  • DMA debug entries are one of those things which aren't that useful
    individually - we will always want some larger quantity of them - and
    which we don't really need to manage the exact number of - we only care
    about having 'enough'. In that regard, the current behaviour of creating
    them one-by-one leads to a lot of unwarranted function call overhead and
    memory wasted on alignment padding.

    Now that we don't have to worry about freeing anything via
    dma_debug_resize_entries(), we can optimise the allocation behaviour by
    grabbing whole pages at once, which will save considerably on the
    aforementioned overheads, and probably offer a little more cache/TLB
    locality benefit for traversing the lists under normal operation. This
    should also give even less reason for an architecture-level override of
    the preallocation size, so make the definition unconditional - if there
    is still any desire to change the compile-time value for some platforms
    it would be better off as a Kconfig option anyway.

    Since freeing a whole page of entries at once becomes enough of a
    challenge that it's not really worth complicating dma_debug_init(), we
    may as well tweak the preallocation behaviour such that as long as we
    manage to allocate *some* pages, we can leave debugging enabled on a
    best-effort basis rather than otherwise wasting them.

    Signed-off-by: Robin Murphy
    Tested-by: Qian Cai
    Signed-off-by: Christoph Hellwig

    Robin Murphy
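
    A hypothetical sketch of the page-at-a-time behaviour described above (the
    function name is made up and the real dma-debug internals differ in
    detail):

        /* Carve one page into dma_debug_entry structures and put them all
         * on the free list, instead of kzalloc()ing entries one by one.
         */
        static int dma_debug_add_entries_page(gfp_t gfp)
        {
                struct dma_debug_entry *entries;
                int i, per_page = PAGE_SIZE / sizeof(*entries);

                entries = (void *)__get_free_page(gfp);
                if (!entries)
                        return -ENOMEM;

                for (i = 0; i < per_page; i++)
                        list_add_tail(&entries[i].list, &free_entries);

                return per_page;
        }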
     
  • Now that we can dynamically allocate DMA debug entries to cope with
    drivers maintaining excessively large numbers of live mappings, a driver
    which *does* actually have a bug leaking mappings (and is not unloaded)
    will no longer trigger the "DMA-API: debugging out of memory - disabling"
    message until it gets to actual kernel OOM conditions, which means it
    could go unnoticed for a while. To that end, let's inform the user each
    time the pool has grown to a multiple of its initial size, which should
    make it apparent that they either have a leak or might want to increase
    the preallocation size.

    Signed-off-by: Robin Murphy
    Tested-by: Qian Cai
    Signed-off-by: Christoph Hellwig

    Robin Murphy
     
  • Certain drivers such as large multi-queue network adapters can use pools
    of mapped DMA buffers larger than the default dma_debug_entry pool of
    65536 entries, with the result that merely probing such a device can
    cause DMA debug to disable itself during boot unless explicitly given an
    appropriate "dma_debug_entries=..." option.

    Developers trying to debug some other driver on such a system may not be
    immediately aware of this, and at worst it can hide bugs if they fail to
    realise that dma-debug has already disabled itself unexpectedly by the
    time their code of interest gets to run. Even once they do realise, it
    can be a bit of a pain to empirically determine a suitable number of
    preallocated entries to configure, short of massively over-allocating.

    There's really no need for such a static limit, though, since we can
    quite easily expand the pool at runtime in those rare cases that the
    preallocated entries are insufficient, which is arguably the least
    surprising and most useful behaviour. To that end, refactor the
    prealloc_memory() logic a little bit to generalise it for runtime
    reallocations as well.

    Signed-off-by: Robin Murphy
    Tested-by: Qian Cai
    Signed-off-by: Christoph Hellwig

    Robin Murphy
     
  • Expose nr_total_entries in debugfs, so that {num,min}_free_entries
    become even more meaningful to users interested in current/maximum
    utilisation. This becomes even more relevant once nr_total_entries
    may change at runtime beyond just the existing AMD GART debug code.

    Suggested-by: John Garry
    Signed-off-by: Robin Murphy
    Tested-by: Qian Cai
    Signed-off-by: Christoph Hellwig

    Robin Murphy
     


15 Jul, 2017

1 commit

  • Each text file under Documentation follows a different
    format. Some don't even have titles!

    Change its representation to follow the adopted standard,
    using ReST markup so it can be parsed by Sphinx:

    - Fix some title marks to match ReST;
    - use :Author: for the author name;
    - foo_ is a hyperlink. Get rid of it;
    - Mark literal blocks as such;
    - Use tables on some places that are almost using the
    table format.

    Signed-off-by: Mauro Carvalho Chehab
    Signed-off-by: Jonathan Corbet

    Mauro Carvalho Chehab
     


04 Aug, 2016

1 commit

  • The dma-mapping core and the implementations do not change the DMA
    attributes passed by pointer. Thus the pointer can point to const data.
    However the attributes do not have to be a bitfield. Instead unsigned
    long will do fine:

    1. This is just simpler. Both in terms of reading the code and setting
    attributes. Instead of initializing local attributes on the stack
    and passing pointer to it to dma_set_attr(), just set the bits.

    2. It brings safety and const-correctness checking, because the
    attributes are passed by value.

    Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;

    @@
    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

    and

    // Options: --all-includes
    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;

    @@
    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

    Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
    Signed-off-by: Krzysztof Kozlowski
    Acked-by: Vineet Gupta
    Acked-by: Robin Murphy
    Acked-by: Hans-Christian Noren Egtvedt
    Acked-by: Mark Salter [c6x]
    Acked-by: Jesper Nilsson [cris]
    Acked-by: Daniel Vetter [drm]
    Reviewed-by: Bart Van Assche
    Acked-by: Joerg Roedel [iommu]
    Acked-by: Fabien Dessenne [bdisp]
    Reviewed-by: Marek Szyprowski [vb2-core]
    Acked-by: David Vrabel [xen]
    Acked-by: Konrad Rzeszutek Wilk [xen swiotlb]
    Acked-by: Joerg Roedel [iommu]
    Acked-by: Richard Kuo [hexagon]
    Acked-by: Geert Uytterhoeven [m68k]
    Acked-by: Gerald Schaefer [s390]
    Acked-by: Bjorn Andersson
    Acked-by: Hans-Christian Noren Egtvedt [avr32]
    Acked-by: Vineet Gupta [arc]
    Acked-by: Robin Murphy [arm64 and dma-iommu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Krzysztof Kozlowski
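
    A before/after sketch of what the conversion means for a caller (buffer
    names illustrative; DMA_ATTR_SKIP_CPU_SYNC is just an example attribute):

        /* Before: build a struct dma_attrs on the stack, pass a pointer */
        DEFINE_DMA_ATTRS(attrs);
        dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
        dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);

        /* After: attributes are a plain unsigned long passed by value */
        dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);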
     


09 Sep, 2015

1 commit

  • Add a wrapper function for dma_pool_alloc() to get zeroed memory.

    Signed-off-by: Sean O. Stalley
    Cc: Vinod Koul
    Cc: Bjorn Helgaas
    Cc: Gilles Muller
    Cc: Nicolas Palix
    Cc: Michal Marek
    Cc: Sebastian Andrzej Siewior
    Cc: Jonathan Corbet
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sean O. Stalley
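
    A minimal usage sketch, assuming the wrapper is the dma_pool_zalloc()
    found in current kernels (pool and handle assumed to exist):

        #include <linux/dmapool.h>

        /* Same as dma_pool_alloc() followed by zeroing the buffer */
        void *buf = dma_pool_zalloc(pool, GFP_KERNEL, &handle);
        if (!buf)
                return -ENOMEM;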
     

30 May, 2015

1 commit

  • David Ahern reported that d63e2e1f3df9 ("sparc/PCI: Clip bridge windows
    to fit in upstream windows") fails to boot on sparc/T5-8:

    pci 0000:06:00.0: reg 0x184: can't handle BAR above 4GB (bus address 0x110204000)

    The problem is that sparc64 assumed that dma_addr_t only needed to hold DMA
    addresses, i.e., bus addresses returned via the DMA API (dma_map_single(),
    etc.), while the PCI core assumed dma_addr_t could hold *any* bus address,
    including raw BAR values. On sparc64, all DMA addresses fit in 32 bits, so
    dma_addr_t is a 32-bit type. However, BAR values can be 64 bits wide, so
    they don't fit in a dma_addr_t. d63e2e1f3df9 added new checking that
    tripped over this mismatch.

    Add pci_bus_addr_t, which is wide enough to hold any PCI bus address,
    including both raw BAR values and DMA addresses. This will be 64 bits
    on 64-bit platforms and on platforms with a 64-bit dma_addr_t. Then
    dma_addr_t only needs to be wide enough to hold addresses from the DMA API.

    [bhelgaas: changelog, bugzilla, Kconfig to ensure pci_bus_addr_t is at
    least as wide as dma_addr_t, documentation]
    Fixes: d63e2e1f3df9 ("sparc/PCI: Clip bridge windows to fit in upstream windows")
    Fixes: 23b13bc76f35 ("PCI: Fail safely if we can't handle BARs larger than 4GB")
    Link: http://lkml.kernel.org/r/CAE9FiQU1gJY1LYrxs+ma5LCTEEe4xmtjRG0aXJ9K_Tsu+m9Wuw@mail.gmail.com
    Link: http://lkml.kernel.org/r/1427857069-6789-1-git-send-email-yinghai@kernel.org
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=96231
    Reported-by: David Ahern
    Tested-by: David Ahern
    Signed-off-by: Yinghai Lu
    Signed-off-by: Bjorn Helgaas
    Acked-by: David S. Miller
    CC: stable@vger.kernel.org # v3.19+

    Yinghai Lu
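
    A rough sketch of the resulting type split (not the exact upstream
    definitions; the Kconfig symbol name is an assumption):

        /* Wide enough for any PCI bus address, including raw 64-bit BARs */
        #ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
        typedef u64 pci_bus_addr_t;
        #else
        typedef u32 pci_bus_addr_t;
        #endif

        /* dma_addr_t stays only as wide as addresses returned by the DMA API */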
     


21 May, 2014

2 commits

  • dma_declare_coherent_memory() takes two addresses for a region of memory: a
    "bus_addr" and a "device_addr". I think the intent is that "bus_addr" is
    the physical address a *CPU* would use to access the region, and
    "device_addr" is the bus address the *device* would use to address the
    region.

    Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t.
    Most callers already supply a phys_addr_t for this argument. The others
    supply a 32-bit integer (a constant, unsigned int, or __u32) and need no
    change.

    Use "unsigned long", not phys_addr_t, to hold PFNs.

    No functional change (this could theoretically fix a truncation in a config
    with 32-bit dma_addr_t and 64-bit phys_addr_t, but I don't think there are
    any such cases involving this code).

    Signed-off-by: Bjorn Helgaas
    Acked-by: Arnd Bergmann
    Acked-by: Greg Kroah-Hartman
    Acked-by: James Bottomley
    Acked-by: Randy Dunlap

    Bjorn Helgaas
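
    A sketch of the prototype change this describes (the exact signatures may
    differ slightly from upstream):

        /* Before: the CPU-visible address was confusingly a dma_addr_t */
        int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
                                        dma_addr_t device_addr, size_t size,
                                        int flags);

        /* After: it is explicitly a CPU physical address */
        int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
                                        dma_addr_t device_addr, size_t size,
                                        int flags);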
     
  • The DMA-API documentation sometimes refers to "physical addresses" when it
    really means "bus addresses." Sometimes these are identical, but they may
    be different if the bridge leading to the bus performs address translation.
    Update the documentation to use "bus address" when appropriate.

    Also, consistently capitalize "DMA", use parens with function names, use
    dev_printk() in examples, and reword a few sections for clarity.

    No functional change; documentation changes only.

    Signed-off-by: Bjorn Helgaas
    Acked-by: Greg Kroah-Hartman
    Acked-by: Arnd Bergmann
    Acked-by: James Bottomley
    Acked-by: Randy Dunlap

    Bjorn Helgaas
     


24 Oct, 2012

1 commit

  • Add dma-debug interface debug_dma_mapping_error() to debug
    drivers that fail to check dma mapping errors on addresses
    returned by dma_map_single() and dma_map_page() interfaces.
    This interface clears a flag set by debug_dma_map_page() to
    indicate that dma_mapping_error() has been called by the
    driver. When the driver does the unmap, debug_dma_unmap() checks the
    flag and, if it is still set, prints a warning message that includes
    the call trace leading up to the unmap. This
    interface can be called from dma_mapping_error() routines to
    enable dma mapping error check debugging.

    Tested: Intel iommu and swiotlb (iommu=soft) on x86-64 with
    CONFIG_DMA_API_DEBUG enabled and disabled.

    Signed-off-by: Shuah Khan
    Reviewed-by: Konrad Rzeszutek Wilk
    Signed-off-by: Joerg Roedel

    Shuah Khan
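
    The pattern the new check enforces, as a sketch (device, buffer and length
    are illustrative):

        dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

        /* dma_mapping_error() both reports failure and, under
         * CONFIG_DMA_API_DEBUG, clears the flag so the later unmap does
         * not warn about an unchecked mapping.
         */
        if (dma_mapping_error(dev, addr))
                return -ENOMEM;

        /* ... use the mapping ... */

        dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);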
     


11 Aug, 2010

1 commit

  • Architectures implement dma_is_consistent() in different ways (some
    misinterpret the definition of the API in DMA-API.txt), so it hasn't been
    very useful for drivers. We have only one user of the API in tree, and
    it's unlikely that out-of-tree drivers use it.

    Even if we fix dma_is_consistent() in some architectures, it doesn't look
    useful at all. It was invented long ago for some old systems that can't
    allocate coherent memory at all. It's better to export only APIs that are
    definitely necessary for drivers.

    Let's remove this API.

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Reviewed-by: Konrad Rzeszutek Wilk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     

13 Mar, 2010

6 commits

  • Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: "David S. Miller"
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • - remove the PCI DMA API description in DMA-API.txt
    - remove the descriptions of dma_unmap macros since
    PCI-DMA-mapping.txt has the same description.

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: "David S. Miller"
    Reviewed-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • dma_set_coherent_mask corresponds to pci_set_consistent_dma_mask. This is
    necessary to move to the generic device model DMA API from the PCI bus
    specific API in the long term.

    dma_set_coherent_mask works in the exact same way that
    pci_set_consistent_dma_mask does. So this patch also changes
    pci_set_consistent_dma_mask to call dma_set_coherent_mask.

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: David S. Miller
    Cc: Jesse Barnes
    Cc: Benjamin Herrenschmidt
    Cc: Russell King
    Cc: Greg KH
    Cc: Kay Sievers
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
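
    A sketch of the equivalence described above (pdev is an illustrative PCI
    device):

        /* Older, PCI-specific spelling */
        rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));

        /* Generic device-model spelling; the PCI helper now just wraps it */
        rc = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));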
     
  • Adds the following macros:

    DECLARE_DMA_UNMAP_ADDR(ADDR_NAME)
    DECLARE_DMA_UNMAP_LEN(LEN_NAME)
    dma_unmap_addr(PTR, ADDR_NAME)
    dma_unmap_addr_set(PTR, ADDR_NAME, VAL)
    dma_unmap_len(PTR, LEN_NAME)
    dma_unmap_len_set(PTR, LEN_NAME, VAL)

    The API corresponds to the pci_unmap state API. We'll move to this new
    generic API from the PCI specific API in the long term. As
    include/asm-generic/pci-dma-compat.h does, the pci_unmap API simply calls
    the new generic API for some time.

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: David S. Miller
    Cc: Jesse Barnes
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
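
    A usage sketch built from the macro names listed above (struct and field
    names illustrative):

        struct rx_buf {
                void    *vaddr;
                DECLARE_DMA_UNMAP_ADDR(mapping);
                DECLARE_DMA_UNMAP_LEN(len);
        };

        /* at map time: remember the state (compiles away to nothing on
         * architectures that need no state to unmap)
         */
        dma_unmap_addr_set(rb, mapping, dma_handle);
        dma_unmap_len_set(rb, len, size);

        /* at unmap time */
        dma_unmap_single(dev, dma_unmap_addr(rb, mapping),
                         dma_unmap_len(rb, len), DMA_FROM_DEVICE);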
     
  • dma_sync_single_for_cpu/for_device supports a partial sync, so there is
    no point in having dma_sync_single_range (dma_sync_single itself was
    obsoleted long ago, replaced with dma_sync_single_for_cpu/for_device).

    There is no user of dma_sync_single_range() in mainline and only the
    Alpha architecture implements it, so it's unlikely that anyone out of
    tree uses it.

    Signed-off-by: FUJITA Tomonori
    Acked-by: David Miller
    Acked-by: James Bottomley
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • This adds descriptions of the following eight functions:

    dma_sync_single_for_cpu
    pci_dma_sync_single_for_cpu
    dma_sync_single_for_device
    pci_dma_sync_single_for_device
    dma_sync_sg_for_cpu
    pci_dma_sync_sg_for_cpu
    dma_sync_sg_for_device
    pci_dma_sync_sg_for_device

    It was unclear whether the API permits a partial sync (some network
    drivers already do one, though). I made it clear that the sync_single API
    can do a partial sync but the sync_sg API can't.

    We could allow a partial sync with the sync_sg API too; however, it's
    difficult for driver writers to use the sync_sg API correctly for a
    partial sync, since the scatterlists passed to the mapping API can't be
    modified. It's also unlikely that driver writers want a partial sync
    with the sync_sg API (it is usually used by block drivers), so I think
    it's better to forbid a partial sync with the sync_sg API.

    Signed-off-by: FUJITA Tomonori
    Acked-by: David Miller
    Acked-by: James Bottomley
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
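
    A sketch of the partial sync that the sync_single API permits (names
    illustrative; dma_handle maps a receive buffer of buf_len bytes of which
    the device filled only rx_len):

        /* Sync only the bytes the device actually wrote before the CPU
         * looks at them; the rest of the mapping is left untouched.
         */
        dma_sync_single_for_cpu(dev, dma_handle, rx_len, DMA_FROM_DEVICE);

        /* hand the whole buffer back to the device for the next receive */
        dma_sync_single_for_device(dev, dma_handle, buf_len, DMA_FROM_DEVICE);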