08 Aug, 2016

1 commit

  • This is in preparation for moving to a callback that provides results to
    the callback for the transaction. The conversion will maintain current
    behavior, and the driver must convert to the new callback mechanism at a
    later time in order to receive results.
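
    The shape of the change can be sketched in plain C. This is a minimal
    userspace model: the type and field names follow the dmaengine API this
    series moves toward, but the definitions below are simplified stand-ins,
    not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the dmaengine types. */
enum dmaengine_tx_result { DMA_TRANS_NOERROR = 0, DMA_TRANS_READ_FAILED };

struct dmaengine_result {
	enum dmaengine_tx_result result;
	unsigned int residue;
};

typedef void (*dma_async_tx_callback)(void *param);
typedef void (*dma_async_tx_callback_result)(void *param,
					     const struct dmaengine_result *res);

struct tx_desc {
	dma_async_tx_callback callback;		      /* legacy: no result info */
	dma_async_tx_callback_result callback_result; /* new: carries a result */
	void *param;
};

/* Invoke whichever callback the client registered, preferring the
 * result-carrying variant when both are set. */
static void desc_complete(struct tx_desc *d, const struct dmaengine_result *res)
{
	if (d->callback_result)
		d->callback_result(d->param, res);
	else if (d->callback)
		d->callback(d->param);
}

static unsigned int seen_residue;

static void demo_cb_result(void *param, const struct dmaengine_result *res)
{
	(void)param;
	seen_residue = res->residue;
}

/* Demo: complete one descriptor and report the leftover byte count. */
static unsigned int run_callback_demo(void)
{
	struct tx_desc d = { .callback_result = demo_cb_result };
	struct dmaengine_result res = { DMA_TRANS_NOERROR, 16 };

	desc_complete(&d, &res);
	return seen_residue;
}
```

    A driver converted to the new mechanism fills in callback_result and so
    gains access to the per-transaction result and residue; an unconverted
    driver keeps the old behavior.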

    Signed-off-by: Dave Jiang
    Reviewed-by: Lars-Peter Clausen
    Signed-off-by: Vinod Koul

    Dave Jiang
     

16 Jul, 2016

1 commit


03 May, 2016

1 commit

  • dma_pool_zalloc() combines dma_pool_alloc() and a memset() to zero. The
    semantic patch that makes this transformation is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @@
    expression d,e;
    statement S;
    @@

    d =
    - dma_pool_alloc
    + dma_pool_zalloc
    (...);
    if (!d) S
    - memset(d, 0, sizeof(*d));
    //
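
    For readers unfamiliar with Coccinelle, the before/after shape of the
    change looks like this. This is a userspace model: malloc/calloc stand in
    for dma_pool_alloc/dma_pool_zalloc, which in the kernel take a struct
    dma_pool, gfp flags, and a DMA handle.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Before: allocate, NULL-check, then zero by hand
 * (the memset() that the semantic patch removes). */
static void *desc_alloc_before(size_t sz)
{
	void *d = malloc(sz);	/* stands in for dma_pool_alloc() */
	if (!d)
		return NULL;
	memset(d, 0, sz);
	return d;
}

/* After: a single zalloc-style call performs both steps. */
static void *desc_alloc_after(size_t sz)
{
	return calloc(1, sz);	/* stands in for dma_pool_zalloc() */
}

/* Demo: both paths return zeroed memory. */
static int zalloc_demo(void)
{
	unsigned char *a = desc_alloc_before(8);
	unsigned char *b = desc_alloc_after(8);
	int ok = a && b && a[0] == 0 && b[7] == 0;

	free(a);
	free(b);
	return ok;
}
```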

    Signed-off-by: Julia Lawall
    Signed-off-by: Vinod Koul

    Julia Lawall
     

05 Aug, 2015

1 commit

  • Most drivers need to set constraints on the buffer alignment for async tx
    operations. However, even though it is documented, some drivers either use
    a defined constant that does not match what the alignment variable expects
    (like the DMA_BUSWIDTH_* constants) or fill in the alignment in bytes
    instead of as a power of two.

    Add a new enum for these alignments that matches what the framework
    expects, and convert the drivers to it.
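
    The key point is that the enum encodes the alignment as a log2 exponent,
    not a byte count. A compilable sketch (the enumerator values follow
    enum dmaengine_alignment as added to linux/dmaengine.h by this series;
    align_bytes() is an illustrative helper, not a kernel function):

```c
#include <assert.h>

/* Alignment expressed as log2(bytes), matching what the framework's
 * copy_align/fill_align fields expect. */
enum dmaengine_alignment {
	DMAENGINE_ALIGN_1_BYTE = 0,
	DMAENGINE_ALIGN_2_BYTES = 1,
	DMAENGINE_ALIGN_4_BYTES = 2,
	DMAENGINE_ALIGN_8_BYTES = 3,
	DMAENGINE_ALIGN_16_BYTES = 4,
	DMAENGINE_ALIGN_32_BYTES = 5,
	DMAENGINE_ALIGN_64_BYTES = 6,
};

/* The byte count a given enum value stands for. */
static unsigned int align_bytes(enum dmaengine_alignment a)
{
	return 1u << a;
}
```

    A driver that previously wrote, say, 8 (bytes) into copy_align now
    writes DMAENGINE_ALIGN_8_BYTES, and the mistake class disappears.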

    Signed-off-by: Maxime Ripard
    Signed-off-by: Vinod Koul

    Maxime Ripard
     

25 Apr, 2015

1 commit

  • Pull slave-dmaengine updates from Vinod Koul:

    - new drivers for:
    - Ingenic JZ4780 controller
    - APM X-Gene controller
    - Freescale RaidEngine device
    - Renesas USB Controller

    - remove device_alloc_chan_resources dummy handlers

    - sh driver cleanups for peri peri and related emmc and asoc patches
    as well

    - fixes and enhancements spread over the drivers

    * 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (59 commits)
    dmaengine: dw: don't prompt for DW_DMAC_CORE
    dmaengine: shdmac: avoid unused variable warnings
    dmaengine: fix platform_no_drv_owner.cocci warnings
    dmaengine: pch_dma: fix memory leak on failure path in pch_dma_probe()
    dmaengine: at_xdmac: unlock spin lock before return
    dmaengine: xgene: devm_ioremap() returns NULL on error
    dmaengine: xgene: buffer overflow in xgene_dma_init_channels()
    dmaengine: usb-dmac: Fix dereferencing freed memory 'desc'
    dmaengine: sa11x0: report slave capabilities to upper layers
    dmaengine: vdma: Fix compilation warnings
    dmaengine: fsl_raid: statify fsl_re_chan_probe
    dmaengine: Driver support for FSL RaidEngine device.
    dmaengine: xgene_dma_init_ring_mngr() can be static
    Documentation: dma: Add documentation for the APM X-Gene SoC DMA device DTS binding
    arm64: dts: Add APM X-Gene SoC DMA device and DMA clock DTS nodes
    dmaengine: Add support for APM X-Gene SoC DMA engine driver
    dmaengine: usb-dmac: Add Renesas USB DMA Controller (USB-DMAC) driver
    dmaengine: renesas,usb-dmac: Add device tree bindings documentation
    dmaengine: edma: fixed wrongly initialized data parameter to the edma callback
    dmaengine: ste_dma40: fix implicit conversion
    ...

    Linus Torvalds
     

19 Mar, 2015

1 commit


06 Mar, 2015

1 commit


24 Feb, 2015

1 commit

  • Some DMA channels may be reserved for other purposes by other layers,
    such as a secure driver in EL2/EL3. The PDMA driver can see the interrupt
    status, but it should not try to handle the related interrupt, since the
    channel does not belong to the PDMA driver in the kernel; the interrupt
    should be handled by the corresponding client/module. Otherwise the
    driver will overwrite memory it does not own and cause unexpected issues,
    since the PDMA driver only requests resources for the PDMA channels.

    In the PDMA driver, the reserved channels sit at the end of the 32
    channels in total. If an interrupt's bit index is not smaller than the
    number of DMA channels, we should ignore it.
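
    The check can be modelled in plain C (a userspace sketch; the function
    and macro names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define TOTAL_CHANNELS 32

/* Only service interrupt bits that belong to channels the PDMA driver
 * actually owns; dma_channels may be < 32 when the top channels are
 * reserved for other users (e.g. a secure-world driver). */
static uint32_t claimable_irqs(uint32_t dint_status, unsigned int dma_channels)
{
	uint32_t handled = 0;
	unsigned int i;

	for (i = 0; i < TOTAL_CHANNELS; i++) {
		if (!(dint_status & (1u << i)))
			continue;
		if (i >= dma_channels)	/* reserved channel: not ours */
			continue;
		handled |= 1u << i;
	}
	return handled;
}
```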

    Signed-off-by: Qiao Zhou
    Acked-by: Zhangfei Gao
    Signed-off-by: Vinod Koul

    Qiao Zhou
     

22 Dec, 2014

1 commit


06 Nov, 2014

1 commit

  • There is no need to init the .owner field.

    Based on the patch from Peter Griffin,
    "mmc: remove .owner field for drivers using module_platform_driver":

    "This patch removes the superfluous .owner field for drivers which
    use the module_platform_driver API, as this is overridden in
    platform_driver_register anyway."

    Signed-off-by: Kiran Padwal
    [for nvidia]
    Acked-by: Thierry Reding
    Signed-off-by: Vinod Koul

    Kiran Padwal
     

04 Aug, 2014

1 commit


07 May, 2014

1 commit

  • A channel can accommodate more than one transaction, each consisting of
    multiple descriptors, the last of which has the DCMD_ENDIRQEN bit set.

    In order to report the channel's residue, we hence have to walk the
    list of running descriptors, look for those which match the cookie,
    and then try to find the descriptor which defines upper and lower
    boundaries that embrace the current transport pointer. Once it is found,
    walk forward until we find the descriptor that tells us about the end of
    a transaction via a set DCMD_ENDIRQEN bit.
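
    The walk described above can be sketched as follows. This is a userspace
    model: struct desc and its fields are illustrative stand-ins for the
    driver's hardware descriptors, and end_irq models the DCMD_ENDIRQEN bit.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct desc {
	uint32_t saddr;	/* start of the chunk this descriptor moves */
	uint32_t len;	/* chunk length in bytes */
	int end_irq;	/* models DCMD_ENDIRQEN: ends a transaction */
};

/* Residue: find the descriptor whose [saddr, saddr + len) range embraces
 * the current transfer pointer, count its remaining bytes, then keep
 * adding whole descriptors until one ends the transaction. */
static uint32_t residue(const struct desc *d, size_t n, uint32_t curr)
{
	uint32_t res = 0;
	int in_tx = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (!in_tx) {
			if (curr >= d[i].saddr &&
			    curr < d[i].saddr + d[i].len) {
				res = d[i].saddr + d[i].len - curr;
				in_tx = 1;
			}
		} else {
			res += d[i].len;
		}
		if (in_tx && d[i].end_irq)
			return res;
	}
	return res;
}

/* Demo: halfway through the first of two chunks, the residue is the rest
 * of that chunk (0x80) plus the whole second chunk (0x100). */
static uint32_t residue_demo(void)
{
	const struct desc ds[] = {
		{ 0x1000, 0x100, 0 },
		{ 0x1100, 0x100, 1 },	/* ends the transaction */
	};

	return residue(ds, 2, 0x1080);
}
```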

    Signed-off-by: Daniel Mack
    Signed-off-by: Vinod Koul

    Daniel Mack
     

02 May, 2014

2 commits


06 Mar, 2014

1 commit


29 Jan, 2014

1 commit

  • The merge between 2b7f65b11d87f "mmp_pdma: Style neatening" and
    8010dad55a0ab0 "dma: add dma_get_any_slave_channel(), for use in of_xlate()"
    caused a build error by leaving obsolete code in place:

    mmp_pdma.c: In function 'mmp_pdma_dma_xlate':
    mmp_pdma.c:909:31: error: 'candidate' undeclared
    mmp_pdma.c:912:3: error: label 'retry' used but not defined
    mmp_pdma.c:901:24: warning: unused variable 'c' [-Wunused-variable]

    This removes the extraneous lines.

    Signed-off-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Arnd Bergmann
     

08 Jan, 2014

1 commit


16 Dec, 2013

1 commit


10 Dec, 2013

1 commit

  • mmp_pdma.c implements a custom of_xlate() function that is 95% identical
    to what Tegra will need. Create a function to implement the common part,
    so everyone doesn't just cut/paste the implementation.

    Cc: Dan Williams
    Cc: Vinod Koul
    Cc: Lars-Peter Clausen
    Cc: dmaengine@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Stephen Warren
    Signed-off-by: Vinod Koul

    Stephen Warren
     

28 Nov, 2013

1 commit


13 Oct, 2013

1 commit


07 Oct, 2013

1 commit


26 Aug, 2013

4 commits

  • As the driver now has its own xlate function and makes use of
    dma_get_slave_channel(), we need to manually set the DMA_PRIVATE flag.
    Drivers which rely on of_dma_simple_xlate() implicitly do the same by
    going through __dma_request_channel().

    Signed-off-by: Daniel Mack
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • Provide a callback to prepare cyclic DMA transfers.
    This is for instance needed for audio channel transport.

    Signed-off-by: Daniel Mack
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • In order to fully support multiple transactions per channel, we need to
    ensure we get an interrupt for each completed transaction. That flag bit
    is also our only way to tell at which descriptor a transaction ends.

    So, remove the manual clearing of that bit, and then inline the only
    remaining command that is left in append_pending_queue() for better
    readability.

    Signed-off-by: Daniel Mack
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • Currently, when an interrupt has occurred for a channel, the tasklet
    worker code will only look at the very last entry in the running list,
    complete its cookie, and then dispose of the entire running chain. Hence,
    the first transaction's cookie will never complete.

    In fact, the interrupt being handled is the one related to the first
    descriptor in the chain with the ENDIRQEN bit set, so the code ends up
    completing the second transaction, which is in fact still running.

    As a result, the driver currently can't handle multiple transactions on
    one channel, and it's likely that no drivers exist that rely on this
    feature.

    Fix this by walking the running chain and looking for the first
    descriptor that has the interrupt-enable bit set. Only queue
    descriptors up to that point for completion handling, while leaving
    the rest intact. Also, only mark the channel idle if the list is
    completely empty after such a cycle.
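
    The split point can be modelled in a few lines (userspace sketch;
    struct rdesc and complete_count() are illustrative names, and irq_en
    models the ENDIRQEN bit):

```c
#include <assert.h>
#include <stddef.h>

struct rdesc {
	int irq_en;	/* models ENDIRQEN: marks the end of a transaction */
};

/* How many descriptors from the head of the running list belong to the
 * transaction that just completed: everything up to and including the
 * first descriptor with the interrupt-enable bit set.  The rest of the
 * chain stays queued on the channel; the channel goes idle only when
 * nothing is left after the split. */
static size_t complete_count(const struct rdesc *list, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (list[i].irq_en)
			return i + 1;
	return 0;	/* no finished transaction found */
}

/* Demo: two queued transactions of two descriptors each; only the first
 * transaction (descriptors 0 and 1) is completed. */
static size_t complete_demo(void)
{
	const struct rdesc chain[] = { { 0 }, { 1 }, { 0 }, { 1 } };

	return complete_count(chain, 4);
}
```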

    Signed-off-by: Daniel Mack
    Signed-off-by: Vinod Koul

    Daniel Mack
     

14 Aug, 2013

9 commits

  • Remove unneeded error handling on the result of a call to
    platform_get_resource when the value is passed to devm_ioremap_resource.

    A simplified version of the semantic patch that makes this change is as
    follows: (http://coccinelle.lip6.fr/)

    //
    @@
    expression pdev,res,n,e,e1;
    expression ret != 0;
    identifier l;
    @@

    - res = platform_get_resource(pdev, IORESOURCE_MEM, n);
    ... when != res
    - if (res == NULL) { ... \(goto l;\|return ret;\) }
    ... when != res
    + res = platform_get_resource(pdev, IORESOURCE_MEM, n);
    e = devm_ioremap_resource(e1, res);
    //

    Signed-off-by: Julia Lawall
    Signed-off-by: Vinod Koul

    Julia Lawall
     
  • The PXA DMA controller has a DALGN register which allows for
    byte-aligned DMA transfers. Use it in case any of the transfer
    descriptors is not aligned to a mask of ~0x7.
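
    The alignment test reduces to a single mask check (sketch; the
    function and macro names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define PDMA_ALIGNMENT_MASK 0x7u	/* the ~0x7 check from the commit */

/* Enable byte-aligned mode (the DALGN register bit for this channel)
 * only when some part of the transfer is not 8-byte aligned. */
static int needs_byte_align(uint32_t src, uint32_t dst, uint32_t len)
{
	return ((src | dst | len) & PDMA_ALIGNMENT_MASK) != 0;
}
```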

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • The DMA_SLAVE capability is currently set twice.

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • That helps check the provided runtime information.

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • This patch makes the mmp_pdma controller able to provide DMA resources
    in DT environments by providing a dma xlate function.

    of_dma_simple_xlate() isn't used here, because it fails to handle
    multiple different DMA engines or several instances of the same
    controller. Instead, a private implementation is provided that makes use
    of the newly introduced dma_get_slave_channel() call.

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • PXA peripherals need to obtain specific DMA request ids which will
    eventually be stored in the DRCMR register.

    Currently, clients are expected to store that number inside the slave
    config block as slave_id, which is unfortunately incompatible with the
    way DMA resources are handled in DT environments.

    This patch adds a filter function which stores the filter parameter
    passed in by of-dma.c into the channel's drcmr register.

    For backward compatibility, cfg->slave_id is still used if set to
    a non-zero value.
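
    A sketch of the mapping value and the fallback rule. The DRCMR bit
    layout below (a map-valid bit plus a channel-number field) follows the
    PXA register layout; the names are illustrative, and in the kernel the
    register lives in the controller's I/O space.

```c
#include <assert.h>
#include <stdint.h>

#define DRCMR_MAPVLD	(1u << 7)	/* request-to-channel mapping valid */
#define DRCMR_CHLNUM	0x1fu		/* channel-number field */

/* Value written to a peripheral's DRCMR register to route its DMA
 * request to physical channel 'chan'. */
static uint32_t drcmr_value(unsigned int chan)
{
	return DRCMR_MAPVLD | (chan & DRCMR_CHLNUM);
}

/* Backward-compatibility rule from the commit: a non-zero slave_id in
 * the slave config still wins over the request line passed in via DT. */
static unsigned int effective_drcmr(unsigned int slave_id,
				    unsigned int dt_drcmr)
{
	return slave_id ? slave_id : dt_drcmr;
}
```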

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • There's no reason for limiting the maximum transfer length to 0x1000.
    Take the actual bit mask instead; the PDMA is able to transfer chunks of
    up to SZ_8K - 1.
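
    With the full length field used, the per-descriptor limit and the
    resulting descriptor count for a transfer look like this (sketch; the
    macro name is illustrative, and 0x1fff is SZ_8K - 1, the bit mask the
    commit refers to):

```c
#include <assert.h>
#include <stdint.h>

#define PDMA_MAX_DESC_BYTES 0x1fffu	/* SZ_8K - 1 */

/* Number of descriptors needed to move 'len' bytes when each descriptor
 * carries at most PDMA_MAX_DESC_BYTES. */
static uint32_t desc_count(uint32_t len)
{
	return (len + PDMA_MAX_DESC_BYTES - 1) / PDMA_MAX_DESC_BYTES;
}
```

    With the old 0x1000 cap, an 8 KiB copy needed two descriptors; with the
    real limit it still needs two only because 8192 is one byte past 8191.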

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • As suggested by Ezequiel García, release the spinlock at the end of the
    function only, and use a goto for the control flow.

    Just a minor cleanup.

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     
  • The exact same calculation is done twice, so let's factor it out to a
    macro.

    Signed-off-by: Daniel Mack
    Acked-by: Arnd Bergmann
    Signed-off-by: Vinod Koul

    Daniel Mack
     

13 Aug, 2013

1 commit


05 Aug, 2013

3 commits

  • In mmp pdma, phy channels are allocated/freed dynamically. The mapping
    from a DMA request to a DMA channel number in DRCMR should be cleared
    when a phy channel is freed. Otherwise conflicts will happen when:
    1. A is using channel 2 and frees it when finished, but A's DRCMR
    still maps to channel 2.
    2. Another user, B, then gets channel 2, so B's DRCMR maps to
    channel 2 too.
    The datasheet states: "Do not map two active requests to the same
    channel since it produces unpredictable results", and we can observe
    exactly that during testing.
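
    A toy model of the fix (userspace sketch; the table and names are
    illustrative, and the real DRCMR registers live in I/O space rather
    than an array):

```c
#include <assert.h>
#include <stdint.h>

#define NUM_REQUESTS	64
#define DRCMR_MAPVLD	(1u << 7)

/* Model of the DRCMR request-to-channel mapping table. */
static uint32_t drcmr[NUM_REQUESTS];

static void map_request(unsigned int req, unsigned int chan)
{
	drcmr[req] = DRCMR_MAPVLD | chan;
}

/* The fix: when a phy channel is freed, clear every request still routed
 * to it, so a later owner of the channel cannot end up with two active
 * requests mapped to one channel. */
static void free_phy_chan(unsigned int chan)
{
	unsigned int i;

	for (i = 0; i < NUM_REQUESTS; i++)
		if (drcmr[i] == (DRCMR_MAPVLD | chan))
			drcmr[i] = 0;
}

/* Demo: user A maps request 3 to channel 2, then frees the channel; the
 * stale mapping must be gone. */
static int demo_free_clears_mapping(void)
{
	map_request(3, 2);
	free_phy_chan(2);
	return drcmr[3] == 0;
}
```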

    Signed-off-by: Xiang Wang
    Signed-off-by: Vinod Koul

    Xiang Wang
     
  • In mmp pdma, phy channels are allocated/freed dynamically and
    frequently, but no proper protection is in place, so conflicts arise
    when multiple users request phy channels at the same time. Use a
    spinlock to protect the phy channel allocation.

    Signed-off-by: Xiang Wang
    Signed-off-by: Vinod Koul

    Xiang Wang
     
  • According to the dma_cookie_status() description, locking is not required.

    Signed-off-by: Andy Shevchenko
    Signed-off-by: Vinod Koul

    Andy Shevchenko
     

27 Feb, 2013

1 commit

  • Pull slave-dmaengine updates from Vinod Koul:
    "This is fairly big pull by my standards as I had missed last merge
    window. So we have the support for device tree for slave-dmaengine,
    large updates to dw_dmac driver from Andy for reusing on different
    architectures. Along with this we have fixes on bunch of the drivers"

    Fix up trivial conflicts, usually due to #include line movement next to
    each other.

    * 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits)
    Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT"
    ARM: dts: pl330: Add #dma-cells for generic dma binding support
    DMA: PL330: Register the DMA controller with the generic DMA helpers
    DMA: PL330: Add xlate function
    DMA: PL330: Add new pl330 filter for DT case.
    dma: tegra20-apb-dma: remove unnecessary assignment
    edma: do not waste memory for dma_mask
    dma: coh901318: set residue only if dma is in progress
    dma: coh901318: avoid unbalanced locking
    dmaengine.h: remove redundant else keyword
    dma: of-dma: protect list write operation by spin_lock
    dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
    dma: of-dma.c: fix memory leakage
    dw_dmac: apply default dma_mask if needed
    dmaengine: ioat - fix spare sparse complain
    dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
    ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
    dw_dmac: add support for Lynxpoint DMA controllers
    dw_dmac: return proper residue value
    dw_dmac: fill individual length of descriptor
    ...

    Linus Torvalds