05 Nov, 2011

1 commit

  • * 'next' of git://git.infradead.org/users/vkoul/slave-dma: (63 commits)
    dmaengine: mid_dma: mask_peripheral_interrupt only when dmac is idle
    dmaengine/ep93xx_dma: add module.h include
    pch_dma: Reduce wasting memory
    pch_dma: Fix suspend issue
    dma/timberdale: free_irq() on an error path
    dma: shdma: transfer based runtime PM
    dmaengine: shdma: protect against the IRQ handler
    dmaengine i.MX DMA/SDMA: add missing include of linux/module.h
    dmaengine: delete redundant chan_id and chancnt initialization in dma drivers
    dmaengine/amba-pl08x: Check txd->llis_va before freeing dma_pool
    dmaengine/amba-pl08x: Add support for sg len greater than one for slave transfers
    serial: sh-sci: don't filter on DMA device, use only channel ID
    ARM: SAMSUNG: Remove Samsung specific enum type for dma direction
    ASoC: Samsung: Update DMA interface
    spi/s3c64xx: Merge dma control code
    spi/s3c64xx: Add support DMA engine API
    ARM: SAMSUNG: Remove S3C-PL330-DMA driver
    ARM: S5P64X0: Use generic DMA PL330 driver
    ARM: S5PC100: Use generic DMA PL330 driver
    ARM: S5PV210: Use generic DMA PL330 driver
    ...

    Fix up fairly trivial conflicts in
    - arch/arm/mach-exynos4/{Kconfig,clock.c}
    - arch/arm/mach-s5p64x0/dma.c

    Linus Torvalds
     

27 Oct, 2011

1 commit

  • This adds devicetree binding support to the ARM pl061 driver removing the
    platform_data dependency. When DT binding is used, the gpio numbering is
    assigned dynamically. For now, interrupts are not supported with DT until
    irqdomains learn dynamic irq assignment.

    Rather than add another case of -1, the driver is updated to use NO_IRQ.

    Signed-off-by: Rob Herring
    Acked-by: Baruch Siach
    Signed-off-by: Grant Likely

    Rob Herring
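
    A minimal sketch of the fallback described above (the field and variable
    names here are illustrative, not the exact driver code): without
    platform_data, gpiolib is asked to number the chip dynamically and
    interrupt support is left disabled.

        if (pdata) {
                chip->gc.base = pdata->gpio_base;
                chip->irq_base = pdata->irq_base;
        } else {
                chip->gc.base = -1;       /* gpiolib assigns numbers dynamically */
                chip->irq_base = NO_IRQ;  /* no DT interrupt support yet */
        }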
     

14 Sep, 2011

1 commit

  • This patch updates the following 3 items:
    1. Removes unnecessary code.
    2. Adds AMBA and PL330 configuration.
    3. Changes the meaning of the 'peri_id' variable from the PL330 event
    number to a specific dma id chosen by the user.

    Signed-off-by: Boojin Kim
    Acked-by: Linus Walleij
    Acked-by: Vinod Koul
    Cc: Dan Williams
    Signed-off-by: Kukjin Kim
    Signed-off-by: Vinod Koul

    Boojin Kim
     

25 Aug, 2011

3 commits

  • On SPEAr platforms at least, there is one peripheral, JPEG, which can act as
    the flow controller for a DMA transfer. Currently the DMA controller driver
    doesn't support peripheral flow controller configurations.

    This patch adds a device_fc field to struct pl08x_channel_data, which is used
    only for slave transfers and is ignored for mem2mem transfers.

    Signed-off-by: Viresh Kumar
    Acked-by: Linus Walleij
    Signed-off-by: Vinod Koul

    Viresh Kumar
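
    For reference, a sketch of where the new field lives (layout is
    illustrative of include/linux/amba/pl08x.h at the time, with most
    members elided, rather than authoritative):

        struct pl08x_channel_data {
                const char *bus_id;
                int min_signal;
                int max_signal;
                /* ... other members elided ... */
                bool device_fc; /* peripheral (e.g. SPEAr JPEG) acts as flow
                                 * controller; slave transfers only, ignored
                                 * for mem2mem */
        };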
     
  • The PL080 manual says: "Bursts do not cross the 1KB address boundary"

    We can program the controller to cross a 1 KB boundary on a burst, and the
    controller can take care of this boundary condition by itself.

    Following is the discussion with ARM technical support (David):
    [Viresh] The manual says: "Bursts do not cross the 1KB address boundary"

    What does that actually mean? The maximum size transferable with a single
    LLI is 4095 * 4 = 16380 bytes, roughly 16KB. So, if the src/dest addresses
    are not aligned to the burst size, we can't use an LLI that big.

    [David] There is a difference between bursts describing the total data
    transferred by the DMA controller and AHB bursts. Bursts described by the
    programmable parameters in the PL080 have no direct connection with the bursts
    that are seen on the AHB bus.

    The statement that "Bursts do not cross the 1KB address boundary" in the TRM is
    referring to AHB bursts, where this limitation is a requirement of the AHB spec.
    You can still issue bursts within the PL080 that are in excess of 1KB. The
    PL080 will make sure that its bursts are broken down into legal AHB bursts which
    will be formatted to ensure that no AHB burst crosses a 1KB boundary.

    Based on the above discussion, this patch removes all code related to the
    1 KB boundary, as we are not required to handle this in the driver.

    Signed-off-by: Viresh Kumar
    Acked-by: Linus Walleij
    Signed-off-by: Vinod Koul

    Viresh Kumar
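
    The arithmetic above, spelled out (the mask name follows the PL08x
    register header; treat the exact identifiers as illustrative):

        /* The per-LLI transfer-size field is 12 bits, counted in units of
         * the source transfer width. */
        #define PL080_CONTROL_TRANSFER_SIZE_MASK  0xfff         /* 4095 units */
        #define PL08X_MAX_BYTES_PER_LLI           (0xfff * 4)   /* 16380 bytes,
                                                                   ~16KB at 32-bit width */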
     
  • Doc comment for struct pl08x_txd was incomplete. Complete that.

    Signed-off-by: Viresh Kumar
    Acked-by: Linus Walleij
    Signed-off-by: Vinod Koul

    Viresh Kumar
     

17 Jun, 2011

1 commit

  • This workaround aims to break the deadlock that arises during
    continuous transfer of data for a long duration over the UART with
    hardware flow control. It is observed that the CTS interrupt cannot
    be cleared in the UART interrupt clear register (ICR), so further
    transfer over the UART gets blocked.

    It is seen that during such a deadlock the ICR doesn't get cleared
    even on multiple writes. This causes pass_counter to decrease and
    finally reach zero, which can be taken as the trigger point to run
    this UART_BT_WA.

    The workaround backs up the register configuration, does a soft
    reset of the UART using BIT-0 of the PRCC_K_SOFTRST_SET/CLEAR
    registers, and then restores the registers.

    This patch also provides support for UART init and exit function
    calls, if present.

    Signed-off-by: Shreshtha Kumar Sahu
    Signed-off-by: Linus Walleij
    Signed-off-by: Greg Kroah-Hartman

    Shreshtha Kumar Sahu
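
    A rough sketch of the workaround's control flow (every helper name here
    is hypothetical; the real patch works on the saved amba-pl011 register
    set and the PRCC soft-reset registers):

        static void uart_bt_workaround(struct uart_amba_port *uap)
        {
                backup_uart_registers(uap);   /* save the current configuration */
                prcc_uart_soft_reset(uap);    /* pulse BIT-0 of PRCC_K_SOFTRST_SET,
                                               * then PRCC_K_SOFTRST_CLEAR       */
                restore_uart_registers(uap);  /* reprogram the saved registers   */
        }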
     

19 Feb, 2011

2 commits

  • The ARM CLCD PL110 controller in TFT mode provides two output formats
    based on whether the controller is in 24bpp mode or not - either 5551
    or 888. PL111 augments this with 444 and 565 modes.

    Some implementations provide an external MUX on the PL110 output to
    reassign the bits to achieve 565 mode.

    Provide a system of capability flags to allow the CLCD driver to work
    out what is supported by each panel and board, and therefore which
    display formats are permitted.

    Acked-by: Catalin Marinas
    Signed-off-by: Russell King

    Russell King
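
    A sketch of what such capability flags look like (flag names are
    illustrative, patterned on linux/amba/clcd.h, and not guaranteed to
    match the header exactly):

        #define CLCD_CAP_5551   (1 << 0)  /* PL110/PL111 TFT, not 24bpp      */
        #define CLCD_CAP_888    (1 << 1)  /* PL110/PL111 TFT, 24bpp          */
        #define CLCD_CAP_444    (1 << 2)  /* PL111 only                      */
        #define CLCD_CAP_565    (1 << 3)  /* PL111, or PL110 + external MUX  */

        /* The board and the panel each advertise their capabilities; the
         * driver only offers formats present in the intersection. */
        unsigned long caps = board->caps & panel->caps;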
     
  • Instead of matching the entire peripheral ID, match against
    just the part number using the amba_xxx() macros.

    Acked-by: Catalin Marinas
    Signed-off-by: Russell King

    Russell King
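
    Illustrative shape of the change, using the existing amba_part()
    accessor (which extracts the 12-bit part number from the primecell
    peripheral ID):

        /* Match on the part number only, rather than the whole periphid. */
        if (amba_part(fb->dev) == 0x110 || amba_part(fb->dev) == 0x111)
                /* PL110 or PL111 */;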
     

04 Feb, 2011

1 commit

  • Based on a patch from Linus Walleij.

    Add dmaengine based support for DMA to the MMCI driver, using the
    Primecell DMA engine interface. The changes over Linus' driver are:

    - rename txsize_threshold to dmasize_threshold, as this reflects the
    purpose more.
    - use 'mmci_dma_' as the function prefix rather than 'dma_mmci_'.
    - clean up requesting of dma channels.
    - don't release a single channel twice when it's shared between tx and rx.
    - get rid of 'dma_enable' bool - instead check whether the channel is NULL.
    - detect incomplete DMA at the end of a transfer. Some DMA controllers
    (eg, PL08x) are unable to be configured for scatter DMA and also listen
    to all four DMA request signals [BREQ,SREQ,LBREQ,LSREQ] from the MMCI.
    They can do one or other but not both. As MMCI uses LBREQ/LSREQ for the
    final burst/words, PL08x does not transfer the last few words.
    - map and unmap DMA buffers using the DMA engine struct device, not the
    MMCI struct device - the DMA engine is doing the DMA transfer, not us.
    - avoid double-unmapping of the DMA buffers on MMCI data errors.
    - don't check for negative values from the dmaengine tx submission
    function - Dan says this must never fail.
    - use new dmaengine helper functions rather than using the ugly function
    pointers directly.
    - allow DMA code to be fully optimized away using dma_inprogress() which
    is defined to constant 0 if DMA engine support is disabled.
    - request maximum segment size from the DMA engine struct device and
    set this appropriately.
    - removed checking of buffer alignment - the DMA engine should deal with
    its own restrictions on buffer alignment, not the individual DMA engine
    users.
    - removed setting DMAREQCTL - this confuses some DMA controllers as it
    causes LBREQ to be asserted for the last seven transfers, rather than
    six SREQ and one LSREQ.
    - removed burst setting - the DMA controller should not burst past the
    transfer size required to complete the DMA operation.

    Tested-by: Linus Walleij
    Signed-off-by: Russell King

    Russell King
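
    The "fully optimized away" point deserves a sketch; the shape (names
    simplified from drivers/mmc/host/mmci.c, so treat them as illustrative)
    is roughly:

        #ifdef CONFIG_DMA_ENGINE
        static inline bool dma_inprogress(struct mmci_host *host)
        {
                return host->dma_current != NULL;   /* transfer in flight */
        }
        #else
        /* Constant 0 lets the compiler discard every DMA-only code path. */
        #define dma_inprogress(host)    (0)
        #endif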
     

18 Jan, 2011

1 commit

  • * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (63 commits)
    ARM: PL08x: cleanup comments
    Update CONFIG_MD_RAID6_PQ to CONFIG_RAID6_PQ in drivers/dma/iop-adma.c
    ARM: PL08x: fix a warning
    Fix dmaengine_submit() return type
    dmaengine: at_hdmac: fix race while monitoring channel status
    dmaengine: at_hdmac: flags located in first descriptor
    dmaengine: at_hdmac: use subsys_initcall instead of module_init
    dmaengine: at_hdmac: no need set ACK in new descriptor
    dmaengine: at_hdmac: trivial add precision to unmapping comment
    dmaengine: at_hdmac: use dma_address to program DMA hardware
    pch_dma: support new device ML7213 IOH
    ARM: PL08x: prevent dma_set_runtime_config() reconfiguring memcpy channels
    ARM: PL08x: allow dma_set_runtime_config() to return errors
    ARM: PL08x: fix locking between prepare function and submit function
    ARM: PL08x: introduce 'phychan_hold' to hold on to physical channels
    ARM: PL08x: put txd's on the pending list in pl08x_tx_submit()
    ARM: PL08x: rename 'desc_list' as 'pend_list'
    ARM: PL08x: implement unmapping of memcpy buffers
    ARM: PL08x: store prep_* flags in async_tx structure
    ARM: PL08x: shrink srcbus/dstbus in txd structure
    ...

    Linus Torvalds
     

15 Jan, 2011

1 commit

  • drivers/dma/amba-pl08x.c: In function 'pl08x_start_txd':
    drivers/dma/amba-pl08x.c:205: warning: dereferencing 'void *' pointer

    We never dereference llis_va aside from assigning it to a struct
    pl08x_lli pointer or calculating the address of array element 0.

    Signed-off-by: Dan Williams

    Dan Williams
     

06 Jan, 2011

2 commits

  • ST Micro variants have some specific DMA burst threshold compensation,
    which allows them to make better use of a DMA controller. Add support
    to set this up.

    Based on a patch from Linus Walleij.

    Acked-by: Linus Walleij
    Signed-off-by: Russell King

    Russell King
     
  • Add DMA engine support for transmit to the PL011 driver. Based on a
    patch from Linus Walleij, with the following changes:

    - remove RX DMA support. As PL011 doesn't give us receive timeout
    interrupts, we only get notified of received data when the RX DMA
    has completed. This rather sucks for interactive use of the TTY.

    - remove abuse of completions. Completions are supposed to be for
    events, not to tell what condition buffers are in. Replace it with
    a simple 'queued' bool.

    - fix locking - it is only safe to access the circular buffer with the
    port lock held.

    - only map the DMA buffer when required - if we're ever behind an IOMMU
    this helps keep IOMMU usage down, and also ensures that we're legal
    when we change the scatterlist entry length.

    - fix XON/XOFF sending - we must send XON/XOFF characters out as soon
    as possible - waiting for up to 4095 characters in the DMA buffer
    to be sent first is not acceptable.

    - fix XON/XOFF receive handling - we need to stop DMA when instructed
    to by the TTY layer, and restart it again when instructed to. There
    is a subtle problem here: we must not completely empty the circular
    buffer with DMA, otherwise we will not be notified of XON.

    - change the 'enable_dma' flag into a 'using DMA' flag, and track
    whether we can use TX DMA by whether the channel pointer is non-NULL.
    This gives us more control over whether we use DMA in the driver.

    - we don't need to have the TX DMA buffer continually allocated for
    each port - instead, allocate it when the port starts up, and free
    it when it's shut down. Update the 'using DMA' flag if we get
    the buffer, and adjust the TTY FIFO size appropriately.

    - if we're going to use PIO to send characters, use the existing IRQ
    based functionality rather than reimplementing it. This also ensures
    we call uart_write_wakeup() at the appropriate time, otherwise we'll
    stall.

    - use DMA engine helper functions for type safety.

    - fix init when built as a module - we can't have two initcall functions,
    so we must settle on one. This means we can eliminate the deferred
    DMA initialization.

    - there is no need to terminate transfers on a failed prep_slave_sg()
    call - nothing has been setup, so nothing needs to be terminated.
    This avoids a potential deadlock in the DMA engine code
    (tasklet->callback->failed prepare->terminate->tasklet_disable
    which then ends up waiting for the tasklet to finish running.)

    - Dan says that the submission callback should not return an error:
    | dma_submit_error() is something I should have removed after commit
    | a0587bcf "ioat1: move descriptor allocation from submit to prep" all
    | errors should be notified by prep failing to return a descriptor
    | handle. Negative dma_cookie_t values are only returned by the
    | dma_async_memcpy* calls which translate a prep failure into -ENOMEM.
    So remove the error handling at that point. This also solves the
    potential deadlock mentioned in the previous comment.

    Acked-by: Linus Walleij
    Signed-off-by: Russell King

    Russell King
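
    As an illustration of the "use DMA engine helper functions" point, the
    TX submission path has roughly this shape (sketched with today's helper
    names and simplified error handling, not the exact driver code):

        desc = dmaengine_prep_slave_sg(chan, &dmatx->sg, 1, DMA_MEM_TO_DEV,
                                       DMA_PREP_INTERRUPT);
        if (!desc)
                return -EBUSY;          /* fall back to IRQ-driven PIO */

        desc->callback = pl011_dma_tx_callback;
        desc->callback_param = uap;

        dmaengine_submit(desc);         /* per Dan: cannot fail at this point */
        dma_async_issue_pending(chan);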
     

05 Jan, 2011

11 commits

  • The PL08x driver holds on to the channel lock with interrupts disabled
    between the prepare and the subsequent submit API functions. This
    means that the locking state when the prepare function returns is
    dependent on whether it succeeds or not.

    It did this to ensure that the physical channel wasn't released, and
    as it used to add the descriptor onto the pending list at prepare time
    rather than submit time.

    Now that we have reorganized the code to remove those reasons, we can
    safely release the spinlock at the end of preparation and reacquire
    it in our submit function.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
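
    Schematically, the locking now looks like this (heavily simplified,
    bodies trimmed to the locking itself):

        /* End of the prep_* path: drop the channel lock before handing the
         * descriptor back to the caller. */
        spin_unlock_irqrestore(&plchan->lock, flags);
        return &txd->tx;

        /* pl08x_tx_submit() then takes the lock again on its own. */
        spin_lock_irqsave(&plchan->lock, flags);
        /* ... queue the txd, assign its cookie ... */
        spin_unlock_irqrestore(&plchan->lock, flags);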
     
  • Introduce 'phychan_hold' to hold on to a physical DMA channel while we're
    preparing a new descriptor for it. This will be incremented when we
    allocate a physical channel and set the MUX registers during the
    preparation of the TXD, and will only be decremented when the TXD is
    submitted.

    This prevents the physical channel being given up before the new TXD
    is placed on the queue.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
     
  • This 'desc_list' is actually a list of pending descriptors, so name
    it after its function (pending list) rather than what it contains
    (descriptors).

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
     
  • We only need to store the dma address.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
     
  • Platforms need to be able to control which AHB master interface is used,
    as each AHB master interface may be asymmetric. Allow the interfaces
    used for fetching LLIs, memory, and each peripheral to be configured
    individually.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
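
    A sketch of the new configuration knobs (names as introduced by this
    series in linux/amba/pl08x.h; treat the details as illustrative):

        #define PL08X_AHB1      (1 << 0)
        #define PL08X_AHB2      (1 << 1)

        struct pl08x_platform_data {
                /* ... */
                u8 lli_buses;   /* AHB master(s) used to fetch LLIs     */
                u8 mem_buses;   /* AHB master(s) used for memory access */
        };

        /* plus a per-channel periph_buses field in pl08x_channel_data */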
     
  • Rather than modifying platform data while preparing a transfer, copy
    the cctl value into the txd structure and modify the value there.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
     
  • The ccfg register is used to configure the channel parameters - the type
    and direction of transfer, the flow control signal and IRQ mask enables.
    The type and direction of transfer are known in the relevant prep_*
    function where a txd is created. The IRQ mask enables are always set,
    and the flow control signals are always set when we start processing a
    txd according to phychan->signal.

    If we store the ccfg value in the txd structure, we can avoid modifying
    platform data - and even having it in platform data at all.

    So, remove it from platform data too.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
     
  • As we now have all the code accessing the phychan {csrc,cdst,clli,cctl,
    ccfg} members in one function, there's no point storing the data into
    the struct. Get rid of the struct members. Re-order the register dump
    in the dev_dbg() to reflect the order we write the registers to the DMA
    device.

    The txd {csrc,cdst,clli,cctl} values are duplicates of the lli[0]
    values, so there's no point duplicating these either. Program the DMAC
    registers directly from the lli[0] values.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
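
    The register programming described above then has roughly this shape
    (offsets per the PL08x register header; simplified):

        struct pl08x_lli *lli = txd->llis_va;

        writel(lli[0].src,  phychan->base + PL080_CH_SRC_ADDR);
        writel(lli[0].dst,  phychan->base + PL080_CH_DST_ADDR);
        writel(lli[0].lli,  phychan->base + PL080_CH_LLI);
        writel(lli[0].cctl, phychan->base + PL080_CH_CONTROL);
        writel(txd->ccfg,   phychan->base + PL080_CH_CONFIG);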
     
  • Use size_t for variables denoting lengths throughout, and use the 'z'
    qualifier for printing the value. For safety, add a BUG_ON() in
    pl08x_fill_lli_for_desc() to catch the remainder potentially becoming
    negative.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
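
    A minimal illustration of the point (names hypothetical):

        /* Lengths are size_t throughout and printed with the 'z' qualifier;
         * a would-be negative remainder is caught explicitly. */
        static size_t bytes_remaining(size_t total, size_t done)
        {
                BUG_ON(done > total);   /* would silently wrap an unsigned type */
                return total - done;
        }

        /* e.g.  dev_dbg(&pl08x->adev->dev, "%zu bytes left\n", remainder); */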
     
  • Avoid using 'void *' struct fields when the structs are not defined
    in linux/amba/pl08x.h - instead, forward declare the struct names, and
    use these instead. This ensures we have proper typechecking.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
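
    Illustrative shape of that change in linux/amba/pl08x.h:

        /* Forward declarations keep driver internals out of the header
         * while still giving proper type checking. */
        struct pl08x_lli;
        struct pl08x_driver_data;

        struct pl08x_txd {
                /* ... */
                struct pl08x_lli *llis_va;      /* was: void *llis_va */
                dma_addr_t llis_bus;
        };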
     
  • The last_issued variable uses an atomic type, but it is only
    incremented inside a protected region and merely read everywhere else,
    so it isn't using atomic_t correctly, and it doesn't even need to.
    Moreover, the DMA engine code provides us with
    a variable for this already - chan.cookie. Use chan.cookie instead.

    Also, avoid negative dma_cookie_t values - negative returns from
    tx_submit() mean failure, yet in reality we always succeed. Restart
    from cookie 1, just like other DMA engine drivers do.

    Signed-off-by: Russell King
    Acked-by: Linus Walleij
    Signed-off-by: Dan Williams

    Russell King - ARM Linux
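
    The resulting cookie handling follows the usual dmaengine pattern of the
    era (a sketch; the helper name itself is hypothetical):

        static dma_cookie_t pl08x_assign_cookie(struct dma_chan *chan,
                                                struct dma_async_tx_descriptor *tx)
        {
                dma_cookie_t cookie = chan->cookie;

                if (++cookie < 0)
                        cookie = 1;     /* never hand out negative cookies */

                chan->cookie = cookie;
                tx->cookie = cookie;
                return cookie;
        }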