19 Dec, 2014

2 commits


12 Dec, 2014

1 commit


28 Nov, 2014

2 commits

  • This API is used by platforms (it may be specific to the mxs PHY) which
    can pull down the 15K resistor when necessary.

    Signed-off-by: Peter Chen

    Peter Chen
     
  • M4 can NOT switch its clk parent by itself due to the glitch MUX;
    to handle this case, A9 will help switch M4's clk
    parent. The flow is as below:

    M4:
    1. enter low power idle, send bus use count-- to A9;
    2. enter wfi and only wait for MU interrupt;
    3. receive A9's clk switch ready message, go into low
    power idle;
    4. receive interrupt to exit low power idle, send a request
    to A9 to increase busfreq and M4 freq, enter wfi
    and only wait for MU interrupt;
    5. receive A9's ready message, go out of low power idle.

    A9:
    1. when receiving M4's message of entering low power idle,
    wait for M4 to enter wfi, hold M4 in wfi by hardware, gate
    the M4 clk, then switch M4's clk to OSC, ungate the M4 clk,
    and send a ready command to wake up M4 into low power idle;
    2. when receiving M4's message of exiting low power idle,
    wait for M4 to enter wfi, hold M4 in wfi by hardware, gate
    the M4 clk, then switch M4's clk back to the original high clk,
    ungate the M4 clk, and send a ready command to wake up M4
    to exit low power idle.

    Signed-off-by: Anson Huang

    Anson Huang
     

25 Nov, 2014

1 commit


21 Nov, 2014

2 commits


14 Nov, 2014

1 commit


11 Nov, 2014

1 commit

  • The AHBBRST at SBUSCFG and RX/TX burst size at BURSTSIZE are implementation
    dependent, each platform may have different values, and some values may not be
    optimized.

    The glue layer can override ahb burst configuration value by setting flag
    CI_HDRC_OVERRIDE_AHB_BURST and ahbburst_config.

    The glue layer can override RX/TX burst size by setting flag
    CI_HDRC_OVERRIDE_BURST_LENGTH and burst_length.
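
    A glue layer opting in might carry platform data roughly as below. The
    flag and field names are taken from the description above; the values are
    illustrative placeholders, not recommendations:

```c
/* sketch only -- the values are placeholders for a given platform */
static struct ci_hdrc_platform_data ci_pdata = {
	.name			= "ci_hdrc",
	.flags			= CI_HDRC_OVERRIDE_AHB_BURST |
				  CI_HDRC_OVERRIDE_BURST_LENGTH,
	.ahbburst_config	= 0x0,		/* AHBBRST field of SBUSCFG */
	.burst_length		= 0x1010,	/* RX/TX sizes for BURSTSIZE */
};
```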

    Signed-off-by: Peter Chen

    Peter Chen
     

08 Nov, 2014

2 commits

  • Currently, devm_ managed memory only supports kzalloc.

    Convert the devm_kzalloc implementation to devm_kmalloc and remove the
    complete memset to 0 but still set the initial struct devres header and
    whatever padding before data to 0.

    Add the other normal alloc variants as static inlines with __GFP_ZERO
    added to the gfp flag where appropriate:

    devm_kzalloc
    devm_kcalloc
    devm_kmalloc_array

    Add gfp.h to device.h for the newly added static inlines.

    akpm: the current API forces us to replace kmalloc() with kzalloc() when
    performing devm_ conversions. This adds a relatively minor overhead.
    More significantly, it will defeat kmemcheck used-uninitialized checking,
    and for a particular driver, losing used-uninitialised checking for their
    core controlling data structures will significantly degrade kmemcheck
    usefulness.

    Signed-off-by: Joe Perches
    Cc: Tejun Heo
    Cc: Sangjung Woo
    Signed-off-by: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman
    (cherry picked from commit 64c862a839a8db2c02bbaa88b923d13e1208919d)
    (cherry picked from commit 9a93865303d0f24fc2ebe205182e13cf882ca2e7)

    Joe Perches
     
  • AEAD key parsing is duplicated to multiple places in the kernel. Add a
    common helper function to consolidate that functionality.

    Cc: Herbert Xu
    Cc: "David S. Miller"
    Signed-off-by: Mathias Krause
    Signed-off-by: Herbert Xu
    (cherry picked from commit bc6e2bdb71056607141ada309a185f0a50b1aeaf)
    (cherry picked from commit e04ea19d6744a2eaaed0cef3400c590e790b0827)

    Mathias Krause
     

07 Nov, 2014

12 commits

  • 1. Always enable MU as a wakeup source: when M4 is busy, A9 suspend
    only enters WAIT mode; when M4 goes from busy to idle, it sends
    A9 a message via MU, so we need to make sure the MU message can wake
    up A9 and give A9 a chance to enter DSM mode;
    2. Make sure MU is disabled while the last message is NOT yet handled:
    as we use delayed work to handle MU messages, a message may be
    overwritten if another MU message comes in; to make this more
    robust, disable the MU receive interrupt until the last message has
    been handled;
    3. Make the MU interrupt an early resume source to speed up MU message
    handling and avoid the message-overwrite issue during the resume
    process;
    4. Enable the GIC interrupt for those wakeup sources from M4: this is to
    cover a corner case of suspend, where a wakeup source of
    M4 is enabled in GPC but NOT in GIC, yet is pending before suspend;
    CCM will then NOT enter low power mode while A9 is in wfi. To make sure
    this interrupt can wake A9 from wfi, also enable it
    in GIC and create a dummy action to handle this interrupt, only
    for those modules that are NOT enabled on A9.

    Signed-off-by: Anson Huang

    Anson Huang
     
  • These functions are being open-coded in 3 different places in the driver
    core, and other driver subsystems will want to start doing this as well,
    so move them to the sysfs core to keep it all in one place, where we know
    they are written properly.

    Signed-off-by: Greg Kroah-Hartman

    Conflicts:
    drivers/base/bus.c

    (cherry picked from commit f1986282fe78586eddf3ae972a72eab7ca425aa7)

    Greg Kroah-Hartman
     
  • groups should be able to support binary attributes, just like they
    support "normal" attributes. This lets us only handle one type of
    structure, groups, throughout the driver core and subsystems, making
    binary attributes a "full fledged" part of the driver model, and not
    something just "tacked on".
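
    As a sketch of what this enables, a group can now carry binary attributes
    next to normal ones. The "eeprom" attribute and its read callback below
    are hypothetical:

```c
static struct bin_attribute eeprom_attr = {
	.attr	= { .name = "eeprom", .mode = 0444 },
	.size	= 256,
	.read	= eeprom_read,			/* hypothetical callback */
};

static struct bin_attribute *mydev_bin_attrs[] = {
	&eeprom_attr,
	NULL,
};

static struct attribute *mydev_attrs[] = {	/* "normal" attributes */
	NULL,
};

static const struct attribute_group mydev_group = {
	.attrs		= mydev_attrs,
	.bin_attrs	= mydev_bin_attrs,	/* binary attrs, first class */
};
```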

    Reported-by: Oliver Schinagl
    Reviewed-by: Guenter Roeck
    Tested-by: Guenter Roeck
    Signed-off-by: Greg Kroah-Hartman
    (cherry picked from commit 03829e7591389acd227a532db83d92f0bd188287)

    Greg Kroah-Hartman
     
  • The PCI MSI sysfs code is a mess with kobjects for things that don't really
    need to be kobjects. This patch creates attributes dynamically for the MSI
    interrupts instead of using kobjects.

    Note, this removes a directory from sysfs. Old MSI kobjects:

    pci_device
    └── msi_irqs
       └── 40
          └── mode

    New MSI attributes:

    pci_device
    └── msi_irqs
       └── 40

    As there was only one file "mode" with the kobject model, the interrupt
    number is now a file that returns the "mode" of the interrupt (msi vs.
    msix).

    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Bjorn Helgaas
    Acked-by: Neil Horman
    (cherry picked from commit 1c51b50c2995f543d145d3bce78029ac9f8ca6b3)

    Conflicts:
    drivers/pci/msi.c

    (cherry picked from commit 9018185784be55fe8d04a63c687083db58f85bf0)

    Greg Kroah-Hartman
     
  • Suggested-by: Ben Hutchings
    Signed-off-by: Alexander Gordeev
    Signed-off-by: Bjorn Helgaas
    Reviewed-by: Tejun Heo
    (cherry picked from commit 8ec5db6b20c860ddd1311c794b38c98ce86ac7ae)
    (cherry picked from commit aa1eee2e30c2c88c89bc9c8028c1538473ba41b1)

    Alexander Gordeev
     
  • This adds pci_enable_msi_range(), which supersedes the pci_enable_msi()
    and pci_enable_msi_block() MSI interfaces.

    It also adds pci_enable_msix_range(), which supersedes the
    pci_enable_msix() MSI-X interface.

    The old interfaces have three categories of return values:

    negative: failure; caller should not retry
    positive: failure; value indicates number of interrupts that *could*
    have been allocated, and caller may retry with a smaller request
    zero: success; at least as many interrupts allocated as requested

    It is error-prone to handle these three cases correctly in drivers.

    The new functions return either a negative error code or a number of
    successfully allocated MSI/MSI-X interrupts, which is expected to lead to
    clearer device driver code.

    pci_enable_msi(), pci_enable_msi_block() and pci_enable_msix() still exist
    unchanged, but are deprecated and may be removed after callers are updated.

    [bhelgaas: tweak changelog]
    Suggested-by: Ben Hutchings
    Signed-off-by: Alexander Gordeev
    Signed-off-by: Bjorn Helgaas
    Reviewed-by: Tejun Heo
    (cherry picked from commit 302a2523c277bea0bbe8340312b09507905849ed)

    (cherry picked from commit 8fdfdd8e8c089da6e768a50e52ec820a290ecaae)

    Alexander Gordeev
     
  • This creates an MSI-X counterpart for pci_msi_vec_count(). Device drivers
    can use this function to obtain maximum number of MSI-X interrupts the
    device supports and use that number in a subsequent call to
    pci_enable_msix().

    pci_msix_vec_count() supersedes pci_msix_table_size() and returns a
    negative errno if device does not support MSI-X interrupts. After this
    update, callers must always check the returned value.

    The only user of pci_msix_table_size() was the PCI-Express port driver,
    which is also updated by this change.

    Signed-off-by: Alexander Gordeev
    Signed-off-by: Bjorn Helgaas
    Reviewed-by: Tejun Heo
    (cherry picked from commit ff1aa430a2fa43189e89c7ddd559f0bee2298288)
    (cherry picked from commit 033f2e78584cdd322c3a06cdf1a5641ca2f0ae3d)

    Alexander Gordeev
     
  • The new pci_msi_vec_count() interface makes pci_enable_msi_block_auto()
    superfluous.

    Drivers can use pci_msi_vec_count() to learn the maximum number of MSIs
    supported by the device, and then call pci_enable_msi_block().

    pci_enable_msi_block_auto() was introduced recently, and its only user is
    the AHCI driver, which is also updated by this change.

    Signed-off-by: Alexander Gordeev
    Signed-off-by: Bjorn Helgaas
    Acked-by: Tejun Heo
    Conflicts:
    include/linux/pci.h

    (cherry picked from commit 0f65ddf8c339610a98e2787709b2a2b3aa1e3dac)

    Alexander Gordeev
     
  • Device drivers can use this interface to obtain the maximum number of MSI
    interrupts the device supports and use that number, e.g., in a subsequent
    call to pci_enable_msi_block().

    Signed-off-by: Alexander Gordeev
    Signed-off-by: Bjorn Helgaas
    Reviewed-by: Tejun Heo
    Conflicts:
    drivers/pci/msi.c

    (cherry picked from commit 42032860b401ae67986321555404d610c7b7c823)

    Alexander Gordeev
     
  • Make pci_enable_msi_block(), pci_enable_msi_block_auto() and
    pci_enable_msix() consistent with regard to the type of 'nvec' argument.

    Signed-off-by: Alexander Gordeev
    Signed-off-by: Bjorn Helgaas
    Reviewed-by: Tejun Heo
    (cherry picked from commit a9ec6e28d5423e0d2383e57a7cffa1b38f70c878)

    Alexander Gordeev
     
  • Change x86_msi.restore_msi_irqs(struct pci_dev *dev, int irq) to
    x86_msi.restore_msi_irqs(struct pci_dev *dev).

    restore_msi_irqs() restores multiple MSI-X IRQs, so param 'int irq' is
    unneeded. This makes code more consistent between vm and bare metal.

    Dom0 MSI-X restore code can also be optimized as XEN only has a hypercall
    to restore all MSI-X vectors at one time.

    Tested-by: Sucheta Chakraborty
    Signed-off-by: Zhenzhong Duan
    Signed-off-by: Bjorn Helgaas
    Acked-by: Konrad Rzeszutek Wilk
    Conflicts:
    drivers/pci/msi.c

    (cherry picked from commit 41b4d5f3cf3537cfd31e85e68df29640a6fbe571)

    DuanZhenzhong
     
  • Certain platforms do not allow writes in the MSI-X BARs to setup or tear
    down vector values. To combat against the generic code trying to write to
    that and either silently being ignored or crashing due to the pagetables
    being marked R/O this patch introduces a platform override.

    Note that we keep two separate, non-weak, functions default_mask_msi_irqs()
    and default_mask_msix_irqs() for the behavior of the arch_mask_msi_irqs()
    and arch_mask_msix_irqs(), as the default behavior is needed by x86 PCI
    code.

    For Xen, which does not allow the guest to write to MSI-X tables - as the
    hypervisor is solely responsible for setting the vector values - we
    implement two nops.

    This fixes a Xen guest crash when passing a PCI device with MSI-X to the
    guest. See the bugzilla for more details.

    [bhelgaas: add bugzilla info]
    Reference: https://bugzilla.kernel.org/show_bug.cgi?id=64581
    Signed-off-by: Konrad Rzeszutek Wilk
    Signed-off-by: Bjorn Helgaas
    CC: Sucheta Chakraborty
    CC: Zhenzhong Duan

    (cherry picked from commit 4f7617a1116a88d6e6a71bbeb0686bf2fb00e395)

    Konrad Rzeszutek Wilk
     

04 Nov, 2014

6 commits

  • The chipidea IP has different limitations for host and device mode;
    see the errata below. We may need to enable SDIS (Stream Disable Mode)
    in host mode, but in some situations we don't want it in device mode.

    TAR 9000378958
    Title: Non-Double Word Aligned Buffer Address Sometimes Causes Host to Hang on OUT Retry
    Impacted Configuration: Host mode, all transfer types
    Description:
    The host core operating in streaming mode may under run while sending the data packet of an OUT transaction. This under run can occur if there are unexpected system delays in fetching the remaining packet data from memory. The host forces a bad CRC on the packet, the device detects the error and discards the packet. The host then retries a Bulk, Interrupt, or Control transfer if an under run occurs according to the USB specification.
    During simulations, it was found that the host does not issue the retry of the failed bulk OUT. It does not issue any other transactions except SOF packets that have incorrect frame numbers.
    The second failure mode occurs if the under run occurs on an ISO OUT transaction and the next ISO transaction is a zero byte packet. The host does not issue any transactions (including SOFs). The device detects a Suspend condition, reverts to full speed, and waits for resume signaling.
    A third failure mode occurs when the host under runs on an ISO OUT and the next ISO in the schedule is an ISO OUT with two max packets of 1024 bytes each.
    The host should issue MDATA for the first OUT followed by DATA1 for the second. However, it drops the MDATA transaction, and issues the DATA1 transaction.
    The system impact of this bug is the same regardless of the failure mode observed. The host core hangs, the ehci_ctrl state machine waits for the protocol engine to send the completion status for the corrupted transaction, which never occurs. No indication is sent to the host controller driver, no register bits change and no interrupts occur. Eventually the requesting application times out.
    Detailed internal behavior:
    The EHCI control state machine (ehci_ctrl) in the DMA block is responsible for parsing the schedules and initiating all transactions. The ehci_ctrl state machine passes the transaction details to the protocol block by writing the transaction information in to the TxFIFO. It then asserts the pe_hst_run_pkt signal to inform the host protocol state machine (pe_hst_state) that there is a packet in the TxFIFO.
    A tag of 0x0 indicates a start of packet with the data providing the following information:

    35:32 Tag
    31:30 Reserved
    29:23 Endpoint (lowest 4 bits)
    22:16 Address
    15:10 Reserved
    9:8 Endpoint speed
    7:6 Endpoint type
    5:4 Data Toggle
    3:0 PID
    The pe_hst_state reads the packet information and constructs the packet and issues it to the PHY interface.
    The ehci_ctrl state machine writes the start transaction information in to the TxFIFO as 0x03002910c for the OUT packet that had the under run error. However, it writes 0xC3002910C for the retry of the Out transaction, which is incorrect.
    The pe_hst_state enters a bus timeout state after sending the bad CRC for the packet that under ran. It then purges any data that was back filled in to the TxFIFO for the packet that under ran. The pe_hst_state machine stops purging the TxFIFO when it is empty or if it reads a location that has a tag of 0x0, indicating a start of packet command.
    The pe_hst_state reads 0xC3002910C and discards it as it does not decode to a start of packet command. It continues to purge the OUT data that has been pre-buffered for the OUT retry . The pe_hst_state detects the hst_packet_run signal and attempts to read the PID and address information from the TxFIFO. This location has packet data and so does not decode to a valid PID and so falls through to the PE_HST_SOF_LOAD state where the frame_num_counter is updated. The frame_num_counter is updated with the data in the TxFIFO. In this case, the data is incorrect as the ehci_ctrl state machine did not initiate the load. The hst_pe_state machine detects the SOF request signal and sends an SOF with the bad frame number. Meanwhile, the ehci_ctrl state machine waits indefinitely in the run_pkt state waiting for the completion status from pe_hst_state machine, which will never happen.
    The ISO failure case is similar except that there is no retry for ISO. The ehci_ctrl state machine moves to the next transfer in the periodic schedule. If the under run occurs on the last entry of the periodic list then it moves to the Async schedule.
    In the case of ISO OUT simulations, the next ISO is a zero byte OUT and again the start of packet command gets corrupted. The TxFIFO is empty when the hst_pe_state attempts to read the Address and PID information as the transaction is a zero byte packet. This results in the hst_pe_state machine staying in the GET_PID state, which means that it does not issue any transactions (including SOFs). The device detects a Suspend condition and reverts to full speed mode and waits for a Resume or Reset signal.
    The EHCI specification allows a Non-DoubleWord (32 bits) offset to be used as a current offset for Buffer Pointer Page 0 of the qTD. In Non-DoubleWord aligned cases, the core reads the packet data from the AHB memory, performs the alignment operation before writing it in to the TxFIFO as a 32 bit data word. An End Of Packet tag (EOP) is written to the TxFIFO after all the packet data has been written in to the TxFIFO. The alignment function is reset to Idle by the EOP tag. The corruption of the start of packet command arises because the packet buffer for the OUT transaction that under ran is not aligned to a DoubleWord, and hence no EOP tag is written to the TxFIFO. The alignment function is still active when the start packet information is written in to the TxFIFO for the retry of the bulk packet or for the next transaction in the case of an under run on an ISO. This results in the corruption of the start tag and the transaction information.
    Versions affected: Versions 2.10a and previous versions
    How discovered: Customer simulation
    Workaround:
    1- The EHCI specification allows a non-DoubleWord offset to be used as a current offset for Buffer Pointer Page 0 of the qTD. However, if a DoubleWord offset is used then this issue does not arise.
    2- Use non streaming mode to eliminate under runs.
    Resolution:
    The fix involves changes to the traffic state machine in the vusb_hs_dma_traf block. The ehci_ctrl state machine updates the context information by encoding the transaction results on the hst_op_context_update signals at the end of a transaction. The signal hst_op_context_update is added to the traffic state machine, and the tx_fifo_under_ran_r signal is generated if the transaction results in an under run error.
    The traffic state machine then traverses to the do_eop states if the tx_fifo_under_ran error is asserted. Thus an EOP tag is written in to the TxFIFO.
    The EOP tag resets the align state machine to the Idle state ensuring that the next command written by the echi_ctrl state machine does not get corrupted.
    File(s) modified:
    RTL code fixed: …..
    Method of reproducing: This failure cannot be reproduced in the current test bench.
    Date Found: March 2010
    Date Fixed: June 2010
    Update information:
    Added the RTL code fix

    Signed-off-by: Peter Chen

    Peter Chen
     
  • imx sema4 driver changes in the mcc2.0 updates

    Signed-off-by: Richard Zhu

    Richard Zhu
     
  • The Linux and SoC (imx6sx) related modifications
    in the mcc2.0 updates:
    - wrap the Linux OS headers into mcc_linux.h
    - move the platform-related macro definitions from
    mcc_config.h to mcc_config_linux.h
    - do NOT use the MCC_OS_USED macro in the mcc common code
    except for the header file include.
    - unify the phys and virt exchange callbacks.

    Signed-off-by: Richard Zhu

    Richard Zhu
     
  • Common code changes in the mcc 2.0 updates
    - common definitions are moved from mcc_config.h to mcc_common.h
    because these definitions are common to the standalone mcc
    stack and shared by different platforms, such as Linux and MQX.
    - re-define the common APIs _psp_core_num() and _psp_node_num()
    so that they have no platform dependency.
    - move the definition of MCC_OS_USED into mcc_config.h
    - newly add the mcc_config_linux.h file, which contains the
    platform-related macro definitions previously held in mcc_config.h.
    - add the related Linux modifications to mcc_api.c/mcc_common.c
    when implementing mcc 2.0 in the Linux BSP.
    - fix one potential bug: all shared memory operations should
    be protected by sema4.

    Acked-by: Shawn Guo
    Signed-off-by: Richard Zhu

    Richard Zhu
     
  • This is the base line of the mcc version 2.0.

    Acked-by: Shawn Guo
    Signed-off-by: Richard Zhu

    Richard Zhu
     
  • This reverts commit 4aad1cf7652d45d81ea534f6cfc55a2f75ed221a.

    Signed-off-by: Richard Zhu

    Richard Zhu
     

28 Oct, 2014

2 commits


20 Oct, 2014

6 commits

  • The B-device detects that the bus is idle for more than TB_AIDL_BDIS min
    and begins HNP by turning off the pullup on DP, which allows the bus to
    discharge to the SE0 state. This timer was missing and caused a failure
    in one PET test; this patch fixes that timing issue.

    Signed-off-by: Li Jun

    Li Jun
     
  • This patch adds a timer to delay turning on vbus after detecting the data
    pulse from the B-device, as required by the OTG SRP timing.

    Signed-off-by: Li Jun

    Li Jun
     
  • 1. improve the busfreq enter/exit protocol: M4 requests
    high/low busfreq from A9 and will be in TCM when it
    releases high busfreq; A9 only needs to respond when
    M4 requests high busfreq, no handshake needed;
    2. only when M4 releases high busfreq is A9 allowed to do
    linux kernel suspend.

    Signed-off-by: Anson Huang

    Anson Huang
     
  • We cannot unconditionally use dma_map_single() to map data for use with
    SPI since transfers may exceed a page and virtual addresses may not be
    provided with physically contiguous pages. Further, addresses allocated
    using vmalloc() need to be mapped differently to other addresses.

    Currently only the MXS driver handles all this; a few drivers do handle
    the possibility that buffers may not be physically contiguous, which is
    the main potential problem, but many don't even do that. Factoring this
    out into the core will make it easier for drivers to do a good job, so
    if the driver is using the core DMA code then generate a scatterlist
    instead of mapping to a single address.

    This code is mainly based on a combination of the existing code in the MXS
    and PXA2xx drivers. In future we should be able to extend it to allow the
    core to concatenate adjacent transfers if they are compatible, improving
    performance.

    Currently, for simplicity, clients are not allowed to use the scatterlist
    when they do DMA mapping; in the future the existing single-address
    mappings will be replaced with use of the scatterlist, most likely as
    part of pre-verifying transfers.

    This change makes it mandatory to use scatterlists when using the core DMA
    mapping so update the s3c64xx driver to do this when used with dmaengine.
    Doing so makes the code more ugly but it is expected that the old s3c-dma
    code can be removed very soon.

    Signed-off-by: Mark Brown
    (cherry picked from commit 6ad45a27cbe343ec8d7888e5edf6335499a4b555)

    Mark Brown
     
  • It is fairly common for SPI devices to require that one or both transfer
    directions is always active. Currently drivers open code this in various
    ways with varying degrees of efficiency. Start factoring this out by
    providing flags SPI_MASTER_MUST_TX and SPI_MASTER_MUST_RX. These will cause
    the core to provide buffers for the requested direction if none are
    specified in the underlying transfer.

    Currently this is fairly inefficient since we actually allocate a data
    buffer which may get large, support for mapping transfers using a
    scatterlist will allow us to avoid this for DMA based transfers.
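
    A controller driver opting in might look roughly like the sketch below;
    mydrv_probe is hypothetical, and only the flag assignment is the point:

```c
static int mydrv_probe(struct platform_device *pdev)
{
	struct spi_master *master = spi_alloc_master(&pdev->dev, 0);

	if (!master)
		return -ENOMEM;
	/* core will supply a dummy TX buffer for RX-only transfers */
	master->flags = SPI_MASTER_MUST_TX;
	/* remaining controller setup elided in this sketch */
	return spi_register_master(master);
}
```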

    Signed-off-by: Mark Brown
    (cherry picked from commit 3a2eba9bd0a6447dfbc01635e4cd0689f5f2bdad)

    Mark Brown
     
  • The process of DMA mapping buffers for SPI transfers does not vary between
    devices so in order to save duplication of code in drivers this can be
    factored out into the core, allowing it to be integrated with the work that
    is being done on factoring out the common elements from the data path
    including more sharing of dmaengine code.

    In order to use this masters need to provide a can_dma() operation and while
    the hardware is prepared they should ensure that DMA channels are provided
    in tx_dma and rx_dma. The core will then ensure that the buffers are mapped
    for DMA prior to calling transfer_one_message().
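
    The contract described above might be implemented roughly as follows in
    a master driver; the mydrv_ prefix and MYDRV_FIFO_SIZE are hypothetical:

```c
/* the core consults this per transfer; returning true asks the core to
 * DMA-map the buffers before transfer_one_message() runs */
static bool mydrv_can_dma(struct spi_master *master, struct spi_device *spi,
			  struct spi_transfer *xfer)
{
	/* only bother with DMA for transfers too big for the FIFO */
	return xfer->len > MYDRV_FIFO_SIZE;
}
```

    During probe the driver would then set master->can_dma = mydrv_can_dma
    and provide the DMA channels in tx_dma and rx_dma as noted above.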

    Currently the cleanup on error is not complete, this needs to be improved.

    Signed-off-by: Mark Brown
    (cherry picked from commit 99adef310f682d6343cb40c1f6c9c25a4b3a450d)

    Mark Brown
     

16 Oct, 2014

1 commit


15 Oct, 2014

1 commit