10 Jul, 2019

1 commit


01 Feb, 2019

1 commit


24 Jan, 2019

1 commit


31 Jan, 2018

1 commit


24 Jan, 2018

1 commit


11 Jan, 2018

1 commit


15 Sep, 2017

2 commits


05 Sep, 2017

1 commit

  • The current code for deciding which CPU runs the complete LPI flow is
    too complicated. Since all enter/exit code now runs under the same lock,
    we can just use a single non-atomic counter of the CPUs inside LPI.

    Another variable makes num_online_cpus() available to ASM code; the
    idle code can treat it as a constant.

    Signed-off-by: Leonard Crestez

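    A minimal sketch of the counting scheme described above, using hypothetical
    names (imx_cpus_in_lpi, imx_lpi_num_cpus, imx_lpi_enter/exit); the shared
    lock is the one the commit refers to, and the real driver's symbols differ:

        #include <linux/cpumask.h>
        #include <linux/init.h>

        /*
         * Plain (non-atomic) counter: the callers already hold the driver's
         * shared enter/exit lock, so no extra synchronization is needed.
         */
        static int imx_cpus_in_lpi;

        /* Snapshot of num_online_cpus() shared with the ASM resume code;
         * the idle path treats it as a constant. */
        static int imx_lpi_num_cpus;

        static void __init imx_lpi_init(void)
        {
            imx_lpi_num_cpus = num_online_cpus();
        }

        /* Called with the shared lock held; returns true for the last CPU
         * to enter, which then runs the complete LPI flow. */
        static bool imx_lpi_enter(void)
        {
            return ++imx_cpus_in_lpi == imx_lpi_num_cpus;
        }

        /* Called with the shared lock held. */
        static void imx_lpi_exit(void)
        {
            imx_cpus_in_lpi--;
        }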

30 Aug, 2017

1 commit


29 Aug, 2017

1 commit


15 Aug, 2017

1 commit


07 Aug, 2017

1 commit


19 Jul, 2017

2 commits


18 Jul, 2017

1 commit


07 Jul, 2017

1 commit


06 Jul, 2017

3 commits

  • Low power idle exit latency is much longer than declared, in the
    millisecond range.

    Signed-off-by: Anson Huang
    Signed-off-by: Leonard Crestez
    Reviewed-by: Anson Huang

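    For context, the latency a cpuidle driver "declares" is the exit_latency
    field (in microseconds) of its struct cpuidle_state table. A sketch with
    illustrative numbers only, not the i.MX7D driver's actual table or
    callbacks:

        #include <linux/cpuidle.h>
        #include <linux/module.h>

        /* Illustrative values only; not the i.MX7D driver's actual table. */
        static struct cpuidle_driver imx7d_cpuidle_driver = {
            .name = "imx7d_cpuidle",
            .owner = THIS_MODULE,
            .states = {
                {
                    .name = "WFI",
                    .desc = "ARM WFI",
                    .exit_latency = 1,        /* microseconds */
                    .target_residency = 1,
                    /* .enter = <WFI enter callback> */
                },
                {
                    .name = "LPI",
                    .desc = "low power idle",
                    .exit_latency = 1000,     /* 1000 us, i.e. millisecond range */
                    .target_residency = 3000,
                    /* .enter = <LPI enter callback> */
                },
            },
            .state_count = 2,
        };
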
  • The GPC will wake us on peripheral interrupts but not on IPIs, so check
    for pending IPIs manually by reading the GIC's GICD_SPENDSGIR* registers
    and aborting idle if anything is pending.

    We do this only on the last CPU and only after taking the required
    locks. At this stage the other CPU is either in WFI itself or waiting
    for the imx_pen_lock and cannot trigger any additional IPIs, so the
    check is not racy.

    This fixes occasional lost IPIs that left tasks stuck in the
    TASK_WAKING 'W' state for long periods, eventually manifesting as RCU
    stalls.

    Signed-off-by: Leonard Crestez

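    A sketch of such a check. It assumes gic_dist_base is an ioremapped
    pointer to the GIC distributor (how the real driver obtains it is not
    shown); GIC_DIST_SGI_PENDING_SET is the GICD_SPENDSGIR0 offset from
    <linux/irqchip/arm-gic.h>:

        #include <linux/io.h>
        #include <linux/irqchip/arm-gic.h>

        /* Assumed to be set up elsewhere to point at the GIC distributor. */
        static void __iomem *gic_dist_base;

        /* Return true if any SGI (IPI) is pending in GICD_SPENDSGIR0..3. */
        static bool imx_gic_sgi_pending(void)
        {
            u32 pending = 0;
            int i;

            /* Four 32-bit registers cover SGI IDs 0-15, one byte per SGI. */
            for (i = 0; i < 4; i++)
                pending |= readl_relaxed(gic_dist_base +
                                         GIC_DIST_SGI_PENDING_SET + i * 4);

            return pending != 0;
        }

    The last CPU would call something like this after taking the idle locks
    and fall back to a plain WFI instead of the full low power flow when it
    returns true.
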
  • This makes the code much easier to reason about. In particular, it
    makes sure the imx7d cpuidle driver respects the requirements for
    cpu_cluster_pm_enter/exit:

    * cpu_cluster_pm_enter must be called after cpu_pm_enter has been called
    on all cpus in the power domain, and before cpu_pm_exit has been called
    on any cpu in the power domain.
    * cpu_cluster_pm_exit must be called after cpu_cluster_pm_enter has been
    called, and before cpu_pm_exit has been called on any cpu in the power
    domain.

    This fixes interrupts sometimes getting "stuck" because of improper
    save/restore of the GIC DIST registers.

    Signed-off-by: Anson Huang
    Signed-off-by: Leonard Crestez
    Reviewed-by: Anson Huang

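    A sketch of how that ordering looks on the idle path, with the hardware
    steps and the driver's serialization elided (imx_lpi_idle and last_cpu
    are hypothetical names):

        #include <linux/cpu_pm.h>

        static void imx_lpi_idle(bool last_cpu)
        {
            /* Per-CPU notifiers: save VFP, GIC CPU interface, local timers, ... */
            if (cpu_pm_enter())
                return;    /* a notifier refused low power entry */

            /*
             * Cluster notifiers (e.g. GIC distributor save) may only run once
             * cpu_pm_enter() has completed on every CPU in the power domain;
             * the driver's locking is what guarantees that here.
             */
            if (last_cpu)
                cpu_cluster_pm_enter();

            /* ... program the GPC and enter the hardware low power state ... */

            /* Cluster state must be restored before any CPU calls cpu_pm_exit(). */
            if (last_cpu)
                cpu_cluster_pm_exit();

            cpu_pm_exit();
        }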

27 Jun, 2017

1 commit


16 Jun, 2017

1 commit

  • The i.MX7ULP QSPI dtb was used to update the M4 images; it should now
    be able to boot the kernel even without the M4 image in QSPI.

    Also fix a typo in the dtsi to correct the QSPI register address
    mapping range.

    Signed-off-by: Han Xu
    (cherry picked from commit 6fb558d7e38a4f60944a791d4ff12fd5a5f039f5)


15 Jun, 2017

1 commit


14 Jun, 2017

2 commits

  • IMX_SOC_IMX7 is referenced in the Makefiles and Kconfig but it is not
    defined, so define it and select it for both IMX7D and IMX7ULP.

    Fixes the following build errors:

    arch/arm/mach-imx/built-in.o: In function `update_lpddr2_freq_smp':
    platform-imx-dma.c:(.text+0xf7c): undefined reference to `imx_scu_base'
    platform-imx-dma.c:(.text+0xf88): undefined reference to `imx_scu_base'
    arch/arm/mach-imx/built-in.o: In function `update_ddr_freq_imx_smp':
    platform-imx-dma.c:(.text+0x330c): undefined reference to `imx_scu_base'
    platform-imx-dma.c:(.text+0x3318): undefined reference to `imx_scu_base'
    Makefile:969: recipe for target 'vmlinux' failed
    make: *** [vmlinux] Error 1

    Signed-off-by: Octavian Purdila
    Reviewed-by: Leonard Crestez

  • The AIPSx address space of i.MX7ULP needs to be mapped as a SZ_1M block
    in the iRAM TLB for the suspend code to use. If we use ioremap to map
    these regions into kernel space, we cannot guarantee that the returned
    virtual address is 1M aligned. Instead, map these regions statically;
    a later ioremap of the same regions will then return the virtual
    address of the static mapping, so the virtual address is guaranteed to
    be 1M aligned.

    Signed-off-by: Bai Ping
    (cherry picked from commit 486041dc2fed38adc82ad93bd2dcc155c219ef01)

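    A sketch of such a static, section-sized mapping via iotable_init(); the
    physical and virtual addresses below are placeholders, not the real
    i.MX7ULP AIPS layout:

        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/sizes.h>
        #include <asm/mach/map.h>
        #include <asm/memory.h>

        /* Placeholder addresses, not the actual i.MX7ULP AIPS ranges. */
        #define AIPS_PHYS_PLACEHOLDER   0x40000000UL
        #define AIPS_VIRT_PLACEHOLDER   0xf8000000UL    /* 1M aligned */

        static struct map_desc imx7ulp_io_desc[] __initdata = {
            {
                .virtual = AIPS_VIRT_PLACEHOLDER,
                .pfn = __phys_to_pfn(AIPS_PHYS_PLACEHOLDER),
                .length = SZ_1M,    /* one section, so the mapping stays 1M aligned */
                .type = MT_DEVICE,
            },
        };

        /* Called from the machine descriptor's .map_io hook. */
        static void __init imx7ulp_map_io(void)
        {
            iotable_init(imx7ulp_io_desc, ARRAY_SIZE(imx7ulp_io_desc));
        }

    Because ARM ioremap() reuses an existing static mapping that covers the
    requested range, later ioremap calls on these regions return the
    1M-aligned static virtual address, as the commit describes.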

09 Jun, 2017

15 commits