01 Dec, 2010

1 commit

  • This follows the ARM change c01778001a4f5ad9c62d882776235f3f31922fdd
    ("ARM: 6379/1: Assume new page cache pages have dirty D-cache") for the
    same rationale:

    There are places in Linux where writes to newly allocated page
    cache pages happen without a subsequent call to flush_dcache_page()
    (several PIO drivers including USB HCD). This patch changes the
    meaning of PG_arch_1 to be PG_dcache_clean and always flushes the
    D-cache for a newly mapped page in update_mmu_cache().

    This addresses issues seen when executing binaries from MMC, as well as
    with other HCDs that don't explicitly do cache management for their
    pipe-in buffers.

    Requested-by: Yoshihiro Shimoda
    Signed-off-by: Paul Mundt
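
    A minimal sketch of the flipped test, assuming the SH convention of
    aliasing PG_dcache_clean to PG_arch_1 and an SH-style
    __flush_purge_region() helper; the hook below is illustrative, not the
    literal patch:

        #include <linux/mm.h>
        #include <linux/page-flags.h>
        #include <asm/cacheflush.h>     /* SH: declares __flush_purge_region() */

        #define PG_dcache_clean PG_arch_1   /* SH aliasing (assumption) */

        void update_mmu_cache_sketch(struct vm_area_struct *vma,
                                     unsigned long address, pte_t pte)
        {
                unsigned long pfn = pte_pfn(pte);
                struct page *page;

                if (!pfn_valid(pfn))
                        return;

                page = pfn_to_page(pfn);

                /*
                 * If the clean bit was not already set, the page may hold
                 * stale D-cache lines (e.g. written by a PIO driver that
                 * never called flush_dcache_page()), so purge them before
                 * the new mapping goes live.
                 */
                if (!test_and_set_bit(PG_dcache_clean, &page->flags))
                        __flush_purge_region(page_address(page), PAGE_SIZE);
        }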


26 Jan, 2010

1 commit

  • The old ctrl_in()/ctrl_out() routines are non-portable and unsuitable for
    cross-platform use. While drivers/sh has already been sanitized, there
    is still quite a lot of code that is not. This converts the arch/sh/ bits
    over, which permits us to flag the routines as deprecated whilst still
    building with -Werror for the architecture code, and to ensure that
    future users are not added.

    Signed-off-by: Paul Mundt
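
    The conversion itself is mechanical; a before/after sketch with a
    made-up register address:

        #include <linux/io.h>

        #define EXAMPLE_REG ((void __iomem *)0xffe00014)  /* hypothetical */

        static void example_conversion(void)
        {
                unsigned long v;

                /* Old, SH-only accessors (now deprecated):
                 *     v = ctrl_inl(0xffe00014);
                 *     ctrl_outl(v | 1, 0xffe00014);
                 * New, portable raw MMIO accessors:
                 */
                v = __raw_readl(EXAMPLE_REG);
                __raw_writel(v | 1, EXAMPLE_REG);
        }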


04 Dec, 2009

1 commit

  • When flushing/invalidating the icache/dcache via the memory-mapped IC/OC
    address arrays, the associative bit should only be used in conjunction with
    virtual addresses. However, we currently flush cache lines based on physical
    address, so stop using the associative bit.

    It is a better strategy to use non-associative writes (and physical tags)
    for flushing the caches anyway, because flushing by virtual address (as
    with the A-bit set) requires a valid TLB entry for that virtual address.
    If one does not exist in the TLB, no exception is generated and the flush
    is silently ignored.

    This also future-proofs for SH-4A parts, which are gradually phasing out
    associative writes to the cache array due to the aforementioned case of
    certain flushes silently turning into nops.

    Signed-off-by: Matt Fleming
    Signed-off-by: Paul Mundt
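
    A sketch of the non-associative form, using the customary SH-4 OC
    address array base and A-bit position (treat the exact constants and
    masks as assumptions):

        #include <linux/io.h>

        #define OC_ADDRESS_ARRAY 0xf4000000UL
        #define SH_CACHE_ASSOC   0x00000008UL   /* A-bit: no longer ORed in */

        static inline void invalidate_oc_line(unsigned long phys,
                                              unsigned long entry_mask)
        {
                /*
                 * Select the cache entry directly from the physical
                 * address. With A=0 there is no TLB lookup involved, so
                 * the operation cannot be silently dropped for an
                 * unmapped virtual address.
                 */
                unsigned long addr = OC_ADDRESS_ARRAY | (phys & entry_mask);

                /* Writing 0 clears the tag and the V/U bits for the line. */
                __raw_writel(0, (void __iomem *)addr);
        }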


10 Oct, 2009

3 commits

  • To allow the MMU to be switched between 29-bit and 32-bit mode at
    runtime, some constants need to be swapped for functions that return a
    runtime value.

    Signed-off-by: Matt Fleming
    Signed-off-by: Paul Mundt
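
    The shape of the change, with hypothetical names (mmu_is_32bit() and
    the addresses below are illustrative stand-ins, not the real
    interfaces):

        /* Before: a build-time constant.
         *     #define UNCACHED_BASE 0xa0000000    (P2SEG)
         */

        static int mmu_32bit_mode;      /* hypothetical flag set during boot */

        static inline int mmu_is_32bit(void)
        {
                return mmu_32bit_mode;
        }

        /* After: resolved at runtime, so one kernel image can run with
         * the MMU in either 29-bit or 32-bit mode. */
        static inline unsigned long uncached_base(void)
        {
                return mmu_is_32bit() ? 0xb0000000UL    /* PMB-mapped window */
                                      : 0xa0000000UL;   /* legacy P2SEG */
        }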

  • Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
    idea that all addresses in P1SEG are untranslated, so we can access an
    address's physical page as an offset from P1SEG. This doesn't work for
    CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
    for PMB mappings and so can be translated to any physical address.

    Likewise, replace a P1SEGADDR() use with virt_to_phys().

    Signed-off-by: Matt Fleming
    Signed-off-by: Paul Mundt
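
    The substance of the substitution, as a hedged sketch (the PHYSADDR()
    definition is paraphrased from its usual 29-bit form):

        #include <asm/io.h>

        /*
         * PHYSADDR() merely strips the segment bits:
         *     #define PHYSADDR(a) (((unsigned long)(a)) & 0x1fffffff)
         * which is only valid when P1/P2 are identity mappings.
         * __pa()/virt_to_phys() offset from PAGE_OFFSET instead, which
         * remains correct when PMB remaps those windows.
         */
        static unsigned long example_phys(void *kvaddr)
        {
                return virt_to_phys(kvaddr);   /* rather than PHYSADDR(kvaddr) */
        }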


09 Sep, 2009

6 commits

  • If PAGE_SIZE is presently over 4k we do a lot of extra flushing, given
    that we purge the cache 4k at a time. Make it an explicit 4k per
    iteration, rather than iterating for the full PAGE_SIZE before looping
    over again.

    Signed-off-by: Paul Mundt
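
    The loop shape being described, sketched with a hypothetical per-4k
    purge helper:

        #include <linux/mm.h>           /* PAGE_SIZE */

        /* Stand-in for the 4k flusher this makes the unit of work. */
        static void __flush_4k_chunk(unsigned long start)
        {
                /* ... purge one 4k window ... */
        }

        static void flush_page_sketch(unsigned long start)
        {
                unsigned long end = start + PAGE_SIZE;

                /*
                 * Advance one 4k window per iteration; with e.g. a 64kB
                 * PAGE_SIZE this avoids repeatedly covering the same
                 * ground on every pass.
                 */
                while (start < end) {
                        __flush_4k_chunk(start);
                        start += 0x1000;
                }
        }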

  • This builds on top of the MIPS r4k code that does roughly the same thing.
    It permits the use of kmap_coherent() for mapped pages whose dcache
    lines are not dirty, falling back on kmap_atomic() otherwise (the
    selection logic is sketched below).

    This also fixes up a problem with the alias check and defers to
    shm_align_mask directly.

    Signed-off-by: Paul Mundt
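
    The selection logic, following the MIPS r4k shape (kmap_coherent() is
    the arch-provided helper, and the PG_dcache_dirty test paraphrases the
    SH flag of this era; treat both as assumptions):

        #include <linux/highmem.h>
        #include <linux/mm.h>

        #define PG_dcache_dirty PG_arch_1   /* SH aliasing (assumption) */

        static void *map_for_copy(struct page *page, unsigned long vaddr)
        {
                /*
                 * Map at the user's cache colour only when the page is
                 * mapped to user space and its kernel-side dcache lines
                 * are not dirty; otherwise a plain atomic kmap is safe.
                 */
                if (page_mapcount(page) &&
                    !test_bit(PG_dcache_dirty, &page->flags))
                        return kmap_coherent(page, vaddr);

                return kmap_atomic(page);
        }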

  • This kills off the unrolled segment-based flushers on SH-4 and switches
    over to a generic unrolled approach derived from the writethrough segment
    flusher.

    Signed-off-by: Paul Mundt
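
    The flavour of a generic unrolled flusher, with the SH-4 details
    treated as assumptions (32-byte lines, ocbwb as the per-line
    writeback op):

        #include <linux/types.h>

        #define CACHE_LINE_BYTES 32     /* SH-4 line size (assumption) */

        static inline void __flush_line(void *p)
        {
                /* write back one operand cache line at address p */
                __asm__ __volatile__("ocbwb @%0" : : "r" (p) : "memory");
        }

        static void flush_region_unrolled(void *start, size_t len)
        {
                char *p = start;
                char *end = p + len;

                /* Four lines per trip to amortize the loop overhead. */
                while (p < end) {
                        __flush_line(p + 0 * CACHE_LINE_BYTES);
                        __flush_line(p + 1 * CACHE_LINE_BYTES);
                        __flush_line(p + 2 * CACHE_LINE_BYTES);
                        __flush_line(p + 3 * CACHE_LINE_BYTES);
                        p += 4 * CACHE_LINE_BYTES;
                }
        }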

  • PHYSADDR() runs into issues in 32-bit mode when we do not have the
    legacy P1/P2 areas mapped, as such, we need to use page_to_phys()
    directly, which also happens to do the right thing in legacy 29-bit mode.

    Signed-off-by: Paul Mundt
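
    In code, the substitution is simply:

        #include <linux/mm.h>
        #include <asm/io.h>

        static unsigned long phys_of(struct page *page)
        {
                /*
                 * PHYSADDR(page_address(page)) assumed an identity-mapped
                 * P1; page_to_phys() derives the address from the struct
                 * page itself and works in both 29-bit and 32-bit modes.
                 */
                return page_to_phys(page);
        }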

  • The i-cache flush in the case of VM_EXEC was added way back when as a
    sanity measure, and in practice we only care about evicting aliases from
    the d-cache. As a result, it's possible to drop the i-cache flush
    completely here.

    Careful profiling has also shown that all of the work associated with
    hunting down aliases and doing ranged flushing ends up generating more
    overhead than simply blasting away the entire dcache, particularly when
    there are many mm's to iterate over. As a result of that, just move back
    to flush_dcache_all() in these cases, which restores the old behaviour
    and vastly simplifies the path.

    Additionally, on platforms without aliases at all, this can simply be
    nopped out. Presently the alias check lives in the SH-4 specific
    version, but it holds for all of the platforms, so move the check up to
    a generic location, as sketched below. This cuts down quite a bit on
    superfluous cacheop IPIs.

    Signed-off-by: Paul Mundt
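
    A sketch of the hoisted check, loosely after the generic SH cache
    code (cacheop_on_each_cpu() and the n_aliases field follow SH naming;
    treat the details as assumptions):

        #include <linux/mm.h>
        #include <asm/processor.h>      /* boot_cpu_data */

        void flush_cache_mm_sketch(struct mm_struct *mm)
        {
                /* Generic location: non-aliasing parts bail out before
                 * any cacheop IPI is raised. */
                if (boot_cpu_data.dcache.n_aliases == 0)
                        return;

                /* Aliasing parts: blast the whole dcache rather than
                 * walking every VMA hunting for aliased lines. */
                cacheop_on_each_cpu(local_flush_cache_mm, mm, 1);
        }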

  • There was quite a lot of tab->space damage done here by an earlier
    patch; clean it up once and for all.

    Signed-off-by: Paul Mundt


22 Jul, 2009

1 commit

  • This inverts the delayed dcache flush a bit to be more in line with other
    platforms. At the same time this also gives us the ability to do some
    more optimizations and cleanup. Now that the update_mmu_cache() callsite
    only tests for the bit, the implementation can gradually be split out and
    made generic, rather than relying on special implementations for each of
    the peculiar CPU types.

    SH7705 in 32kB mode and SH-4 still need slightly different handling, but
    this is something that can remain isolated in the varying page copy/clear
    routines. On top of that, SH-X3 is dcache coherent, so there is no need
    to bother with any of these tests in the PTEAEX version of
    update_mmu_cache(); that version is killed off entirely.

    Signed-off-by: Paul Mundt
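
    The delayed-flush test this converges on, roughly (PG_dcache_dirty
    aliases PG_arch_1 on SH, and the 01 Dec, 2010 entry above later
    inverts this to PG_dcache_clean); a sketch, not the literal patch:

        #include <linux/mm.h>
        #include <asm/cacheflush.h>     /* SH: declares __flush_purge_region() */

        #define PG_dcache_dirty PG_arch_1   /* SH aliasing (assumption) */

        void update_mmu_cache_old_sketch(struct vm_area_struct *vma,
                                         unsigned long address, pte_t pte)
        {
                unsigned long pfn = pte_pfn(pte);
                struct page *page;

                if (!pfn_valid(pfn))
                        return;

                page = pfn_to_page(pfn);

                /*
                 * flush_dcache_page() only marks the page dirty when it
                 * has no user mapping yet; the actual flush is deferred
                 * to here, when the page is faulted in.
                 */
                if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                        __flush_purge_region(page_address(page), PAGE_SIZE);
        }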

