02 Apr, 2010
1 commit
-
While the MMUCR.URB and ITLB/UTLB differentiation works fine for all SH-4
and later TLBs, these features are absent on SH-3. This splits out
local_flush_tlb_all() into SH-4 and PTEAEX copies while restoring the
old SH-3 one, subsequently fixing up the build.

This will probably want some further reordering and tidying in the
future, but that's out of scope at present.

Signed-off-by: Paul Mundt
26 Jan, 2010
1 commit
-
The old ctrl in/out routines are non-portable and unsuitable for
cross-platform use. While drivers/sh has already been sanitized, there
is still quite a lot of code that is not. This converts the arch/sh/ bits
over, which permits us to flag the routines as deprecated whilst still
building with -Werror for the architecture code, and to ensure that
future users are not added.

Signed-off-by: Paul Mundt
21 Jan, 2010
1 commit
-
Now that cached_to_uncached works as advertised in 32-bit mode and we're
never going to be able to map < 16MB anyway, there's no need for the
special uncached section. Kill it off.

Signed-off-by: Paul Mundt
19 Jan, 2010
1 commit
-
Presently this is duplicated between tlb-sh4 and tlb-pteaex. Split the
helpers out into a generic tlb-urb that can be used by any parts
equipped with MMUCR.URB.

At the same time, move the SH-5 code out-of-line, as we require single
global state for DTLB entry wiring.

Signed-off-by: Paul Mundt
16 Jan, 2010
1 commit
-
Provide a new extended page flag, _PAGE_WIRED, and an SH-4 implementation
for wiring TLB entries, and use it in the fixmap code path so that we can
wire the fixmap TLB entry.

Signed-off-by: Matt Fleming
01 Sep, 2009
1 commit
-
Conflicts:
arch/sh/mm/cache-sh4.c
20 Aug, 2009
1 commit
-
Signed-off-by: Michael Trimarchi
Signed-off-by: Paul Mundt
29 Jul, 2009
1 commit
-
The TLB miss fast-path presently calls in to update_mmu_cache() to
set up the entry, and does so with a NULL vma. Check for vma validity
in the __update_tlb() ptrace checks.

Signed-off-by: Paul Mundt
28 Jul, 2009
1 commit
-
This splits out a separate __update_cache()/__update_tlb() for
update_mmu_cache() to wrap into. This lets us share the common
__update_cache() bits while keeping special __update_tlb() handling
broken out.

Signed-off-by: Paul Mundt
22 Jul, 2009
1 commit
-
This inverts the delayed dcache flush a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.

SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off too.

Signed-off-by: Paul Mundt