30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming their availability. As
    this conversion needs to touch a large number of source files, the
    following script is used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. gfp.h if only gfp is
    used, slab.h if slab is used. (A sketch of the resulting edit
    follows this list.)

    * When the script inserts a new include, it looks at the include
    blocks and tries to place the new include so that its order conforms
    to its surroundings. It is put in the include block which contains
    core kernel includes, in the same order that the rest are ordered:
    alphabetical, Christmas tree, reverse-Christmas-tree, or at the end
    if there doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints
    an error message indicating which .h file needs to be added to the
    file.
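
    For illustration, the kind of edit the script produces looks like
    the following; the file contents and identifiers here are invented,
    not taken from the patch. A file that calls kzalloc()/kfree() gains
    an explicit slab.h include, slotted into its alphabetical include
    block:

      #include <linux/module.h>
      #include <linux/percpu.h>
      #include <linux/slab.h>	/* added by the script: kzalloc/kfree */

      struct foo {
              int val;
      };

      static struct foo *foo_alloc(void)
      {
              /* GFP_KERNEL comes from gfp.h, which slab.h pulls in */
              return kzalloc(sizeof(struct foo), GFP_KERNEL);
      }

      static void foo_free(struct foo *f)
      {
              kfree(f);
      }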

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion;
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This
    step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed,
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs, requiring slab.h to be added manually (see the wrapper sketch
    after this list).

    5. The script was run on all .h files, but without automatically
    editing them, as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored, as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles), and a few
    more options had to be turned off depending on the arch to make
    things build (like ipr on powerpc/64, which failed due to a missing
    writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.
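
    The wrappers mentioned in step 4 follow roughly the pattern below
    (a simplified sketch, not the exact kernel code): libc-style names
    defined on top of the slab allocator, which only compile if slab.h
    is pulled in explicitly.

      #include <linux/slab.h>	/* the include added by hand in step 4 */

      #define malloc(size)	kmalloc(size, GFP_KERNEL)
      #define free(ptr)		kfree(ptr)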

    Given that I had only a couple of failures from the build tests in
    step 7, I'm fairly confident about the coverage of this conversion
    patch. If there is a breakage, it's likely to be something in one
    of the arch headers, which should be easily discoverable on most
    builds of the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo

02 Mar, 2010

1 commit

  • This implements a fairly significant overhaul of the dynamic PMB mapping
    code. The primary change here is that the PMB gets its own VMA that
    follows the uncached mapping and we attempt to be a bit more intelligent
    with dynamic sizing, multi-entry mapping, and so forth.
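
    A minimal sketch of the multi-entry sizing idea (the helper and its
    behavior are illustrative assumptions, not the actual code): a
    request is covered greedily with the PMB's fixed entry sizes, so one
    logical mapping may span several PMB entries.

      /* Illustrative only: greedily cover a request with PMB sizes */
      static const unsigned long pmb_sizes[] = {
              512 << 20, 128 << 20, 64 << 20, 16 << 20,
      };

      static int pmb_entries_needed(unsigned long size)
      {
              int i, entries = 0;

              for (i = 0; i < ARRAY_SIZE(pmb_sizes); i++) {
                      while (size >= pmb_sizes[i]) {
                              size -= pmb_sizes[i];
                              entries++;
                      }
              }
              return entries;
      }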

    Signed-off-by: Paul Mundt

    Paul Mundt

17 Feb, 2010

1 commit

    Both the store queue API and the PMB remapping take unsigned long
    for their pgprot flags, which cuts off the extended protection bits.
    In the case of the PMB this isn't really a problem, since the cache
    attribute bits that we care about are all in the lower 32 bits, but
    we convert it anyway just to be safe. The store queue remapping, on
    the other hand, depends on the extended prot bits for enabling
    userspace access to the mappings.
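
    Schematically (simplified types, not the real SH headers), the
    hazard is that a 64-bit pgprot routed through an unsigned long
    parameter loses its upper half, which is exactly where the extended
    bits live:

      /* Schematic only: simplified pgprot_t, not the real definition */
      typedef struct {
              unsigned long long pgprot;	/* 64 bits when X2TLB is on */
      } pgprot_t;
      #define pgprot_val(x)	((x).pgprot)

      /* BAD: on 32-bit SH, 'flags' silently drops pgprot bits 32..63 */
      void remap_area_old(unsigned long phys, unsigned long flags);

      /* GOOD: carry the full pgprot_t to where the entry is written */
      void remap_area_new(unsigned long phys, pgprot_t prot);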

    Signed-off-by: Paul Mundt

    Paul Mundt

29 Jan, 2010

1 commit


19 Jan, 2010

2 commits

    This is already taken care of in the top-level ioremap, and now that
    no one should be calling ioremap_fixed() directly, we can simply
    pass the mapping displacement in as an additional argument.
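
    A hedged sketch of the flow (the ioremap_fixed() signature and the
    pgprot used here are assumptions for illustration): the top-level
    ioremap computes the sub-page displacement once and hands it down.

      /* Illustrative only: names and signature are assumed */
      void __iomem *my_ioremap(unsigned long phys_addr, unsigned long size)
      {
              unsigned long offset = phys_addr & ~PAGE_MASK;

              phys_addr &= PAGE_MASK;		/* page-align the base */
              size = PAGE_ALIGN(size + offset);

              /* the displacement travels as an argument */
              return ioremap_fixed(phys_addr, offset, size,
                                   PAGE_KERNEL_NOCACHE);
      }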

    Signed-off-by: Paul Mundt

    Paul Mundt
    Presently 'flags', which is only 32 bits wide, gets passed around a
    lot between the various ioremap helpers and implementations. In the
    X2TLB case we use 64-bit pgprots, which presently results in the
    upper 32 bits being chopped off (and these handily include our
    read/write/exec permissions).

    As such, we convert everything internally to using pgprot_t directly
    and simply convert over with pgprot_val() where needed. With this in
    place, transparent fixmap utilization for early ioremap works as
    expected.
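
    The conversion pattern, sketched with a hypothetical helper name:
    pgprot_t travels through every helper intact, and pgprot_val()
    unpacks it only at the point that actually programs the hardware
    entry.

      /* Hypothetical helper: pgprot_t in, raw bits out at the bottom */
      static void set_tlb_entry(unsigned long vaddr, unsigned long phys,
                                pgprot_t prot)
      {
              unsigned long long flags = pgprot_val(prot);	/* all 64 bits */

              /* ... program the PTE/TLB entry with 'flags' ... */
      }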

    Signed-off-by: Paul Mundt

    Paul Mundt

18 Jan, 2010

3 commits


28 Jan, 2008

1 commit


04 Jun, 2007

1 commit


13 Feb, 2007

1 commit


09 Dec, 2006

1 commit


06 Dec, 2006

1 commit

  • This adds some preliminary support for the SH-X2 MMU, used by
    newer SH-4A parts (particularly SH7785).

    This MMU implements a 'compat' mode with SH-X MMUs and an
    'extended' mode for SH-X2 extended features. Extended features
    include additional page sizes (8kB, 4MB, 64MB), as well as the
    addition of page execute permissions.

    The extended mode attributes are placed in a second data array,
    which requires us to switch to 64-bit PTEs when in X2 mode.

    With the addition of the exec perms, we also overhaul the mmap
    prots somewhat, now that it's possible to handle them more
    intelligently.
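
    Schematically (not the exact arch/sh definitions), the second data
    array makes a PTE two 32-bit words, so X2 mode switches pte_t to
    64 bits:

      /* Schematic of the split X2 PTE; field names are illustrative */
      #ifdef CONFIG_X2TLB
      typedef struct {
              unsigned long pte_low;	/* classic data: PPN, cache, prot */
              unsigned long pte_high;	/* extended data: e.g. exec perms */
      } pte_t;
      #define pte_val(x) \
              ((x).pte_low | ((unsigned long long)(x).pte_high << 32))
      #else
      typedef struct { unsigned long pte_low; } pte_t;
      #define pte_val(x)	((x).pte_low)
      #endif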

    Signed-off-by: Paul Mundt

    Paul Mundt

27 Sep, 2006

1 commit

    Inhibit mapping through page tables in __ioremap() for PCI memory
    apertures on SH7751 and SH7780-style PCI controllers; translation is
    not possible for these areas. For other users that map a small
    window in P1/P2 space, ioremap() already traps that case, so it
    should never make it to __ioremap().
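
    A minimal sketch of the short-circuit (the aperture bounds and the
    helper name are hypothetical): if the physical address sits inside
    the fixed PCI memory window, it is handed back untranslated instead
    of being mapped through page tables.

      /* Hypothetical bounds; the real aperture is controller-specific */
      #define PCI_MEM_APERTURE_START	0xfd000000UL
      #define PCI_MEM_APERTURE_END	0xfe000000UL

      static int is_pci_fixed_aperture(unsigned long phys,
                                       unsigned long size)
      {
              return phys >= PCI_MEM_APERTURE_START &&
                     phys + size <= PCI_MEM_APERTURE_END;
      }

      /*
       * In __ioremap(): if this hits, return the address 1:1 and skip
       * the page-table path entirely.
       */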

    Signed-off-by: Paul Mundt

    Paul Mundt

17 Jan, 2006

1 commit

  • This introduces a few changes in the way that the I/O routines are defined on
    SH, specifically so that things like the iomap API properly wrap through the
    machvec for board-specific quirks.

    In addition to this, the old p3_ioremap() work is converted to a more generic
    __ioremap() that will map through the PMB if it's available, or fall back on
    page tables for everything else.

    An alpha-like IO_CONCAT is also added so we can start to clean up
    the board-specific io.h mess, which will be handled in board update
    patches.
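
    The alpha-like IO_CONCAT is plain token pasting, roughly as below (a
    sketch of the pattern, not the final SH header): each board sets
    __IO_PREFIX, and the generic macros paste it onto the routine names.

      /* Sketch of the alpha-style indirection */
      #define __IO_PREFIX		generic
      #define _IO_CONCAT(a, b)	a ## _ ## b
      #define IO_CONCAT(a, b)		_IO_CONCAT(a, b)  /* expand args first */

      /* inb(port) becomes generic_inb(port); boards swap __IO_PREFIX */
      #define inb(port)		IO_CONCAT(__IO_PREFIX, inb)(port)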

    Signed-off-by: Paul Mundt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt

30 Oct, 2005

1 commit

  • First step in pushing down the page_table_lock. init_mm.page_table_lock has
    been used throughout the architectures (usually for ioremap): not to serialize
    kernel address space allocation (that's usually vmlist_lock), but because
    pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.

    Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
    architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
    and drop it when allocating a new one, to check lest a racing task already
    did. Similarly no page_table_lock in vmalloc's map_vm_area.

    Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
    user mms, which are converted only by a later patch, for now they have to lock
    differently according to whether or not it's init_mm.

    If sources get muddled, there's a danger that an arch source taking
    init_mm.page_table_lock will be mixed with common source also taking it (or
    neither take it). So break the rules and make another change, which should
    break the build for such a mismatch: remove the redundant mm arg from
    pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).

    Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
    used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
    pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
    map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
    took page_table_lock for no good reason.
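
    Condensed, the new convention looks like the sketch below
    (simplified from the behaviour described above, not verbatim kernel
    code): the helper takes init_mm.page_table_lock itself, rechecks for
    a racing allocator, and the redundant mm argument is gone from the
    signature.

      /* Simplified sketch of the reworked kernel-pagetable allocator */
      pte_t *pte_alloc_kernel(pmd_t *pmd, unsigned long address)
      {
              if (pmd_none(*pmd)) {
                      pte_t *new = pte_alloc_one_kernel(&init_mm, address);

                      if (!new)
                              return NULL;

                      spin_lock(&init_mm.page_table_lock);
                      if (pmd_present(*pmd))	/* raced: another task won */
                              pte_free_kernel(new);
                      else
                              pmd_populate_kernel(&init_mm, pmd, new);
                      spin_unlock(&init_mm.page_table_lock);
              }
              return pte_offset_kernel(pmd, address);
      }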

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds