25 May, 2011

2 commits

  • For m32r, N_NORMAL_MEMORY represents all nodes that have present memory
    since it does not support HIGHMEM. This patch sets the bit at the time
    the node is initialized.

    If N_NORMAL_MEMORY is not accurate, slub may encounter errors since it
    uses this nodemask to set up per-cache kmem_cache_node data structures.
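
    A minimal sketch of the fix's idea, using the generic node-state helpers
    (the function name and its placement in m32r's init path are hypothetical):

        #include <linux/mmzone.h>       /* node_present_pages() */
        #include <linux/nodemask.h>     /* node_set_state(), N_NORMAL_MEMORY */

        /* hypothetical hook called while the node is being initialized */
        static void __init m32r_mark_normal_memory(int nid)
        {
                /* without HIGHMEM, any node with present memory is "normal" */
                if (node_present_pages(nid))
                        node_set_state(nid, N_NORMAL_MEMORY);
        }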

    Signed-off-by: David Rientjes
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Fold all the mmu_gather rework patches into one for submission

    Signed-off-by: Peter Zijlstra
    Reported-by: Hugh Dickins
    Cc: Benjamin Herrenschmidt
    Cc: David Miller
    Cc: Martin Schwidefsky
    Cc: Russell King
    Cc: Paul Mundt
    Cc: Jeff Dike
    Cc: Richard Weinberger
    Cc: Tony Luck
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Namhyung Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

31 Mar, 2011

1 commit


05 Jun, 2010

1 commit

  • As explained in commit 1c0fe6e3bd ("mm: invoke oom-killer from page
    fault"), we want to call the architecture-independent oom killer when
    getting an unexplained OOM from handle_mm_fault, rather than simply
    killing current.
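
    A sketch of the resulting pattern in an arch fault handler (labels and
    locking are illustrative of the common out_of_memory path):

        out_of_memory:
                up_read(&mm->mmap_sem);
                if (!user_mode(regs))
                        goto no_context;
                /* let the generic OOM killer choose a victim instead of
                 * unconditionally killing current */
                pagefault_out_of_memory();
                return;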

    Signed-off-by: Nick Piggin
    Acked-by: David Rientjes
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include those
    headers directly instead of assuming availability (an illustrative
    before/after appears after the script description below). As this
    conversion needs to touch a large number of source files, the following
    script is
    used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. gfp.h if only gfp is used,
    slab.h if slab is used.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.
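
    For illustration, the kind of edit this produces in a hypothetical file
    that used slab facilities while relying on the implicit inclusion:

        /* before: slab.h arrived implicitly via percpu.h */
        #include <linux/module.h>

        /* after: include what is actually used */
        #include <linux/module.h>
        #include <linux/slab.h>         /* kmalloc(), kfree() */

        static void *example_buf;

        static int example_init(void)
        {
                example_buf = kmalloc(64, GFP_KERNEL);
                return example_buf ? 0 : -ENOMEM;
        }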

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while for others it was more appropriate
    to add it to an implementation .h or embedding .c file. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed;
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as a bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of
    the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

21 Feb, 2010

1 commit

  • On VIVT ARM, when we have multiple shared mappings of the same file
    in the same MM, we need to ensure that we have coherency across all
    copies. We do this via make_coherent() by making the pages
    uncacheable.

    This used to work fine, until we allowed highmem with highpte - we
    now have a page table which is mapped as required, and is not available
    for modification via update_mmu_cache().

    Ralf Baechle suggested getting rid of the PTE value passed to
    update_mmu_cache():

    On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
    to construct a pointer to the pte again. Passing a pte_t * is much
    more elegant. Maybe we might even replace the pte argument with the
    pte_t?

    Ben Herrenschmidt would also like the pte pointer for PowerPC:

    Passing the ptep in there is exactly what I want. I want that
    -instead- of the PTE value, because I have issue on some ppc cases,
    for I$/D$ coherency, where set_pte_at() may decide to mask out the
    _PAGE_EXEC.

    So, pass in the mapped page table pointer into update_mmu_cache(), and
    remove the PTE value, updating all implementations and call sites to
    suit.
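
    The interface change, sketched:

        /* old: the PTE value is passed by value */
        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t pte);

        /* new: a pointer to the mapped page table entry is passed instead */
        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t *ptep);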

    Includes a fix from Stephen Rothwell:

    sparc: fix fallout from update_mmu_cache API change

    Signed-off-by: Stephen Rothwell

    Acked-by: Benjamin Herrenschmidt
    Signed-off-by: Russell King

    Russell King
     

04 Oct, 2009

2 commits

    In case CONFIG_DISCONTIGMEM is set, the memory size of the system was
    always determined by CONFIG_MEMORY_SIZE and was not changeable.

    This patch fixes set_memory() of arch/m32r/mm/discontig.c so that
    we can specify memory size by the "mem=" kernel parameter.

    Signed-off-by: Hirokazu Takata

    Hirokazu Takata
     
    Fix the pmd_bad check code of tme_handler (the TLB Miss Exception handler).
    The correct _KERNPG_TABLE value is not 0x263 (= 611) but 0x163.

    Signed-off-by: Hirokazu Takata

    Hirokazu Takata
     

22 Sep, 2009

1 commit

  • Commit 96177299416dbccb73b54e6b344260154a445375 ("Drop free_pages()")
    modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
    int'. This made the casts to 'unsigned long' in most callers superfluous,
    so remove them.
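
    A typical caller-side change, sketched:

        /* before: cast needed while nr_free_pages() returned unsigned int */
        printk(KERN_INFO "%lu pages of RAM\n", (unsigned long)nr_free_pages());

        /* after: the cast is superfluous */
        printk(KERN_INFO "%lu pages of RAM\n", nr_free_pages());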

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Geert Uytterhoeven
    Reviewed-by: Christoph Lameter
    Acked-by: Ingo Molnar
    Acked-by: Russell King
    Acked-by: David S. Miller
    Acked-by: Kyle McMartin
    Acked-by: WANG Cong
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Haavard Skinnemoen
    Cc: Mikael Starvik
    Cc: "Luck, Tony"
    Cc: Hirokazu Takata
    Cc: Ralf Baechle
    Cc: David Howells
    Acked-by: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Cc: Chris Zankel
    Cc: Michal Simek
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Geert Uytterhoeven
     

22 Jun, 2009

1 commit

  • This allows the callers to now pass down the full set of FAULT_FLAG_xyz
    flags to handle_mm_fault(). All callers have been (mechanically)
    converted to the new calling convention, there's almost certainly room
    for architectures to clean up their code and then add FAULT_FLAG_RETRY
    when that support is added.
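
    A sketch of the mechanical caller conversion (variable names are
    illustrative of a typical arch fault handler):

        /* before: a plain write_access argument */
        fault = handle_mm_fault(mm, vma, address, write);

        /* after: the full FAULT_FLAG_xyz set is passed down */
        fault = handle_mm_fault(mm, vma, address,
                                write ? FAULT_FLAG_WRITE : 0);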

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

17 Jun, 2009

1 commit

    ALLOC_WMARK_MIN, ALLOC_WMARK_LOW and ALLOC_WMARK_HIGH determine whether
    pages_min, pages_low or pages_high is used as the zone watermark when
    allocating the pages. Two branches in the allocator hotpath determine
    which watermark to use.

    This patch uses the flags as an array index into a watermark array that is
    indexed with WMARK_* defines accessed via helpers. All call sites that
    use zone->pages_* are updated to use the helpers for accessing the values
    and the array offsets for setting.
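
    A sketch of the call-site conversion to the helpers:

        /* before: separate zone fields */
        if (free_pages <= zone->pages_low)
                wakeup_kswapd(zone, order);

        /* after: one array indexed by WMARK_*, accessed via a helper */
        if (free_pages <= low_wmark_pages(zone))    /* zone->watermark[WMARK_LOW] */
                wakeup_kswapd(zone, order);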

    Signed-off-by: Mel Gorman
    Reviewed-by: Christoph Lameter
    Cc: KOSAKI Motohiro
    Cc: Pekka Enberg
    Cc: Peter Zijlstra
    Cc: Nick Piggin
    Cc: Dave Hansen
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

14 Sep, 2008

1 commit

  • PFN_PHYS, as its name suggests, turns a pfn into a physical address.
    However, it is a macro which just operates on its argument without
    modifying its type. pfns are typed unsigned long, but an unsigned
    long may not be long enough to hold a physical address (32-bit systems
    with more than 32 bits of physical address).

    Make sure we cast to phys_addr_t to return a complete result.
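
    The fix, sketched:

        /* before: the shift result is truncated to unsigned long */
        #define PFN_PHYS(x)     ((x) << PAGE_SHIFT)

        /* after: widen to phys_addr_t before shifting */
        #define PFN_PHYS(x)     ((phys_addr_t)(x) << PAGE_SHIFT)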

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Ingo Molnar

    Jeremy Fitzhardinge
     

27 Jul, 2008

1 commit

  • Remove arch-specific show_mem() in favor of the generic version.

    This also removes the following redundant information display:

    - free swap pages, printed by show_swap_cache_info()
    - pages in swapcache, printed by show_swap_cache_info()

    where show_mem() calls show_free_areas(), which calls
    show_swap_cache_info().

    Signed-off-by: Johannes Weiner
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

25 Jul, 2008

3 commits

  • Almost all users of this field need a PFN instead of a physical address,
    so replace node_boot_start with node_min_pfn.
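
    A typical caller change, sketched:

        /* before: the PFN had to be derived from a physical address */
        start_pfn = PFN_DOWN(bdata->node_boot_start);

        /* after: the minimum PFN is stored directly */
        start_pfn = bdata->node_min_pfn;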

    [Lee.Schermerhorn@hp.com: fix spurious BUG_ON() in mark_bootmem()]
    Signed-off-by: Johannes Weiner
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • free_area_init_node() gets passed in the node id as well as the node
    descriptor. This is redundant as the function can trivially get the node
    descriptor itself by means of NODE_DATA() and the node's id.

    I checked all the users and NODE_DATA() seems to be usable everywhere
    from where this function is called.
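
    The interface change, sketched:

        /* before: node id and node descriptor passed redundantly */
        free_area_init_node(nid, NODE_DATA(nid), zones_size,
                            start_pfn, zholes_size);

        /* after: the descriptor is looked up internally via NODE_DATA(nid) */
        free_area_init_node(nid, zones_size, start_pfn, zholes_size);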

    Signed-off-by: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • There are a lot of places that define either a single bootmem descriptor or an
    array of them. Use only one central array with MAX_NUMNODES items instead.
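
    A sketch of the consolidation (the central array's name and placement are
    assumptions):

        /* before: each arch (or node) defined its own descriptor */
        static bootmem_data_t node_bootmem_data;

        /* after: one central array in core mm code, indexed by node id */
        bootmem_data_t bootmem_node_data[MAX_NUMNODES] __initdata;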

    Signed-off-by: Johannes Weiner
    Acked-by: Ralf Baechle
    Cc: Ingo Molnar
    Cc: Richard Henderson
    Cc: Russell King
    Cc: Tony Luck
    Cc: Hirokazu Takata
    Cc: Geert Uytterhoeven
    Cc: Kyle McMartin
    Cc: Paul Mackerras
    Cc: Paul Mundt
    Cc: David S. Miller
    Cc: Yinghai Lu
    Cc: Christoph Lameter
    Cc: Mel Gorman
    Cc: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

08 Feb, 2008

1 commit

  • This patchset adds a flags variable to reserve_bootmem() and uses the
    BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions
    between crashkernel area and already used memory.

    This patch:

    Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE.
    If that flag is set, the function returns with -EBUSY if the memory has
    already been reserved. This is to avoid conflicts.

    Because that code runs before SMP initialisation, there's no race condition
    inside reserve_bootmem_core().
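
    A hedged sketch of the new calling convention in a crashkernel
    reservation path (variable names are illustrative):

        int ret;

        ret = reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE);
        if (ret < 0) {
                /* -EBUSY: the region is already in use, so give up */
                printk(KERN_INFO "crashkernel reservation failed - "
                       "memory is in use\n");
                return;
        }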

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: fix powerpc build]
    Signed-off-by: Bernhard Walle
    Cc: "Eric W. Biederman"
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bernhard Walle
     

20 Oct, 2007

1 commit

  • is_init() is an ambiguous name for the pid==1 check. Split it into
    is_global_init() and is_container_init().

    A cgroup init has its tsk->pid == 1.

    A global init also has its tsk->pid == 1 and its active pid namespace
    is the init_pid_ns. But rather than check the active pid namespace,
    compare the task structure with 'init_pid_ns.child_reaper', which is
    initialized during boot to the /sbin/init process and never changes.
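
    A sketch of the described check (the exact helper bodies in the final
    patch may differ):

        #include <linux/pid_namespace.h>        /* init_pid_ns */

        /* global init: the boot-time /sbin/init, child reaper of init_pid_ns */
        static inline int is_global_init(struct task_struct *tsk)
        {
                return tsk == init_pid_ns.child_reaper;
        }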

    Changelog:

    2.6.22-rc4-mm2-pidns1:
    - Use 'init_pid_ns.child_reaper' to determine if a given task is the
    global init (/sbin/init) process. This would improve performance
    and remove dependence on the task_pid().

    2.6.21-mm2-pidns2:

    - [Sukadev Bhattiprolu] Changed is_container_init() calls in {powerpc,
    ppc,avr32}/traps.c for the _exception() call to is_global_init().
    This way, we kill only the cgroup if the cgroup's init has a
    bug rather than force a kernel panic.

    [akpm@linux-foundation.org: fix comment]
    [sukadev@us.ibm.com: Use is_global_init() in arch/m32r/mm/fault.c]
    [bunk@stusta.de: kernel/pid.c: remove unused exports]
    [sukadev@us.ibm.com: Fix capability.c to work with threaded init]
    Signed-off-by: Serge E. Hallyn
    Signed-off-by: Sukadev Bhattiprolu
    Acked-by: Pavel Emelianov
    Cc: Eric W. Biederman
    Cc: Cedric Le Goater
    Cc: Dave Hansen
    Cc: Herbert Poetzel
    Cc: Kirill Korotaev
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     

17 Oct, 2007

1 commit

  • We have had complaints where a threaded application is left in a bad state
    after one of its threads is killed when we hit a VM: out_of_memory
    condition.

    Killing just one of the process threads can leave the application in a bad
    state, whereas killing the entire process group would allow for the
    application to restart, or be otherwise handled, and makes it very obvious
    that something has gone wrong.

    This change allows the entire process group to be taken down, rather
    than just the one thread.
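
    In the arch fault handlers' OOM path this amounts to, roughly:

        /* before: only the faulting thread is killed */
        do_exit(SIGKILL);

        /* after: take down the entire thread group */
        do_group_exit(SIGKILL);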

    Signed-off-by: Will Schmidt
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Russell King
    Cc: Ian Molton
    Cc: Haavard Skinnemoen
    Cc: Mikael Starvik
    Cc: David Howells
    Cc: Andi Kleen
    Cc: "Luck, Tony"
    Cc: Hirokazu Takata
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Cc: Ralf Baechle
    Cc: Kyle McMartin
    Cc: Matthew Wilcox
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: Heiko Carstens
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Cc: Kazumoto Kojima
    Cc: Richard Curnow
    Cc: William Lee Irwin III
    Cc: "David S. Miller"
    Cc: Chris Zankel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Will Schmidt
     

20 Jul, 2007

1 commit

  • This patch completes Linus's wish that the fault return codes be made into
    bit flags, which I agree makes everything nicer. This requires
    all handle_mm_fault callers to be modified (possibly the modifications
    should go further and do things like fault accounting in handle_mm_fault --
    however that would be for another patch).
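
    A sketch of the caller conversion in a typical arch fault handler:

        /* before: enumerated return values */
        switch (handle_mm_fault(mm, vma, address, write)) {
        case VM_FAULT_MINOR:
                tsk->min_flt++;
                break;
        case VM_FAULT_MAJOR:
                tsk->maj_flt++;
                break;
        case VM_FAULT_SIGBUS:
                goto do_sigbus;
        default:
                goto out_of_memory;
        }

        /* after: bit flags tested individually */
        fault = handle_mm_fault(mm, vma, address, write);
        if (unlikely(fault & VM_FAULT_ERROR)) {
                if (fault & VM_FAULT_OOM)
                        goto out_of_memory;
                else if (fault & VM_FAULT_SIGBUS)
                        goto do_sigbus;
                BUG();
        }
        if (fault & VM_FAULT_MAJOR)
                tsk->maj_flt++;
        else
                tsk->min_flt++;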

    [akpm@linux-foundation.org: fix alpha build]
    [akpm@linux-foundation.org: fix s390 build]
    [akpm@linux-foundation.org: fix sparc build]
    [akpm@linux-foundation.org: fix sparc64 build]
    [akpm@linux-foundation.org: fix ia64 build]
    Signed-off-by: Nick Piggin
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Russell King
    Cc: Ian Molton
    Cc: Bryan Wu
    Cc: Mikael Starvik
    Cc: David Howells
    Cc: Yoshinori Sato
    Cc: "Luck, Tony"
    Cc: Hirokazu Takata
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Cc: Greg Ungerer
    Cc: Matthew Wilcox
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: Heiko Carstens
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Cc: Kazumoto Kojima
    Cc: Richard Curnow
    Cc: William Lee Irwin III
    Cc: "David S. Miller"
    Cc: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Cc: Miles Bader
    Cc: Chris Zankel
    Acked-by: Kyle McMartin
    Acked-by: Haavard Skinnemoen
    Acked-by: Ralf Baechle
    Acked-by: Andi Kleen
    Signed-off-by: Andrew Morton
    [ Still apparently needs some ARM and PPC loving - Linus ]
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

11 May, 2007

1 commit

  • Fix the tlb-miss handler (tme_handler) to check _PAGE_PRESENT bit
    in order to handle file-mapped or swapped-out pages correctly.

    This patch is required to fix unexpected page errors for m32r.

    Signed-off-by: Hitoshi Yamamoto
    Signed-off-by: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hirokazu Takata
     

09 May, 2007

1 commit


12 Feb, 2007

3 commits

    Part of a long-forgotten patch:
    http://groups.google.com/group/fa.linux.kernel/msg/e98e941ce1cf29f6?dmode=source
    Since then, m32r has grabbed two copies.

    Leave the s390 copy because of the absence of CONFIG_VT, but remove
    references to the non-existent timerlist_lock. ia64 also loses timerlist_lock.

    Signed-off-by: Alexey Dobriyan
    Acked-by: Martin Schwidefsky
    Cc: Andi Kleen
    Cc: "Luck, Tony"
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill Korotaev
     
  • Cosmetic updates and trivial fixes of m32r arch-dependent files.
    - Remove RCS ID strings and trailing white lines
    - Other misc. cosmetic updates

    Signed-off-by: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hirokazu Takata
     
  • Fix do_page_fault and update_mmu_cache.

    * Fix do_page_fault (vmalloc_fault:) to pass error_code correctly
    to update_mmu_cache by using a thread-fault code for all m32r chips.

    * Fix update_mmu_cache for the OPSP chip
    - The #ifdef CONFIG_CHIP_OPSP portion is a workaround for OPSP;
    add a notfound-case operation to update_mmu_cache for OPSP,
    as on other m32r chips.
    - Fix pte_data that was not initialized if no entry found.

    Signed-off-by: Kazuhiro Inaoka
    Signed-off-by: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hirokazu Takata
     

09 Dec, 2006

1 commit

    Don't mask the lower 12 bits of the page fault address.

    In the current m32r kernel implementation, we use an access exception
    to detect page faults.

    This patch fixes ace_handler (the access exception handler) for m32r. In
    order to check the userspace address in do_page_fault, we have to pass the
    full 32-bit address to do_page_fault.

    Signed-off-by: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hirokazu Takata
     

08 Dec, 2006

1 commit


04 Oct, 2006

1 commit


01 Oct, 2006

1 commit


30 Sep, 2006

1 commit

  • This is an updated version of Eric Biederman's is_init() patch.
    (http://lkml.org/lkml/2006/2/6/280). It applies cleanly to 2.6.18-rc3 and
    replaces a few more instances of ->pid == 1 with is_init().

    Further, is_init() checks pid and thus removes dependency on Eric's other
    patches for now.

    Eric's original description:

    There are a lot of places in the kernel where we test for init
    because we give it special properties. Most significantly init
    must not die. This results in code all over the kernel testing
    ->pid == 1.

    Introduce is_init to capture this case.

    With multiple pid spaces for all of the cases affected we are
    looking for only the first process on the system, not some other
    process that has pid == 1.
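
    A sketch of the helper and its use (the call site shown is illustrative):

        static inline int is_init(struct task_struct *tsk)
        {
                return tsk->pid == 1;
        }

        /* callers change from an open-coded pid check ... */
        if (current->pid == 1)
                panic("Attempted to kill init!");
        /* ... to */
        if (is_init(current))
                panic("Attempted to kill init!");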

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Sukadev Bhattiprolu
    Cc: Dave Hansen
    Cc: Serge Hallyn
    Cc: Cedric Le Goater
    Acked-by: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sukadev Bhattiprolu
     

26 Sep, 2006

1 commit


01 Jul, 2006

2 commits


28 Mar, 2006

3 commits

  • Just about every architecture defines some macros to do operations on pfns.
    They're all virtually identical. This patch consolidates all of them.

    One minor glitch is that at least i386 uses them in a very skeletal header
    file. To keep away from #include dependency hell, I stuck the new
    definitions in a new, isolated header.

    Of all of the implementations, sh64 is the only one that varied by a bit.
    It used some masks to ensure that any sign-extension got ripped away before
    the arithmetic is done. This has been posted to the sh64 maintainers and
    the development list.

    Compiles on x86, x86_64, ia64 and ppc64.
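
    Roughly, the consolidated helpers in the new header look like:

        #define PFN_ALIGN(x)    (((unsigned long)(x) + (PAGE_SIZE - 1)) & PAGE_MASK)
        #define PFN_UP(x)       (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
        #define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)
        #define PFN_PHYS(x)     ((x) << PAGE_SHIFT)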

    Signed-off-by: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen
     
    Because nodes were, by default, linked into pgdat_list in *reverse* order,
    some architectures had to sort the list themselves.

    for_each_pgdat() is gone; for_each_online_pgdat() uses node_online_map, which
    doesn't need to be sorted.

    This patch removes the code for sorting pgdat.

    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Replace for_each_pgdat() with for_each_online_pgdat().
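
    The mechanical replacement, sketched:

        struct pglist_data *pgdat;

        /* before */
        for_each_pgdat(pgdat) {
                /* ... per-node work ... */
        }

        /* after: iterates node_online_map, no sorting required */
        for_each_online_pgdat(pgdat) {
                /* ... per-node work ... */
        }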

    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

22 Mar, 2006

1 commit

  • set_page_count usage outside mm/ is limited to setting the refcount to 1.
    Remove set_page_count from outside mm/, and replace those users with
    init_page_count() and set_page_refcounted().

    This allows more debug checking, and tighter control on how code is allowed
    to play around with page->_count.
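
    The caller-side change, sketched:

        /* before: an arbitrary refcount write from outside mm/ */
        set_page_count(page, 1);

        /* after: intent-revealing helpers */
        init_page_count(page);          /* fresh page, refcount becomes 1 */
        set_page_refcounted(page);      /* page known to currently be at 0 */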

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

07 Jan, 2006

2 commits

    This patch fixes the cache memory parameter setting for the M32104 target.
    So far, its performance seems to have been degraded due to an incorrect
    cache parameter setting.

    * arch/m32r/boot/setup.S: Set the SFR (Special Function Registers) region
    to be non-cacheable explicitly.
    * arch/m32r/mm/cache.c: Fix cache flushing routines not to switch off
    the M32104 cache.

    Signed-off-by: Hayato Fujiwara
    Signed-off-by: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hirokazu Takata
     
  • This patch is for supporting a new target platform, Renesas M32104UT
    evaluation board.

    The M32104UT is an eval board based on an uT-Engine specification. This board
    has an MMU-less M32R family processor, M32104.
    http://www-wa0.personal-media.co.jp/pmc/archive/te/te_m32104_e.pdf

    This board is one of the most popular M32R platforms, so we have ported
    Linux/M32R to it.

    Signed-off-by: Naoto Sugai
    Signed-off-by: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hirokazu Takata
     

30 Oct, 2005

1 commit

    pgdat->node_size_lock is basically only needed in one place in the normal
    code: show_mem(), which is the arch-specific sysrq-m printing function.

    Strictly speaking, the architectures not doing memory hotplug do not need this
    locking in show_mem(). However, they are all included for completeness. This
    should also make any future consolidation of all of the implementations a
    little more straightforward.

    This lock is also held in the sparsemem code during a memory removal, as
    sections are invalidated. This is the place where pfn_valid() is made false
    for a memory area that's being removed. The lock is only required when doing
    pfn_valid() operations on memory for which the caller does not already hold a
    reference on the page, such as in show_mem().
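
    A hedged sketch of the kind of walk this protects in show_mem() (the
    accounting shown is illustrative):

        unsigned long i, flags, present = 0;

        pgdat_resize_lock(pgdat, &flags);
        for (i = 0; i < pgdat->node_spanned_pages; i++) {
                if (!pfn_valid(pgdat->node_start_pfn + i))
                        continue;       /* section may have been removed */
                present++;
        }
        pgdat_resize_unlock(pgdat, &flags);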

    Signed-off-by: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen