09 Dec, 2011

1 commit

  • ia64 used early_node_map[] just to prime free_area_init_nodes(). Now
    memblock can be used for the same purpose and early_node_map[] is
    scheduled to be dropped. Use memblock instead.
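
    A minimal sketch of the conversion pattern, assuming the usual shape of
    the patches in this series (the function name and exact hunk are
    illustrative, not quoted):

        /* was: add_active_range(nid, start_pfn, end_pfn); the same range
         * now goes straight into memblock */
        static int __init register_active_range(u64 start, u64 end, int nid)
        {
                if (start < end)
                        memblock_add_node(__pa(start), end - start, nid);
                return 0;
        }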

    Signed-off-by: Tejun Heo
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: linux-ia64@vger.kernel.org

    Tejun Heo
     

25 May, 2011

2 commits

  • Fold all the mmu_gather rework patches into one for submission

    Signed-off-by: Peter Zijlstra
    Reported-by: Hugh Dickins
    Cc: Benjamin Herrenschmidt
    Cc: David Miller
    Cc: Martin Schwidefsky
    Cc: Russell King
    Cc: Paul Mundt
    Cc: Jeff Dike
    Cc: Richard Weinberger
    Cc: Tony Luck
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Namhyung Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Architectures that implement their own show_mem() function did not pass
    the filter argument to show_free_areas(), so they could not avoid emitting
    the state of nodes that are disallowed in the current context. This patch
    passes the filter argument to show_free_areas() so those nodes are now
    avoided.

    This patch also removes the show_free_areas() wrapper around
    __show_free_areas() and converts existing callers to pass an empty filter.

    ia64 emits additional information for each node, so skip_free_areas_zone()
    must be made global to filter disallowed nodes; for this use case it is
    also converted to take a nid argument rather than a zone.
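
    A hedged sketch of the resulting call pattern in an arch show_mem();
    the ia64 body and the post-rename helper name skip_free_areas_node()
    are assumptions, not quotes from the patch:

        void show_mem(unsigned int filter)
        {
                int nid;

                show_free_areas(filter);  /* __show_free_areas() wrapper is gone */

                for_each_online_node(nid) {
                        if (skip_free_areas_node(filter, nid))
                                continue;  /* node disallowed in this context */
                        /* ... emit ia64's extra per-node information ... */
                }
        }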

    Signed-off-by: David Rientjes
    Cc: Russell King
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: Kyle McMartin
    Cc: Helge Deller
    Cc: James Bottomley
    Cc: "David S. Miller"
    Cc: Guan Xuetao
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

21 May, 2011

1 commit

  • Commit e66eed651fd1 ("list: remove prefetching from regular list
    iterators") removed the include of prefetch.h from list.h, which
    uncovered several cases that had apparently relied on that rather
    obscure header file dependency.

    So this fixes things up a bit, using

    grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
    grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')

    to guide us in finding files that either need
    inclusion, or have it despite not needing it.

    There are more of them around (mostly network drivers), but this gets
    many core ones.

    Reported-by: Stephen Rothwell
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

25 Mar, 2011

1 commit

  • Commit ddd588b5dd55 ("oom: suppress nodes that are not allowed from
    meminfo on oom kill") moved lib/show_mem.o out of lib/lib.a, which
    resulted in build warnings on all architectures that implement their own
    versions of show_mem():

    lib/lib.a(show_mem.o): In function `show_mem':
    show_mem.c:(.text+0x1f4): multiple definition of `show_mem'
    arch/sparc/mm/built-in.o:(.text+0xd70): first defined here

    The fix is to remove __show_mem() and add its argument to show_mem() in
    all implementations to prevent this breakage.

    Architectures that implement their own show_mem() actually don't do
    anything with the argument yet, but they could be made to filter nodes
    that aren't allowed in the current context in the future just like the
    generic implementation.
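
    In short, the interface change looks like this (signatures sketched,
    not quoted from the patch):

        /* before: void show_mem(void); plus __show_mem(unsigned int filter) */
        /* after: a single, filtered definition per arch */
        void show_mem(unsigned int filter);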

    Reported-by: Stephen Rothwell
    Reported-by: James Bottomley
    Suggested-by: Andrew Morton
    Signed-off-by: David Rientjes
    Signed-off-by: Linus Torvalds

    David Rientjes
     

14 Jan, 2011

1 commit

  • pte alloc routines must wait for split_huge_page if the pmd is not present
    and not null (i.e. pmd_trans_splitting). The additional branches are
    optimized away at compile time by pmd_trans_splitting if the config option
    is off. However we must pass the vma down in order to know the anon_vma
    lock to wait for.
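
    A hedged sketch of the added check, using the THP helpers of this era
    (the exact placement in the pte alloc path is assumed):

        if (unlikely(pmd_trans_splitting(*pmd))) {
                /* the new vma argument tells us which anon_vma lock
                 * split_huge_page holds, so we can sleep on it */
                wait_split_huge_page(vma->anon_vma, pmd);
                /* caller then re-checks the pmd and retries */
        }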

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Andrea Arcangeli
    Acked-by: Rik van Riel
    Acked-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     

01 Jul, 2010

1 commit

  • Typo in down_spin() meant it only read the low 32 bits of the
    "serve" value, instead of the full 64 bits. This results in the
    system hanging when the values in ticket/serve get larger than
    32-bits. A big enough system running the right test can hit this
    in just a few hours.
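
    An illustrative C sketch of the bug class; ss and my_ticket are
    hypothetical stand-ins, the real down_spin() being ia64-specific:

        struct spinaphore {
                unsigned long ticket;
                unsigned long serve;
        };

        /* buggy: a 32-bit load truncates serve, so the comparison wraps
         * once ticket/serve grow past 2^32 */
        while ((unsigned int)ss->serve != (unsigned int)my_ticket)
                cpu_relax();

        /* fixed: compare the full 64-bit value */
        while (ss->serve != my_ticket)
                cpu_relax();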

    Broken since 883a3acf5b0d4782ac35981227a0d094e8b44850
    [IA64] Re-implement spinaphores using ticket lock concepts

    Reported via IRC by Bjorn Helgaas

    Signed-off-by: Tony Luck

    Tony Luck
     

31 May, 2010

1 commit

  • This partially reverts commit 4ec37de89d8c758ee8115e0e64b3f994910789ee
    ("[IA64] Fix build breakage"), since the commit that made it necessary
    got reverted earlier (see commit 35926ff5fba8, 'Revert "cpusets:
    randomize node rotor used in cpuset_mem_spread_node()"')

    Even if we ever re-introduce this, there is no reason to make
    __node_random be some architecture-specific function.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

28 May, 2010

1 commit

  • In commit 0ac0c0d0f837c499afd02a802f9cf52d3027fa3b
    cpusets: randomize node rotor used in cpuset_mem_spread_node()

    Jack Steiner fixed a problem with too many small tasks being
    assigned to node 0. Copy his code to ia64 to avoid a build error.

    arch/ia64/kernel/smpboot.c:641: error: ‘cpu_to_node_map’ undeclared (first use in this function)

    In commit 3bccd996276b108c138e8176793a26ecef54d573
    numa: ia64: use generic percpu var numa_node_id() implementation

    Lee Schermerhorn added some set_numa_node() calls - but these
    only work on CONFIG_NUMA=y configurations. Surround the calls
    with #ifdef CONFIG_NUMA.
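
    A hedged sketch of the guard (the exact call sites in
    arch/ia64/kernel/smpboot.c are assumed):

        #ifdef CONFIG_NUMA
                /* set_numa_node() is only available on CONFIG_NUMA=y */
                set_numa_node(cpu_to_node(cpuid));
        #endif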

    Signed-off-by: Tony Luck

    Tony Luck
     

19 May, 2010

1 commit

  • As explained in commit 1c0fe6e3bd, we want to call the
    architecture-independent oom killer when getting an unexplained OOM from
    handle_mm_fault, rather than simply killing current.
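
    A hedged sketch of the resulting pattern in an arch fault handler
    (variable names and exact placement assumed):

        fault = handle_mm_fault(mm, vma, address, flags);
        if (unlikely(fault & VM_FAULT_OOM)) {
                up_read(&mm->mmap_sem);
                /* let the generic OOM killer pick a victim rather than
                 * unconditionally killing current (see 1c0fe6e3bd) */
                pagefault_out_of_memory();
                return;
        }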

    Acked-by: David Rientjes
    Signed-off-by: Nick Piggin
    Signed-off-by: Tony Luck

    npiggin@suse.de
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include those
    headers directly instead of assuming their availability. As this
    conversion needs to touch a large number of source files, the following
    script is used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update the includes so that
    only the necessary ones are present, i.e. gfp.h if only gfp is used,
    slab.h if slab is used.

    * When the script inserts a new include, it looks at the include
    blocks and tries to place the new include so that its order conforms
    to its surroundings. It is put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree - or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This
    step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them, as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored, as stuff from gfp.h is usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that they could be applied
    as a separate patch and serve as a bisection point.

    Given that I had only a couple of failures from the tests in step 7,
    I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers, which should be easily discoverable on most builds of the
    specific arch.
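
    A minimal before/after illustration of the per-file edit the script
    performs (file and function hypothetical):

        /* before: this compiled only because percpu.h dragged in slab.h */
        #include <linux/percpu.h>

        /* after: the dependency is stated directly */
        #include <linux/percpu.h>
        #include <linux/slab.h>

        static void *example_alloc(void)
        {
                /* kmalloc()/GFP_KERNEL are now self-supplied via slab.h */
                return kmalloc(64, GFP_KERNEL);
        }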

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

07 Mar, 2010

1 commit

  • The old anon_vma code can lead to scalability issues with heavily forking
    workloads. Specifically, each anon_vma will be shared between the parent
    process and all its child processes.

    In a workload with 1000 child processes and a VMA with 1000 anonymous
    pages per process that get COWed, this leads to a system with a million
    anonymous pages in the same anon_vma, each of which is mapped in just one
    of the 1000 processes. However, the current rmap code needs to walk them
    all, leading to O(N) scanning complexity for each page.

    This can result in systems where one CPU is walking the page tables of
    1000 processes in page_referenced_one, while all other CPUs are stuck on
    the anon_vma lock. This leads to catastrophic failure for a benchmark
    like AIM7, where the total number of processes can reach into the tens
    of thousands. Real workloads are still a factor of 10 less
    process-intensive than AIM7, but they are catching up.

    This patch changes the way anon_vmas and VMAs are linked, which allows us
    to associate multiple anon_vmas with a VMA. At fork time, each child
    process gets its own anon_vmas, in which its COWed pages will be
    instantiated. The parents' anon_vma is also linked to the VMA, because
    non-COWed pages could be present in any of the children.

    This reduces rmap scanning complexity to O(1) for the pages of the 1000
    child processes, with O(N) complexity for at most 1/N pages in the system.
    This reduces the average scanning cost in heavily forking workloads from
    O(N) to 2.

    The only real complexity in this patch stems from the fact that linking a
    VMA to anon_vmas now involves memory allocations. This means vma_adjust
    can fail, if it needs to attach a VMA to anon_vma structures. This in
    turn means error handling needs to be added to the calling functions.

    A second source of complexity is that, because there can be multiple
    anon_vmas, the anon_vma linking in vma_adjust can no longer be done under
    "the" anon_vma lock. To prevent the rmap code from walking up an
    incomplete VMA, this patch introduces the VM_LOCK_RMAP VMA flag. This bit
    flag uses the same slot as the NOMMU VM_MAPPED_COPY, with an ifdef in mm.h
    to make sure it is impossible to compile a kernel that needs both symbolic
    values for the same bitflag.
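
    For reference, a sketch of the per-VMA link object this patch
    introduces (field comments are editorial, not from the patch):

        struct anon_vma_chain {
                struct vm_area_struct *vma;      /* the VMA this link belongs to */
                struct anon_vma *anon_vma;       /* one of the VMA's anon_vmas */
                struct list_head same_vma;       /* all links of one VMA */
                struct list_head same_anon_vma;  /* all links of one anon_vma */
        };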

    Some test results:

    Without the anon_vma changes, when AIM7 hits around 9.7k users (on a test
    box with 16GB RAM and not quite enough IO), the system ends up running
    >99% in system time, with every CPU on the same anon_vma lock in the
    pageout code.

    With these changes, AIM7 hits the cross-over point around 29.7k users.
    This happens with ~99% IO wait time; there never seems to be any spike
    in system time. The anon_vma lock contention appears to be resolved.

    [akpm@linux-foundation.org: cleanups]
    Signed-off-by: Rik van Riel
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Cc: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     

03 Mar, 2010

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: add __percpu sparse annotations to what's left
    percpu: add __percpu sparse annotations to fs
    percpu: add __percpu sparse annotations to core kernel subsystems
    local_t: Remove leftover local.h
    this_cpu: Remove pageset_notifier
    this_cpu: Page allocator conversion
    percpu, x86: Generic inc / dec percpu instructions
    local_t: Move local.h include to ringbuffer.c and ring_buffer_benchmark.c
    module: Use this_cpu_xx to dynamically allocate counters
    local_t: Remove cpu_local_xx macros
    percpu: refactor the code in pcpu_[de]populate_chunk()
    percpu: remove compile warnings caused by __verify_pcpu_ptr()
    percpu: make accessors check for percpu pointer in sparse
    percpu: add __percpu for sparse.
    percpu: make access macros universal
    percpu: remove per_cpu__ prefix.

    Linus Torvalds
     

09 Feb, 2010

1 commit

  • This has been broken since May 2008 when Al Viro killed altroot support.
    Since nobody has complained, it would appear that there are no users of
    this code (a plausible theory, since the main OSVs that support ia64
    prefer to use the IA32-EL software emulation).

    Signed-off-by: Tony Luck

    Tony Luck
     

02 Feb, 2010

1 commit


08 Jan, 2010

1 commit

  • __per_cpu_idtrs is statically allocated ... on CONFIG_NR_CPUS=4096
    systems it hogs 16MB of memory. This is way too much for a quite
    probably unused facility (only KVM uses dynamic TR registers).

    Change to an array of pointers, and allocate entries as needed on a
    per-cpu basis. Change the name too, as the __per_cpu_ prefix is
    confusing (this isn't a classic per-cpu object).
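
    A hedged sketch of the shape of the change (allocation flags and the
    exact code are assumed):

        /* before: statically sized -- 16MB with CONFIG_NR_CPUS=4096 */
        struct ia64_tr_entry __per_cpu_idtrs[NR_CPUS][2][IA64_TR_ALLOC_MAX];

        /* after: an array of pointers, filled in on first use per cpu */
        struct ia64_tr_entry *ia64_idtrs[NR_CPUS];

        if (!ia64_idtrs[cpu])
                ia64_idtrs[cpu] = kmalloc(2 * IA64_TR_ALLOC_MAX *
                                          sizeof(struct ia64_tr_entry),
                                          GFP_KERNEL);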

    Signed-off-by: Tony Luck

    Tony Luck
     

07 Jan, 2010

1 commit

  • Make sure the compiler won't do weird things with limits, e.g. fetching
    them twice may return two different values after writable limits are
    implemented.

    That is, either use the rlimit helpers added in
    3e10e716abf3c71bdb5d86b8f507f9e72236c9cd
    or ACCESS_ONCE if they are not applicable.
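
    Hedged examples of the two patterns (the surrounding code is
    hypothetical):

        /* preferred: the helper fetches current's limit exactly once */
        unsigned long lim = rlimit(RLIMIT_DATA);

        /* where the helpers don't fit, force a single fetch explicitly */
        unsigned long cur =
                ACCESS_ONCE(current->signal->rlim[RLIMIT_DATA].rlim_cur);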

    Signed-off-by: Jiri Slaby
    Cc: Fenghua Yu
    Signed-off-by: Tony Luck

    Jiri Slaby
     

05 Jan, 2010

1 commit


17 Dec, 2009

1 commit

  • * git://git.infradead.org/iommu-2.6:
    implement early_io{re,un}map for ia64
    Revert "Intel IOMMU: Avoid memory allocation failures in dma map api calls"
    intel-iommu: ignore page table validation in pass through mode
    intel-iommu: Fix oops with intel_iommu=igfx_off
    intel-iommu: Check for an RMRR which ends before it starts.
    intel-iommu: Apply BIOS sanity checks for interrupt remapping too.
    intel-iommu: Detect DMAR in hyperspace at probe time.
    dmar: Fix build failure without NUMA, warn on bogus RHSA tables and don't abort
    iommu: Allocate dma-remapping structures using numa locality info
    intr_remap: Allocate intr-remapping table using numa locality info
    dmar: Allocate queued invalidation structure using numa locality info
    dmar: support for parsing Remapping Hardware Static Affinity structure

    Linus Torvalds
     

16 Dec, 2009

1 commit


15 Dec, 2009

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

29 Oct, 2009

2 commits

  • Now that the return from alloc_percpu is compatible with the address
    of per-cpu vars, it makes sense to hand around the address of per-cpu
    variables. To make this sane, we remove the per_cpu__ prefix we created
    to stop people accidentally using these vars directly.

    Now that we have sparse, we can use that (next patch).

    tj: * Updated to convert stuff which was missed by or added after the
    original patch.

    * Kill per_cpu_var() macro.
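
    A hedged illustration of what the rename permits (symbols
    hypothetical):

        DEFINE_PER_CPU(int, hits);     /* used to emit the symbol per_cpu__hits */
        int *dyn = alloc_percpu(int);  /* dynamically allocated percpu object */

        /* the same accessors now work for both, no prefix games needed */
        per_cpu(hits, cpu)++;
        *per_cpu_ptr(dyn, cpu) = per_cpu(hits, cpu);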

    Signed-off-by: Rusty Russell
    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter

    Rusty Russell
     
  • This patch updates percpu-related symbols in ia64 so that percpu
    symbols are unique and don't clash with local symbols. This serves
    two purposes: decreasing the possibility of global percpu symbol
    collisions, and allowing the per_cpu__ prefix to be dropped from
    percpu symbols.

    * arch/ia64/kernel/setup.c: s/cpu_info/ia64_cpu_info/

    Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
    which cause name clashes" patch.

    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter
    Cc: Rusty Russell
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: linux-ia64@vger.kernel.org

    Tejun Heo
     

10 Oct, 2009

1 commit


02 Oct, 2009

3 commits

  • Unlike other archs, ia64 reserves space for percpu areas during early
    memory initialization. These areas occupy a contiguous region indexed
    by cpu number on the contiguous memory model, or are grouped by node
    on the discontiguous memory model.

    As allocation and initialization are done by the arch code, all that
    setup_per_cpu_areas() needs to do is communicate the determined
    layout to the percpu allocator. This patch implements
    setup_per_cpu_areas() for both contig and discontig memory models and
    drops HAVE_LEGACY_PER_CPU_AREA.

    Please note that for the contig model, the allocation itself is
    modified only to allocate for possible cpus instead of NR_CPUS. As the
    dynamic percpu allocator can handle non-direct mappings, there's no
    reason to allocate memory for cpus which aren't possible.

    Signed-off-by: Tejun Heo
    Acked-by: Tony Luck
    Cc: Fenghua Yu
    Cc: linux-ia64

    Tejun Heo
     
  • cpu0 used a special percpu area reserved by the linker, __cpu0_per_cpu,
    which is set up early in boot by head.S. However, this doesn't
    guarantee that the area will be on the same node as cpu0, and the
    percpu area for cpu0 ends up very far away from the percpu areas for
    other cpus, which causes problems for the congruent percpu allocator.

    This patch makes percpu area initialization allocate the percpu area
    for cpu0 like any other cpu's and copy it from __cpu0_per_cpu, which
    now resides in the __init area. This means that for cpu0 the percpu
    area is first set up at __cpu0_per_cpu early by head.S and then moved
    to an area in the linear mapping during memory initialization; taking
    a pointer to percpu variables between head.S and memory initialization
    is therefore not allowed.

    Signed-off-by: Tejun Heo
    Acked-by: Tony Luck
    Cc: Fenghua Yu
    Cc: linux-ia64

    Tejun Heo
     
  • If CONFIG_VIRTUAL_MEM_MAP is enabled, ia64 defines the macro
    VMALLOC_END as the unsigned long variable vmalloc_end, which is
    adjusted to prepare room for the vmemmap. This becomes problematic if
    a local variable vmalloc_end is defined in some function (not very
    unlikely) and VMALLOC_END is used in that function - the function
    thinks it is referencing the global VMALLOC_END value but is actually
    referencing its own local vmalloc_end variable.

    There's no reason VMALLOC_END should be a macro. Just define it as an
    unsigned long variable if CONFIG_VIRTUAL_MEM_MAP is set to avoid nasty
    surprises.
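
    An illustration of the shadowing hazard (the function and helper are
    hypothetical):

        extern unsigned long vmalloc_end;   /* adjusted at boot for vmemmap */
        #define VMALLOC_END vmalloc_end     /* the old ia64 definition */

        extern unsigned long local_bound(void);   /* hypothetical helper */

        static int in_vmalloc_range(unsigned long addr)
        {
                unsigned long vmalloc_end = local_bound();  /* innocent local */

                /* expands to the *local* vmalloc_end, not the global limit */
                return addr < VMALLOC_END;
        }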

    Signed-off-by: Tejun Heo
    Acked-by: Tony Luck
    Cc: Fenghua Yu
    Cc: linux-ia64
    Cc: Christoph Lameter

    Tejun Heo
     

23 Sep, 2009

4 commits

  • For /proc/kcore, each arch registers its memory range by kclist_add().
    Usually,

    - range of physical memory
    - range of vmalloc area
    - text, etc...

    are registered, but "range of physical memory" has some troubles: it
    isn't updated at memory hotplug and it tends to include unnecessary
    memory holes. Now, /proc/iomem (kernel/resource.c) includes the
    required physical memory range information and is properly updated at
    memory hotplug. So it's better to avoid duplicating that information
    in arch-specific code and to rebuild the kclist for physical memory
    based on /proc/iomem.

    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Jiri Slaby
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Some 64-bit arches have a special segment for mapping kernel text. It
    should be entered in /proc/kcore in addition to the direct-linear-map
    and vmalloc areas. This patch unifies the KCORE_TEXT entries scattered
    across x86 and ia64.

    I'm not familiar with other archs (mips has its own even after this
    patch), but the range [_stext ... _end) is a valid area of text that
    is not in the direct-map area, so defining
    CONFIG_ARCH_PROC_KCORE_TEXT is all that needs to be done.

    Note: I left mips as it is now.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • For /proc/kcore, vmalloc areas are registered per arch, but all of
    them register the same range, [VMALLOC_START ... VMALLOC_END). This
    patch unifies them; with it, archs which have no kclist_add() hooks
    can see the vmalloc area correctly.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Presently, kclist_add() takes only a start address and size as its
    arguments. To make kclists dynamically reconfigurable, it's necessary
    to know which kclists are for System RAM and which are not.

    This patch adds the following kclist types:
    KCORE_RAM
    KCORE_VMALLOC
    KCORE_TEXT
    KCORE_OTHER

    This "type" is used in a patch following this for detecting KCORE_RAM.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

22 Sep, 2009

1 commit

  • Commit 96177299416dbccb73b54e6b344260154a445375 ("Drop free_pages()")
    modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
    int'. This made the casts to 'unsigned long' in most callers superfluous,
    so remove them.
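
    An example of the now-superfluous cast (the call site is
    hypothetical):

        /* before: nr_free_pages() returned unsigned int */
        printk("%luk free\n",
               (unsigned long)nr_free_pages() << (PAGE_SHIFT - 10));

        /* after: the cast is redundant */
        printk("%luk free\n", nr_free_pages() << (PAGE_SHIFT - 10));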

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Geert Uytterhoeven
    Reviewed-by: Christoph Lameter
    Acked-by: Ingo Molnar
    Acked-by: Russell King
    Acked-by: David S. Miller
    Acked-by: Kyle McMartin
    Acked-by: WANG Cong
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Haavard Skinnemoen
    Cc: Mikael Starvik
    Cc: "Luck, Tony"
    Cc: Hirokazu Takata
    Cc: Ralf Baechle
    Cc: David Howells
    Acked-by: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Cc: Chris Zankel
    Cc: Michal Simek
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Geert Uytterhoeven
     

22 Jun, 2009

1 commit

  • This allows the callers to now pass down the full set of FAULT_FLAG_xyz
    flags to handle_mm_fault(). All callers have been (mechanically)
    converted to the new calling convention, there's almost certainly room
    for architectures to clean up their code and then add FAULT_FLAG_RETRY
    when that support is added.
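
    A sketch of the mechanical conversion in each arch fault handler
    (variable names hypothetical):

        /* before: the last argument was a bare write_access flag */
        fault = handle_mm_fault(mm, vma, address, is_write);

        /* after: callers pass the full FAULT_FLAG_xyz set */
        fault = handle_mm_fault(mm, vma, address,
                                is_write ? FAULT_FLAG_WRITE : 0);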

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

18 Jun, 2009

1 commit

  • It is generally agreed that it would be beneficial for u64 to be an
    unsigned long long on all architectures. ia64 (in common with several
    other 64-bit architectures) currently uses unsigned long. Migrating
    piecemeal is too painful; this giant patch fixes all compilation warnings
    and errors that come as a result of switching to use int-ll64.h.

    Note that userspace will still see __u64 defined as unsigned long. This
    is important as it affects C++ name mangling.

    [Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use
    u64 for start/end rather than unsigned long]
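
    A hedged example of the class of warning the switch exposes (the
    helper is hypothetical):

        extern u64 read_hw_address(void);   /* hypothetical */
        u64 addr = read_hw_address();

        printk("addr=%lx\n", addr);    /* warns once u64 is unsigned long long */
        printk("addr=%llx\n", addr);   /* correct under int-ll64.h */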

    Signed-off-by: Matthew Wilcox
    Signed-off-by: Tony Luck

    Matthew Wilcox
     

16 Jun, 2009

1 commit


12 Jun, 2009

1 commit

  • It's theoretically possible that there are exception table entries
    which point into the (freed) init text of modules. These could cause
    future problems if other modules get loaded into that memory and cause
    an exception as we'd see the wrong fixup. The only case I know of is
    kvm-intel.ko (when CONFIG_CC_OPTIMIZE_FOR_SIZE=n).

    Amerigo fixed this long-standing FIXME in the x86 version, but this
    patch is more general.

    This implements trim_init_extable(); most archs are simple since they
    use the standard lib/extable.c sort code. Alpha and IA64 use relative
    addresses in their fixups, so their trimming is a slight variation.

    Sparc32 is unique; it doesn't seem to define ARCH_HAS_SORT_EXTABLE,
    yet it defines its own sort_extable() which overrides the one in lib.
    It doesn't sort, so we have to mark deleted entries instead of
    actually trimming them.
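
    A hedged sketch of the generic helper, close to the lib/extable.c
    version for archs with absolute fixups (alpha/ia64/sparc32 differ as
    noted above):

        void trim_init_extable(struct module *m)
        {
                /* the table is sorted, so init-text entries sit at the edges */
                while (m->num_exentries &&
                       within_module_init(m->extable[0].insn, m)) {
                        m->extable++;
                        m->num_exentries--;
                }
                while (m->num_exentries &&
                       within_module_init(m->extable[m->num_exentries - 1].insn, m))
                        m->num_exentries--;
        }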

    Inspired-by: Amerigo Wang
    Signed-off-by: Rusty Russell
    Cc: linux-alpha@vger.kernel.org
    Cc: sparclinux@vger.kernel.org
    Cc: linux-ia64@vger.kernel.org

    Rusty Russell
     

02 Apr, 2009

1 commit


01 Apr, 2009

1 commit


27 Mar, 2009

2 commits