18 Jul, 2008

3 commits


26 Jun, 2008

1 commit


20 May, 2008

2 commits

  • When a cpu really is stuck in the kernel, it can often be
    impossible to figure out which cpu is stuck where. The
    worst case is when the stuck cpu has interrupts disabled.

    Therefore, implement a global cpu state capture that uses
    SMP message interrupts which are not disabled by the
    normal IRQ enable/disable APIs of the kernel.

    As long as we can get a sysrq 'y' to the kernel, we can
    get a dump. Even if the console interrupt cpu is wedged,
    we can trigger it from userspace using /proc/sysrq-trigger.

    The output is made compact so that this facility is more
    useful on high cpu count systems, which is where this
    facility will likely find itself the most useful :)

    Signed-off-by: David S. Miller

  • This patch removes the CVS keywords that weren't updated for a long time
    from comments.

    Signed-off-by: Adrian Bunk
    Signed-off-by: David S. Miller


17 May, 2008

1 commit


12 May, 2008

1 commit

  • Read all of the OF memory and translation tables, then read
    the physical available memory list twice.

    When making these requests, OF can allocate more memory to
    do its job, which can remove pages from the available
    memory list.

    So fetch in all of the tables at once, and fetch the available
    list last to make sure we read a stable value.

    Signed-off-by: David S. Miller


07 May, 2008

1 commit


06 May, 2008

1 commit

  • The identical online_page() implementations from all architectures got
    moved to mm/memory_hotplug.c - except for the sparc64 one that even was
    dead code due to MEMORY_HOTPLUG not being available there.

    Signed-off-by: Adrian Bunk
    Signed-off-by: David S. Miller


30 Apr, 2008

1 commit


29 Apr, 2008

2 commits

  • Current limitations:

    1) On SMP, single stepping has some fundamental issues,
    shared with other software single-step architectures
    such as mips and arm.

    2) On 32-bit sparc we don't support SMP kgdb yet. That
    requires some reworking of the IPI mechanisms and
    infrastructure on that platform.

    Signed-off-by: David S. Miller

  • ext4 uses ZERO_PAGE(0) to zero out blocks. We need to export
    different symbols in different arches for the usage of ZERO_PAGE
    in modules.

    Signed-off-by: Aneesh Kumar K.V
    Acked-by: David S. Miller
    Signed-off-by: "Theodore Ts'o"


28 Apr, 2008

1 commit

  • NR_PAGEFLAGS specifies the number of page flags we are using. From that we
    can calculate the number of bits leftover that can be used for zone, node (and
    maybe the sections id). There is no need anymore for FLAGS_RESERVED if we use
    NR_PAGEFLAGS.

    Use the new methods to make NR_PAGEFLAGS available via the preprocessor.
    NR_PAGEFLAGS is used to calculate field boundaries in the page flags fields.
    These field widths have to be available to the preprocessor.

    Signed-off-by: Christoph Lameter
    Cc: David Miller
    Cc: Andy Whitcroft
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


24 Apr, 2008

10 commits


29 Mar, 2008

1 commit


26 Mar, 2008

3 commits


22 Mar, 2008

1 commit

  • Currently kernel images are limited to 8MB in size, and this causes
    problems especially when enabling features that take up a lot of
    kernel image space such as lockdep.

    The code now will align the kernel image size up to 4MB and map that
    many locked TLB entries. So, the only practical limitation is the
    number of available locked TLB entries which is 16 on Cheetah and 64
    on pre-Cheetah sparc64 cpus. Niagara cpus don't actually have hw
    locked TLB entry support. Rather, the hypervisor transparently
    provides support for "locked" TLB entries since it runs with physical
    addressing and does the initial TLB miss processing.

    Fully utilizing this change requires some help from SILO, a patch for
    which will be submitted to the maintainer. Essentially, SILO will
    only currently map up to 8MB for the kernel image and that needs to be
    increased.

    Note that neither this patch nor the SILO bits will help with network
    booting. The openfirmware code will only map up to a certain amount
    of kernel image during a network boot and there isn't much we can do
    about that other than to implement a layered network booting
    facility. Solaris has this, and calls it "wanboot", and we may
    implement something similar at some point.

    Signed-off-by: David S. Miller


29 Feb, 2008

1 commit


27 Feb, 2008

1 commit

  • Some parts of the kernel now perform *_user() accesses under
    set_fs(KERNEL_DS) that fault on purpose.

    See, for example, the code added by changeset
    a0c1e9073ef7428a14309cba010633a6cd6719ea ("futex: runtime enable pi
    and robust functionality").

    That trips up the ASI sanity checking we make in do_kernel_fault().

    Just remove it for now. Maybe we can add it back later with an added
    conditional which looks at the current get_fs() value.

    Signed-off-by: David S. Miller


25 Feb, 2008

1 commit

  • Fix following warnings:
    WARNING: vmlinux.o(.text+0x4f980): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()
    WARNING: vmlinux.o(.text+0x4f9cc): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()

    alloc_bootmem() is only used during early init, and for any subsequent
    call to kernel_map_range() the program logic avoids the call.
    So annotate kernel_map_range() with __ref to tell modpost to
    ignore the reference to a __init function.

    Signed-off-by: Sam Ravnborg
    Signed-off-by: David S. Miller


18 Feb, 2008

1 commit


13 Feb, 2008

1 commit


08 Feb, 2008

1 commit

  • This patchset adds a flags variable to reserve_bootmem() and uses the
    BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions
    between crashkernel area and already used memory.

    This patch:

    Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE.
    If that flag is set, the function returns with -EBUSY if the memory already
    has been reserved in the past. This is to avoid conflicts.

    Because that code runs before SMP initialisation, there's no race condition
    inside reserve_bootmem_core().

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: fix powerpc build]
    Signed-off-by: Bernhard Walle
    Cc:
    Cc: "Eric W. Biederman"
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


31 Jan, 2008

1 commit

  • Sparc64 has a way of providing the base address for the per cpu area of the
    currently executing processor in a global register.

    Sparc64 also provides a way to calculate the address of a per cpu area
    from a base address instead of performing an array lookup.

    Cc: David Miller
    Signed-off-by: Mike Travis
    Signed-off-by: Ingo Molnar


13 Dec, 2007

1 commit

  • This was caught and identified by Greg Onufer.

    Since we set up the 256M/4M bitmap table after taking over the trap
    table, it's possible for some 4M mappings to get loaded into the TLB
    beforehand which will later become 256M mappings.

    This can cause illegal TLB multiple-match conditions. Fix this by
    setting up the bitmap before we take over the trap table.

    Next, __flush_tlb_all() was not doing anything on hypervisor
    platforms. Fix by adding sun4v_mmu_demap_all() and calling it.

    Signed-off-by: David S. Miller


01 Nov, 2007

2 commits


27 Oct, 2007

1 commit