17 Oct, 2007

35 commits

  • Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Rework the generic block "cont" routines to handle the new aops.
    Supporting cont_prepare_write would take quite a lot of code, so remove
    it instead (all filesystems are later converted to the new routines).

    write_begin gets passed AOP_FLAG_CONT_EXPAND when called from
    generic_cont_expand, so filesystems can avoid the old hacks they used.

    Signed-off-by: Nick Piggin
    Cc: OGAWA Hirofumi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Introduce new write_begin and write_end aops. These are intended to
    replace prepare_write and commit_write with more flexible alternatives
    that are also able to avoid the buffered write deadlock problems
    efficiently (which prepare_write is unable to do).
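
    A sketch of the new hooks, roughly as they appear in struct
    address_space_operations for 2.6.24 (include/linux/fs.h has the
    authoritative definitions):

    int (*write_begin)(struct file *file, struct address_space *mapping,
                       loff_t pos, unsigned len, unsigned flags,
                       struct page **pagep, void **fsdata);
    int (*write_end)(struct file *file, struct address_space *mapping,
                     loff_t pos, unsigned len, unsigned copied,
                     struct page *page, void *fsdata);

    write_begin hands back a locked, prepared page in *pagep; write_end is
    told how many bytes were actually copied, which is what lets the caller
    handle short atomic usercopies without deadlocking.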

    [mark.fasheh@oracle.com: API design contributions, code review and fixes]
    [akpm@linux-foundation.org: various fixes]
    [dmonakhov@sw.ru: new aop block_write_begin fix]
    Signed-off-by: Nick Piggin
    Signed-off-by: Mark Fasheh
    Signed-off-by: Dmitriy Monakhov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Add an iterator data structure to operate over an iovec. Add usercopy
    operators needed by generic_file_buffered_write, and convert that function
    over.
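
    A sketch of the iterator as merged for 2.6.24 (compare struct iov_iter
    and its helpers in include/linux/fs.h and mm/filemap.c):

    struct iov_iter {
            const struct iovec *iov;   /* current segment */
            unsigned long nr_segs;     /* segments remaining */
            size_t iov_offset;         /* offset into current segment */
            size_t count;              /* total bytes remaining */
    };

    The usercopy operators built on it include iov_iter_copy_from_user_atomic()
    (takes no pagefaults) and iov_iter_fault_in_readable() (prefaults the
    source), with iov_iter_advance() stepping over however much was copied.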

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Modify the core write() code so that it won't take a pagefault while holding a
    lock on the pagecache page. There are a number of different deadlocks possible
    if we try to do such a thing:

    1. generic_buffered_write
    2. lock_page
    3. prepare_write
    4. unlock_page+vmtruncate
    5. copy_from_user
    6. mmap_sem(r)
    7. handle_mm_fault
    8. lock_page (filemap_nopage)
    9. commit_write
    10. unlock_page

    a. sys_munmap / sys_mlock / others
    b. mmap_sem(w)
    c. make_pages_present
    d. get_user_pages
    e. handle_mm_fault
    f. lock_page (filemap_nopage)

    2,8 - recursive deadlock if page is same
    2,8;2,8 - ABBA deadlock if page is different
    2,6;b,f - ABBA deadlock if page is same

    The solution is as follows:
    1. If we find the destination page is uptodate, continue as normal, but use
    atomic usercopies which do not take pagefaults and do not zero the uncopied
    tail of the destination. The destination is already uptodate, so we can
    commit_write the full length even if there was a partial copy: it does not
    matter that the tail was not modified, because if it is dirtied and written
    back to disk it will not cause any problems (uptodate *means* that the
    destination page is as new or newer than the copy on disk).

    1a. The above requires that fault_in_pages_readable correctly returns access
    information, because atomic usercopies cannot distinguish non-present
    pages in a readable mapping from the lack of a readable mapping.

    2. If we find the destination page is not uptodate, unlock it (this could be
    made slightly more optimal), then allocate a temporary page to copy the
    source data into. Relock the destination page and continue with the copy.
    However, instead of a usercopy (which might take a fault), copy the data
    from the pinned temporary page via the kernel address space.

    (also, rename maxlen to seglen, because it was confusing)

    This increases the CPU/memory copy cost by almost 50% on the affected
    workloads. That will be solved by introducing a new set of pagecache write
    aops in a subsequent patch.
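
    A sketch of the uptodate fast path (steps 1 and 1a above), using helper
    names of that era; treat it as illustrative rather than the exact
    filemap.c code:

    fault_in_pages_readable(buf, bytes);     /* 1a: prefault before locking */
    /* ...look up and lock the pagecache page... */
    pagefault_disable();                     /* faults now fail, never sleep */
    kaddr = kmap_atomic(page, KM_USER0);
    left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
    kunmap_atomic(kaddr, KM_USER0);
    pagefault_enable();
    copied = bytes - left;       /* page already uptodate: commit full length */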

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Allow an application to query the memories allowed by its context.

    Updated numa_memory_policy.txt to mention that applications can use this to
    obtain allowed memories for constructing valid policies.

    TODO: update out-of-tree libnuma wrapper[s], or maybe add a new
    wrapper--e.g., numa_get_mems_allowed() ?

    Also, update numa syscall man pages.

    Tested with memtoy V>=0.13.
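
    A minimal userspace sketch, assuming a libnuma/numaif.h recent enough to
    define MPOL_F_MEMS_ALLOWED (the flag this commit adds); otherwise the raw
    get_mempolicy syscall can be used:

    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned long mask[16] = { 0 };         /* up to 1024 node bits */

            /* mode/addr are unused with MPOL_F_MEMS_ALLOWED; the kernel
               fills 'mask' with the nodes this task may allocate from. */
            if (get_mempolicy(NULL, mask, sizeof(mask) * 8, NULL,
                              MPOL_F_MEMS_ALLOWED))
                    perror("get_mempolicy");
            else
                    printf("mems_allowed word 0: 0x%lx\n", mask[0]);
            return 0;
    }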

    Signed-off-by: Lee Schermerhorn
    Acked-by: Christoph Lameter
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • Move the definitions of struct mm_struct and struct vm_area_struct to
    include/mm_types.h. This allows defining more functions in asm/pgtable.h
    and friends as inline assemblies instead of macros. Compile tested on
    i386, powerpc, powerpc64, s390-32, s390-64 and x86_64.

    [aurelien@aurel32.net: build fix]
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Aurelien Jarno
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Martin Schwidefsky
     
  • Rather than sign direct radix-tree pointers with a special bit, sign the
    indirect one that hangs off the root. This means that, given a lookup_slot
    operation, the invalid result will be differentiated from the valid
    (previously, valid results could have the bit either set or clear).

    This does not affect slot lookups which occur under lock -- they can never
    return an invalid result. This is needed for the future lockless pagecache.
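
    Roughly the mechanism (compare lib/radix-tree.c and
    include/linux/radix-tree.h after this change):

    #define RADIX_TREE_INDIRECT_PTR 1  /* low bit tags the root's node pointer */

    static inline int radix_tree_is_indirect_ptr(void *ptr)
    {
            return (int)((unsigned long)ptr & RADIX_TREE_INDIRECT_PTR);
    }

    A legitimate lookup strips the bit before returning, so a caller that sees
    it set knows it raced with the tree growing or shrinking and must retry.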

    Signed-off-by: Nick Piggin
    Acked-by: Peter Zijlstra
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • The commit b5810039a54e5babf428e9a1e89fc1940fabff11 contains the note

    A last caveat: the ZERO_PAGE is now refcounted and managed with rmap
    (and thus mapcounted and counted towards shared rss). These writes to
    the struct page could cause excessive cacheline bouncing on big
    systems. There are a number of ways this could be addressed if it is
    an issue.

    And indeed this cacheline bouncing has shown up on large SGI systems.
    There was a situation where an Altix system was essentially livelocked
    tearing down ZERO_PAGE pagetables when an HPC app aborted during startup.
    This situation can be avoided in userspace, but it does highlight the
    potential scalability problem with refcounting ZERO_PAGE, and corner
    cases where it can really hurt (we don't want the system to livelock!).

    There are several broad ways to fix this problem:
    1. add back some special casing to avoid refcounting ZERO_PAGE
    2. per-node or per-cpu ZERO_PAGES
    3. remove the ZERO_PAGE completely

    I will argue for 3. The others should also fix the problem, but they
    result in more complex code than does 3, with little or no real benefit
    that I can see.

    Why? Inserting a ZERO_PAGE for anonymous read faults appears to be a
    false optimisation: if an application is performance critical, it would
    not be doing many read faults of new memory, or at least it could be
    expected to write to that memory soon afterwards. If cache or memory use
    is critical, it should not be working with a significant number of
    ZERO_PAGEs anyway (a more compact representation of zeroes should be
    used).

    As a sanity check -- measuring on my desktop system, there are never many
    mappings to the ZERO_PAGE (eg. 2 or 3), thus memory usage here should not
    increase much without it.

    When running a make -j4 kernel compile on my dual core system, there are
    about 1,000 mappings to the ZERO_PAGE created per second, but about 1,000
    ZERO_PAGE COW faults per second (less than 1 ZERO_PAGE mapping per second
    is torn down without being COWed). So removing ZERO_PAGE will save 1,000
    page faults per second when running kbuild, while keeping it only saves
    less than 1 page clearing operation per second. 1 page clear is cheaper
    than a thousand faults, presumably, so there isn't an obvious loss.

    Neither the logical argument nor these basic tests give a guarantee of no
    regressions. However, this is a reasonable opportunity to try to remove
    the ZERO_PAGE from the pagefault path. If it is found to cause regressions,
    we can reintroduce it and just avoid refcounting it.

    The /dev/zero ZERO_PAGE usage and TLB tricks also get nuked. I don't see
    much use to them except on benchmarks. All other users of ZERO_PAGE are
    converted just to use ZERO_PAGE(0) for simplicity. We can look at
    replacing them all and maybe ripping out ZERO_PAGE completely when we are
    more satisfied with this solution.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus "snif" Torvalds

    Nick Piggin
     
  • This gets rid of all kmalloc caches larger than page size. A kmalloc
    request larger than PAGE_SIZE/2 is going to be passed through to the page
    allocator. This works both inline, where we will call __get_free_pages
    instead of kmem_cache_alloc, and in __kmalloc.
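
    Roughly the resulting dispatch (compare include/linux/slub_def.h for
    2.6.24; details here are illustrative):

    static __always_inline void *kmalloc(size_t size, gfp_t flags)
    {
            if (__builtin_constant_p(size)) {
                    if (size > PAGE_SIZE / 2)       /* no kmalloc cache: */
                            return (void *)__get_free_pages(flags | __GFP_COMP,
                                                            get_order(size));
                    /* ...otherwise pick the matching kmalloc cache... */
            }
            return __kmalloc(size, flags);
    }

    __GFP_COMP keeps the pages together as one compound page, so kfree(),
    which spots non-slab objects by checking PageSlab, can release the whole
    allocation back to the page allocator.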

    kfree is modified to check if the object is in a slab page. If not then
    the page is freed via the page allocator instead. Roughly similar to what
    SLOB does.

    Advantages:
    - Reduces memory overhead for the kmalloc array
    - Large kmalloc operations are faster since they do not need to pass
      through the slab allocator to get to the page allocator.
    - Performance increase of 10%-20% on alloc and 50% on free for
      PAGE_SIZEd allocations. SLUB must call the page allocator for each
      alloc anyway, since the higher-order pages that previously allowed
      avoiding the page allocator calls are no longer reliably available.
      So we are basically removing useless slab allocator overhead.
    - Large kmallocs yield page-aligned objects, which is what SLAB did.
      Bad things like using page-sized kmalloc allocations to stand in for
      page allocator allocations can be transparently handled and are not
      distinguishable from page allocator uses.
    - Checking for too-large objects can be removed, since it is done by
      the page allocator.

    Drawbacks:
    - No accounting for large kmalloc slab allocations anymore
    - No debugging of large kmalloc slab allocations.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Convert some 'unsigned long' to pgoff_t.

    Signed-off-by: Fengguang Wu
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Remove VM_MAX_CACHE_HIT, MAX_RA_PAGES and MIN_RA_PAGES.

    Signed-off-by: Fengguang Wu
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Introduce radix_tree_next_hole(root, index, max_scan) to scan radix tree for
    the first hole. It will be used in interleaved readahead.

    The implementation is dumb and obviously correct. It can help debug (and
    document) a possible smarter implementation in the future.
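
    The loop is essentially this (a sketch of the dumb-and-correct version):

    unsigned long radix_tree_next_hole(struct radix_tree_root *root,
                                       unsigned long index,
                                       unsigned long max_scan)
    {
            unsigned long i;

            for (i = 0; i < max_scan; i++) {
                    if (!radix_tree_lookup(root, index))
                            break;          /* found the first hole */
                    index++;
                    if (index == 0)
                            break;          /* wrapped around the index space */
            }
            return index;
    }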

    Cc: Nick Piggin
    Signed-off-by: Fengguang Wu
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Combine the file_ra_state members
            unsigned long prev_index
            unsigned int prev_offset
    into
            loff_t prev_pos

    It is more consistent and better supports huge files.

    Thanks to Peter for the nice proposal!
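
    A sketch of the packing (the cast is what the shift-overflow fixup is
    about: the index must be widened to loff_t before shifting, not after):

    ra->prev_pos = (loff_t)index << PAGE_CACHE_SHIFT;
    ra->prev_pos |= offset;

    /* recover the page index of the previous access: */
    prev_index = (pgoff_t)(ra->prev_pos >> PAGE_CACHE_SHIFT);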

    [akpm@linux-foundation.org: fix shift overflow]
    Cc: Peter Zijlstra
    Signed-off-by: Fengguang Wu
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Fold file_ra_state.mmap_hit into file_ra_state.mmap_miss and make it an int.

    Signed-off-by: Fengguang Wu
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Use 'unsigned int' instead of 'unsigned long' for readahead sizes.

    This helps reduce memory consumption on 64bit CPU when a lot of files are
    opened.

    CC: Andi Kleen
    Signed-off-by: Fengguang Wu
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • This patch cleans up duplicate includes in
    include/linux/memory_hotplug.h

    Signed-off-by: Jesper Juhl
    Acked-by: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Juhl
     
  • Enable virtual memmap support for SPARSEMEM on PPC64 systems. Slice a 16th
    off the end of the linear mapping space and use that to hold the vmemmap.
    Uses the same size mapping as used in the linear 1:1 kernel mapping.

    [pbadari@gmail.com: fix warning]
    Signed-off-by: Andy Whitcroft
    Acked-by: Mel Gorman
    Cc: Christoph Lameter
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Badari Pulavarty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Whitcroft
     
  • Add SPARSEMEM_VMEMMAP support on sparc64.

    [apw@shadowen.org: style fixups]
    [apw@shadowen.org: vmemmap sparc64: convert to new config options]
    Signed-off-by: Andy Whitcroft
    Acked-by: Mel Gorman
    Acked-by: Christoph Lameter
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Miller
     
  • Equip IA64 sparsemem with a virtual memmap. This is similar to the existing
    CONFIG_VIRTUAL_MEM_MAP functionality for DISCONTIGMEM. It uses a PAGE_SIZE
    mapping.

    This is provided as a minimally intrusive solution. We split the 128TB
    VMALLOC area into two 64TB areas and use one for the virtual memmap.

    This should replace CONFIG_VIRTUAL_MEM_MAP long term.

    [apw@shadowen.org: convert to new helper based initialisation]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andy Whitcroft
    Acked-by: Mel Gorman
    Cc: "Luck, Tony"
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • x86_64 uses 2M page table entries to map its 1-1 kernel space. We also
    implement the virtual memmap using 2M page table entries, so there is no
    additional runtime overhead over FLATMEM; initialisation is slightly more
    complex. As FLATMEM still references memory to obtain the mem_map pointer
    and SPARSEMEM_VMEMMAP uses a compile time constant, SPARSEMEM_VMEMMAP
    should be superior.

    With this SPARSEMEM becomes the most efficient way of handling virt_to_page,
    pfn_to_page and friends for UP, SMP and NUMA on x86_64.

    [apw@shadowen.org: code resplit, style fixups]
    [apw@shadowen.org: vmemmap x86_64: ensure end of section memmap is initialised]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andy Whitcroft
    Acked-by: Mel Gorman
    Cc: Andi Kleen
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Convert the common vmemmap population into initialisation helpers for use by
    architecture vmemmap populators. All architectures implementing the
    SPARSEMEM_VMEMMAP variant supply an architecture-specific
    vmemmap_populate() initialiser, which may make use of the helpers.

    This allows us to clean up and remove the initialisation Kconfig entries.
    With this patch there is a single SPARSEMEM_VMEMMAP_ENABLE Kconfig option to
    indicate use of that variant.
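
    The per-architecture hook the generic code expects is roughly this (shape
    as merged for 2.6.24; see include/linux/mm.h and mm/sparse-vmemmap.c):

    int vmemmap_populate(struct page *start_page,
                         unsigned long nr_pages,    /* memmap pages to back */
                         int node);

    Architectures can build it from the generic helpers, e.g.
    vmemmap_populate_basepages() for PAGE_SIZE mappings, or
    vmemmap_alloc_block() when installing larger PMD mappings themselves.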

    Signed-off-by: Andy Whitcroft
    Acked-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Whitcroft
     
  • SPARSEMEM is a pretty nice framework that unifies quite a bit of code over all
    the arches. It would be great if it could be the default so that we can get
    rid of various forms of DISCONTIG and other variations on memory maps. So far
    what has hindered this are the additional lookups that SPARSEMEM introduces
    for virt_to_page and page_address. This goes so far that the code to do this
    has to be kept in a separate function and cannot be used inline.

  • This patch introduces a virtual memmap mode for SPARSEMEM, in which the
    memmap is mapped into a virtually contiguous area and only the active
    sections are physically backed. This allows virt_to_page, page_address
    and cohorts to become simple shift/add operations. No page flag fields,
    no table lookups, nothing involving memory is required.

    The two key operations pfn_to_page and page_to_pfn become:

    #define __pfn_to_page(pfn) (vmemmap + (pfn))
    #define __page_to_pfn(page) ((page) - vmemmap)

    By having a virtual mapping for the memmap we allow simple access without
    wasting physical memory. As kernel memory is typically already mapped 1:1
    this introduces no additional overhead. The virtual mapping must be big
    enough to allow a struct page to be allocated and mapped for all valid
    physical pages. This will make a virtual memmap difficult to use on 32 bit
    platforms that support 36 address bits.

    However, if there is enough virtual space available and the arch already maps
    its 1-1 kernel space using TLBs (e.g. IA64 and x86_64) then this
    technique makes SPARSEMEM lookups even more efficient than CONFIG_FLATMEM.
    FLATMEM needs to read the contents of the mem_map variable to get the start of
    the memmap and then add the offset to the required entry. vmemmap is a
    constant to which we can simply add the offset.

    This patch has the potential to allow us to make SPARSEMEM the default (and
    even the only) option for most systems. It should be optimal on UP, SMP and
    NUMA on most platforms. Then we may even be able to remove the other memory
    models: FLATMEM, DISCONTIG etc.

    [apw@shadowen.org: config cleanups, resplit code etc]
    [kamezawa.hiroyu@jp.fujitsu.com: Fix sparsemem_vmemmap init]
    [apw@shadowen.org: vmemmap: remove excess debugging]
    [apw@shadowen.org: simplify initialisation code and reduce duplication]
    [apw@shadowen.org: pull out the vmemmap code into its own file]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andy Whitcroft
    Acked-by: Mel Gorman
    Cc: "Luck, Tony"
    Cc: Andi Kleen
    Cc: "David S. Miller"
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • We have flags to indicate whether a section actually has a valid mem_map
    associated with it. This flag is never set and we rely solely on the present bit
    to indicate a section is valid. By definition a section is not valid if it
    has no mem_map and there is a window during init where the present bit is set
    but there is no mem_map, during which pfn_valid() will return true
    incorrectly.

    Use the existing SECTION_HAS_MEM_MAP flag to indicate the presence of a valid
    mem_map. Switch valid_section{,_nr} and pfn_valid() to this bit. Add new
    present_section{,_nr} and pfn_present() interfaces for those users who only
    care to know that a section is going to be valid.
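
    Roughly the resulting tests (compare include/linux/mmzone.h):

    static inline int present_section(struct mem_section *section)
    {
            return (section &&
                    (section->section_mem_map & SECTION_MARKED_PRESENT));
    }

    static inline int valid_section(struct mem_section *section)
    {
            return (section &&
                    (section->section_mem_map & SECTION_HAS_MEM_MAP));
    }

    pfn_valid() is built on valid_section(), closing the init window; code
    that only needs "will become valid" uses pfn_present() instead.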

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Andy Whitcroft
    Acked-by: Mel Gorman
    Cc: Christoph Lameter
    Cc: "Luck, Tony"
    Cc: Andi Kleen
    Cc: "David S. Miller"
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Whitcroft
     
  • x86(-64) are the last architectures still using the page fault notifier
    cruft for the kprobes page fault hook. This patch converts them to
    proper direct calls, and removes the now unused pagefault notifier bits
    as well as the cruft in kprobes.c that was related to this mess.

    I know Andi didn't really like this, but all other architecture maintainers
    agreed the direct calls are much better; besides the obvious cruft removal,
    a common way of dealing with kprobes across architectures is important as
    well.
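
    Roughly the shape of the direct call that replaces the notifier in the
    x86 fault handler (a sketch; 14 is the page-fault trap number there):

    static inline int notify_page_fault(struct pt_regs *regs)
    {
            int ret = 0;
    #ifdef CONFIG_KPROBES
            /* kprobe_running() needs a stable smp_processor_id() */
            if (!user_mode(regs)) {
                    preempt_disable();
                    if (kprobe_running() && kprobe_fault_handler(regs, 14))
                            ret = 1;
                    preempt_enable();
            }
    #endif
            return ret;
    }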

    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: fix sparc64]
    Signed-off-by: Christoph Hellwig
    Cc: Andi Kleen
    Cc: Prasanna S Panchamukhi
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     
  • Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu
    variable. This saves sizeof(cpumask_t) for each unused cpu. Access is
    mostly from startup and CPU HOTPLUG functions.
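
    The conversion pattern, sketched:

    /* before: static storage for every possible cpu */
    cpumask_t cpu_sibling_map[NR_CPUS];

    /* after: storage materialises as each cpu comes online */
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);

    /* readers change from cpu_sibling_map[cpu] to: */
    per_cpu(cpu_sibling_map, cpu)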

    Signed-off-by: Mike Travis
    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: "Siddha, Suresh B"
    Cc: "David S. Miller"
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Travis
     
  • This is from an earlier message from 'Christoph Lameter':

    cpu_core_map is currently an array defined using NR_CPUS. This means that
    we overallocate, since we will rarely use the maximum number of configured
    cpus.

    If we put the cpu_core_map into the per cpu area then it will be allocated
    for each processor as it comes online.

    This means that the core map cannot be accessed until the per cpu area
    has been allocated. Xen does a weird thing here looping over all processors
    and zeroing the masks that are not yet allocated and that will be zeroed
    when they are allocated. I commented the code out.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Mike Travis
    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: "Siddha, Suresh B"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Travis
     
  • Enable wakeup from serial ports, make it run-time configurable over sysfs,
    e.g.,

    echo enabled > /sys/devices/platform/serial8250.0/tty/ttyS0/power/wakeup

    Requires

    # CONFIG_SYSFS_DEPRECATED is not set

    Following suggestions from Alan and Russell, the may_wake_up checks moved
    to serial_core.c. This time it is actually tested - it does even work.
    Could someone please verify that put_device after device_find_child is
    correct?

    It would also be nice to test with a Natsemi UART that can wake up the
    system, if such systems exist.

    For this you just have to apply the patch below, issue the above "echo"
    command to one of your Natsemi ports, suspend and resume your system, and
    verify that your Natsemi port still works. If you are actually capable of
    waking up the system from that port, it would be nice to test that as
    well.
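
    The pattern being asked about, sketched (device_find_child() returns its
    result with a reference held, which put_device() must drop; the match
    callback name here is illustrative):

    struct device *tty_dev;

    tty_dev = device_find_child(port->dev, &match, serial_match_port);
    if (tty_dev && device_may_wakeup(tty_dev))
            enable_irq_wake(port->irq);
    put_device(tty_dev);    /* safe on NULL; drops find_child's reference */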

    Signed-off-by: Guennadi Liakhovetski
    Cc: Alan Cox
    Cc: Russell King
    Cc: Kay Sievers
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Guennadi Liakhovetski
     
  • Provide {enable,disable}_irq_wakeup dummies to avoid undefined references
    when building for platforms without CONFIG_GENERIC_IRQ.

    Needed by wake-up-from-a-serial-port.patch

    Signed-off-by: Guennadi Liakhovetski
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Guennadi Liakhovetski
     
  • Add support for a whole range of boards. Some are partly autodetected but
    not fully correctly, others (notably PCI Express) not at all. Stick all
    the right entries in.

    Thanks to Mainpine for information and testing.

    Signed-off-by: Alan Cox
    Cc: Russell King
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alan Cox
     
  • Most non-cardbus devices can't do DMA, so flag them as such in the device
    creation routine.

    Signed-off-by: James Bottomley
    Cc: Andi Kleen
    Cc: Alan Cox
    Cc: Tejun Heo
    Cc: Natalie Protasevich
    Cc: Jeff Garzik
    Cc: Dominik Brodowski
    Cc: Russell King
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    James Bottomley
     
  • Some devices are incapable of DMA and need to be recognised as such.
    Introduce a NONE dma mask to facilitate this, plus an inline function,
    is_device_dma_capable(), to check it.
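
    Roughly the new definitions (compare include/linux/dma-mapping.h after
    this change):

    #define DMA_MASK_NONE   0x0ULL

    static inline int is_device_dma_capable(struct device *dev)
    {
            return dev->dma_mask != NULL && *dev->dma_mask != DMA_MASK_NONE;
    }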

    Signed-off-by: James Bottomley
    Cc: Andi Kleen
    Cc: Alan Cox
    Cc: Tejun Heo
    Cc: Natalie Protasevich
    Cc: Jeff Garzik
    Cc: Dominik Brodowski
    Cc: Russell King
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    James Bottomley
     
  • Only a few definitions are in xxs1500.h. They can be moved to
    au1000_xxs1500.c.

    [m.kozlowski@tuxland.pl: fix unbalanced parenthesis]
    Signed-off-by: Yoichi Yuasa
    Cc: Ralf Baechle
    Cc: Dominik Brodowski
    Signed-off-by: Mariusz Kozlowski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yoichi Yuasa
     
  • I need __INIT_REFOK to fix a MODPOST warning for a few MIPS configs which
    have to call init code from .text very early in the game due to bootloader
    issues. __INITDATA_REFOK is just for consistency.

    Signed-off-by: Ralf Baechle
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ralf Baechle
     
  • Optionally add a boot delay after each kernel printk() call, crudely
    measured in milliseconds, with a maximum delay of 10 seconds per printk.

    Enable CONFIG_BOOT_PRINTK_DELAY=y and then add (e.g.):
    "lpj=loops_per_jiffy boot_delay=100"
    to the kernel command line.

    It has been useful in cases like "during boot, my machine just reboots or
    the screen goes black". By slowing down printk (and adding initcall_debug),
    we can usually see the last thing that happened before the lights went
    out, which is usually a valuable clue.
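
    A sketch of the delay loop (compare boot_delay_msec() in kernel/printk.c
    after this change; loops_per_msec is derived at boot from the lpj= value):

    static void boot_delay_msec(void)
    {
            unsigned long long k;
            unsigned long timeout;

            if (boot_delay == 0 || system_state != SYSTEM_BOOTING)
                    return;

            k = (unsigned long long)loops_per_msec * boot_delay;
            timeout = jiffies + msecs_to_jiffies(boot_delay);
            while (k) {
                    k--;
                    cpu_relax();
                    /* the lpj estimate can be off; never stall past timeout */
                    if (time_after(jiffies, timeout))
                            break;
                    touch_nmi_watchdog();
            }
    }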

    [akpm@linux-foundation.org: not all architectures implement CONFIG_HZ]
    [akpm@linux-foundation.org: fix lots of stuff]
    [bunk@stusta.de: kernel/printk.c: make 2 variables static]
    [heiko.carstens@de.ibm.com: fix slow down printk on boot compile error]
    Signed-off-by: Randy Dunlap
    Signed-off-by: Dave Jones
    Signed-off-by: Adrian Bunk
    Signed-off-by: Heiko Carstens
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     

16 Oct, 2007

5 commits

  • Fix filesystems docbook warnings.

    Warning(linux-2.6.23-git8//fs/debugfs/file.c:241): No description found for parameter 'name'
    Warning(linux-2.6.23-git8//fs/debugfs/file.c:241): No description found for parameter 'mode'
    Warning(linux-2.6.23-git8//fs/debugfs/file.c:241): No description found for parameter 'parent'
    Warning(linux-2.6.23-git8//fs/debugfs/file.c:241): No description found for parameter 'value'
    Warning(linux-2.6.23-git8//include/linux/jbd.h:404): No description found for parameter 'h_lockdep_map'

    Signed-off-by: Randy Dunlap
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
  • Fix USB docbook warnings.

    Warning(linux-2.6.23-git8//include/linux/usb/gadget.h:487): No description found for parameter 'g'
    Warning(linux-2.6.23-git8//include/linux/usb/gadget.h:506): No description found for parameter 'g'

    Warning(linux-2.6.23-git8//drivers/usb/core/hub.c:1416): No description found for parameter 'usb_dev'

    Signed-off-by: Randy Dunlap
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
  • * 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (95 commits)
    [ARM] 4578/1: CM-x270: PCMCIA support
    [ARM] 4577/1: ITE 8152 PCI bridge support
    [ARM] 4576/1: CM-X270 machine support
    [ARM] pxa: Avoid pxa_gpio_mode() in gpio_direction_{in,out}put()
    [ARM] pxa: move pxa_set_mode() from pxa2xx_mainstone.c to mainstone.c
    [ARM] pxa: move pxa_set_mode() from pxa2xx_lubbock.c to lubbock.c
    [ARM] pxa: Make cpu_is_pxaXXX dependent on configuration symbols
    [ARM] pxa: PXA3xx base support
    [NET] smc91x: fix PXA DMA support code
    [SERIAL] Fix console initialisation ordering
    [ARM] pxa: tidy up arch/arm/mach-pxa/Makefile
    [ARM] Update arch/arm/Kconfig for drivers/Kconfig changes
    [ARM] 4600/1: fix kernel build failure with build-id-supporting binutils
    [ARM] 4599/1: Preserve ATAG list for use with kexec (2.6.23)
    [ARM] Rename consistent_sync() as dma_cache_maint()
    [ARM] 4572/1: ep93xx: add cirrus logic edb9307 support
    [ARM] 4596/1: S3C2412: Correct IRQs for SDI+CF and add decoding support
    [ARM] 4595/1: ns9xxx: define registers as void __iomem * instead of volatile u32
    [ARM] 4594/1: ns9xxx: use the new gpio functions
    [ARM] 4593/1: ns9xxx: implement generic clockevents
    ...

    Linus Torvalds
     
  • * 'locks' of git://linux-nfs.org/~bfields/linux:
    nfsd: remove IS_ISMNDLCK macro
    Rework /proc/locks via seq_files and seq_list helpers
    fs/locks.c: use list_for_each_entry() instead of list_for_each()
    NFS: clean up explicit check for mandatory locks
    AFS: clean up explicit check for mandatory locks
    9PFS: clean up explicit check for mandatory locks
    GFS2: clean up explicit check for mandatory locks
    Cleanup macros for distinguishing mandatory locks
    Documentation: move locks.txt in filesystems/
    locks: add warning about mandatory locking races
    Documentation: move mandatory locking documentation to filesystems/
    locks: Fix potential OOPS in generic_setlease()
    Use list_first_entry in locks_wake_up_blocks
    locks: fix flock_lock_file() comment
    Memory shortage can result in inconsistent flocks state
    locks: kill redundant local variable
    locks: reverse order of posix_locks_conflict() arguments

    Linus Torvalds
     
  • * 'release' of ssh://master.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
    [IA64] build fix for scatterlist

    Linus Torvalds