09 Jan, 2008

2 commits

  • When using FLAT_MEMORY and ARCH_PFN_OFFSET is not 0, the kernel crashes in
    memmap_init_zone(). This bug was introduced by commit
    c713216deebd95d2b0ab38fef8bb2361c0180c2d
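
    For reference, FLATMEM's pfn_to_page() assumes mem_map is biased by
    ARCH_PFN_OFFSET, so an unadjusted mem_map walks off the array (quoted
    from memory of include/asm-generic/memory_model.h):

        #define __pfn_to_page(pfn)   (mem_map + ((pfn) - ARCH_PFN_OFFSET))
        #define __page_to_pfn(page)  ((unsigned long)((page) - mem_map) + \
                                      ARCH_PFN_OFFSET)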

    Signed-off-by: Thomas Bogendoerfer
    Acked-by: Mel Gorman
    Cc: Bob Picco
    Cc: Dave Hansen
    Cc: Andy Whitcroft
    Cc: Andi Kleen
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: "Keith Mannthey"
    Cc: "Luck, Tony"
    Cc: KAMEZAWA Hiroyuki
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Bogendoerfer
     
  • The use of get_zeroed_page() with __GFP_HIGHMEM is invalid:
    get_zeroed_page() returns a kernel virtual address, which a highmem page
    does not necessarily have. Use alloc_page() with __GFP_ZERO instead.
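
    A minimal sketch of the two patterns (GFP_HIGHUSER stands in for whatever
    highmem flags the caller actually used):

        /* invalid: asks for a kernel virtual address of a page that
         * may live in highmem and so may not have one */
        unsigned long addr = get_zeroed_page(GFP_HIGHUSER);

        /* valid: take the struct page and let the allocator zero it */
        struct page *page = alloc_page(GFP_HIGHUSER | __GFP_ZERO);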

    (This patch is only compile-tested)

    Cc: Carsten Otte
    Signed-off-by: Akinobu Mita
    Acked-by: Hugh Dickins
    Acked-by: Carsten Otte
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

03 Jan, 2008

1 commit

  • Both SLUB and SLAB really did almost exactly the same thing for
    /proc/slabinfo setup, using duplicate code and per-allocator #ifdef's.

    This just creates a common CONFIG_SLABINFO that is enabled by both SLUB
    and SLAB, and shares all the setup code. Maybe SLOB will want this some
    day too.
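
    A plausible shape for the shared option (a sketch, not the literal
    Kconfig text):

        config SLABINFO
                bool
                depends on PROC_FS
                depends on SLAB || SLUB
                default y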

    Reviewed-by: Pekka Enberg
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

02 Jan, 2008

1 commit

  • This adds a read-only /proc/slabinfo file on SLUB, that makes slabtop work.

    [ mingo@elte.hu: build fix. ]

    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: Peter Zijlstra
    Signed-off-by: Pekka Enberg
    Signed-off-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Pekka J Enberg
     

22 Dec, 2007

1 commit

  • Increase the minimum number of partial slabs to keep around and put
    partial slabs to the end of the partial queue so that they can add
    more objects.
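
    A sketch of the two knobs involved (the exact minimum and the list
    handling are from memory of mm/slub.c, so treat them as illustrative):

        /* keep at least this many partial slabs cached per node */
        #define MIN_PARTIAL 5

        /* queue the slab at the tail, so allocations pick other partial
         * slabs first and this one can keep gaining freed objects */
        list_add_tail(&page->lru, &n->partial);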

    Signed-off-by: Christoph Lameter
    Reviewed-by: Pekka Enberg
    Acked-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

20 Dec, 2007

1 commit

  • Krzysztof Oledzki noticed a dirty page accounting leak on some of his
    machines, causing the machine to eventually lock up when the kernel
    decided that there was too much dirty data, but nobody could actually
    write anything out to fix it.

    The culprit turns out to be filesystems (cough ext3 with data=journal
    cough) that re-dirty the page when the "->invalidatepage()" callback is
    called.

    Fix it up by doing a final dirty page accounting check when we actually
    remove the page from the page cache.

    This fixes bugzilla entry 9182:

    http://bugzilla.kernel.org/show_bug.cgi?id=9182
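
    The shape of the fix (a sketch of the final check in
    __remove_from_page_cache(); the exact counters touched depend on the
    kernel version):

        /* as the page leaves the page cache, undo any dirty accounting
         * that a re-dirtying ->invalidatepage() left behind */
        if (PageDirty(page) && mapping_cap_account_dirty(mapping))
                dec_zone_page_state(page, NR_FILE_DIRTY);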

    Tested-by: Ingo Molnar
    Tested-by: Krzysztof Oledzki
    Cc: Andrew Morton
    Cc: Nick Piggin
    Cc: Peter Zijlstra
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

18 Dec, 2007

7 commits

  • Remove a recently added useless masking of __GFP_ZERO. __GFP_ZERO is
    already masked out in new_slab() (see how it calls allocate_slab()). No
    need to do it twice.

    This reverts the SLUB parts of 7fd272550bd43cc1d7289ef0ab2fa50de137e767.

    Cc: Matt Mackall
    Reviewed-by: Pekka Enberg
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • This reverts commit 54f9f80d6543fb7b157d3b11e2e7911dc1379790 ("hugetlb:
    Add hugetlb_dynamic_pool sysctl")

    Given the new sysctl nr_overcommit_hugepages, the boolean dynamic pool
    sysctl is not needed: its semantics are expressed by 0 in the overcommit
    sysctl (no dynamic pool) and by any non-zero value (pool enabled).

    (Needed in 2.6.24 since it reverts a post-2.6.23 userspace-visible change)

    Signed-off-by: Nishanth Aravamudan
    Acked-by: Adam Litke
    Cc: William Lee Irwin III
    Cc: Dave Hansen
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     
  • hugetlb: introduce nr_overcommit_hugepages sysctl

    While examining the code to support /proc/sys/vm/hugetlb_dynamic_pool, I
    became convinced that having a boolean sysctl was insufficient:

    1) To support per-node control of hugepages, I have previously submitted
    patches to add a sysfs attribute related to nr_hugepages. However, with
    a boolean global value and per-mount quota enforcement constraining the
    dynamic pool, adding corresponding control of the dynamic pool on a
    per-node basis seems inconsistent to me.

    2) Administration of the hugetlb dynamic pool with multiple hugetlbfs
    mount points is, arguably, more arduous than it needs to be. Each quota
    would need to be set separately, and the sum would need to be monitored.

    To ease the administration, and to help make the way for per-node
    control of the static & dynamic hugepage pool, I added a separate
    sysctl, nr_overcommit_hugepages. This value serves as a high watermark
    for the overall hugepage pool, while nr_hugepages serves as a low
    watermark. The boolean sysctl can then be removed, as the condition

    nr_overcommit_hugepages > 0

    indicates the same administrative setting as

    hugetlb_dynamic_pool == 1

    Quotas still serve as local enforcement of the size of the pool on a
    per-mount basis.

    A few caveats:

    1) There is a race whereby the global surplus huge page counter is
    incremented before a hugepage has been allocated. Another process could
    then try to grow the pool, fail to convert a surplus huge page to a
    normal huge page, and instead allocate a fresh huge page. I believe this
    is benign, as no memory is leaked (the actual pages are still tracked
    correctly) and the counters won't go out of sync.

    2) Shrinking the static pool while a surplus is in effect will allow the
    number of surplus huge pages to exceed the overcommit value. As long as
    this condition holds, however, no more surplus huge pages will be
    allowed on the system until one of the two sysctls is increased
    sufficiently, or the surplus huge pages go out of use and are freed.

    Successfully tested on x86_64 with the current libhugetlbfs snapshot,
    modified to use the new sysctl.
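
    A sketch of how the watermark gates surplus allocations (illustrative,
    not the literal patch):

        /* in the surplus allocation path, e.g. alloc_buddy_huge_page();
         * the counter is bumped before the page is allocated, which is
         * the benign race described in caveat 1 */
        spin_lock(&hugetlb_lock);
        if (surplus_huge_pages >= nr_overcommit_huge_pages) {
                spin_unlock(&hugetlb_lock);
                return NULL;
        }
        surplus_huge_pages++;
        spin_unlock(&hugetlb_lock);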

    Signed-off-by: Nishanth Aravamudan
    Acked-by: Adam Litke
    Cc: William Lee Irwin III
    Cc: Dave Hansen
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     
  • In some cases the IO subsystem is able to merge requests if the pages are
    adjacent in physical memory. This was achieved in the allocator by having
    expand() return pages in physically contiguous order in situations where a
    large buddy was split. However, list-based anti-fragmentation changed the
    order pages were returned in, to avoid searching in buffered_rmqueue() for
    a page of the appropriate migrate type.

    This patch restores the behaviour of rmqueue_bulk(), preserving the
    physical order of pages returned by the allocator without incurring
    increased search costs for anti-fragmentation.
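
    The trick that restores the ordering cheaply (a sketch of the
    rmqueue_bulk() loop):

        for (i = 0; i < count; ++i) {
                struct page *page = __rmqueue(zone, order, migratetype);

                if (unlikely(page == NULL))
                        break;
                /* add behind the previous page and advance the insertion
                 * point, so the list ends up in physical order without
                 * any extra searching */
                list_add(&page->lru, list);
                list = &page->lru;
        }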

    Signed-off-by: Mel Gorman
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Mark Lord
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Improve the error handling for mm/sparse.c::sparse_add_one_section(). I
    also see no reason to check 'usemap' until the 'pgdat_resize_lock' is
    held.

    [geoffrey.levand@am.sony.com: sparse_index_init() returns -EEXIST]
    Cc: Christoph Lameter
    Acked-by: Dave Hansen
    Cc: Rik van Riel
    Acked-by: Yasunori Goto
    Cc: Andy Whitcroft
    Signed-off-by: WANG Cong
    Signed-off-by: Geoff Levand
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Cong
     
  • Since sparse_index_alloc() can return NULL on memory allocation failure,
    we must deal with the failure condition when calling it.
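
    The added check amounts to (a sketch of the sparse_index_init() path):

        section = sparse_index_alloc(nid);
        if (!section)
                return -ENOMEM;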

    Signed-off-by: WANG Cong
    Cc: Christoph Lameter
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Cong
     
  • SPARSEMEM_VMEMMAP needs to be a selectable config option to support
    building the kernel both with and without sparsemem vmemmap support. This
    selection is desirable for platforms which could be configured one way for
    platform-specific builds and the other way for multi-platform builds.
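
    A sketch of the now-selectable option (close to, but not necessarily, the
    literal mm/Kconfig text):

        config SPARSEMEM_VMEMMAP
                bool "Sparse Memory virtual memmap"
                depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
                default y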

    Signed-off-by: Miguel Botón
    Signed-off-by: Geoff Levand
    Acked-by: Yasunori Goto
    Cc: Christoph Lameter
    Cc: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Geoff Levand
     

11 Dec, 2007

1 commit

  • The follow_hugetlb_page() fix I posted (merged as git commit
    5b23dbe8173c212d6a326e35347b038705603d39) missed one case. If the pte is
    present but not writable, and write access is requested by the caller of
    get_user_pages(), the code will do the wrong thing. Rather than calling
    hugetlb_fault to make the pte writable, it notes the presence of the pte
    and continues.

    This simple one-liner makes sure we also fault on the pte in this case.
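
    The extended fault condition, roughly (a sketch; names are from memory of
    follow_hugetlb_page()):

        /* fault not only on absent ptes, but also when write access is
         * requested and the pte is present but read-only */
        if (!pte || pte_none(*pte) || (write && !pte_write(*pte)))
                ret = hugetlb_fault(mm, vma, vaddr, write);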

    Signed-off-by: Adam Litke
    Acked-by: Dave Kleikamp
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     

10 Dec, 2007

1 commit

  • Both slob and slub react to __GFP_ZERO by clearing the allocation, which
    means that passing the __GFP_ZERO bit down to the page allocator is just
    wasteful and pointless.
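
    In other words (a sketch of the idea, not the literal diff):

        /* the object is zeroed at the slab layer, so strip __GFP_ZERO
         * before asking the page allocator for backing pages */
        page = alloc_pages(flags & ~__GFP_ZERO, order);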

    Acked-by: Matt Mackall
    Reviewed-by: Pekka Enberg
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

06 Dec, 2007

6 commits

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6:
    VM/Security: add security hook to do_brk
    Security: round mmap hint address above mmap_min_addr
    security: protect from stack expansion into low vm addresses
    Security: allow capable check to permit mmap or low vm space
    SELinux: detect dead booleans
    SELinux: do not clear f_op when removing entries

    Linus Torvalds
     
  • Given a specifically crafted binary, do_brk() can be used to get low pages
    available in userspace virtual memory and can thus be used to circumvent
    the mmap_min_addr low memory protection. Add security checks in do_brk().
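
    The added hook call, roughly (a sketch; the final argument asks the LSM
    to validate only the address):

        error = security_file_mmap(NULL, 0, 0, 0, addr, 1);
        if (error)
                return error;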

    Signed-off-by: Eric Paris
    Acked-by: Alan Cox
    Cc: Stephen Smalley
    Cc: James Morris
    Cc: Chris Wright
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Paris
     
  • I can't pass memory allocated by kmalloc() to ksize() if it was allocated
    by the SLUB allocator and its size is larger than (I guess) PAGE_SIZE / 2.

    The bug in ksize() seems to be that it does not check whether the
    allocation was made by SLUB or by the page allocator.
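
    The missing check, sketched (large kmallocs in SLUB bypass the slab layer
    and come straight from the page allocator):

        struct page *page = virt_to_head_page(object);

        if (unlikely(!PageSlab(page)))
                return PAGE_SIZE << compound_order(page);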

    Reviewed-by: Pekka Enberg
    Tested-by: Tetsuo Handa
    Cc: Christoph Lameter
    Cc: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vegard Nossum
     
  • Writing to XIP files at a non-page-aligned offset resulted in data
    corruption because the writes were always sent to the start of the page.
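
    The nature of the bug, sketched (not the literal patch):

        char *kaddr = kmap_atomic(page, KM_USER0);

        /* the copy must land at the write offset within the page,
         * not at the start of the page */
        memcpy(kaddr + offset, buf, bytes);
        kunmap_atomic(kaddr, KM_USER0);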

    Signed-off-by: Nick Piggin
    Cc: Christian Borntraeger
    Acked-by: Carsten Otte
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • mm/slub.c exports ksize(), but mm/slob.c and mm/slab.c don't.

    It's used by binfmt_flat, which can be built as a module.
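
    The fix is presumably just the matching export next to each definition:

        EXPORT_SYMBOL(ksize);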

    Signed-off-by: Tetsuo Handa
    Cc: Christoph Lameter
    Cc: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tetsuo Handa
     
  • This call should use the array index j, not i. But with this approach, a
    single int i is enough; int j is not needed.

    Signed-off-by: Denis Cheng
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Denis Cheng
     

05 Dec, 2007

3 commits

  • Given a specifically crafted binary, do_brk() can be used to get low
    pages available in userspace virtual memory and can thus be used to
    circumvent the mmap_min_addr low memory protection. Add security checks
    in do_brk().

    Signed-off-by: Eric Paris
    Acked-by: Alan Cox
    Signed-off-by: James Morris

    Eric Paris
     
  • If mmap_min_addr is set and a process attempts to mmap (not fixed) with a
    non-null hint address less than mmap_min_addr, the mapping will fail the
    security checks. Since this is just a hint address, this patch rounds such
    a hint address up above mmap_min_addr.

    gcj was found to be very frugal with vm usage, giving hint addresses in
    the 8k-32k range. Without this patch all such programs failed; with the
    patch they happily get a higher address.

    This patch is wrapped in CONFIG_SECURITY, since mmap_min_addr doesn't
    exist without it and no security check would be possible in any case;
    there is no point compiling in this rounding when it would just be a
    waste of time.
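
    The helper amounts to something like this (a sketch close to the
    round_hint_to_min() that was added):

        static inline unsigned long round_hint_to_min(unsigned long hint)
        {
                hint &= PAGE_MASK;
                if (hint != 0 && hint < mmap_min_addr)
                        return PAGE_ALIGN(mmap_min_addr);
                return hint;
        }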

    Signed-off-by: Eric Paris
    Signed-off-by: James Morris

    Eric Paris
     
  • Add security checks to make sure we are not attempting to expand the
    stack into memory protected by mmap_min_addr.

    Signed-off-by: Eric Paris
    Signed-off-by: James Morris

    Eric Paris
     

01 Dec, 2007

1 commit

  • The database performance group have found that half the cycles spent
    in kmem_cache_free are spent in this one call to BUG_ON. Moving it
    into the CONFIG_SLAB_DEBUG-only function cache_free_debugcheck() is a
    performance win of almost 0.5% on their particular benchmark.

    The call was added as part of commit ddc2e812d592457747c4367fb73edcaa8e1e49ff
    with the comment that "overhead should be minimal". It may have been
    minimal at the time, but it isn't now.

    [ Quoth Pekka Enberg: "I don't think the BUG_ON per se caused the
    performance regression but rather the virt_to_head_page() changes to
    virt_to_cache() that were added later." ]
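
    The moved check, sketched:

        /* now only in cache_free_debugcheck(), i.e. compiled under
         * CONFIG_SLAB_DEBUG rather than run on every kmem_cache_free() */
        BUG_ON(virt_to_cache(objp) != cachep);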

    Signed-off-by: Matthew Wilcox
    Acked-by: Pekka J Enberg
    Signed-off-by: Linus Torvalds

    Matthew Wilcox
     

30 Nov, 2007

2 commits

  • Fixes the section mismatch below.

    WARNING: vmlinux.o(.text+0x946b5): Section mismatch: reference to
    .init.text:__alloc_bootmem_node (between 'vmemmap_alloc_block' and
    'vmemmap_pgd_populate')

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Kamalesh Babulal
    Cc: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • Ordinarily the size of a pageblock is determined at compile time based on
    the hugepage size. On PPC64, the hugepage size is determined at runtime
    based on what is supported by the machine. With legacy machines such as
    iSeries that do not support hugepages, HPAGE_SHIFT is 0. This results in
    pageblock_order being set to -PAGE_SHIFT, and the kernel crashes shortly
    afterwards.

    This patch adds a function to select a sensible value for pageblock order
    by default when HUGETLB_PAGE_SIZE_VARIABLE is set. It checks that
    HPAGE_SHIFT is a sensible value before using the hugepage size; if it is
    not, MAX_ORDER-1 is used.
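
    The default selection, sketched (names are illustrative; the real code
    sits behind HUGETLB_PAGE_SIZE_VARIABLE):

        /* only trust the hugepage size if the machine actually has one */
        if (HPAGE_SHIFT > PAGE_SHIFT)
                order = HUGETLB_PAGE_ORDER;
        else
                order = MAX_ORDER - 1;
        pageblock_order = order;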

    This is a fix for 2.6.24.

    Credit goes to Stephen Rothwell for identifying the bug and testing
    candidate patches. Additional credit goes to Andy Whitcroft for spotting
    a problem with respect to IA-64 before release, and to David Gibson for
    testing with the libhugetlbfs test suite.

    Signed-off-by: Mel Gorman
    Tested-by: Stephen Rothwell
    Cc: Benjamin Herrenschmidt
    Acked-by: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

29 Nov, 2007

2 commits

  • 2.6.11 gave __GFP_ZERO's prep_zero_page a bogus "highmem may have to wait"
    assertion, presumably added under the misconception that clear_highpage
    uses a nonatomic kmap; but it used kmap_atomic then and still does, so
    there is no problem.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • tmpfs was misconverted to __GFP_ZERO in 2.6.11. There's an unusual case in
    which shmem_getpage receives the page from its caller instead of
    allocating it. We must still cover this case with clear_highpage before
    SetPageUptodate, as before.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

20 Nov, 2007

1 commit

  • page_mkclean used to call page_clear_dirty for every given page. This
    differs from all other architectures, where the dirty bit in the PTEs is
    only reset if page_mapping() returns a non-NULL pointer. We can move the
    page_test_dirty/page_clear_dirty sequence into the second if to avoid
    unnecessary iske/sske sequences, which are expensive.

    This change also helps KVM on s390, as the host must transfer the
    dirty bit into the guest status bits. By moving the page_clear_dirty
    operation into the second if, the VM will only call page_clear_dirty
    for pages where it walks the mapping anyway. There it calls
    ptep_clear_flush for writable ptes, so we can transfer the dirty bit
    to the guest.
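
    The resulting structure, roughly (a sketch of page_mkclean() after the
    change):

        int page_mkclean(struct page *page)
        {
                int ret = 0;

                if (page_mapped(page)) {
                        struct address_space *mapping = page_mapping(page);
                        if (mapping) {
                                ret = page_mkclean_file(mapping, page);
                                /* the storage-key test now runs only when
                                 * the mapping was walked anyway */
                                if (page_test_dirty(page)) {
                                        page_clear_dirty(page);
                                        ret = 1;
                                }
                        }
                }
                return ret;
        }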

    Signed-off-by: Christian Borntraeger
    Signed-off-by: Martin Schwidefsky

    Christian Borntraeger
     

16 Nov, 2007

2 commits

  • This code harks back to the days when we didn't count dirty mapped
    pages, which led us to try to balance the number of dirty unmapped pages
    by how much unmapped memory there was in the system.

    That makes no sense any more, since now the dirty counts include the
    mapped pages. Not to mention that the math doesn't work with HIGHMEM
    machines anyway, and causes the unmapped_ratio to potentially turn
    negative (which we do catch thanks to clamping it at a minimum value,
    but I mention that as an indication of how broken the code is).

    The code also was written at a time when the default dirty ratio was
    much larger, and the unmapped_ratio logic effectively capped that large
    dirty ratio a bit. Again, we've since lowered the dirty ratio rather
    aggressively, further lessening the point of that code.

    Acked-by: Peter Zijlstra
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • Previously, it would be possible for prev->next to point to
    &free_slob_pages, and thus we would try to move a list onto itself, and
    bad things would happen.

    It seems a bit hairy to be doing list operations with the list marker as
    an entry, rather than a head, but...

    this resolves the following crash:

    http://bugzilla.kernel.org/show_bug.cgi?id=9379

    Signed-off-by: Nick Piggin
    Signed-off-by: Ingo Molnar
    Acked-by: Matt Mackall
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

15 Nov, 2007

8 commits

  • The delay incurred in lock_page() should also be accounted in swap delay
    accounting.
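
    The idea, sketched (placement is from memory of the swap-in path):

        /* keep the swap-in delay flag set across the page lock wait, so
         * lock_page() time is charged to swap delay accounting too */
        delayacct_set_flag(DELAYACCT_PF_SWAPIN);
        lock_page(page);
        delayacct_clear_flag(DELAYACCT_PF_SWAPIN);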

    Reported-by: Nick Piggin
    Signed-off-by: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Balbir Singh
     
  • Mark start_cpu_timer() as __cpuinit instead of __devinit.
    Fixes this section warning:

    WARNING: vmlinux.o(.text+0x60e53): Section mismatch: reference to .init.text:start_cpu_timer (between 'vmstat_cpuup_callback' and 'vmstat_show')
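
    The one-liner, sketched:

        /* was __devinit; __cpuinit matches the CPU hotplug callback that
         * calls it after init time */
        static void __cpuinit start_cpu_timer(int cpu);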

    Signed-off-by: Randy Dunlap
    Acked-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
  • Commit ef8b4520bd9f8294ffce9abd6158085bde5dc902 added one NULL check for
    "p" in krealloc(), but that doesn't seem to be enough, since there is no
    guarantee that memcpy(ret, NULL, 0) works (spotted by the Coverity
    checker).

    To make it clearer what happens, this patch also removes the pointless
    min().
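
    The resulting logic, sketched (close to mm/util.c::krealloc() after the
    patch):

        size_t ks = 0;

        if (p)
                ks = ksize(p);
        if (ks >= new_size)
                return (void *)p;

        ret = kmalloc_track_caller(new_size, flags);
        /* copy (and free the old block) only when there is something to
         * copy; this also avoids memcpy(ret, NULL, 0), which is not
         * guaranteed to be safe */
        if (ret && p) {
                memcpy(ret, p, ks);
                kfree(p);
        }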

    Signed-off-by: Adrian Bunk
    Acked-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     
  • For administrative purposes, we want to query actual block usage for
    hugetlbfs files via fstat. Currently, hugetlbfs always returns 0. Fix that
    up, since the kernel already has all the information needed to track it
    properly.

    Signed-off-by: Ken Chen
    Acked-by: Adam Litke
    Cc: Badari Pulavarty
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ken Chen
     
  • return_unused_surplus_pages() can become static.

    Signed-off-by: Adrian Bunk
    Acked-by: Adam Litke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     
  • When a MAP_SHARED mmap of a hugetlbfs file succeeds, huge pages are reserved
    to guarantee no problems will occur later when instantiating pages. If quotas
    are in force, page instantiation could fail due to a race with another process
    or an oversized (but approved) shared mapping.

    To prevent these scenarios, debit the quota for the full reservation amount up
    front and credit the unused quota when the reservation is released.

    Signed-off-by: Adam Litke
    Cc: Ken Chen
    Cc: Andy Whitcroft
    Cc: Dave Hansen
    Cc: David Gibson
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • Add a second parameter 'delta' to hugetlb_get_quota and hugetlb_put_quota to
    allow bulk updating of the sbinfo->free_blocks counter. This will be used by
    the next patch in the series.
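
    The new signatures, roughly:

        int  hugetlb_get_quota(struct address_space *mapping, long delta);
        void hugetlb_put_quota(struct address_space *mapping, long delta);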

    Signed-off-by: Adam Litke
    Cc: Ken Chen
    Cc: Andy Whitcroft
    Cc: Dave Hansen
    Cc: David Gibson
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • Now that quota is credited by free_huge_page(), calls to hugetlb_get_quota()
    seem out of place. The alloc/free API is unbalanced because we handle the
    hugetlb_put_quota() but expect the caller to open-code hugetlb_get_quota().
    Move the get inside alloc_huge_page to clean up this disparity.

    This patch has been kept apart from the previous patch because of the
    somewhat dodgy ERR_PTR() use herein. Moving the quota logic means that
    alloc_huge_page() has two failure modes. Quota failure must result in a
    SIGBUS while a standard allocation failure is OOM. Unfortunately,
    ERR_PTR() doesn't like the small positive errnos we have in VM_FAULT_*,
    so they must be negated before they are used.

    Does anyone take issue with the way I am using PTR_ERR? If so, what are
    your thoughts on how to clean this up (without needing an if, else if,
    else block at each alloc_huge_page() call site)?
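
    The caller-side pattern under discussion, sketched:

        page = alloc_huge_page(vma, address);
        if (IS_ERR(page)) {
                /* VM_FAULT_* values are small positive integers, so they
                 * were negated going into ERR_PTR(); undo that here */
                ret = -PTR_ERR(page);
                goto out;
        }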

    Signed-off-by: Adam Litke
    Cc: Ken Chen
    Cc: Andy Whitcroft
    Cc: Dave Hansen
    Cc: David Gibson
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke