30 Jul, 2009

8 commits

  • * 'pm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6:
    PM / Hibernate: Replace bdget call with simple atomic_inc of i_count
    PM / ACPI: HP G7000 Notebook needs a SCI_EN resume quirk

    Linus Torvalds
     
  • The page allocator warns once when an order >= MAX_ORDER is specified.
    This is to catch callers of the allocator that are always falling back to
    their worst-case when it was not expected. However, there are cases where
    the caller is behaving correctly but cannot suppress the warning. This
    patch allows the warning to be suppressed by the callers by specifying
    __GFP_NOWARN.
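
    A caller that deliberately attempts a large order and falls back on
    failure can now silence the warning; a minimal sketch (the order
    variables are illustrative):

    page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, large_order);
    if (!page)
            page = alloc_pages(GFP_KERNEL, fallback_order);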

    Signed-off-by: Mel Gorman
    Acked-by: David Rientjes
    Cc: Arnaldo Carvalho de Melo
    Cc: "David S. Miller"
    Cc: "Rafael J. Wysocki"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • After commit ec64f51545fffbc4cb968f0cea56341a4b07e85a ("cgroup: fix
    frequent -EBUSY at rmdir"), cgroup's rmdir (especially against memcg)
    no longer returns -EBUSY merely because of temporary ref counts. That
    commit expected all refs remaining after pre_destroy() to be
    temporary, but they weren't, so rmdir() could wait permanently. This
    patch fixes that by making the following changes (sketched below):

    - set the CGRP_WAIT_ON_RMDIR flag before pre_destroy().
    - clear the CGRP_WAIT_ON_RMDIR flag when the subsys finds a racy
      case; if there are sleeping waiters, wake them up.
    - rmdir() sleeps only while the CGRP_WAIT_ON_RMDIR flag is set.
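
    A hedged sketch of that flow (the waitqueue name and exact call
    sites are assumptions, not necessarily what the patch uses):

    /* in rmdir(), before calling pre_destroy(): */
    set_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);

    /* in a subsystem that detects the racy temporary ref: */
    if (test_and_clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags))
            wake_up_all(&cgroup_rmdir_waitq);       /* assumed waitqueue */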

    Tested-by: Daisuke Nishimura
    Reported-by: Daisuke Nishimura
    Reviewed-by: Paul Menage
    Acked-by: Balbir Singh
    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • As reported in Red Hat bz #509671, i_blocks accounting for files on
    hugetlbfs goes wrong when doing something like:

    $ > foo
    $ date > foo
    date: write error: Invalid argument
    $ /usr/bin/stat foo
    File: `foo'
    Size: 0 Blocks: 18446744073709547520 IO Block: 2097152 regular
    ...

    This is because hugetlb_unreserve_pages() is unconditionally removing
    blocks_per_huge_page(h) on each call rather than using the freed amount.
    If there were 0 blocks, it goes negative, resulting in the above.

    This is a regression from commit a5516438959d90b071ff0a484ce4f3f523dc3152
    ("hugetlb: modular state for hugetlb page size")

    which did:

    - inode->i_blocks -= BLOCKS_PER_HUGEPAGE * freed;
    + inode->i_blocks -= blocks_per_huge_page(h);

    so just put back the freed multiplier, and it's all happy again.
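
    That is, the accounting goes back to:

    inode->i_blocks -= blocks_per_huge_page(h) * freed;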

    Signed-off-by: Eric Sandeen
    Acked-by: Andi Kleen
    Cc: William Lee Irwin III
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Sandeen
     
  • If a task is oom killed and still cannot find memory when trying with
    no watermarks, it's better to fail the allocation attempt than to loop
    endlessly. Direct reclaim has already failed and the oom killer will
    be a no-op since current has yet to die, so there is no other
    alternative for allocations that are not __GFP_NOFAIL.
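
    A hedged sketch of the bail-out in the allocator slow path
    (TIF_MEMDIE marks a task picked by the oom killer; the exact
    placement is an assumption):

    if (test_thread_flag(TIF_MEMDIE) && !(gfp_mask & __GFP_NOFAIL))
            goto nopage;    /* fail the allocation rather than loop forever */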

    Acked-by: Mel Gorman
    Signed-off-by: David Rientjes
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Fix a post-2.6.24 performance regression caused by
    3dfa5721f12c3d5a441448086bee156887daa961 ("page-allocator: preserve PFN
    ordering when __GFP_COLD is set").

    Narayanan reports "The regression is around 15%. There is no disk controller
    as our setup is based on Samsung OneNAND used as a memory mapped device on a
    OMAP2430 based board."

    The page allocator tries to preserve contiguous PFN ordering when returning
    pages such that repeated callers to the allocator have a strong chance of
    getting physically contiguous pages, particularly when external fragmentation
    is low. However, if the bulk of the allocations have __GFP_COLD set,
    as they are due to aio_read() for example, then the PFNs are returned
    in reverse order. This can cause performance degradation when used
    with IO controllers that could have merged the requests.

    This patch attempts to preserve the contiguous ordering of PFNs for users of
    __GFP_COLD.
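
    A hedged sketch of the idea when refilling the per-cpu free lists
    (the surrounding loop is elided):

    if (cold)
            list_add_tail(&page->lru, list);        /* keep ascending PFN order */
    else
            list_add(&page->lru, list);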

    Signed-off-by: Mel Gorman
    Reported-by: Narayanan Gopalakrishnan
    Tested-by: Narayanan Gopalakrishnan
    Cc: KAMEZAWA Hiroyuki
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Objects passed to kmemleak_seq_next() have an incremented reference
    count (hence not freed) but they may point via object_list.next to
    other freed objects. To avoid this, the whole start/next/stop sequence
    must be protected by rcu_read_lock().
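
    A hedged sketch of the shape of the fix (the lookup helper is an
    assumption):

    static void *kmemleak_seq_start(struct seq_file *seq, loff_t *pos)
    {
            rcu_read_lock();
            return find_object_by_pos(pos);         /* assumed helper */
    }

    static void kmemleak_seq_stop(struct seq_file *seq, void *v)
    {
            rcu_read_unlock();
    }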

    Signed-off-by: Catalin Marinas
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • Create bdgrab(). This function copies an existing reference to a
    block_device. It is safe to call from any context.

    Hibernation code wishes to copy a reference to the active swap device.
    Right now it calls bdget() under a spinlock, but this is wrong because
    bdget() can sleep. It doesn't need a full bdget() because we already
    hold a reference to active swap devices (and the spinlock protects
    against swapoff).

    Fixes http://bugzilla.kernel.org/show_bug.cgi?id=13827
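
    A minimal sketch of bdgrab(), following the "atomic_inc of i_count"
    summary in the merge entry above (illustrative, not necessarily the
    exact patch):

    struct block_device *bdgrab(struct block_device *bdev)
    {
            atomic_inc(&bdev->bd_inode->i_count);   /* never sleeps */
            return bdev;
    }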

    Signed-off-by: Alan Jenkins
    Signed-off-by: Rafael J. Wysocki

    Alan Jenkins
     

28 Jul, 2009

1 commit

  • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()

    Upcoming patches to support the new 64-bit "BookE" powerpc
    architecture will need the virtual address corresponding to the PTE
    page when freeing it, due to the way the HW table walker works.

    Basically, the TLB can be loaded with "large" pages that cover the whole
    virtual space (well, sort-of, half of it actually) represented by a PTE
    page, and which contain an "indirect" bit indicating that this TLB entry
    RPN points to an array of PTEs from which the TLB can then create direct
    entries. Thus, in order to invalidate those when PTE pages are deleted,
    we need the virtual address to pass to tlbilx or tlbivax instructions.

    The old trick of sticking it somewhere in the PTE page's struct page
    sucks too much; the address is almost readily available at all call
    sites, and almost everybody implements these as macros, so we may as
    well add the argument everywhere. I added it to the pmd and pud
    variants for consistency.
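
    Call sites change along these lines (illustrative):

    - pte_free_tlb(tlb, ptep);
    + pte_free_tlb(tlb, ptep, address);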

    Signed-off-by: Benjamin Herrenschmidt
    Acked-by: David Howells [MN10300 & FRV]
    Acked-by: Nick Piggin
    Acked-by: Martin Schwidefsky [s390]
    Signed-off-by: Linus Torvalds

    Benjamin Herrenschmidt
     

13 Jul, 2009

1 commit

  • * 'kmemleak' of git://linux-arm.org/linux-2.6:
    kmemleak: Remove alloc_bootmem annotations introduced in the past
    kmemleak: Add callbacks to the bootmem allocator
    kmemleak: Allow partial freeing of memory blocks
    kmemleak: Trace the kmalloc_large* functions in slub
    kmemleak: Scan objects allocated during a scanning episode
    kmemleak: Do not acquire scan_mutex in kmemleak_open()
    kmemleak: Remove the reported leaks number limitation
    kmemleak: Add more cond_resched() calls in the scanning thread
    kmemleak: Renice the scanning thread to +10

    Linus Torvalds
     

08 Jul, 2009

6 commits

  • This patch adds kmemleak_alloc/free callbacks to the bootmem
    allocator. This allows scanning of such blocks and helps avoid a
    whole class of false positives and the need for more kmemleak
    annotations.
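
    A hedged sketch of the pairing (the exact call sites inside the
    bootmem allocator are assumptions; min_count 0 means the block is
    never reported as a leak itself):

    kmemleak_alloc(ptr, size, 0, 0);        /* when a block is handed out */
    kmemleak_free(ptr);                     /* when it is returned */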

    Signed-off-by: Catalin Marinas
    Cc: Ingo Molnar
    Acked-by: Pekka Enberg
    Reviewed-by: Johannes Weiner

    Catalin Marinas
     
  • Functions like free_bootmem() are allowed to free only part of a memory
    block. This patch adds support for this via the kmemleak_free_part()
    callback which removes the original object and creates one or two
    additional objects as a result of the memory block split.
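
    A hedged usage sketch; freeing the middle of a tracked block leaves
    the two surviving pieces as separate objects:

    kmemleak_free_part(ptr + offset, len);  /* only [offset, offset + len) */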

    Signed-off-by: Catalin Marinas
    Cc: Ingo Molnar
    Acked-by: Pekka Enberg

    Catalin Marinas
     
  • The kmalloc_large() and kmalloc_large_node() functions were missed when
    adding the kmemleak hooks to the slub allocator. However, they should be
    traced to avoid false positives.
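
    A hedged sketch of the hook, mirroring slub's page-based large
    kmallocs (details are assumptions):

    static void *kmalloc_large(size_t size, gfp_t flags)
    {
            void *ptr = (void *)__get_free_pages(flags | __GFP_COMP,
                                                 get_order(size));

            kmemleak_alloc(ptr, size, 1, flags);    /* avoid false positives */
            return ptr;
    }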

    Signed-off-by: Catalin Marinas
    Cc: Christoph Lameter
    Acked-by: Pekka Enberg

    Catalin Marinas
     
  • Many of the false positives in kmemleak happen on busy systems where
    objects are allocated during a kmemleak scanning episode. These objects
    aren't scanned by default until the next memory scan. When such an
    object is added, for example, at the head of a list, it is possible
    that all the other objects in the list become unreferenced until the
    next scan.

    This patch adds checking for newly allocated objects at the end of the
    scan and repeats the scanning on these objects. If the kernel
    allocates new objects faster than they can be scanned, the scan
    stops after a predefined number of passes.
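
    A hedged sketch of the repeat logic (both names are illustrative):

    int pass = 0;

    /* rescan objects allocated during the previous pass, up to a limit */
    while (scan_new_objects() && ++pass < MAX_NEW_OBJECT_PASSES)
            ;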

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • Initially, the scan_mutex was acquired in kmemleak_open() and released
    in kmemleak_release() (corresponding to /sys/kernel/debug/kmemleak
    operations). This was causing some lockdep reports when the file was
    closed from a different task than the one opening it. This patch
    moves the scan_mutex acquisition into kmemleak_write() and
    kmemleak_seq_start(), with the release in kmemleak_seq_stop().

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • Since the leaks are no longer printed to the syslog, there is no point
    in keeping this limitation. All the suspected leaks are shown in the
    /sys/kernel/debug/kmemleak file.

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     

07 Jul, 2009

5 commits

  • Following a recent fix to no longer reschedule in the scan_block()
    function, the system may become unresponsive with !PREEMPT. This
    patch re-adds the cond_resched() call to scan_block(), but
    conditional on the allow_resched parameter.
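
    A hedged sketch of the interface change (the scanning body is
    elided):

    static void scan_block(void *start, void *end,
                           struct kmemleak_object *scanned,
                           int allow_resched)
    {
            if (allow_resched)
                    cond_resched();
            /* ... scan the pointers in [start, end) ... */
    }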

    Signed-off-by: Catalin Marinas
    Cc: Ingo Molnar

    Catalin Marinas
     
  • This is a long-running thread but not high-priority. So it makes sense
    to renice it to +10.
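
    In the scanning thread this amounts to something like:

    set_user_nice(current, 10);     /* lower priority by 10 */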

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
    SLAB: Fix lockdep annotations
    fix RCU-callback-after-kmem_cache_destroy problem in sl[aou]b

    Linus Torvalds
     
  • These warnings were observed on MIPS32 using 2.6.31-rc1 and gcc-4.2.0:

    mm/page_alloc.c: In function 'alloc_pages_exact':
    mm/page_alloc.c:1986: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast

    drivers/usb/mon/mon_bin.c: In function 'mon_alloc_buff':
    drivers/usb/mon/mon_bin.c:1264: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast
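
    The warnings go away with an explicit cast at the offending call
    sites, along these lines:

    - virt_to_phys(addr);
    + virt_to_phys((void *)addr);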

    [akpm@linux-foundation.org: fix kernel/perf_counter.c too]
    Signed-off-by: Kevin Cernekee
    Cc: Andi Kleen
    Cc: Ralf Baechle
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kevin Cernekee
     
  • In testing a backport of the write_begin/write_end AOPs, a 10% re-read
    regression was noticed when running iozone. This regression was
    introduced because the old AOPs would always do a
    mark_page_accessed(page) after the commit_write, but when the new
    AOPs were introduced, the only place this was kept was in
    pagecache_write_end().

    This patch does the same thing in the generic case as what is done in
    pagecache_write_end(), which is just to mark the page accessed before we
    do write_end().
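
    A hedged sketch of the generic path after the change (argument list
    abbreviated):

    mark_page_accessed(page);       /* restored, as in pagecache_write_end() */
    status = a_ops->write_end(file, mapping, pos, bytes, copied,
                              page, fsdata);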

    Signed-off-by: Josef Bacik
    Acked-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Josef Bacik
     

02 Jul, 2009

2 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
    sh: LCDC dcache flush for deferred io
    sh: Fix compiler error and include the definition of IS_ERR_VALUE
    sh: re-add LCDC fbdev support to the Migo-R defconfig
    sh: fix se7724 ceu names
    sh: ms7724se: Enable sh_eth in defconfig.
    arch/sh/boards/mach-se/7206/io.c: Remove unnecessary semicolons
    sh: ms7724se: Add sh_eth support
    nommu: provide follow_pfn().
    sh: Kill off unused DEBUG_BOOTMEM symbol.
    perf_counter tools: add cpu_relax()/rmb() definitions for sh.
    sh64: Hook up page fault events for software perf counters.
    sh: Hook up page fault events for software perf counters.
    sh: make set_perf_counter_pending() static inline.
    clocksource: sh_tmu: Make undefined TCOR behaviour less undefined.

    Linus Torvalds
     
  • One of the kmemleak changes caused the following
    scheduling-while-holding-the-tasklist-lock regression on x86:

    BUG: sleeping function called from invalid context at mm/kmemleak.c:795
    in_atomic(): 1, irqs_disabled(): 0, pid: 1737, name: kmemleak
    2 locks held by kmemleak/1737:
    #0: (scan_mutex){......}, at: [] kmemleak_scan_thread+0x45/0x86
    #1: (tasklist_lock){......}, at: [] kmemleak_scan+0x1a9/0x39c
    Pid: 1737, comm: kmemleak Not tainted 2.6.31-rc1-tip #59266
    Call Trace:
    [] ? __debug_show_held_locks+0x1e/0x20
    [] __might_sleep+0x10a/0x111
    [] scan_yield+0x17/0x3b
    [] scan_block+0x39/0xd4
    [] kmemleak_scan+0x1bb/0x39c
    [] ? kmemleak_scan_thread+0x0/0x86
    [] kmemleak_scan_thread+0x4a/0x86
    [] kthread+0x6e/0x73
    [] ? kthread+0x0/0x73
    [] kernel_thread_helper+0x7/0x10
    kmemleak: 834 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

    The bit causing it is highly dubious:

    static void scan_yield(void)
    {
            might_sleep();

            if (time_is_before_eq_jiffies(next_scan_yield)) {
                    schedule();
                    next_scan_yield = jiffies + jiffies_scan_yield;
            }
    }

    It is called deep inside the codepath and in a conditional way, and
    that is what broke when one of the new scan_block() uses grew a
    tasklist_lock dependency.

    This minimal patch removes that yielding stuff and adds the
    proper cond_resched().
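
    Conceptually, the change is:

    - scan_yield();
    + cond_resched();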

    The background scanning thread could probably also be reniced
    to +10.

    Signed-off-by: Ingo Molnar
    Acked-by: Pekka Enberg
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

01 Jul, 2009

4 commits

  • * 'kmemleak' of git://linux-arm.org/linux-2.6:
    kmemleak: Inform kmemleak about pid_hash
    kmemleak: Do not warn if an unknown object is freed
    kmemleak: Do not report new leaked objects if the scanning was stopped
    kmemleak: Slightly change the policy on newly allocated objects
    kmemleak: Do not trigger a scan when reading the debug/kmemleak file
    kmemleak: Simplify the reports logged by the scanning thread
    kmemleak: Enable task stacks scanning by default
    kmemleak: Allow the early log buffer to be configurable.

    Linus Torvalds
     
  • Nathan reported that

    | commit 73d60b7f747176dbdff826c4127d22e1fd3f9f74
    | Author: Yinghai Lu
    | Date: Tue Jun 16 15:33:00 2009 -0700
    |
    | page-allocator: clear N_HIGH_MEMORY map before we set it again
    |
    | SRAT tables may contains nodes of very small size. The arch code may
    | decide to not activate such a node. However, currently the early boot
    | code sets N_HIGH_MEMORY for such nodes. These nodes therefore seem to be
    | active although these nodes have no present pages.
    |
    | For 64bit N_HIGH_MEMORY == N_NORMAL_MEMORY, so that works for 64 bit too

    unintentionally and incorrectly clears the cpuset.mems cgroup
    attribute on an i386 kvm guest, meaning that cpuset.mems cannot be
    used.

    Fix this by clearing node_states[N_NORMAL_MEMORY] on 64-bit only,
    and by doing a save/restore of it in find_zone_movable_pfn.

    Reported-by: Nathan Lynch
    Tested-by: Nathan Lynch
    Signed-off-by: Yinghai Lu
    Cc: Christoph Lameter
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yinghai Lu
     
  • balance_dirty_pages can overreact and move all of the dirty pages to
    writeback unnecessarily.

    balance_dirty_pages makes its decision to throttle based on the
    number of dirty plus writeback pages that are over the calculated
    limit, so it will continue to move pages even when there are plenty
    of pages in writeback and fewer than the threshold still dirty.

    This allows it to overshoot its limits and move all the dirty pages to
    writeback while waiting for the drives to catch up and empty the writeback
    list.

    A simple fio test easily demonstrates this problem.

    fio --name=f1 --directory=/disk1 --size=2G --rw=write --name=f2 --directory=/disk2 --size=1G --rw=write --startdelay=10

    This is the simplest fix I could find, but I'm not entirely sure that it
    alone will be enough for all cases. But it certainly is an improvement on
    my desktop machine writing to 2 disks.

    Do we need something more for machines with large arrays where
    bdi_threshold * number_of_drives is greater than the dirty_ratio ?

    Signed-off-by: Richard Kennedy
    Acked-by: Peter Zijlstra
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Richard Kennedy
     
  • show_pools() walks the page_list of a pool without protection
    against the list modifications in alloc/free. Take pool->lock to
    avoid stomping into nirvana.
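
    A hedged sketch of the walk under the lock:

    spin_lock_irq(&pool->lock);
    list_for_each_entry(page, &pool->page_list, page_list)
            pages++;        /* or whatever show_pools() accumulates */
    spin_unlock_irq(&pool->lock);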

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Matthew Wilcox
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Gleixner
     

29 Jun, 2009

2 commits

  • Commit 8429db5... ("slab: setup cpu caches later on when interrupts are
    enabled") broke mm/slab.c lockdep annotations:

    [ 11.554715] =============================================
    [ 11.555249] [ INFO: possible recursive locking detected ]
    [ 11.555560] 2.6.31-rc1 #896
    [ 11.555861] ---------------------------------------------
    [ 11.556127] udevd/1899 is trying to acquire lock:
    [ 11.556436] (&nc->lock){-.-...}, at: [] kmem_cache_free+0xcd/0x25b
    [ 11.557101]
    [ 11.557102] but task is already holding lock:
    [ 11.557706] (&nc->lock){-.-...}, at: [] kfree+0x137/0x292
    [ 11.558109]
    [ 11.558109] other info that might help us debug this:
    [ 11.558720] 2 locks held by udevd/1899:
    [ 11.558983] #0: (&nc->lock){-.-...}, at: [] kfree+0x137/0x292
    [ 11.559734] #1: (&parent->list_lock){-.-...}, at: [] __drain_alien_cache+0x3b/0xbd
    [ 11.560442]
    [ 11.560443] stack backtrace:
    [ 11.561009] Pid: 1899, comm: udevd Not tainted 2.6.31-rc1 #896
    [ 11.561276] Call Trace:
    [ 11.561632] [] __lock_acquire+0x15ec/0x168f
    [ 11.561901] [] ? __lock_acquire+0x1676/0x168f
    [ 11.562171] [] ? trace_hardirqs_on_caller+0x113/0x13e
    [ 11.562490] [] ? trace_hardirqs_on_thunk+0x3a/0x3f
    [ 11.562807] [] lock_acquire+0xc1/0xe5
    [ 11.563073] [] ? kmem_cache_free+0xcd/0x25b
    [ 11.563385] [] _spin_lock+0x31/0x66
    [ 11.563696] [] ? kmem_cache_free+0xcd/0x25b
    [ 11.563964] [] kmem_cache_free+0xcd/0x25b
    [ 11.564235] [] ? __free_pages+0x1b/0x24
    [ 11.564551] [] slab_destroy+0x57/0x5c
    [ 11.564860] [] free_block+0xd8/0x123
    [ 11.565126] [] __drain_alien_cache+0xa2/0xbd
    [ 11.565441] [] kfree+0x14c/0x292
    [ 11.565752] [] skb_release_data+0xc6/0xcb
    [ 11.566020] [] __kfree_skb+0x19/0x86
    [ 11.566286] [] consume_skb+0x2b/0x2d
    [ 11.566631] [] skb_free_datagram+0x14/0x3a
    [ 11.566901] [] netlink_recvmsg+0x164/0x258
    [ 11.567170] [] sock_recvmsg+0xe5/0xfe
    [ 11.567486] [] ? might_fault+0xaf/0xb1
    [ 11.567802] [] ? autoremove_wake_function+0x0/0x38
    [ 11.568073] [] ? core_sys_select+0x3d/0x2b4
    [ 11.568378] [] ? __lock_acquire+0x1676/0x168f
    [ 11.568693] [] ? sockfd_lookup_light+0x1b/0x54
    [ 11.568961] [] sys_recvfrom+0xa3/0xf8
    [ 11.569228] [] ? trace_hardirqs_on+0xd/0xf
    [ 11.569546] [] system_call_fastpath+0x16/0x1b#

    Fix that up.

    Closes-bug: http://bugzilla.kernel.org/show_bug.cgi?id=13654
    Tested-by: Venkatesh Pallipadi
    Signed-off-by: Pekka Enberg

    Pekka Enberg
     
  • * 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, delay: tsc based udelay should have rdtsc_barrier
    x86, setup: correct include file in <asm/boot.h>
    x86, setup: Fix typo "CONFIG_x86_64" in <asm/boot.h>
    x86, mce: percpu mcheck_timer should be pinned
    x86: Add sysctl to allow panic on IOCK NMI error
    x86: Fix uv bau sending buffer initialization
    x86, mce: Fix mce resume on 32bit
    x86: Move init_gbpages() to setup_arch()
    x86: ensure percpu lpage doesn't consume too much vmalloc space
    x86: implement percpu_alloc kernel parameter
    x86: fix pageattr handling for lpage percpu allocator and re-enable it
    x86: reorganize cpa_process_alias()
    x86: prepare setup_pcpu_lpage() for pageattr fix
    x86: rename remap percpu first chunk allocator to lpage
    x86: fix duplicate free in setup_pcpu_remap() failure path
    percpu: fix too lazy vunmap cache flushing
    x86: Set cpu_llc_id on AMD CPUs

    Linus Torvalds
     

26 Jun, 2009

2 commits

  • Jesper noted that kmem_cache_destroy() invokes synchronize_rcu() rather than
    rcu_barrier() in the SLAB_DESTROY_BY_RCU case, which could result in RCU
    callbacks accessing a kmem_cache after it had been destroyed.
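
    A hedged sketch of the fix in kmem_cache_destroy() (the exact guard
    differs per allocator):

    if (unlikely(cachep->flags & SLAB_DESTROY_BY_RCU))
            rcu_barrier();  /* wait for in-flight callbacks, not just a grace period */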

    Cc:
    Acked-by: Matt Mackall
    Reported-by: Jesper Dangaard Brouer
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Pekka Enberg

    Paul E. McKenney
     
  • With the introduction of follow_pfn() as an exported symbol, modules have
    begun making use of it. Unfortunately this was not reflected on nommu at
    the time, so the in-tree users have subsequently all blown up with link
    errors there.

    This provides a simple follow_pfn() that just returns addr >> PAGE_SHIFT,
    which will do the right thing on nommu. There is no need to do range
    checking within the vma, as the find_vma() case will already take care of
    this.
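
    A minimal sketch of the nommu variant described above (the real one
    may also sanity-check vma->vm_flags):

    int follow_pfn(struct vm_area_struct *vma, unsigned long address,
                   unsigned long *pfn)
    {
            *pfn = address >> PAGE_SHIFT;
            return 0;
    }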

    Signed-off-by: Paul Mundt

    Paul Mundt