12 Sep, 2009

2 commits

  • * 'writeback' of git://git.kernel.dk/linux-2.6-block:
    writeback: check for registered bdi in flusher add and inode dirty
    writeback: add name to backing_dev_info
    writeback: add some debug inode list counters to bdi stats
    writeback: get rid of pdflush completely
    writeback: switch to per-bdi threads for flushing data
    writeback: move dirty inodes from super_block to backing_dev_info
    writeback: get rid of generic_sync_sb_inodes() export

    Linus Torvalds
     
  • * 'kmemleak' of git://linux-arm.org/linux-2.6:
    kmemleak: Improve the "Early log buffer exceeded" error message
    kmemleak: fix sparse warning for static declarations
    kmemleak: fix sparse warning over overshadowed flags
    kmemleak: move common painting code together
    kmemleak: add clear command support
    kmemleak: use bool for true/false questions
    kmemleak: Do not create the clean-up thread during kmemleak_disable()
    kmemleak: Scan all thread stacks
    kmemleak: Don't scan uninitialized memory when kmemcheck is enabled
    kmemleak: Ignore the aperture memory hole on x86_64
    kmemleak: Printing of the objects hex dump
    kmemleak: Do not report alloc_bootmem blocks as leaks
    kmemleak: Save the stack trace for early allocations
    kmemleak: Mark the early log buffer as __initdata
    kmemleak: Dump object information on request
    kmemleak: Allow rescheduling during an object scanning

    Linus Torvalds
     

11 Sep, 2009

7 commits

  • Based on a suggestion from Jaswinder, clarify what the user would need
    to do to avoid this error message from kmemleak.

    Reported-by: Jaswinder Singh Rajput
    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • Also a debugging aid. We want to catch dirty inodes being added to
    backing devices that don't do writeback.
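
    A minimal sketch of the idea (the BDI_registered bit and the message
    text are recalled from this series and may differ in detail):

        /* in __mark_inode_dirty(): the inode's bdi claims writeback
         * capability but was never registered, so no flusher thread
         * will ever see this inode */
        struct backing_dev_info *bdi = inode->i_mapping->backing_dev_info;

        WARN(bdi_cap_writeback_dirty(bdi) &&
             !test_bit(BDI_registered, &bdi->state),
             "bdi-%s not registered\n", bdi->name);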

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This enables us to track who does what and print info. Its main use
    is catching dirty inodes on the default_backing_dev_info, so we can
    fix that up.
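
    A sketch of the shape of the change (fields and values recalled from
    memory): each bdi carries a printable identifier that debug and
    warning output can use.

        struct backing_dev_info {
                const char *name;       /* printable id, e.g. "default" */
                /* ... existing fields ... */
        };

        struct backing_dev_info default_backing_dev_info = {
                .name           = "default",
                .ra_pages       = VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
                .capabilities   = BDI_CAP_MAP_COPY,
        };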

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Add some debug entries to be able to inspect the internal state of
    the writeback details.
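
    Roughly (a sketch from the commit summary, not the literal patch),
    the debugfs stats file walks the bdi's inode lists and prints their
    lengths:

        static int bdi_debug_stats_show(struct seq_file *m, void *v)
        {
                struct backing_dev_info *bdi = m->private;
                unsigned long nr_dirty = 0, nr_io = 0;
                struct inode *inode;

                spin_lock(&inode_lock);
                list_for_each_entry(inode, &bdi->wb.b_dirty, i_list)
                        nr_dirty++;
                list_for_each_entry(inode, &bdi->wb.b_io, i_list)
                        nr_io++;
                spin_unlock(&inode_lock);

                seq_printf(m, "b_dirty: %lu\nb_io: %lu\n", nr_dirty, nr_io);
                return 0;
        }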

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • It is now unused, so kill it off.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This gets rid of pdflush for bdi writeout and kupdated-style
    cleaning. pdflush writeout suffers from a lack of locality and also
    requires more threads to handle the same workload, since it has to
    work in a non-blocking fashion against each queue. This also
    introduces lumpy behaviour and potential request starvation, since
    pdflush can be starved for queue access if others are accessing it.
    A sample ffsb workload that does random writes to files is about 8%
    faster here on a simple SATA drive during the benchmark phase. File
    layout also looks a lot smoother in vmstat:

    r b swpd   free buff  cache si so bi     bo  in  cs us sy id wa
    0 1    0 608848 2652 375372  0  0  0  71024 604  24  1 10 48 42
    0 1    0 549644 2712 433736  0  0  0  60692 505  27  1  8 48 44
    1 0    0 476928 2784 505192  0  0  4  29540 553  24  0  9 53 37
    0 1    0 457972 2808 524008  0  0  0  54876 331  16  0  4 38 58
    0 1    0 366128 2928 614284  0  0  4  92168 710  58  0 13 53 34
    0 1    0 295092 3000 684140  0  0  0  62924 572  23  0  9 53 37
    0 1    0 236592 3064 741704  0  0  4  58256 523  17  0  8 48 44
    0 1    0 165608 3132 811464  0  0  0  57460 560  21  0  8 54 38
    0 1    0 102952 3200 873164  0  0  4  74748 540  29  1 10 48 41
    0 1    0  48604 3252 926472  0  0  0  53248 469  29  0  7 47 45

    where vanilla tends to fluctuate a lot in the creation phase:

    r b swpd   free buff  cache si so bi     bo  in  cs us sy id wa
    1 1    0 678716 5792 303380  0  0  0  74064 565  50  1 11 52 36
    1 0    0 662488 5864 319396  0  0  4    352 302 329  0  2 47 51
    0 1    0 599312 5924 381468  0  0  0  78164 516  55  0  9 51 40
    0 1    0 519952 6008 459516  0  0  4  78156 622  56  1 11 52 37
    1 1    0 436640 6092 541632  0  0  0  82244 622  54  0 11 48 41
    0 1    0 436640 6092 541660  0  0  0      8 152  39  0  0 51 49
    0 1    0 332224 6200 644252  0  0  4 102800 728  46  1 13 49 36
    1 0    0 274492 6260 701056  0  0  4  12328 459  49  0  7 50 43
    0 1    0 211220 6324 763356  0  0  0 106940 515  37  1 10 51 39
    1 0    0 160412 6376 813468  0  0  0   8224 415  43  0  6 49 45
    1 1    0  85980 6452 886556  0  0  4 113516 575  39  1 11 54 34
    0 2    0  85968 6452 886620  0  0  0   1640 158 211  0  0 46 54

    A 10 disk test with btrfs performs 26% faster with per-bdi flushing.
    An SSD based writeback test on XFS performs over 20% better as well,
    with the throughput being very stable around 1GB/sec, where pdflush
    only manages 750MB/sec and fluctuates wildly while doing so. Random
    buffered writes to many files behave a lot better as well, as do
    random mmap'ed writes.

    A separate thread is added to sync the super blocks. In the long term,
    adding sync_supers_bdi() functionality could get rid of this thread again.
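
    For reference, the core of the change in rough outline (function and
    field names are simplified from the patch): every registered bdi now
    runs a flusher task of this shape, servicing only that device's
    dirty inodes.

        static int bdi_writeback_task(void *data)
        {
                struct backing_dev_info *bdi = data;

                while (!kthread_should_stop()) {
                        /* write back dirty/expired inodes of this bdi only */
                        wb_do_writeback(&bdi->wb, 0);

                        /* sleep until the next periodic kupdated-style
                         * pass or an explicit wakeup (e.g. from sync) */
                        schedule_timeout_interruptible(
                                msecs_to_jiffies(dirty_writeback_interval * 10));
                }
                return 0;
        }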

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This is a first step toward introducing per-bdi flusher threads.
    There should be no change in behaviour, although
    sb_has_dirty_inodes() is now ridiculously expensive, as there's no
    longer an easy way to answer that question. That's not a huge
    problem, since it'll be deleted in subsequent patches.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

08 Sep, 2009

3 commits

  • In an ideal world your kmemleak output will be small; when it's not
    (usually during initial bootup) you can use the clear command to
    ignore previously reported and unreferenced kmemleak objects. We do
    this by painting all currently reported unreferenced objects grey.
    We paint them grey instead of black to allow future scans of the
    same objects, as such objects could still potentially reference
    newly allocated objects in the future.

    To test a critical section on demand with a clean
    /sys/kernel/debug/kmemleak you can do:

    echo clear > /sys/kernel/debug/kmemleak
    test your kernel or modules
    echo scan > /sys/kernel/debug/kmemleak

    Then, as usual, get your report with:

    cat /sys/kernel/debug/kmemleak

    Signed-off-by: Luis R. Rodriguez
    Signed-off-by: Catalin Marinas

    Luis R. Rodriguez
     
  • Acked-by: Pekka Enberg
    Signed-off-by: Luis R. Rodriguez
    Signed-off-by: Catalin Marinas

    Luis R. Rodriguez
     
  • The kmemleak_disable() function could be called from various contexts
    including IRQ. It creates a clean-up thread but the kthread_create()
    function has restrictions on which contexts it can be called from,
    mainly because of the kthread_create_lock. The patch changes the
    kmemleak clean-up thread to a workqueue.
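
    A sketch of the new arrangement; unlike kthread_create(),
    schedule_work() is safe to call from IRQ context:

        static void kmemleak_do_cleanup(struct work_struct *work)
        {
                /* runs later in process context; frees kmemleak metadata */
        }
        static DECLARE_WORK(cleanup_work, kmemleak_do_cleanup);

        static void kmemleak_disable(void)
        {
                /* may be called from IRQ context; just queue the work */
                schedule_work(&cleanup_work);
        }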

    Signed-off-by: Catalin Marinas
    Reported-by: Eric Paris

    Catalin Marinas
     

06 Sep, 2009

4 commits

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: don't assume existence of cpu0

    Linus Torvalds
     
  • * 'slab/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
    slub: Fix kmem_cache_destroy() with SLAB_DESTROY_BY_RCU

    Linus Torvalds
     
  • On low-memory systems, anti-fragmentation gets disabled as fragmentation
    cannot be avoided on a sufficiently large boundary to be worthwhile. Once
    disabled, there is a period of time when all the pageblocks are marked
    MOVABLE and the expectation is that they get marked UNMOVABLE at each call
    to __rmqueue_fallback().

    However, when MAX_ORDER is large the pageblocks do not change ownership
    because the normal criteria are not met. This has the effect of
    prematurely breaking up too many large contiguous blocks. This is most
    serious on NOMMU systems which depend on high-order allocations to boot.
    This patch causes pageblocks to change ownership on every fallback when
    anti-fragmentation is disabled. This prevents the large blocks being
    prematurely broken up.
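
    The essence of the change, sketched (surrounding code omitted): in
    __rmqueue_fallback(), take ownership of the pageblock on every
    fallback when anti-fragmentation is disabled.

        /* when grouping by mobility is disabled, always claim the
         * pageblock instead of applying the usual size/type criteria */
        if (unlikely(page_group_by_mobility_disabled))
                set_pageblock_migratetype(page, start_migratetype);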

    This is a fix to commit 49255c619fbd482d704289b5eb2795f8e3b7ff2e [page
    allocator: move check for disabled anti-fragmentation out of fastpath] and
    the problem affects 2.6.31-rc8.

    Signed-off-by: Mel Gorman
    Tested-by: Paul Mundt
    Cc: David Howells
    Cc: Pekka Enberg
    Acked-by: Greg Ungerer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Fix the error handling in do_mmap_pgoff(). If do_mmap_shared_file()
    or do_mmap_private() fails, we jump to the error_put_region label,
    at which point we call __put_nommu_region() on the region - but we
    haven't yet added the region to the tree, so __put_nommu_region()
    may BUG because the region tree is empty, or it may corrupt the
    region tree.

    To get around this, we can afford to add the region to the region tree
    before calling do_mmap_shared_file() or do_mmap_private() as we keep
    nommu_region_sem write-locked, so no-one can race with us by seeing a
    transient region.
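
    Sketched against do_mmap_pgoff() (signatures and the label name are
    paraphrased):

        add_nommu_region(region);       /* publish first; safe because
                                           nommu_region_sem is held for
                                           write, so nobody can observe
                                           the transient region */

        if (file && vma->vm_flags & VM_SHARED)
                ret = do_mmap_shared_file(vma);
        else
                ret = do_mmap_private(vma, region, len);
        if (ret < 0)
                goto error_just_free;   /* region is already in the tree,
                                           so normal teardown is valid */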

    Signed-off-by: David Howells
    Acked-by: Pekka Enberg
    Acked-by: Paul Mundt
    Cc: Mel Gorman
    Acked-by: Greg Ungerer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     

04 Sep, 2009

3 commits

  • This patch replaces the for_each_process() loop with the
    do_each_thread()/while_each_thread() pair so that all thread stacks
    are scanned, not just those of the group leaders.
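
    Sketched out (scan_block()'s signature is paraphrased from the
    kmemleak code of this period):

        struct task_struct *g, *t;

        read_lock(&tasklist_lock);
        do_each_thread(g, t) {
                /* scan each thread's kernel stack, not only the
                 * group leader's */
                scan_block(task_stack_page(t),
                           task_stack_page(t) + THREAD_SIZE, NULL, 0);
        } while_each_thread(g, t);
        read_unlock(&tasklist_lock);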

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • Ingo Molnar reported the following kmemcheck warning when running both
    kmemleak and kmemcheck enabled:

    PM: Adding info for No Bus:vcsa7
    WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (f6f6e1a4)
    d873f9f600000000c42ae4c1005c87f70000000070665f666978656400000000
    i i i i u u u u i i i i i i i i i i i i i i i i i i i i i u u u
    ^

    Pid: 3091, comm: kmemleak Not tainted (2.6.31-rc7-tip #1303) P4DC6
    EIP: 0060:[] EFLAGS: 00010006 CPU: 0
    EIP is at scan_block+0x3f/0xe0
    EAX: f40bd700 EBX: f40bd780 ECX: f16b46c0 EDX: 00000001
    ESI: f6f6e1a4 EDI: 00000000 EBP: f10f3f4c ESP: c2605fcc
    DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
    CR0: 8005003b CR2: e89a4844 CR3: 30ff1000 CR4: 000006f0
    DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
    DR6: ffff4ff0 DR7: 00000400
    [] scan_object+0x7c/0xf0
    [] kmemleak_scan+0x1d9/0x400
    [] kmemleak_scan_thread+0x4c/0xb0
    [] kthread+0x74/0x80
    [] kernel_thread_helper+0x7/0x3c
    [] 0xffffffff
    kmemleak: 515 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
    kmemleak: 42 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

    The problem here is that kmemleak scans partially initialized
    objects, which makes kmemcheck complain. Fix that up by skipping
    uninitialized memory regions when kmemcheck is enabled.
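
    The fix in sketch form: before following a word during the scan, ask
    kmemcheck whether the location is initialized
    (kmemcheck_is_obj_initialized() is the helper used for this) and
    skip it if not.

        /* in scan_block(start, end, ...) */
        unsigned long *ptr;

        for (ptr = start; ptr < end; ptr++) {
                /* skip words kmemcheck still considers uninitialized
                 * to avoid false-positive warnings */
                if (!kmemcheck_is_obj_initialized((unsigned long)ptr,
                                                  sizeof(*ptr)))
                        continue;
                /* ... treat *ptr as a candidate pointer as usual ... */
        }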

    Reported-by: Ingo Molnar
    Acked-by: Ingo Molnar
    Acked-by: Catalin Marinas
    Signed-off-by: Pekka Enberg

    Pekka Enberg
     
  • kmem_cache_destroy() should call rcu_barrier() *after* kmem_cache_close() and
    *before* sysfs_slab_remove() or risk rcu_free_slab() being called after
    kmem_cache is deleted (kfreed).

    rmmod nf_conntrack can crash the machine because it has to kmem_cache_destroy()
    a SLAB_DESTROY_BY_RCU enabled cache.
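
    The ordering, sketched against kmem_cache_destroy() (helper names
    recalled from slub.c):

        void kmem_cache_destroy(struct kmem_cache *s)
        {
                /* ... drop refcount, unlink from the slab list ... */
                kmem_cache_close(s);            /* no further frees queued */
                if (s->flags & SLAB_DESTROY_BY_RCU)
                        rcu_barrier();          /* drain rcu_free_slab() */
                sysfs_slab_remove(s);           /* may free s via release */
        }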

    Cc:
    Reported-by: Zdenek Kabelac
    Signed-off-by: Eric Dumazet
    Acked-by: Paul E. McKenney
    Signed-off-by: Pekka Enberg

    Eric Dumazet
     

01 Sep, 2009

1 commit

  • percpu incorrectly assumed that cpu0 was always there, which led to
    the following warning and eventual oops on sparc machines without
    cpu0.

    WARNING: at mm/percpu.c:651 pcpu_map+0xdc/0x100()
    Modules linked in:
    Call Trace:
    [000000000045eb70] warn_slowpath_common+0x50/0xa0
    [000000000045ebdc] warn_slowpath_null+0x1c/0x40
    [00000000004d493c] pcpu_map+0xdc/0x100
    [00000000004d59a4] pcpu_alloc+0x3e4/0x4e0
    [00000000004d5af8] __alloc_percpu+0x18/0x40
    [00000000005b112c] __percpu_counter_init+0x4c/0xc0
    ...
    Unable to handle kernel NULL pointer dereference
    ...
    I7:
    Disabling lock debugging due to kernel taint
    Caller[000000000053c1b0]: sysfs_new_dirent+0x30/0x120
    Caller[000000000053c7a4]: create_dir+0x24/0xc0
    Caller[000000000053c870]: sysfs_create_dir+0x30/0x80
    Caller[00000000005990e8]: kobject_add_internal+0xc8/0x200
    ...
    Kernel panic - not syncing: Attempted to kill the idle task!

    This patch fixes the problem by backporting parts from the devel
    branch so that the percpu core does not depend on the existence of
    cpu0.

    Signed-off-by: Tejun Heo
    Reported-by: Meelis Roos
    Cc: David Miller

    Tejun Heo
     

27 Aug, 2009

7 commits

  • Introduce printing of the object's hex dump to the seq file. The
    number of lines printed is limited to HEX_MAX_LINES to prevent seq
    file spamming. The actual number of printed bytes is less than or
    equal to (HEX_MAX_LINES * HEX_ROW_SIZE).

    (slight adjustments by Catalin Marinas)

    Signed-off-by: Sergey Senozhatsky
    Signed-off-by: Catalin Marinas

    Sergey Senozhatsky
     
  • This patch sets the min_count for alloc_bootmem objects to 0 so that
    they are never reported as leaks. This is because many of these
    blocks are only referred to via the physical address, which kmemleak
    does not look up.
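
    kmemleak's min_count argument expresses this directly; a sketch of
    what the bootmem call site amounts to:

        /* min_count == 0: track the block (it may contain pointers to
         * other objects) but never report it as a leak itself */
        kmemleak_alloc(ptr, size, 0, 0);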

    Signed-off-by: Catalin Marinas
    Cc: Pekka Enberg

    Catalin Marinas
     
  • Before slab is initialised, kmemleak saves allocations in an early
    log buffer; they are later recorded as normal memory allocations.
    This patch adds stack trace saving to the early log buffer as well;
    otherwise the information shown for such objects refers only to the
    kmemleak_init() function.

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • This buffer isn't needed after kmemleak has been initialised, so it
    can be freed together with the .init.data section. This patch also
    marks functions that conditionally access the early log variables
    with __ref.

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • By writing dump=<address> to the kmemleak file, kmemleak will look
    up an object with that address and dump the information it has about
    it to syslog. This is useful when debugging memory leaks.
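
    A sketch of the command parsing (the helper name below is
    hypothetical):

        /* in kmemleak_write(), alongside "scan", "clear", etc. */
        if (strncmp(buf, "dump=", 5) == 0) {
                unsigned long addr;

                if (strict_strtoul(buf + 5, 0, &addr))
                        return -EINVAL;
                dump_object_info_at(addr);      /* hypothetical helper:
                                                   find the object and
                                                   print it to syslog */
        }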

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • If the object size is bigger than a predefined value (4K in this case),
    release the object lock during scanning and call cond_resched().
    Re-acquire the lock after rescheduling and test whether the object is
    still valid.
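
    In outline, the scanning becomes chunked (MAX_SCAN_SIZE is the 4K
    threshold mentioned above):

        /* in scan_object(), with object->lock held */
        void *start = object->pointer;
        void *end = start + object->size;

        while (start < end) {
                void *next = min(start + MAX_SCAN_SIZE, end);

                scan_block(start, next, object, 0);
                start = next;
                if (start >= end)
                        break;

                /* drop the lock, yield, then re-validate the object */
                spin_unlock_irqrestore(&object->lock, flags);
                cond_resched();
                spin_lock_irqsave(&object->lock, flags);
                if (!(object->flags & OBJECT_ALLOCATED))
                        break;
        }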

    Signed-off-by: Catalin Marinas

    Catalin Marinas
     
  • An mlocked page might lose the isolation race. This causes the page
    to clear PG_mlocked while it remains in a VM_LOCKED vma, which means
    it can be put onto the [in]active list. We can rescue it by using
    try_to_unmap() in shrink_page_list().

    But, as Wu Fengguang pointed out, vmscan has a bug: if the page has
    PG_referenced, it never reaches try_to_unmap() in shrink_page_list()
    and is put onto the active list instead. If the page is referenced
    repeatedly, it can remain on the [in]active list without ever being
    moved to the unevictable list.

    This patch fixes it.

    Reported-by: Wu Fengguang
    Signed-off-by: Minchan Kim
    Reviewed-by: KOSAKI Motohiro
    Cc: Lee Schermerhorn
    Acked-by: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     

19 Aug, 2009

4 commits

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: use the right flag for get_vm_area()
    percpu, sparc64: fix sparse possible cpu map handling
    init: set nr_cpu_ids before setup_per_cpu_areas()

    Linus Torvalds
     
  • If node_load[] is cleared every time build_zonelists() is called,
    node_load[] is of no help in finding the next node that should
    appear in the given node's fallback list.

    Because of this bug, a zonelist's node_order is not calculated as
    expected. The bug shows up on big machines with asymmetric node
    distances.

    [symmetric NUMA node distances]
           0   1   2
       0  10  12  12
       1  12  10  12
       2  12  12  10

    [asymmetric NUMA node distances]
           0   1   2
       0  10  12  20
       1  12  10  14
       2  20  14  10

    This bug is very old, but no one reported it for a long time,
    probably because the number of asymmetric NUMA machines is very
    small and they use cpusets to customize the node memory allocation
    fallback.

    [akpm@linux-foundation.org: fix CONFIG_NUMA=n build]
    Signed-off-by: Bo Liu
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Christoph Lameter
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bo Liu
     
  • According to POSIX (1003.1-2008), the file descriptor shall have
    been opened with read permission, regardless of the protection
    options specified to mmap(). The LTP test cases mmap06/07 need this.
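
    The check itself is small; a sketch of what gets added to the nommu
    mmap validation path:

        /* POSIX: the fd must have been opened with read permission,
         * whatever protections the mapping asks for */
        if (!(file->f_mode & FMODE_READ))
                return -EACCES;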

    Signed-off-by: Graff Yang
    Acked-by: Paul Mundt
    Signed-off-by: David Howells
    Acked-by: Greg Ungerer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Graff Yang
     
  • The commit 2ff05b2b (oom: move oom_adj value) moved the oom_adj
    value into the mm_struct. It was a good first step toward sanitizing
    the OOM logic.

    However, Paul Menage reported that the commit causes a regression in
    his job scheduler: the current OOM logic can kill an OOM_DISABLED
    process. Why? His program has code similar to the following:

        ...
        set_oom_adj(OOM_DISABLE);   /* the job scheduler is never
                                       killed by the oom killer */
        ...
        if (vfork() == 0) {
                set_oom_adj(0);     /* the invoked child can be killed */
                execve("foo-bar-cmd");
        }
        ...

    A vfork() parent and child share the same mm_struct, so the
    set_oom_adj(0) above doesn't only change oom_adj for the vfork()
    child; it also changes it for the vfork() parent. The parent (the
    job scheduler) thus lost its OOM immunity and was killed.

    The fork-setting-exec idiom is used very frequently in userland
    programs; we must not break this assumption. This patch therefore
    reverts commit 2ff05b2b and the related commits.

    Reverted commit list
    ---------------------
    - commit 2ff05b2b4e (oom: move oom_adj value from task_struct to mm_struct)
    - commit 4d8b9135c3 (oom: avoid unnecessary mm locking and scanning for OOM_DISABLE)
    - commit 8123681022 (oom: only oom kill exiting tasks with attached memory)
    - commit 933b787b57 (mm: copy over oom_adj value at fork time)

    Signed-off-by: KOSAKI Motohiro
    Cc: Paul Menage
    Cc: David Rientjes
    Cc: KAMEZAWA Hiroyuki
    Cc: Rik van Riel
    Cc: Linus Torvalds
    Cc: Oleg Nesterov
    Cc: Nick Piggin
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     

17 Aug, 2009

1 commit

  • Currently, SELinux enforcement of controls on the ability to map low
    memory is determined by the mmap_min_addr tunable. This patch causes
    SELinux to ignore the tunable and instead use a separate Kconfig
    option specific to how much space the LSM should protect.

    The tunable will now only control the need for CAP_SYS_RAWIO and SELinux
    permissions will always protect the amount of low memory designated by
    CONFIG_LSM_MMAP_MIN_ADDR.

    This allows users who need to disable the mmap_min_addr controls
    (the usual reason being that they run WINE as a non-root user) to do
    so, while still having SELinux controls that prevent confined
    domains (like a web server) from mapping low memory.

    Signed-off-by: Eric Paris
    Signed-off-by: James Morris

    Eric Paris
     

14 Aug, 2009

2 commits

  • get_vm_area() only accepts VM_* flags, not GFP_*, and according to
    the documentation of get_vm_area(), the flag here should be
    VM_ALLOC.
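
    In other words (the variable is illustrative):

        struct vm_struct *area;

        area = get_vm_area(size, GFP_KERNEL);   /* wrong: GFP_* flag */
        area = get_vm_area(size, VM_ALLOC);     /* right: VM_* flag */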

    Signed-off-by: WANG Cong
    Acked-by: Tejun Heo
    Cc: Ingo Molnar

    Amerigo Wang
     
  • percpu code has been assuming num_possible_cpus() == nr_cpu_ids,
    which is incorrect if cpu_possible_map contains holes. This causes
    the percpu code to access beyond the allocated memory and vmalloc
    areas. On a sparc64 machine with cpus 0 and 2 (u60), this triggers
    the following warning or fails to boot.

    WARNING: at /devel/tj/os/work/mm/vmalloc.c:106 vmap_page_range_noflush+0x1f0/0x240()
    Modules linked in:
    Call Trace:
    [00000000004b17d0] vmap_page_range_noflush+0x1f0/0x240
    [00000000004b1840] map_vm_area+0x20/0x60
    [00000000004b1950] __vmalloc_area_node+0xd0/0x160
    [0000000000593434] deflate_init+0x14/0xe0
    [0000000000583b94] __crypto_alloc_tfm+0xd4/0x1e0
    [00000000005844f0] crypto_alloc_base+0x50/0xa0
    [000000000058b898] alg_test_comp+0x18/0x80
    [000000000058dad4] alg_test+0x54/0x180
    [000000000058af00] cryptomgr_test+0x40/0x60
    [0000000000473098] kthread+0x58/0x80
    [000000000042b590] kernel_thread+0x30/0x60
    [0000000000472fd0] kthreadd+0xf0/0x160
    ---[ end trace 429b268a213317ba ]---

    This patch fixes the generic percpu functions and the sparc64
    setup_per_cpu_areas() so that they handle a sparse cpu_possible_map
    properly.

    Please note that on x86, cpu_possible_map doesn't contain holes and
    thus num_possible_cpus() == nr_cpu_ids, so this patch doesn't cause
    any behavior difference there.
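
    To make the failure concrete (the use() helper below is
    hypothetical): with possible cpus {0, 2}, num_possible_cpus() is 2
    while valid cpu ids run up to 2, so any loop or array that assumes
    dense cpu ids breaks.

        /* wrong: assumes cpu ids are dense; visits impossible cpu 1
         * and never reaches cpu 2 */
        for (cpu = 0; cpu < num_possible_cpus(); cpu++)
                use(per_cpu_ptr(base, cpu));

        /* right: walk the real possible map; size arrays by nr_cpu_ids */
        for_each_possible_cpu(cpu)
                use(per_cpu_ptr(base, cpu));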

    Signed-off-by: Tejun Heo
    Acked-by: David S. Miller
    Cc: Ingo Molnar

    Tejun Heo
     

10 Aug, 2009

1 commit

  • Clean up a double type-cast. "size_t" is typedef'd as "unsigned
    long" on 64-bit systems and as "unsigned int" on 32-bit systems;
    the intermediate cast to 'long' is pointless.
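
    For illustration (variable names hypothetical), the change has this
    shape:

        size_t len;

        len = (size_t)(long)nbytes;     /* before: pointless middle cast */
        len = (size_t)nbytes;           /* after */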

    Signed-off-by: Figo.zhang
    Signed-off-by: Linus Torvalds

    Figo.zhang
     

08 Aug, 2009

1 commit

  • At first, init_task's mems_allowed is initialized as:

        init_task->mems_allowed == node_state[N_POSSIBLE]

    and cpuset's top_cpuset mask is initialized as:

        top_cpuset->mems_allowed = node_state[N_HIGH_MEMORY]

    Before 2.6.29, a policy's mems_allowed was initialized like this:

    1. Update task->mems_allowed from its cpuset->mems_allowed.
    2. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)

    The task's mems_allowed was updated with reference to top_cpuset's,
    and a cpuset's mems_allowed is always aware of N_HIGH_MEMORY.

    In 2.6.30, after commit 58568d2a8215cb6f55caf2332017d7bdff954e1c
    ("cpuset,mm: update tasks' mems_allowed in time"), a policy's
    mems_allowed is initialized like this:

    1. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)

    Here, if the task is in top_cpuset, task->mems_allowed is never
    updated from init's. Assume the user executes a command such as
    "numactl --interleave=all ...". Then:

        policy->mems_allowed = nodes_and(N_POSSIBLE, ALL_SET_MASK)

    and the policy's mems_allowed can include a possible node that has
    no pgdat: MPOL_INTERLEAVE just scans the nodemask of
    task->mems_allowed and directly accesses NODE_DATA(nid)->zonelist,
    even if NODE_DATA(nid) == NULL.

    What we need is to make policy->mems_allowed aware of N_HIGH_MEMORY,
    and this patch does that. Doing so, however, puts an extra nodemask
    on the stack; because cpumask has the new CPUMASK_ALLOC() interface,
    I added an equivalent for nodemasks.

    This patch restores the old behaviour, but the fix itself is just a
    band-aid. A fundamental fix has to take care of memory hotplug, and
    that takes time. (task->mems_allowed should be N_HIGH_MEMORY, I
    think.)

    mpol_set_nodemask() should be aware of N_HIGH_MEMORY, and a policy's
    nodemask should include only online nodes. In the old behaviour this
    was guaranteed by frequent references into the cpuset code; now most
    of those are gone and mempolicy has to check it by itself.

    To do that check, a few nodemask_t variables are needed for
    calculating the nodemask, but nodemask_t can be large and allocating
    it on the stack is not good. cpumask_t already has CPUMASK_ALLOC/FREE
    as an easy way to get scratch space; NODEMASK_ALLOC/FREE should
    exist as well.
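
    A sketch of that helper pair, modeled on CPUMASK_ALLOC (the exact
    macro shape is recalled from memory and may differ):

        /* allocate scratch nodemasks on the heap rather than the stack */
        #define NODEMASK_ALLOC(x, m)  struct x *m = kmalloc(sizeof(*m), GFP_KERNEL)
        #define NODEMASK_FREE(m)      kfree(m)

        /* usage, e.g. in mpol_set_nodemask(): */
        NODEMASK_ALLOC(nodemask_scratch, scratch);
        if (!scratch)
                return -ENOMEM;
        /* ... compute the effective nodemask in *scratch ... */
        NODEMASK_FREE(scratch);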

    [akpm@linux-foundation.org: cleanups & tweaks]
    Tested-by: KOSAKI Motohiro
    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Miao Xie
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Christoph Lameter
    Cc: Paul Menage
    Cc: Nick Piggin
    Cc: Yasunori Goto
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki