28 Apr, 2008

40 commits

  • To make the memmap easier to free, this patch aligns it to the page
    size. The bootmem allocator may mix several objects into a single
    page, which is a problem when freeing the memmap on memory
    hot-remove.

    Signed-off-by: Yasunori Goto
    Cc: Badari Pulavarty
    Cc: Yinghai Lu
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yasunori Goto
     
  • This patch set frees pages which were allocated by bootmem for
    memory hot-remove. Some memory management structures, e.g. the
    memmap, are allocated by bootmem.

    To remove memory physically, some of them must be freed, depending
    on circumstances. This patch set builds the basis for freeing those
    pages, and frees the memmaps.

    My basic idea is to use the remaining members of struct page to
    record information about the user of the bootmem memory (section
    number or node id). When a section is being removed, the kernel can
    check this information. It solves several issues:

    1) When the memmap of a section being removed was allocated by
    bootmem on another section, it should and can be freed.
    2) When the memmap of a section being removed was allocated on the
    same section, it must not be freed, because the section has already
    been logically offlined and all of its pages isolated from the page
    allocator. If the memmap were freed, the page allocator might hand
    out pages that will soon be removed physically.
    3) When a section being removed holds another section's memmap, the
    kernel will be able to easily show the user which section should be
    removed first. (Not implemented yet.)
    4) In case 2) above, page isolation will be able to recognize and
    skip the memmap's pages during logical memory offline
    (offline_pages()). The current page isolation code fails in this
    case because the page is just a reserved page and it cannot tell
    whether the page may be removed or not. This patch makes that
    possible. (Not implemented yet.)
    5) Node information such as pgdat has similar issues, which can also
    be solved this way. (Not implemented yet, but the node id is
    recorded in the pages.)

    Fortunately, the current bootmem allocator only sets the
    PageReserved flag and does not use any other members of struct page;
    neither do the users of bootmem.

    This patch:

    This patch records which node or section id the pages allocated by
    bootmem belong to, so the kernel can distinguish which node/section
    uses them. This is the basis for hot-removing sections or nodes.
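
    As an illustration only, a minimal sketch of how such registration
    might look, using the otherwise unused private field and lru member;
    the helper name and the exact encoding are assumptions, not the
    patch's actual code:

        /* sketch: remember who uses a bootmem-allocated page */
        static void register_page_bootmem_info(struct page *page,
                                               unsigned long info, int type)
        {
                /* info: section number or node id of the user */
                SetPagePrivate(page);
                set_page_private(page, info);
                /* type: what kind of info is stored (section vs. node) */
                page->lru.next = (struct list_head *)(unsigned long)type;
        }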

    Signed-off-by: Yasunori Goto
    Cc: Badari Pulavarty
    Cc: Yinghai Lu
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yasunori Goto
     
  • Huge ptes have a special type on s390 and cannot be handled with the
    standard pte functions in certain cases, e.g. because of a different
    location of the invalid bit. This patch adds some new
    architecture-specific functions to hugetlb common code, as a
    prerequisite for the s390 large page support.

    This won't affect other architectures functionally, but I need to
    add some new dummy inline functions to the headers.
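
    For example, one of the dummy inlines a non-s390 architecture header
    might carry (a sketch; the actual set of functions is defined by the
    patch):

        /* default: huge ptes behave like normal ptes for wrprotect */
        static inline pte_t huge_pte_wrprotect(pte_t pte)
        {
                return pte_wrprotect(pte);
        }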

    Acked-by: Martin Schwidefsky
    Signed-off-by: Gerald Schaefer
    Cc: Paul Mundt
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gerald Schaefer
     
  • A COW break on a hugetlbfs page with page_count > 1 will set a new
    pte with set_huge_pte_at(), without any tlb flush operation. The old
    pte will remain in the tlb, and subsequent write access to the page
    will result in a page fault loop for as long as it takes until the
    tlb is flushed from somewhere else. This patch introduces an
    architecture-specific huge_ptep_clear_flush() function, which is
    called before the set_huge_pte_at() in hugetlb_cow().

    ATTENTION: This is just a nop on all architectures for now, the s390
    implementation will come with our large page patch later. Other architectures
    should define their own huge_ptep_clear_flush() if needed.
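
    A sketch of the no-op version an architecture without this tlb
    requirement might provide in its hugetlb header (signature assumed
    from the description above):

        /* nothing to do if the tlb needs no explicit flush here */
        static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
                                                 unsigned long addr,
                                                 pte_t *ptep)
        {
        }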

    Acked-by: Martin Schwidefsky
    Signed-off-by: Gerald Schaefer
    Cc: Paul Mundt
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gerald Schaefer
     
  • This patch moves all architecture functions for hugetlb to architecture header
    files (include/asm-foo/hugetlb.h) and converts all macros to inline functions.
    It also removes (!) ARCH_HAS_HUGEPAGE_ONLY_RANGE,
    ARCH_HAS_HUGETLB_FREE_PGD_RANGE, ARCH_HAS_PREPARE_HUGEPAGE_RANGE,
    ARCH_HAS_SETCLEAR_HUGE_PTE and ARCH_HAS_HUGETLB_PREFAULT_HOOK.

    Getting rid of the ARCH_HAS_xxx #ifdef and macro fugliness should increase
    readability and maintainability, at the price of some code duplication. An
    asm-generic common part would have reduced the loc, but we would end up with
    new ARCH_HAS_xxx defines eventually.
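
    As an example of the conversion, a macro default such as

        #define is_hugepage_only_range(mm, addr, len)   0

    becomes an inline in the architecture's include/asm-foo/hugetlb.h
    (a sketch, not the verbatim diff):

        static inline int is_hugepage_only_range(struct mm_struct *mm,
                                                 unsigned long addr,
                                                 unsigned long len)
        {
                return 0;
        }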

    Acked-by: Martin Schwidefsky
    Signed-off-by: Gerald Schaefer
    Cc: Paul Mundt
    Cc: "Luck, Tony"
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gerald Schaefer
     
  • This patch replaces the mempolicy mode, mode_flags, and nodemask in the
    shmem_sb_info struct with a struct mempolicy pointer, initialized to NULL.
    This removes dependency on the details of mempolicy from shmem.c and hugetlbfs
    inode.c and simplifies the interfaces.

    mpol_parse_str() in mempolicy.c is changed to return, via a pointer to a
    pointer arg, a struct mempolicy pointer on success. For MPOL_DEFAULT, the
    returned pointer is NULL. Further, mpol_parse_str() now takes a 'no_context'
    argument that causes the input nodemask to be stored in the w.user_nodemask of
    the created mempolicy for use when the mempolicy is installed in a tmpfs inode
    shared policy tree. At that time, any cpuset contextualization is applied to
    the original input nodemask. This preserves the previous behavior where the
    input nodemask was stored in the superblock. We can think of the returned
    mempolicy as "context free".

    Because mpol_parse_str() is now calling mpol_new(), we can remove from
    mpol_to_str() the semantic checks that mpol_new() already performs.

    Add 'no_context' parameter to mpol_to_str() to specify that it should format
    the nodemask in w.user_nodemask for 'bind' and 'interleave' policies.

    Change mpol_shared_policy_init() to take a pointer to a "context free" struct
    mempolicy and to create a new, "contextualized" mempolicy using the mode,
    mode_flags and user_nodemask from the input mempolicy.

    Note: we know that the mempolicy passed to mpol_to_str() or
    mpol_shared_policy_init() from a tmpfs superblock is "context free". This
    is currently the only instance thereof. However, if we found more uses for
    this concept, and introduced any ambiguity as to whether a mempolicy was
    context free or not, we could add another internal mode flag to identify
    context free mempolicies. Then, we could remove the 'no_context' argument
    from mpol_to_str().

    Added shmem_get_sbmpol() to return a reference counted superblock mempolicy,
    if one exists, to pass to mpol_shared_policy_init(). We must add the
    reference under the sb stat_lock to prevent races with replacement of the mpol
    by remount. This reference is removed in mpol_shared_policy_init().
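
    A hedged sketch of how a tmpfs mount-option handler might use the
    reworked parser (error handling and field names are illustrative):

        struct mempolicy *mpol = NULL;

        /* no_context = 1: keep the raw nodemask in w.user_nodemask */
        if (mpol_parse_str(value, &mpol, 1))
                return -EINVAL;         /* bad mpol= string */
        sbinfo->mpol = mpol;            /* NULL for MPOL_DEFAULT */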

    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: another build fix]
    [akpm@linux-foundation.org: yet another build fix]
    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • For tmpfs/shmem shared policies, MPOL_DEFAULT is not necessarily
    equivalent to "local allocation". Because shared policies are at the
    same "scope" level as vma policies [see
    Documentation/vm/numa_memory_policy.txt], MPOL_DEFAULT means "fall
    back to current task policy".

    This patch extends the memory policy string parsing function to
    accept "local" for MPOL_PREFERRED + MPOL_F_LOCAL. This allows one to
    specify local allocation as the default policy for shared memory
    areas via the tmpfs mpol mount option, regardless of the current
    task's policy.

    "local" is already displayed for this policy; this patch allows us
    to accept the same input format as the display.
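
    A sketch of the parse-side mapping this implies (names follow this
    series' conventions but are not verified against the actual diff):

        /* "local" parses as preferred allocation with the local flag */
        if (!strcmp(str, "local")) {
                mode = MPOL_PREFERRED;
                mode_flags = MPOL_F_LOCAL;
        }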

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • mm/shmem.c currently contains functions to parse and display memory policy
    strings for the tmpfs 'mpol' mount option. Move this to mm/mempolicy.c with
    the rest of the mempolicy support. With subsequent patches, we'll be
    able to remove knowledge of the details [mode, flags, policy, ...]
    completely from shmem.c.

    1) replace shmem_parse_mpol() in mm/shmem.c with mpol_parse_str() in
    mm/mempolicy.c. Rework to use the policy_types[] array [used by
    mpol_to_str()] to look up mode by name.

    2) use mpol_to_str() to format policy for shmem_show_mpol(). mpol_to_str()
    expects a pointer to a struct mempolicy, so temporarily construct one.
    This will be replaced with a reference to a struct mempolicy in the tmpfs
    superblock in a subsequent patch.

    NOTE 1: I changed mpol_to_str() to use a colon ':' rather than an equal
    sign '=' as the nodemask delimiter to match mpol_parse_str() and the
    tmpfs/shmem mpol mount option formatting that now uses mpol_to_str(). This
    is a user visible change to numa_maps, but then the addition of the mode
    flags already changed the display. It makes sense to me to have the mounts
    and numa_maps display the policy in the same format. However, if anyone
    objects strongly, I can pass the desired nodemask delimiter as an arg to
    mpol_to_str().

    Note 2: Like show_numa_map(), I don't check the return code from
    mpol_to_str(). I do use a longer buffer than the one provided by
    show_numa_map(), which seems to have sufficed so far.
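
    For illustration, the shared format is mode[=flags][:nodelist]; some
    hypothetical examples of what both the mount option and numa_maps
    would now show:

        interleave:0-3
        prefer=static:2
        bind:1,3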

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • mpol_to_str() formats memory policies into printable strings.
    Currently this is only used to display "numa_maps". A subsequent
    patch will use mpol_to_str() for formatting tmpfs [shmem] mpol mount
    options, allowing us to remove essentially duplicate code in
    mm/shmem.c. This patch cleans up mpol_to_str() generally and in
    preparation for that patch.

    1) show_numa_maps() is not checking the return code from mpol_to_str().
    There's not a lot we can do in this context if mpol_to_str() did return the
    error [insufficient space in buffer]. Proposed "solution": just check,
    under DEBUG_VM, that callers are providing sufficient buffer space for the
    policy, flags, and a few nodes. This way, we'll get some display.
    show_numa_maps() is providing a 50-byte buffer, so it won't trip this
    check. 50-bytes should be sufficient unless one has a large number of
    nodes in a very sparse nodemask.

    2) The display of the new mode flags ["static" & "relative"] was set up to
    display multiple flags, separated by a "bar" '|'. However, this support is
    incomplete--e.g., need_bar was never incremented; and currently, these two
    flags are mutually exclusive. So remove the "bar" support, for now, and
    only display one flag.

    3) Use snprintf() to format flags, so as not to overflow the buffer.
    Not that it has ever happened, AFAIK.
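
    A sketch of the single-flag formatting described in 2) and 3)
    (assumed flag names from this patch series):

        /* the two mode flags are mutually exclusive: show one, no '|' */
        if (flags & MPOL_F_STATIC_NODES)
                p += snprintf(p, buffer + maxlen - p, "=static");
        else if (flags & MPOL_F_RELATIVE_NODES)
                p += snprintf(p, buffer + maxlen - p, "=relative");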

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • Now that we're using "preferred local" policy for system default, we need to
    make this as fast as possible. Because of the variable size of the mempolicy
    structure [based on size of nodemasks], the preferred_node may be in a
    different cacheline from the mode. This can result in accessing an extra
    cacheline in the normal case of system default policy. Suspect this is the
    cause of an observed 2-3% slowdown in page fault testing relative to kernel
    without this patch series.

    To alleviate this, use an internal mode flag, MPOL_F_LOCAL in the mempolicy
    flags member which is guaranteed [?] to be in the same cacheline as the mode
    itself.

    Verified that reworked mempolicy now performs slightly better on 25-rc8-mm1
    for both anon and shmem segments with system default and vma [preferred local]
    policy.
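
    A sketch of the fast-path test this enables, assuming the 'mode' and
    'flags' members from this series:

        /* both fields live in the same cacheline as the mode itself */
        if (pol->mode == MPOL_PREFERRED) {
                if (pol->flags & MPOL_F_LOCAL)
                        nd = numa_node_id();    /* local allocation */
                else
                        nd = pol->v.preferred_node;
        }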

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • Here are a couple of "cleanups" for MPOL_PREFERRED behavior when
    v.preferred_node < 0 -- i.e., "local allocation":

    1) [do_]get_mempolicy() calls the now renamed get_policy_nodemask()
    to fetch the nodemask associated with a policy. Currently,
    get_policy_nodemask() returns the set of nodes with memory when the
    policy mode is MPOL_PREFERRED and the preferred_node is < 0. Change
    it to return an empty nodemask, as this is what was specified to
    achieve "local allocation".

    2) When a task is moved into a [new] cpuset, mpol_rebind_policy() is
    called to adjust any task and vma policy nodes to be valid in the
    new cpuset. However, when the policy is MPOL_PREFERRED and the
    preferred_node is < 0 [local allocation], no node remap is needed;
    local allocation remains valid in any cpuset.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • Currently, when one specifies MPOL_DEFAULT via a NUMA memory policy API
    [set_mempolicy(), mbind() and internal versions], the kernel simply installs a
    NULL struct mempolicy pointer in the appropriate context: task policy, vma
    policy, or shared policy. This causes any use of that policy to "fall back"
    to the next most specific policy scope.

    The only use of MPOL_DEFAULT to mean "local allocation" is in the system
    default policy. This requires extra checks/cases for MPOL_DEFAULT in many
    mempolicy.c functions.

    There is another, "preferred" way to specify local allocation via the APIs.
    That is using the MPOL_PREFERRED policy mode with an empty nodemask.
    Internally, the empty nodemask gets converted to a preferred_node id of '-1'.
    All internal usage of MPOL_PREFERRED will convert the '-1' to the id of the
    node local to the cpu where the allocation occurs.

    System default policy, except during boot, is hard-coded to "local
    allocation". By using the MPOL_PREFERRED mode with a negative value of
    preferred node for system default policy, MPOL_DEFAULT will never occur in the
    'policy' member of a struct mempolicy. Thus, we can remove all checks for
    MPOL_DEFAULT when converting policy to a node id/zonelist in the allocation
    paths.

    In slab_node() return local node id when policy pointer is NULL. No need to
    set a pol value to take the switch default. Replace switch default with
    BUG()--i.e., shouldn't happen.

    With this patch, MPOL_DEFAULT is only used in the APIs, including
    internal calls to do_set_mempolicy() and in the display of policy in
    /proc/<pid>/numa_maps. It always means "fall back" to the next most
    specific policy scope. This simplifies the description of memory
    policies quite a bit, with no visible change in behavior.

    get_mempolicy() continues to return MPOL_DEFAULT and an empty nodemask when
    the requested policy [task or vma/shared] is NULL. These are the values one
    would supply via set_mempolicy() or mbind() to achieve that condition--default
    behavior.

    This patch updates Documentation to reflect this change.
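
    A sketch of what the hard-coded system default then looks like
    (member names as used by this series; illustrative only):

        static struct mempolicy default_policy = {
                .refcnt = ATOMIC_INIT(1),       /* never freed */
                .policy = MPOL_PREFERRED,
                .v = { .preferred_node = -1 },  /* local allocation */
        };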

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • After further discussion with Christoph Lameter, it has become clear
    that my earlier attempts to clean up the mempolicy reference
    counting were a bit of overkill in some areas, resulting in
    superfluous ref/unref in what are usually fast paths. In other
    areas, further inspection reveals that I botched the unref for
    interleave policies.

    A separate patch, suitable for upstream/stable trees, fixes up the known
    errors in the previous attempt to fix reference counting.

    This patch reworks the memory policy referencing counting and, one hopes,
    simplifies the code. Maybe I'll get it right this time.

    See the update to the numa_memory_policy.txt document for a discussion of
    memory policy reference counting that motivates this patch.

    Summary:

    Lookup of mempolicy, based on (vma, address) need only add a reference for
    shared policy, and we need only unref the policy when finished for shared
    policies. So, this patch backs out all of the unneeded extra reference
    counting added by my previous attempt. It then unrefs only shared policies
    when we're finished with them, using the mpol_cond_put() [conditional put]
    helper function introduced by this patch.

    Note that shmem_swapin() calls read_swap_cache_async() with a dummy vma
    containing just the policy. read_swap_cache_async() can call alloc_page_vma()
    multiple times, so we can't let alloc_page_vma() unref the shared policy in
    this case. To avoid this, we make a copy of any non-null shared policy and
    remove the MPOL_F_SHARED flag from the copy. This copy occurs before reading
    a page [or multiple pages] from swap, so the overhead should not be an issue
    here.

    I introduced a new static inline function "mpol_cond_copy()" to copy the
    shared policy to an on-stack policy and remove the flags that would require a
    conditional free. The current implementation of mpol_cond_copy() assumes that
    the struct mempolicy contains no pointers to dynamically allocated structures
    that must be duplicated or reference counted during copy.
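
    A sketch of the conditional put helper described above (folding in
    the shared-policy test; the real helper may be split differently):

        static inline void mpol_cond_put(struct mempolicy *pol)
        {
                /* only shared policies carry a conditional reference */
                if (pol && (pol->flags & MPOL_F_SHARED))
                        __mpol_put(pol);
        }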

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • Document the mempolicy return value reference semantics assumed by
    the rest of the mempolicy code for the set_ and get_policy vm_ops in
    <linux/mm.h>--where the prototypes are defined--to inform any future
    mempolicy vm_op writers what the rest of the subsystem expects of
    them.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • As part of yet another rework of mempolicy reference counting, we want to be
    able to identify shared policies efficiently, because they have an extra ref
    taken on lookup that needs to be removed when we're finished using the policy.

    Note: the extra ref is required because the policies are
    shared between tasks/processes and can be changed/freed
    by one task while another task is using them--e.g., for
    page allocation.

    Building on David Rientjes' mempolicy "mode flags" enhancement, this
    patch indicates a "shared" policy by setting a new MPOL_F_SHARED
    flag in the flags member of the struct mempolicy added by David.
    MPOL_F_SHARED, and any future "internal mode flags", are reserved
    from bit zero up, as they will never be passed in the upper bits of
    the mode argument of a mempolicy API.

    I set the MPOL_F_SHARED flag when the policy is installed in the
    shared policy rb-tree. We don't need or want to clear the flag when
    removing it from the tree, as the mempolicy is freed [unref'd]
    internally by the sp_delete() function. However,
    a task could hold another reference on this mempolicy from a prior lookup. We
    need the MPOL_F_SHARED flag to stay put so that any tasks holding a ref will
    unref, eventually freeing, the mempolicy.

    A later patch in this series will introduce a function to conditionally unref
    [mpol_free] a policy. The MPOL_F_SHARED flag is one reason [currently the
    only reason] to unref/free a policy via the conditional free.
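
    For illustration, reserving internal flags from bit zero up might
    look like this (a sketch, not the verified diff):

        #define MPOL_F_SHARED   (1 << 0)    /* identify shared policies */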

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • The terms 'policy' and 'mode' are both used in various places to describe the
    semantics of the value stored in the 'policy' member of struct mempolicy.
    Furthermore, the term 'policy' is used to refer to that member, to the entire
    struct mempolicy and to the more abstract concept of the tuple consisting of a
    "mode" and an optional node or set of nodes. Recently, we have added "mode
    flags" that are passed in the upper bits of the 'mode' [or sometimes,
    'policy'] member of the numa APIs.

    I'd like to resolve this confusion, which perhaps only exists in my mind, by
    renaming the 'policy' member to 'mode' throughout, and fixing up the
    Documentation. Man pages will be updated separately.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • get_vma_policy() is not handling fallback to task policy correctly when the
    get_policy() vm_op returns NULL. The NULL overwrites the 'pol' variable that
    was holding the fallback task mempolicy. So, it was falling back directly to
    system default policy.

    Fix get_vma_policy() to use only non-NULL policy returned from the vma
    get_policy op.

    shm_get_policy() was falling back to current task's mempolicy if the "backing
    file system" [tmpfs vs hugetlbfs] does not support the get_policy vm_op and
    the vma policy is null. This is incorrect for show_numa_maps() which is
    likely querying the numa_maps of some task other than current. Remove this
    fallback.
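
    A sketch of the corrected fallback logic (structure follows the
    description; not the verbatim diff):

        struct mempolicy *pol = task->mempolicy;

        if (vma->vm_ops && vma->vm_ops->get_policy) {
                struct mempolicy *vpol = vma->vm_ops->get_policy(vma, addr);
                if (vpol)
                        pol = vpol;     /* only override on non-NULL */
        } else if (vma->vm_policy)
                pol = vma->vm_policy;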

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • A read of /proc/<pid>/numa_maps holds the target task's mmap_sem for
    read while examining each vma's mempolicy. A vma's mempolicy can
    fall back to the task's policy. However, the task could be changing
    its task policy and free the one that show_numa_maps() is examining.

    To prevent this, grab the mmap_sem for write when updating task mempolicy.
    Pointed out to me by Christoph Lameter and extracted and reworked from
    Christoph's alternative mempol reference counting patch.

    This is analogous to the way that do_mbind() and do_get_mempolicy()
    prevent races between tasks sharing an mm_struct [a.k.a. threads]
    setting and querying a mempolicy for a particular address.

    Note: this is necessary, but not sufficient, to allow us to stop taking an
    extra reference on "other task's mempolicy" in get_vma_policy. Subsequent
    patches will complete this update, allowing us to simplify the tests for
    whether we need to unref a mempolicy at various points in the code.
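
    A sketch of the locking added around task policy updates (assuming
    the mm may be absent for kernel threads):

        if (mm)
                down_write(&mm->mmap_sem);
        /* ... replace current->mempolicy ... */
        if (mm)
                up_write(&mm->mmap_sem);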

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • This patch renames mpol_copy() to mpol_dup() because, well, that's what it
    does. Like, e.g., strdup() for strings, mpol_dup() takes a pointer to an
    existing mempolicy, allocates a new one and copies the contents.

    In a later patch, I want to use the name mpol_copy() to copy the contents from
    one mempolicy to another like, e.g., strcpy() does for strings.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • This is a change that was requested some time ago by Mel Gorman. Makes sense
    to me, so here it is.

    Note: I retain the name "mpol_free_shared_policy()" because it actually does
    free the shared_policy, which is NOT a reference counted object. However, ...

    The mempolicy object[s] referenced by the shared_policy are reference counted,
    so mpol_put() is used to release the reference held by the shared_policy. The
    mempolicy might not be freed at this time, because some task attached to the
    shared object associated with the shared policy may be in the process of
    allocating a page based on the mempolicy. In that case, the task performing
    the allocation will hold a reference on the mempolicy, obtained via
    mpol_shared_policy_lookup(). The mempolicy will be freed when all tasks
    holding such a reference have called mpol_put() for the mempolicy.
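
    A sketch of the put side of this scheme (the function split here is
    illustrative):

        void __mpol_put(struct mempolicy *p)
        {
                if (!atomic_dec_and_test(&p->refcnt))
                        return;         /* other holders remain */
                kmem_cache_free(policy_cache, p);
        }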

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • Allocating huge pages directly from the buddy allocator is not guaranteed to
    succeed. Success depends on several factors (such as the amount of physical
    memory available and the level of fragmentation). With the addition of
    dynamic hugetlb pool resizing, allocations can occur much more frequently.
    For these reasons it is desirable to keep track of huge page allocation
    successes and failures.

    Add two new vmstat entries to track huge page allocations that succeed and
    fail. The presence of the two entries is contingent upon CONFIG_HUGETLB_PAGE
    being enabled.
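
    A sketch of the two counters as vm_event_item entries (the names
    match the description of this patch but are not verified here):

        #ifdef CONFIG_HUGETLB_PAGE
                HTLB_BUDDY_PGALLOC, HTLB_BUDDY_PGALLOC_FAIL,
        #endif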

    [akpm@linux-foundation.org: reduced ifdeffery]
    Signed-off-by: Adam Litke
    Signed-off-by: Eric Munson
    Tested-by: Mel Gorman
    Reviewed-by: Andy Whitcroft
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • Add some values of page flags to the vmcoreinfo data.

    The vmcoreinfo data has the minimum debugging information only for dump
    filtering. makedumpfile (dump filtering command) gets it to distinguish
    unnecessary pages, and makedumpfile creates a small dumpfile.

    An old makedumpfile (v1.2.4 or before) assumed some page flag values
    internally, and that implementation could not follow changes to
    these values. For example, Christoph Lameter is changing these
    values with the following patch: http://lkml.org/lkml/2008/2/29/463

    So a new makedumpfile (v1.2.5) came to need these values and I created this
    patch to let the kernel output them.
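
    A sketch of the kind of values exported (the flag selection here is
    illustrative):

        VMCOREINFO_NUMBER(PG_lru);
        VMCOREINFO_NUMBER(PG_private);
        VMCOREINFO_NUMBER(PG_swapcache);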

    Signed-off-by: Ken'ichi Ohmichi
    Cc: Christoph Lameter
    Cc: "Eric W. Biederman"
    Acked-by: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ken'ichi Ohmichi
     
  • Convert XIP to support non-struct page backed memory, using VM_MIXEDMAP for
    the user mappings.

    This requires the get_xip_page API to be changed to an address based one.
    Improve the API layering a little bit too, while we're here.

    This is required in order to support XIP filesystems on memory that isn't
    backed with struct page (but memory with struct page is still supported too).
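
    A sketch of the address-based replacement in
    address_space_operations (parameter names are illustrative):

        int (*get_xip_mem)(struct address_space *mapping, pgoff_t pgoff,
                           int create, void **kmem, unsigned long *pfn);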

    Signed-off-by: Nick Piggin
    Acked-by: Carsten Otte
    Cc: Jared Hulbert
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Alter the block device ->direct_access() API to work with the new
    get_xip_mem() API (that requires both kaddr and pfn are returned).

    Some architectures' virt_to_page() will not do the right thing for
    XIP, which needs to translate the kernel virtual address returned by
    direct_access() into a user-mappable pfn in its page fault handler.

    However, we can't switch it to just return the pfn and not the kaddr, because
    we have no good way to get a kva from a pfn, and XIP requires the kva for its
    read(2) and write(2) handlers. So we have to return both.
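
    A sketch of the altered signature (parameter names illustrative):

        /* returns both the kernel address and the pfn */
        int (*direct_access)(struct block_device *bdev, sector_t sector,
                             void **kaddr, unsigned long *pfn);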

    Signed-off-by: Jared Hulbert
    Signed-off-by: Nick Piggin
    Cc: Carsten Otte
    Cc: Heiko Carstens
    Cc: linux-mm@kvack.org
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jared Hulbert
     
  • vm_insert_mixed will insert either a raw pfn or a refcounted struct
    page into the page tables, depending on whether vm_normal_page()
    will return the page or not. With the introduction of the new pte
    bit, this is now too tricky for drivers to be doing themselves.

    filemap_xip uses this in a subsequent patch.
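
    A sketch of the new interface (assumed prototype):

        int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
                            unsigned long pfn);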

    Signed-off-by: Nick Piggin
    Cc: Jared Hulbert
    Cc: Carsten Otte
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • s390 for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory
    model (which is more dynamic than most). Instead, they had proposed to
    implement it with an additional path through vm_normal_page(), using a bit in
    the pte to determine whether or not the page should be refcounted:

    vm_normal_page()
    {
            ...
            if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                    if (vma->vm_flags & VM_MIXEDMAP) {
    #ifdef s390
                            if (!mixedmap_refcount_pte(pte))
                                    return NULL;
    #else
                            if (!pfn_valid(pfn))
                                    return NULL;
    #endif
                            goto out;
                    }
                    ...
            }

    This is fine. However, if we are allowed to use a bit in the pte to
    determine refcountedness, we can use that to _completely_ replace
    all the vma-based schemes. So instead of adding more cases to the
    already complex vma-based scheme, we can have a clearly separate and
    simple pte-based scheme (and get slightly better code generation in
    the process):

    vm_normal_page()
    {
    #ifdef s390
            if (!mixedmap_refcount_pte(pte))
                    return NULL;
            return pte_page(pte);
    #else
            ...
    #endif
    }

    And finally, we may rather make this concept usable by any
    architecture than making it s390-only, so implement a new type of
    pte state for this. Unfortunately the old vma-based code must stay,
    because some architectures may not be able to spare pte bits. This
    makes vm_normal_page a little bit more ugly than we would like, but
    the two cases are clearly separate.

    So introduce a pte_special pte state, and use it in mm/memory.c. It is
    currently a noop for all architectures, so this doesn't actually result in any
    compiled code changes to mm/memory.o.
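
    For reference, the no-op defaults might look like this sketch in an
    architecture's pgtable header:

        static inline int pte_special(pte_t pte)        { return 0; }
        static inline pte_t pte_mkspecial(pte_t pte)    { return pte; }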

    BTW:
    I haven't put vm_normal_page() into arch code as-per an earlier suggestion.
    The reason is that, regardless of where vm_normal_page is actually
    implemented, the *abstraction* is still exactly the same. Also, while it
    depends on whether the architecture has pte_special or not, those
    are the only two possible cases, and it really isn't an
    arch-specific function --
    the role of the arch code should be to provide primitive functions and
    accessors with which to build the core code; pte_special does that. We do
    not want architectures to know or care about vm_normal_page itself, and
    we definitely don't want them being able to invent something new there
    out of sight of mm/ code. If we made vm_normal_page an arch function, then
    we have to make vm_insert_mixed (next patch) an arch function too. So I
    don't think moving it to arch code fundamentally improves any abstractions,
    while it does practically make the code more difficult to follow, for both
    mm and arch developers, and easier to misuse.

    [akpm@linux-foundation.org: build fix]
    Signed-off-by: Nick Piggin
    Acked-by: Carsten Otte
    Cc: Jared Hulbert
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • This series introduces some important infrastructure work. The overall result
    is that:

    1. We now support XIP backed filesystems using memory that has no
    struct page allocated to it. And patches 6 and 7 actually implement
    this for s390.

    This is pretty important in a number of cases. As far as I understand,
    in the case of virtualisation (eg. s390), each guest may mount a
    readonly copy of the same filesystem (eg. the distro). Currently,
    guests need to allocate struct pages for this image. So if you have
    100 guests, you already need to allocate more memory for the struct
    pages than the size of the image. I think. (Carsten?)

    For other (eg. embedded) systems, you may have a very large non-
    volatile filesystem. If you have to have struct pages for this, then
    your RAM consumption will go up proportionally to fs size. Even
    though it is just a small proportion, the RAM can be much more costly
    eg in terms of power, so every KB less that Linux uses makes it more
    attractive to a lot of these guys.

    2. VM_MIXEDMAP allows us to support mappings where you actually do want
    to refcount _some_ pages in the mapping, but not others, and support
    COW on arbitrary (non-linear) mappings. Jared needs this for his NVRAM
    filesystem in progress. Future iterations of this filesystem will
    most likely want to migrate pages between pagecache and XIP backing,
    which is where the requirement for mixed (some refcounted, some not)
    comes from.

    3. pte_special also has a peripheral usage that I need for my lockless
    get_user_pages patch. That was shown to speed up "oltp" on db2 by
    10% on a 2 socket system, which is kind of significant because they
    scrounge for months to try to find 0.1% improvement on these
    workloads. I'm hoping we might finally be faster than AIX on
    pSeries with this :). My reference to lockless get_user_pages is not
    meant to justify this patchset (which doesn't include lockless gup),
    but just to show that pte_special is not some s390 specific thing that
    should be hidden in arch code or xip code: I definitely want to use it
    on at least x86 and powerpc as well.

    This patch:

    Introduce a new type of mapping, VM_MIXEDMAP. This is unlike VM_PFNMAP in
    that it can support COW mappings of arbitrary ranges including ranges without
    struct page *and* ranges with a struct page that we actually want to refcount
    (PFNMAP can only support COW in those cases where the un-COW-ed
    translations are mapped linearly in the virtual address space, and
    can only support non-refcounted ranges).

    VM_MIXEDMAP achieves this by refcounting all pfn_valid pages, and not
    refcounting !pfn_valid pages (which is not an option for VM_PFNMAP, because it
    needs to avoid refcounting pfn_valid pages eg. for /dev/mem mappings).
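
    A sketch of the resulting rule in vm_normal_page() (condensed from
    the description; not the verbatim diff):

        if (vma->vm_flags & VM_MIXEDMAP) {
                if (!pfn_valid(pfn))
                        return NULL;    /* no struct page: not refcounted */
                goto out;               /* normal, refcounted page */
        }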

    Signed-off-by: Jared Hulbert
    Signed-off-by: Nick Piggin
    Acked-by: Carsten Otte
    Cc: Jared Hulbert
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jared Hulbert
     
  • Show the amount of swap for each vma. This can be used to see where all the
    swap goes.
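
    Illustrative output: each /proc/<pid>/smaps entry gains a line such
    as

        Swap:                  4 kB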

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Peter Zijlstra
    Acked-by: Matt Mackall
    Acked-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Having separate page flags for the head and the tail of a compound
    page allows the compiler to use bitops instead of operations on a
    word to check for a tail page. That is e.g. important for
    virt_to_head_page(), which is used in various critical code paths
    (kfree for example):

    Code for PageTail(page)

    Before:

        mov    (%rdi),%rdx       # page->flags
        mov    %rdx,%rax         # 3 bytes
        and    $0x12000,%eax     # 5 bytes
        cmp    $0x12000,%rax     # 6 bytes
        je     897

    After:

        mov    (%rdi),%rax
        test   $0x40,%ah         # (3 bytes)
        jne    887

    So we go from 14 bytes to 3 bytes and from 3 instructions to one. From the
    use of 2 registers we go to none.

    We can only use page flags for this if we have page flags available. This
    patch introduces CONFIG_PAGEFLAGS_EXTENDED that is set if pageflags are not
    scarce due to SPARSEMEM using page flags for its sectionid on 32 bit NUMA
    platforms.

    Additional page flag definitions can be added to the CONFIG_PAGEFLAGS_EXTENDED
    section in page-flags.h if the functionality depends on PAGEFLAGS_EXTENDED or
    if more page flag overlapping tricks are used for the !PAGEFLAGS_EXTENDED
    fallback (the upcoming virtual compound patch may hook in here and Rik's/Lee's
    additional page flags to solve the reclaim issues could also be added there
    [hint... hint... where are these patchsets?]).

    Avoiding the overlaying of PG_reclaim also clears the way for
    possible use of compound pages for the pagecache or on the LRU.
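
    A sketch of the extended variant (the macro names follow this
    series; details may differ):

        #ifdef CONFIG_PAGEFLAGS_EXTENDED
        /* separate bits: PageTail() becomes a single bit test */
        __PAGEFLAG(Head, head)
        __PAGEFLAG(Tail, tail)
        #endif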

    Signed-off-by: Christoph Lameter
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • __ZONE_COUNT was used to compensate for MAX_NR_ZONES not being
    available to the #ifdefs. Export MAX_NR_ZONES via the new mechanism
    and get rid of __ZONE_COUNT.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • It turns out that there are a number of page flag checks that simply
    always return 0. Define a macro for that.
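
    A sketch of such a macro, following the page-flags.h conventions:

        #define PAGEFLAG_FALSE(uname)                                   \
        static inline int Page##uname(struct page *page)                \
                        { return 0; }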

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Remove the special setup for PG_uncached and simply make it part of the enum.
    The page flag will only be allocated when the kernel build includes the
    uncached allocator.

    Acked-by: Dean Nelson
    Cc: Jes Sorensen
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Remove aliases of PG_xxx. We can easily drop those now and alias by
    specifying the PG_xxx flag in the macro that generates the functions.
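
    A sketch of the aliasing style (the example alias is chosen for
    illustration):

        /* in the pageflags enum, after __NR_PAGEFLAGS: */
        PG_checked = PG_owner_priv_1,   /* used by some filesystems */

        /* then generate the accessors as usual */
        PAGEFLAG(Checked, checked)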

    Signed-off-by: Christoph Lameter
    Cc: Andy Whitcroft
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Xen uses bitops to manipulate page flags. Make it use proper page flag
    functions.
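
    Illustrative before/after (the Xen flag name here is an assumption):

        /* before */
        set_bit(PG_pinned, &page->flags);
        /* after */
        SetPagePinned(page);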

    Signed-off-by: Christoph Lameter
    Cc: Andy Whitcroft
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Replace explicit definitions of page flags with the use of macros.
    This significantly reduces the size of the definitions and removes a
    lot of opportunity for errors. Additional page flags can typically
    be generated with a single line.
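
    For example, one line now generates a whole accessor family for a
    flag (a sketch based on the macro style of this series):

        PAGEFLAG(Dirty, dirty) TESTSCFLAG(Dirty, dirty)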

    Signed-off-by: Christoph Lameter
    Cc: Andy Whitcroft
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Introduce a set of macros that generate functions to handle page flags.

    A page flag function group typically starts with either

        SETPAGEFLAG(uname, lname)

    to create a set of page flag operations that are atomic, or

        __SETPAGEFLAG(uname, lname)

    to create a set of page flag operations that are not atomic.
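
    For reference, a sketch of what such a generator might expand to
    (the real macro bodies may differ in detail):

        #define SETPAGEFLAG(uname, lname)                               \
        static inline void SetPage##uname(struct page *page)           \
                        { set_bit(PG_##lname, &page->flags); }

        #define __SETPAGEFLAG(uname, lname)                             \
        static inline void __SetPage##uname(struct page *page)         \
                        { __set_bit(PG_##lname, &page->flags); }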
    Signed-off-by: Christoph Lameter
    Cc: Andy Whitcroft
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • NR_PAGEFLAGS specifies the number of page flags we are using. From
    that we can calculate the number of bits left over that can be used
    for the zone, node (and maybe the section) ids. There is no need
    anymore for FLAGS_RESERVED if we use NR_PAGEFLAGS.

    Use the new methods to make NR_PAGEFLAGS available via the preprocessor.
    NR_PAGEFLAGS is used to calculate field boundaries in the page flags fields.
    These field widths have to be available to the preprocessor.
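
    A sketch of the kind of preprocessor check this makes possible (the
    condition shown is illustrative):

        #if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT <= \
                        BITS_PER_LONG - NR_PAGEFLAGS
        #define NODES_WIDTH     NODES_SHIFT
        #else
        #define NODES_WIDTH     0
        #endif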

    Signed-off-by: Christoph Lameter
    Cc: David Miller
    Cc: Andy Whitcroft
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Use an enum to ease the maintenance of page flags. This is going to change
    the numbering from 0 to 18.
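
    A sketch of the shape of the enum (flag selection abridged):

        enum pageflags {
                PG_locked,      /* page is locked, don't touch */
                PG_error,
                PG_referenced,
                /* ... remaining flags ... */
                __NR_PAGEFLAGS,
        };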

    Signed-off-by: Christoph Lameter
    Cc: Andy Whitcroft
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Add the ability to pass comments into asm-offsets.h by generating asm
    output like

    -># comment line

    Mips needs this feature to preserve the comments that are in
    asm-mips/asm-offsets.h right now.

    Then remove the special handling for mips from Kbuild and convert mips to use
    the new string to include the comments.
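
    A sketch of emitting such a comment from an asm-offsets.c file (the
    COMMENT helper shown is how kbuild-style files typically spell it):

        #define COMMENT(x) asm volatile("\n->#" x)

        COMMENT("Offsets into struct pt_regs");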

    Cc: Ralf Baechle
    Signed-off-by: Christoph Lameter
    Cc: Sam Ravnborg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter