Commit 7b2259b3e53f128c10a9fded0965e69d4a949847

Authored by Christoph Lameter
Committed by Linus Torvalds
1 parent 68402ddc67

[PATCH] page migration: Support a vma migration function

Hooks for calling vma specific migration functions

With this patch a vma may define a vma->vm_ops->migrate function.  That
function may perform page migration on its own (some vmas may not contain
page structs and therefore cannot be handled by regular page migration;
pages in a vma may also require special preparatory treatment before
migration is possible).  Only mmap_sem is held when the migration function
is called.  The migrate() function is passed two nodemasks describing the
source and the target of the migration.  The flags parameter contains either

MPOL_MF_MOVE	which means that only pages used exclusively by
		the specified mm should be moved

or

MPOL_MF_MOVE_ALL which means that pages shared with other processes
		should also be moved.

The migration function returns 0 on success or a negative error code.  An
error will prevent regular page migration from occurring.

On its own this patch cannot be included since there are no users for this
functionality yet.  But it seems that the uncached allocator will need it
at some point.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Showing 4 changed files with 37 additions and 2 deletions

include/linux/migrate.h
... ... @@ -16,7 +16,9 @@
16 16 struct page *, struct page *);
17 17  
18 18 extern int migrate_prep(void);
19   -
  19 +extern int migrate_vmas(struct mm_struct *mm,
  20 + const nodemask_t *from, const nodemask_t *to,
  21 + unsigned long flags);
20 22 #else
21 23  
22 24 static inline int isolate_lru_page(struct page *p, struct list_head *list)
... ... @@ -29,6 +31,13 @@
29 31 struct vm_area_struct *vma, int dest) { return 0; }
30 32  
31 33 static inline int migrate_prep(void) { return -ENOSYS; }
  34 +
  35 +static inline int migrate_vmas(struct mm_struct *mm,
  36 + const nodemask_t *from, const nodemask_t *to,
  37 + unsigned long flags)
  38 +{
  39 + return -ENOSYS;
  40 +}
32 41  
33 42 /* Possible settings for the migrate_page() method in address_operations */
34 43 #define migrate_page NULL
include/linux/mm.h
... ... @@ -206,6 +206,8 @@
206 206 int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
207 207 struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
208 208 unsigned long addr);
  209 + int (*migrate)(struct vm_area_struct *vma, const nodemask_t *from,
  210 + const nodemask_t *to, unsigned long flags);
209 211 #endif
210 212 };
211 213  
mm/mempolicy.c
... ... @@ -632,6 +632,10 @@
632 632  
633 633 down_read(&mm->mmap_sem);
634 634  
  635 + err = migrate_vmas(mm, from_nodes, to_nodes, flags);
  636 + if (err)
  637 + goto out;
  638 +
635 639 /*
636 640 * Find a 'source' bit set in 'tmp' whose corresponding 'dest'
637 641 * bit in 'to' is not also set in 'tmp'. Clear the found 'source'
... ... @@ -691,7 +695,7 @@
691 695 if (err < 0)
692 696 break;
693 697 }
694   -
  698 +out:
695 699 up_read(&mm->mmap_sem);
696 700 if (err < 0)
697 701 return err;
mm/migrate.c
... ... @@ -975,4 +975,25 @@
975 975 return err;
976 976 }
977 977 #endif
  978 +
  979 +/*
  980 + * Call migration functions in the vma_ops that may prepare
  981 + * memory in a vm for migration. migration functions may perform
  982 + * the migration for vmas that do not have an underlying page struct.
  983 + */
  984 +int migrate_vmas(struct mm_struct *mm, const nodemask_t *to,
  985 + const nodemask_t *from, unsigned long flags)
  986 +{
  987 + struct vm_area_struct *vma;
  988 + int err = 0;
  989 +
  990 + for(vma = mm->mmap; vma->vm_next && !err; vma = vma->vm_next) {
  991 + if (vma->vm_ops && vma->vm_ops->migrate) {
  992 + err = vma->vm_ops->migrate(vma, to, from, flags);
  993 + if (err)
  994 + break;
  995 + }
  996 + }
  997 + return err;
  998 +}