Commit e0dc0d8f4a327d033bfb63d43f113d5f31d11b3c

Authored by Nick Piggin
Committed by Linus Torvalds
1 parent 2ca48ed5cc

[PATCH] add vm_insert_pfn()

Add a vm_insert_pfn helper, so that ->fault handlers can have nopfn
functionality by installing their own pte and returning NULL.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 2 changed files with 47 additions and 0 deletions

include/linux/mm.h
@@ -1124,6 +1124,8 @@
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 		unsigned long pfn, unsigned long size, pgprot_t);
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
+int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+		unsigned long pfn);
 
 struct page *follow_page(struct vm_area_struct *, unsigned long address,
 		unsigned int foll_flags);
mm/memory.c
@@ -1277,6 +1277,51 @@
 }
 EXPORT_SYMBOL(vm_insert_page);
 
+/**
+ * vm_insert_pfn - insert single pfn into user vma
+ * @vma: user vma to map to
+ * @addr: target user address of this page
+ * @pfn: source kernel pfn
+ *
+ * Similar to vm_insert_page, this allows drivers to insert individual pages
+ * they've allocated into a user vma. Same comments apply.
+ *
+ * This function should only be called from a vm_ops->fault handler, and
+ * in that case the handler should return NULL.
+ */
+int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+		unsigned long pfn)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int retval;
+	pte_t *pte, entry;
+	spinlock_t *ptl;
+
+	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
+	BUG_ON(is_cow_mapping(vma->vm_flags));
+
+	retval = -ENOMEM;
+	pte = get_locked_pte(mm, addr, &ptl);
+	if (!pte)
+		goto out;
+	retval = -EBUSY;
+	if (!pte_none(*pte))
+		goto out_unlock;
+
+	/* Ok, finally just insert the thing.. */
+	entry = pfn_pte(pfn, vma->vm_page_prot);
+	set_pte_at(mm, addr, pte, entry);
+	update_mmu_cache(vma, addr, entry);
+
+	retval = 0;
+out_unlock:
+	pte_unmap_unlock(pte, ptl);
+
+out:
+	return retval;
+}
+EXPORT_SYMBOL(vm_insert_pfn);
+
 /*
  * maps a range of physical memory into the requested pages. the old
  * mappings are removed. any references to nonexistent pages results
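
To show how a driver is expected to consume this helper, here is a minimal sketch of a fault-driven pfn mapping. It is not part of the commit: the mydrv_* names are hypothetical, and the struct vm_fault based ->fault signature with its VM_FAULT_* return codes comes from slightly later kernels; with the handler interface assumed by this changelog, the driver would instead signal "pte already installed" by returning NULL.

/* Hypothetical driver sketch, not part of this commit. */
#include <linux/fs.h>
#include <linux/mm.h>

struct mydrv_device {
	unsigned long base_pfn;		/* first pfn of the device aperture */
	unsigned long nr_pages;		/* size of the aperture in pages */
};

static int mydrv_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct mydrv_device *dev = vma->vm_private_data;
	int err;

	if (vmf->pgoff >= dev->nr_pages)
		return VM_FAULT_SIGBUS;

	/* Install the pte ourselves; there is no struct page behind it. */
	err = vm_insert_pfn(vma, (unsigned long)vmf->virtual_address,
			    dev->base_pfn + vmf->pgoff);
	if (err == -ENOMEM)
		return VM_FAULT_OOM;
	if (err && err != -EBUSY)	/* -EBUSY: another fault beat us to it */
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;		/* pte is in place, no page to return */
}

static const struct vm_operations_struct mydrv_vm_ops = {
	.fault	= mydrv_fault,
};

static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* vm_insert_pfn() insists on a pure pfn mapping that is never COW. */
	vma->vm_flags |= VM_PFNMAP | VM_IO | VM_DONTEXPAND;
	vma->vm_private_data = file->private_data;
	vma->vm_ops = &mydrv_vm_ops;
	return 0;
}

Note the -EBUSY case: a racing fault on another CPU may already have populated the pte, which the pte_none() check above reports as -EBUSY. A caller can usually treat that the same as success, since the mapping it wanted is already present.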