.. _hmm:

=====================================
Heterogeneous Memory Management (HMM)
=====================================

Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on board memory) into the regular kernel path, with the
cornerstone of this being specialized struct page for such memory (see
sections 5 to 7 of this document).

HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
allowing a device to transparently access program addresses coherently with
the CPU, meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPU, DSP, or FPGA are used to perform various
computations on behalf of a process.

This document is divided as follows: in the first section I expose the problems
related to using device specific memory allocators. In the second section, I
expose the hardware limitations that are inherent to many platforms. The third
section gives an overview of the HMM design. The fourth section explains how
CPU page-table mirroring works and the purpose of HMM in this context. The
fifth section deals with how device memory is represented inside the kernel.
Finally, the last section presents a new migration helper that allows
leveraging the device DMA engine.

.. contents:: :local:

Problems of using a device specific memory allocator
====================================================

Devices with a large amount of on board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
This creates a disconnect between memory allocated and managed by a device
driver and regular application memory (private anonymous, shared memory, or
regular file backed memory). From here on I will refer to this aspect as
split address space. I use shared address space to refer to the opposite
situation: i.e., one in which any application memory region can be used by a
device transparently.

Split address space happens because devices can only access memory allocated
through a device specific API. This implies that all memory objects in a
program are not equal from the device point of view, which complicates large
programs that rely on a wide set of libraries.

Concretely, this means that code that wants to leverage devices like GPUs
needs to copy objects between generically allocated memory (malloc, mmap
private, mmap share) and memory allocated through the device driver API (this
still ends up with an mmap, but of the device file).

For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
complex data set needs to re-map all the pointer relations between each of its
elements. This is error prone and programs get harder to debug because of the
duplicate data set and addresses.

Split address space also means that libraries cannot transparently use data
they are getting from the core program or another library and thus each library
might have to duplicate its input data set using the device specific memory
allocator. Large projects suffer from this and waste resources because of the
various memory copies.

Duplicating each library API to accept as input or output memory allocated by
each device specific allocator is not a viable option. It would lead to a
combinatorial explosion in the library entry points.

Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.

I/O bus, device memory characteristics
======================================

I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
coherency is often optional. Access to device memory from a CPU is even more
limited. More often than not, it is not cache coherent.

If we only consider the PCIE bus, then a device can access main memory (often
through an IOMMU) and be cache coherent with the CPUs. However, it only allows
a limited set of atomic operations from the device on main memory. This is
worse in the other direction: the CPU can only access a limited range of the
device memory and cannot perform atomic operations on it. Thus device memory
cannot be considered the same as regular memory from the kernel point of view.

Another crippling factor is the limited bandwidth (~32 GBytes/s with PCIE 4.0
and 16 lanes). This is roughly 30 times less than the fastest GPU memory
(1 TBytes/s). The final limitation is latency. Access to main memory from the
device has an order of magnitude higher latency than when the device accesses
its own memory.

Some platforms are developing new I/O buses or additions/modifications to PCIE
to address some of these limitations (OpenCAPI, CCIX). They mainly allow
two-way cache coherency between CPU and device and allow all atomic operations
the architecture supports. Sadly, not all platforms are following this trend
and some major architectures are left without hardware solutions to these
problems.

So for shared address space to make sense, not only must we allow devices to
access any memory but we must also permit any memory to be migrated to device
memory while the device is using it (blocking CPU access while it happens).

Shared address space and migration
==================================

HMM intends to provide two main features. The first one is to share the address
space by duplicating the CPU page table in the device page table so the same
address points to the same physical memory for any valid main memory address in
the process address space.

To achieve this, HMM offers a set of helpers to populate the device page table
while keeping track of CPU page table updates. Device page table updates are
not as easy as CPU page table updates. To update the device page table, you
must allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
specific commands in it to perform the update (unmap, cache invalidations, and
flush, ...). This cannot be done through common code for all devices. Hence,
HMM provides helpers to factor out everything that can be while leaving the
hardware specific details to the device driver.

The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
allows allocating a struct page for each page of device memory. Those pages
are special because the CPU cannot map them. However, they allow migrating
main memory to device memory using existing migration mechanisms, and, from
the CPU point of view, everything looks like a page that has been swapped out
to disk. Using a struct page gives the easiest and cleanest integration with
existing mm mechanisms. Here again, HMM only provides helpers, first to
hotplug new ZONE_DEVICE memory for the device memory and second to perform
migration. Policy decisions of what and when to migrate are left to the
device driver.

Note that any CPU access to a device page triggers a page fault and a migration
back to main memory. For example, when a page backing a given CPU address A is
migrated from a main memory page to a device page, then any CPU access to
address A triggers a page fault and initiates a migration back to main memory.

With these two features, HMM not only allows a device to mirror process address
space, keeping both CPU and device page tables synchronized, but also leverages
device memory by migrating the part of the data set that is actively being
used by the device.

Address space mirroring implementation and API
==============================================

Address space mirroring's main objective is to allow duplication of a range of
CPU page table into a device page table; HMM helps keep both synchronized. A
device driver that wants to mirror a process address space must start with the
registration of a mmu_interval_notifier::

 int mmu_interval_notifier_insert(struct mmu_interval_notifier *interval_sub,
                                  struct mm_struct *mm, unsigned long start,
                                  unsigned long length,
                                  const struct mmu_interval_notifier_ops *ops);

During the ops->invalidate() callback the device driver must perform the
update action to the range (mark range read only, or fully unmap, etc.). The
device must complete the update before the driver callback returns.
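
As a rough illustration, an invalidate() callback usually takes the driver's
own update lock (the same ``driver->update`` lock used in the populate pattern
below), publishes the new sequence count, and tears down the affected device
mappings. A minimal sketch, in which ``struct driver_data``,
``take_lock()``/``release_lock()``, and ``driver_invalidate_device_ptes()``
are hypothetical driver pieces rather than kernel API::

 static bool driver_invalidate(struct mmu_interval_notifier *interval_sub,
                               const struct mmu_notifier_range *range,
                               unsigned long cur_seq)
 {
      struct driver_data *drv = container_of(interval_sub,
                                             struct driver_data, notifier);

      /* A non-blockable invalidation cannot wait for the driver lock. */
      if (!mmu_notifier_range_blockable(range))
           return false;

      take_lock(drv->update);
      /* Publish the new sequence so concurrent readers will retry. */
      mmu_interval_set_seq(interval_sub, cur_seq);
      /* Unmap/invalidate device PTEs covering [range->start, range->end). */
      driver_invalidate_device_ptes(drv, range->start, range->end);
      release_lock(drv->update);
      return true;
 }

 static const struct mmu_interval_notifier_ops driver_interval_ops = {
      .invalidate = driver_invalidate,
 };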

When the device driver wants to populate a range of virtual addresses, it can
use::

 int hmm_range_fault(struct hmm_range *range);

It will trigger a page fault on missing or read-only entries if write access
is requested (see below). Page faults use the generic mm page fault code path
just like a CPU page fault.
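
For reference, the hmm_range structure carries the virtual range, the notifier
and its sequence number, the output array, and the fault policy fields
discussed below. At the time of writing it looks roughly like this (see
include/linux/hmm.h for the authoritative definition)::

 struct hmm_range {
      struct mmu_interval_notifier *notifier;
      unsigned long notifier_seq;
      unsigned long start;           /* virtual range to snapshot/fault */
      unsigned long end;
      unsigned long *hmm_pfns;       /* output array, one entry per page */
      unsigned long default_flags;   /* see the default_flags section */
      unsigned long pfn_flags_mask;
      void *dev_private_owner;
 };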

hmm_range_fault() copies CPU page table entries into its hmm_pfns array
argument. Each entry in that array corresponds to an address in the virtual
range. HMM provides a set of flags to help the driver identify special CPU
page table entries.

Locking within the ops->invalidate() callback is the most important aspect the
driver must respect in order to keep things properly synchronized. The usage
pattern is::

 int driver_populate_range(...)
 {
      struct hmm_range range;
      ...

      range.notifier = interval_sub;
      range.start = ...;
      range.end = ...;
      range.hmm_pfns = ...;

      if (!mmget_not_zero(interval_sub->mm))
          return -EFAULT;

 again:
      range.notifier_seq = mmu_interval_read_begin(interval_sub);
      mmap_read_lock(mm);
      ret = hmm_range_fault(&range);
      if (ret) {
          mmap_read_unlock(mm);
          if (ret == -EBUSY)
              goto again;
          return ret;
      }
      mmap_read_unlock(mm);

      take_lock(driver->update);
      if (mmu_interval_read_retry(interval_sub, range.notifier_seq)) {
          release_lock(driver->update);
          goto again;
      }

      /* Use the hmm_pfns array content to update the device page table,
       * under the update lock. */

      release_lock(driver->update);
      return 0;
 }

The driver->update lock is the same lock that the driver takes inside its
invalidate() callback. That lock must be held before calling
mmu_interval_read_retry() to avoid any race with a concurrent CPU page table
update.

Leverage default_flags and pfn_flags_mask
=========================================

The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that
specify fault or snapshot policy for the whole range instead of having to set
them for each entry in the hmm_pfns array.

For instance if the device driver wants pages for a range with at least read
permission, it sets::

    range->default_flags = HMM_PFN_REQ_FAULT;
    range->pfn_flags_mask = 0;

and calls hmm_range_fault() as described above. This will fault in all pages
in the range with at least read permission.

Now let's say the driver wants to do the same except for one page in the range
for which it wants to have write permission. Now the driver sets::

    range->default_flags = HMM_PFN_REQ_FAULT;
    range->pfn_flags_mask = HMM_PFN_REQ_WRITE;
    range->hmm_pfns[index_of_write] = HMM_PFN_REQ_WRITE;

With this, HMM will fault in all pages with at least read permission (i.e.,
valid) and for the address == range->start + (index_of_write << PAGE_SHIFT) it
will fault with write permission, i.e., if the CPU pte does not have write
permission set then HMM will call handle_mm_fault().

After hmm_range_fault completes, the flag bits are set to the current state of
the page tables, i.e., HMM_PFN_VALID | HMM_PFN_WRITE will be set if the page
is writable.
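
A driver typically walks the returned array, while holding its update lock
after the mmu_interval_read_retry() check shown earlier, to program its own
page table. A minimal sketch, where ``struct driver_data`` and
``device_set_pte()`` are hypothetical driver helpers::

 static void driver_update_device_ptes(struct driver_data *dev,
                                       const struct hmm_range *range)
 {
      unsigned long i, npages, pfn;
      struct page *page;

      npages = (range->end - range->start) >> PAGE_SHIFT;
      for (i = 0; i < npages; i++) {
           pfn = range->hmm_pfns[i];
           if (!(pfn & HMM_PFN_VALID))
                continue;     /* nothing snapshotted for this address */
           page = hmm_pfn_to_page(pfn);
           /* Map read-only unless the CPU PTE was writable. */
           device_set_pte(dev, range->start + (i << PAGE_SHIFT), page,
                          !!(pfn & HMM_PFN_WRITE));
      }
 }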

Represent and manage device memory from core kernel point of view
=================================================================

Several different designs were tried to support device memory. The first one
used a device specific data structure to keep information about migrated
memory and HMM hooked itself in various places of mm code to handle any access
to addresses that were backed by device memory. It turns out that this ended
up replicating most of the fields of struct page and also needed many kernel
code paths to be updated to understand this new kind of memory.

Most kernel code paths never try to access the memory behind a page but only
care about struct page contents. Because of this, HMM switched to directly
using struct page for device memory which left most kernel code paths unaware
of the difference. We only need to make sure that no one ever tries to map
those pages from the CPU side.

Migration to and from device memory
===================================

Because the CPU cannot access device memory directly, the device driver must
use hardware DMA or device specific load/store instructions to migrate data.
The migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize()
functions are designed to make drivers easier to write and to centralize
common code across drivers.

Before migrating pages to device private memory, special device private
``struct page`` entries need to be created. These will be used as special
"swap" page table entries so that a CPU process will fault if it tries to
access a page that has been migrated to device private memory.

These can be allocated and freed with::

    struct resource *res;
    struct dev_pagemap pagemap;

    res = request_free_mem_region(&iomem_resource, /* number of bytes */,
                                  "name of driver resource");
    pagemap.type = MEMORY_DEVICE_PRIVATE;
    pagemap.range.start = res->start;
    pagemap.range.end = res->end;
    pagemap.nr_range = 1;
    pagemap.ops = &device_devmem_ops;
    memremap_pages(&pagemap, numa_node_id());

    memunmap_pages(&pagemap);
    release_mem_region(pagemap.range.start, range_len(&pagemap.range));

There are also devm_request_free_mem_region(), devm_memremap_pages(),
devm_memunmap_pages(), and devm_release_mem_region() when the resources can
be tied to a ``struct device``. (A sketch of the ``dev_pagemap_ops`` that a
MEMORY_DEVICE_PRIVATE pagemap must provide appears after the migration steps
below.)

The overall migration steps are similar to migrating NUMA pages within system
memory (see :ref:`Page migration <page_migration>`) but the steps are split
between device driver specific code and shared common code (a condensed
end-to-end sketch follows the list):

1. ``mmap_read_lock()``

   The device driver has to pass a ``struct vm_area_struct`` to
   migrate_vma_setup() so the mmap_read_lock() or mmap_write_lock() needs to
   be held for the duration of the migration.

2. ``migrate_vma_setup(struct migrate_vma *args)``

   The device driver initializes the ``struct migrate_vma`` fields and passes
   the pointer to migrate_vma_setup(). The ``args->flags`` field is used to
   filter which source pages should be migrated. For example, setting
   ``MIGRATE_VMA_SELECT_SYSTEM`` will only migrate system memory and
   ``MIGRATE_VMA_SELECT_DEVICE_PRIVATE`` will only migrate pages residing in
   device private memory. If the latter flag is set, the ``args->pgmap_owner``
   field is used to identify device private pages owned by the driver. This
   avoids trying to migrate device private pages residing in other devices.
   Currently only anonymous private VMA ranges can be migrated to or from
   system memory and device private memory.

   One of the first steps migrate_vma_setup() does is to invalidate other
   devices' MMUs with the ``mmu_notifier_invalidate_range_start()`` and
   ``mmu_notifier_invalidate_range_end()`` calls around the page table walks
   to fill in the ``args->src`` array with PFNs to be migrated. The
   ``invalidate_range_start()`` callback is passed a
   ``struct mmu_notifier_range`` with the ``event`` field set to
   ``MMU_NOTIFY_MIGRATE`` and the ``migrate_pgmap_owner`` field set to the
   ``args->pgmap_owner`` field passed to migrate_vma_setup(). This allows the
   device driver to skip the invalidation callback and only invalidate device
   private MMU mappings that are actually migrating. This is explained more
   in the next section.

   While walking the page tables, a ``pte_none()`` or ``is_zero_pfn()`` entry
   results in a valid "zero" PFN stored in the ``args->src`` array. This lets
   the driver allocate device private memory and clear it instead of copying
   a page of zeros. Valid PTE entries to system memory or device private
   struct pages will be locked with ``lock_page()``, isolated from the LRU
   (if system memory since device private pages are not on the LRU), unmapped
   from the process, and a special migration PTE is inserted in place of the
   original PTE. migrate_vma_setup() also clears the ``args->dst`` array.

3. The device driver allocates destination pages and copies source pages to
   destination pages.

   The driver checks each ``src`` entry to see if the ``MIGRATE_PFN_MIGRATE``
   bit is set and skips entries that are not migrating. The device driver can
   also choose to skip migrating a page by not filling in the ``dst`` array
   for that page.

   The driver then allocates either a device private struct page or a system
   memory page, locks the page with ``lock_page()``, and fills in the ``dst``
   array entry with::

      dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;

   Now that the driver knows that this page is being migrated, it can
   invalidate device private MMU mappings and copy device private memory to
   system memory or another device private page. The core Linux kernel
   handles CPU page table invalidations so the device driver only has to
   invalidate its own MMU mappings.

   The driver can use ``migrate_pfn_to_page(src[i])`` to get the
   ``struct page`` of the source and either copy the source page to the
   destination or clear the destination device private memory if the pointer
   is ``NULL`` meaning the source page was not populated in system memory.

4. ``migrate_vma_pages()``

   This step is where the migration is actually "committed".

   If the source page was a ``pte_none()`` or ``is_zero_pfn()`` page, this
   is where the newly allocated page is inserted into the CPU's page table.
   This can fail if a CPU thread faults on the same page. However, the page
   table is locked and only one of the new pages will be inserted. The device
   driver will see that the ``MIGRATE_PFN_MIGRATE`` bit is cleared if it
   loses the race.

   If the source page was locked, isolated, etc. the source ``struct page``
   information is now copied to destination ``struct page`` finalizing the
   migration on the CPU side.

5. Device driver updates device MMU page tables for pages still migrating,
   rolling back pages not migrating.

   If the ``src`` entry still has ``MIGRATE_PFN_MIGRATE`` bit set, the device
   driver can update the device MMU and set the write enable bit if the
   ``MIGRATE_PFN_WRITE`` bit is set.

6. ``migrate_vma_finalize()``

   This step replaces the special migration page table entry with the new
   page's page table entry and releases the reference to the source and
   destination ``struct page``.

7. ``mmap_read_unlock()``

   The lock can now be released.
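
As promised above, here is a sketch of the ``dev_pagemap_ops`` that the
``device_devmem_ops`` in the allocation example must provide: a
MEMORY_DEVICE_PRIVATE pagemap needs ``migrate_to_ram()`` (called on a CPU
fault to a device private page) and ``page_free()``. The helpers
``driver_migrate_to_ram()`` and ``driver_free_device_page()`` are
hypothetical::

 static vm_fault_t device_devmem_migrate_to_ram(struct vm_fault *vmf)
 {
      /*
       * A CPU fault on a device private page lands here; the driver
       * migrates the page back to system memory (steps 2-6 above, with
       * MIGRATE_VMA_SELECT_DEVICE_PRIVATE) and returns 0 on success.
       */
      return driver_migrate_to_ram(vmf);
 }

 static void device_devmem_page_free(struct page *page)
 {
      /* Last reference dropped: return the device memory to the driver. */
      driver_free_device_page(page);
 }

 static const struct dev_pagemap_ops device_devmem_ops = {
      .page_free      = device_devmem_page_free,
      .migrate_to_ram = device_devmem_migrate_to_ram,
 };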

Memory cgroup (memcg) and rss accounting
========================================

For now, device memory is accounted as any regular page in rss counters
(either anonymous if device page is used for anonymous, file if device page is
used for file backed page, or shmem if device page is used for shared memory).
This is a deliberate choice to keep existing applications, that might start
using device memory without knowing about it, running unimpacted.

A drawback is that the OOM killer might kill an application using a lot of
device memory and not a lot of regular system memory and thus not freeing much
system memory. We want to gather more real world experience on how
applications and systems react under memory pressure in the presence of device
memory before deciding to account device memory differently.

The same decision was made for memory cgroup. Device memory pages are
accounted against the same memory cgroup a regular page would be accounted to.
This does simplify migration to and from device memory. This also means that
migration back from device memory to regular memory cannot fail because it
would go above the memory cgroup limit. We might revisit this choice later on
once we get more experience in how device memory is used and its impact on
memory resource control.

Note that device memory can never be pinned by a device driver nor through GUP
and thus such memory is always freed upon process exit or, in the case of
shared memory or file backed memory, when the last reference is dropped.