  				CPUSETS
  				-------
  
  Copyright (C) 2004 BULL SA.
  Written by Simon.Derr@bull.net
  Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
  Modified by Paul Jackson <pj@sgi.com>
  Modified by Christoph Lameter <clameter@sgi.com>
  
  CONTENTS:
  =========
  
  1. Cpusets
    1.1 What are cpusets ?
    1.2 Why are cpusets needed ?
    1.3 How are cpusets implemented ?
    1.4 What are exclusive cpusets ?
    1.5 What does notify_on_release do ?
    1.6 What is memory_pressure ?
    1.7 What is memory spread ?
    1.8 How do I use cpusets ?
  2. Usage Examples and Syntax
    2.1 Basic Usage
    2.2 Adding/removing cpus
    2.3 Setting flags
    2.4 Attaching processes
  3. Questions
  4. Contact
  
  1. Cpusets
  ==========
  
  1.1 What are cpusets ?
  ----------------------
  
  Cpusets provide a mechanism for assigning a set of CPUs and Memory
  Nodes to a set of tasks.
  
  Cpusets constrain the CPU and Memory placement of tasks to only
  the resources within a task's current cpuset.  They form a nested
  hierarchy visible in a virtual file system.  These are the essential
  hooks, beyond what is already present, required to manage dynamic
  job placement on large systems.
  
  Each task has a pointer to a cpuset.  Multiple tasks may reference
  the same cpuset.  Requests by a task, using the sched_setaffinity(2)
  system call to include CPUs in its CPU affinity mask, and using the
  mbind(2) and set_mempolicy(2) system calls to include Memory Nodes
  in its memory policy, are both filtered through that task's cpuset,
  removing any CPUs or Memory Nodes not in that cpuset.  The
  scheduler will not schedule a task on a CPU that is not allowed in
  its cpus_allowed vector, and the kernel page allocator will not
  allocate a page on a node that is not allowed in the requesting
  task's mems_allowed vector.
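
  For example, if a task's cpuset allows only CPUs 2-3, a request for
  a wider affinity mask is trimmed to the allowed CPUs.  A sketch
  using the taskset utility mentioned near the end of section 1.8
  (the mask values are illustrative):

    taskset -p 0xff $$    # request CPUs 0-7
    taskset -p $$         # reports mask 0xc: only CPUs 2-3 were granted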
  User level code may create and destroy cpusets by name in the cpuset
  virtual file system, manage the attributes and permissions of these
  cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
  specify and query to which cpuset a task is assigned, and list the
  task pids assigned to a cpuset.
  
  
  1.2 Why are cpusets needed ?
  ----------------------------
  
  The management of large computer systems, with many processors (CPUs),
  complex memory cache hierarchies and multiple Memory Nodes having
  non-uniform access times (NUMA), presents additional challenges for
  the efficient scheduling and memory placement of processes.
  
  Frequently, more modest-sized systems can be operated with adequate
  efficiency just by letting the operating system automatically share
  the available CPU and Memory resources amongst the requesting tasks.
  
  But larger systems, which benefit more from careful processor and
  memory placement to reduce memory access times and contention,
  and which typically represent a larger investment for the customer,
  can benefit from explicitly placing jobs on properly sized subsets of
  the system.
  
  This can be especially valuable on:
  
      * Web Servers running multiple instances of the same web application,
      * Servers running different applications (for instance, a web server
        and a database), or
      * NUMA systems running large HPC applications with demanding
        performance characteristics.
      * Servers using cpu_exclusive cpusets to separate orthogonal
        workloads, such as RT applications requiring low latency and
        HPC applications that are throughput sensitive.
  
  These subsets, or "soft partitions", must be able to be dynamically
  adjusted, as the job mix changes, without impacting other concurrently
  executing jobs.  The pages of running jobs may also be moved when
  the jobs' allowed memory locations are changed.
  
  The kernel cpuset patch provides the minimum essential kernel
  mechanisms required to efficiently implement such subsets.  It
  leverages existing CPU and Memory Placement facilities in the Linux
  kernel to avoid any additional impact on the critical scheduler or
  memory allocator code.
  
  
  1.3 How are cpusets implemented ?
  ---------------------------------
  Cpusets provide a Linux kernel mechanism to constrain which CPUs and
  Memory Nodes are used by a process or set of processes.
  
  The Linux kernel already has a pair of mechanisms to specify on which
  CPUs a task may be scheduled (sched_setaffinity) and on which Memory
  Nodes it may obtain memory (mbind, set_mempolicy).
  
  Cpusets extend these two mechanisms as follows:
  
   - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
     kernel.
   - Each task in the system is attached to a cpuset, via a pointer
     in the task structure to a reference counted cpuset structure.
   - Calls to sched_setaffinity are filtered to just those CPUs
     allowed in that task's cpuset.
   - Calls to mbind and set_mempolicy are filtered to just
     those Memory Nodes allowed in that task's cpuset.
   - The root cpuset contains all the system's CPUs and Memory
     Nodes.
   - For any cpuset, one can define child cpusets containing a subset
     of the parent's CPU and Memory Node resources.
   - The hierarchy of cpusets can be mounted at /dev/cpuset, for
     browsing and manipulation from user space.
   - A cpuset may be marked exclusive, which ensures that no other
     cpuset (except direct ancestors and descendants) may contain
     any overlapping CPUs or Memory Nodes.
     Also, a cpu_exclusive cpuset is associated with a sched
     domain.
   - You can list all the tasks (by pid) attached to any cpuset.
  
  The implementation of cpusets requires a few, simple hooks
  into the rest of the kernel, none in performance critical paths:
   - in init/main.c, to initialize the root cpuset at system boot.
   - in fork and exit, to attach and detach a task from its cpuset.
   - in sched_setaffinity, to mask the requested CPUs by what's
     allowed in that task's cpuset.
   - in sched.c migrate_all_tasks(), to keep migrating tasks within
     the CPUs allowed by their cpuset, if possible.
   - in sched.c, a new API partition_sched_domains for handling
     sched domain changes associated with cpu_exclusive cpusets
     and related changes in both sched.c and arch/ia64/kernel/domain.c
   - in the mbind and set_mempolicy system calls, to mask the requested
     Memory Nodes by what's allowed in that task's cpuset.
   - in page_alloc.c, to restrict memory to allowed nodes.
   - in vmscan.c, to restrict page reclaim to the current cpuset.
  
  In addition, a new file system of type "cpuset" may be mounted,
  typically at /dev/cpuset, to enable browsing and modifying the cpusets
  presently known to the kernel.  No new system calls are added for
  cpusets - all support for querying and modifying cpusets is via
  this cpuset file system.
  
  Each task under /proc has an added file named 'cpuset', displaying
  the cpuset name, as the path relative to the root of the cpuset file
  system.
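
  For example, to see which cpuset the current shell is attached to
  (the output shown is for a task in the cpuset named 'Charlie' used
  in the example of section 1.8; tasks in the root cpuset show '/'):

    cat /proc/self/cpuset
    /Charlie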
  
  The /proc/<pid>/status file for each task has two added lines,
  displaying the task's cpus_allowed (on which CPUs it may be scheduled)
  and mems_allowed (on which Memory Nodes it may obtain memory),
  in the format seen in the following example:
  
    Cpus_allowed:   ffffffff,ffffffff,ffffffff,ffffffff
    Mems_allowed:   ffffffff,ffffffff
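
  These lines can be read for any task; for example, for the init task:

    grep allowed /proc/1/status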
  
  Each cpuset is represented by a directory in the cpuset file system
  containing the following files describing that cpuset:
  
   - cpus: list of CPUs in that cpuset
   - mems: list of Memory Nodes in that cpuset
   - memory_migrate flag: if set, move pages to the cpuset's nodes
   - cpu_exclusive flag: is cpu placement exclusive?
   - mem_exclusive flag: is memory placement exclusive?
   - tasks: list of tasks (by pid) attached to that cpuset
   - notify_on_release flag: run /sbin/cpuset_release_agent on exit?
   - memory_pressure: measure of how much paging pressure in cpuset
  
  In addition, only the root cpuset has the following file:
   - memory_pressure_enabled flag: compute memory_pressure?
  
  New cpusets are created using the mkdir system call or shell
  command.  The properties of a cpuset, such as its flags, allowed
  CPUs and Memory Nodes, and attached tasks, are modified by writing
  to the appropriate file in that cpuset's directory, as listed above.
  
  The named hierarchical structure of nested cpusets allows partitioning
  a large system into nested, dynamically changeable, "soft-partitions".
  
  Each task is attached to a cpuset, and that attachment is
  automatically inherited at fork by any children of that task.  This
  allows organizing the work load on a system into related sets of
  tasks such that each set is constrained to using the CPUs and Memory
  Nodes of a particular cpuset.  A task may be re-attached to any
  other cpuset, if allowed by the permissions on the necessary cpuset
  file system directories.
  
  Such management of a system "in the large" integrates smoothly with
  the detailed placement done on individual tasks and memory regions
  using the sched_setaffinity, mbind and set_mempolicy system calls.
  
  The following rules apply to each cpuset:
  
   - Its CPUs and Memory Nodes must be a subset of its parent's.
   - It can only be marked exclusive if its parent is.
   - If its cpu or memory is exclusive, it may not overlap any sibling.
  
  These rules, and the natural hierarchy of cpusets, enable efficient
  enforcement of the exclusive guarantee, without having to scan all
  cpusets every time any of them changes, to ensure nothing overlaps an
  exclusive cpuset.  Also, the use of a Linux virtual file system (vfs)
  to represent the cpuset hierarchy provides for a familiar permission
  and name space for cpusets, with a minimum of additional kernel code.
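
  A brief sketch of the subset rule in action, assuming a hypothetical
  parent cpuset 'big' that was given CPUs 0-3:

    cd /dev/cpuset/big
    mkdir small
    /bin/echo 4-7 > small/cpus    # fails: not a subset of the parent's CPUs
    /bin/echo 0-1 > small/cpus    # succeeds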
  
  1.4 What are exclusive cpusets ?
  --------------------------------
  
  If a cpuset is cpu or mem exclusive, no other cpuset, other than
  a direct ancestor or descendant, may share any of the same CPUs or
  Memory Nodes.
  
  A cpuset that is cpu_exclusive has a scheduler (sched) domain
  associated with it.  The sched domain consists of all CPUs in the
  current cpuset that are not part of any exclusive child cpusets.
  This ensures that the scheduler load balancing code only balances
  against the CPUs that are in the sched domain as defined above and
  not all of the CPUs in the system. This removes any overhead due to
  load balancing code trying to pull tasks outside of the cpu_exclusive
  cpuset only to be prevented by the tasks' cpus_allowed mask.
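
  A sketch of carving out such a cpuset for low latency work (names
  and CPU numbers hypothetical; this assumes the parent cpuset is
  itself marked cpu_exclusive and that no sibling cpuset overlaps the
  chosen CPUs):

    cd /dev/cpuset
    mkdir rt_set
    /bin/echo 2-3 > rt_set/cpus          # CPUs reserved for the RT work
    /bin/echo 1 > rt_set/cpu_exclusive   # also sets up a sched domain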
  
  A cpuset that is mem_exclusive restricts kernel allocations for
  page, buffer and other data commonly shared by the kernel across
  multiple users.  All cpusets, whether mem_exclusive or not, restrict
  allocations of memory for user space.  This enables configuring a
  system so that several independent jobs can share common kernel data,
  such as file system pages, while isolating each job's user allocation in
  its own cpuset.  To do this, construct a large mem_exclusive cpuset to
  hold all the jobs, and construct child, non-mem_exclusive cpusets for
  each individual job.  Only a small amount of typical kernel memory,
  such as requests from interrupt handlers, is allowed to be taken
  outside even a mem_exclusive cpuset.
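
  A sketch of that construction (names and node numbers hypothetical):

    cd /dev/cpuset
    mkdir jobs                        # large mem_exclusive cpuset
    /bin/echo 0-3 > jobs/mems
    /bin/echo 1 > jobs/mem_exclusive
    mkdir jobs/job1 jobs/job2         # non-mem_exclusive child per job
    /bin/echo 0-1 > jobs/job1/mems
    /bin/echo 2-3 > jobs/job2/mems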
  
  
  1.5 What does notify_on_release do ?
  ------------------------------------
  
  If the notify_on_release flag is enabled (1) in a cpuset, then whenever
  the last task in the cpuset leaves (exits or attaches to some other
  cpuset) and the last child cpuset of that cpuset is removed, then
  the kernel runs the command /sbin/cpuset_release_agent, supplying the
  pathname (relative to the mount point of the cpuset file system) of the
  abandoned cpuset.  This enables automatic removal of abandoned cpusets.
  The default value of notify_on_release in the root cpuset at system
  boot is disabled (0).  The default value of other cpusets at creation
  is the current value of their parent's notify_on_release setting.
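
  A minimal release agent might simply remove the abandoned cpuset.
  A sketch (the agent is passed the cpuset's pathname, relative to the
  mount point, as its first argument; the extra '/' below is harmless
  if the argument already begins with one):

    #!/bin/sh
    # /sbin/cpuset_release_agent
    rmdir "/dev/cpuset/$1"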
  1.6 What is memory_pressure ?
  -----------------------------
  The memory_pressure of a cpuset provides a simple per-cpuset metric
  of the rate at which the tasks in a cpuset are attempting to free up
  in-use memory on the nodes of the cpuset in order to satisfy
  additional memory requests.
  
  This enables batch managers monitoring jobs running in dedicated
  cpusets to efficiently detect what level of memory pressure that job
  is causing.
  
  This is useful both on tightly managed systems running a wide mix of
  submitted jobs, which may choose to terminate or re-prioritize jobs that
  are trying to use more memory than allowed on the nodes assigned them,
  and with tightly coupled, long running, massively parallel scientific
  computing jobs that will dramatically fail to meet required performance
  goals if they start to use more memory than allowed to them.
  
  This mechanism provides a very economical way for the batch manager
  to monitor a cpuset for signs of memory pressure.  It's up to the
  batch manager or other user code to decide what to do about it and
  take action.
  
  ==> Unless this feature is enabled by writing "1" to the special file
      /dev/cpuset/memory_pressure_enabled, the hook in the rebalance
      code of __alloc_pages() for this metric reduces to simply noticing
      that the cpuset_memory_pressure_enabled flag is zero.  So only
      systems that enable this feature will compute the metric.
  
  Why a per-cpuset, running average:
  
      Because this meter is per-cpuset, rather than per-task or mm,
      the system load imposed by a batch scheduler monitoring this
      metric is sharply reduced on large systems, because a scan of
      the tasklist can be avoided on each set of queries.
  
      Because this meter is a running average, instead of an accumulating
      counter, a batch scheduler can detect memory pressure with a
      single read, instead of having to read and accumulate results
      for a period of time.
  
      Because this meter is per-cpuset rather than per-task or mm,
      the batch scheduler can obtain the key information, memory
      pressure in a cpuset, with a single read, rather than having to
      query and accumulate results over all the (dynamically changing)
      set of tasks in the cpuset.
  
  A per-cpuset simple digital filter (requires a spinlock and 3 words
  of data per-cpuset) is kept, and updated by any task attached to that
  cpuset, if it enters the synchronous (direct) page reclaim code.
  
  A per-cpuset file provides an integer number representing the recent
  (half-life of 10 seconds) rate of direct page reclaims caused by
  the tasks in the cpuset, in units of reclaims attempted per second,
  times 1000.
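
  For example, to enable the metric system-wide and then sample it for
  the cpuset 'Charlie' from section 1.8:

    /bin/echo 1 > /dev/cpuset/memory_pressure_enabled
    cat /dev/cpuset/Charlie/memory_pressure   # 0 if no recent reclaim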
  1.7 What is memory spread ?
  ---------------------------
  There are two boolean flag files per cpuset that control where the
  kernel allocates pages for the file system buffers and related
  in-kernel data structures.  They are called 'memory_spread_page' and
  'memory_spread_slab'.
  
  If the per-cpuset boolean flag file 'memory_spread_page' is set, then
  the kernel will spread the file system buffers (page cache) evenly
  over all the nodes that the faulting task is allowed to use, instead
  of preferring to put those pages on the node where the task is running.
  
  If the per-cpuset boolean flag file 'memory_spread_slab' is set,
  then the kernel will spread some file system related slab caches,
  such as those for inodes and dentries, evenly over all the nodes
  that the faulting task is allowed to use, instead of preferring to
  put those pages on the node where the task is running.
  
  The setting of these flags does not affect anonymous data segment or
  stack segment pages of a task.
  
  By default, both kinds of memory spreading are off, and memory
  pages are allocated on the node local to where the task is running,
  except perhaps as modified by the task's NUMA mempolicy or cpuset
  configuration, so long as sufficient free memory pages are available.
  
  When new cpusets are created, they inherit the memory spread settings
  of their parent.
  
  Setting memory spreading causes allocations for the affected page
  or slab caches to ignore the task's NUMA mempolicy and be spread
  instead.  Tasks using mbind() or set_mempolicy() calls to set NUMA
  mempolicies will not notice any change in these calls as a result of
  their containing task's memory spread settings.  If memory spreading
  is turned off, then the currently specified NUMA mempolicy once again
  applies to memory page allocations.
  
  Both 'memory_spread_page' and 'memory_spread_slab' are boolean flag
  files.  By default they contain "0", meaning that the feature is off
  for that cpuset.  If a "1" is written to that file, then that turns
  the named feature on.
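
  For example, to turn both kinds of spreading on for the cpuset
  'Charlie' from section 1.8:

    /bin/echo 1 > /dev/cpuset/Charlie/memory_spread_page
    /bin/echo 1 > /dev/cpuset/Charlie/memory_spread_slab
    cat /dev/cpuset/Charlie/memory_spread_page   # now displays '1'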
  
  The implementation is simple.
  
  Setting the flag 'memory_spread_page' turns on a per-process flag
  PF_SPREAD_PAGE for each task that is in that cpuset or subsequently
  joins that cpuset.  The page allocation calls for the page cache
  are modified to perform an inline check for this PF_SPREAD_PAGE task
  flag, and if set, a call to a new routine cpuset_mem_spread_node()
  returns the node to prefer for the allocation.
  
  Similarly, setting 'memory_spread_slab' turns on the flag
  PF_SPREAD_SLAB, and appropriately marked slab caches will allocate
  pages from the node returned by cpuset_mem_spread_node().
  
  The cpuset_mem_spread_node() routine is also simple.  It uses the
  value of a per-task rotor cpuset_mem_spread_rotor to select the next
  node in the current task's mems_allowed to prefer for the allocation.
  
  This memory placement policy is also known (in other contexts) as
  round-robin or interleave.
  
  This policy can provide substantial improvements for jobs that need
  to place thread local data on the corresponding node, but that need
  to access large file system data sets that must be spread across
  the several nodes in the job's cpuset in order to fit.  Without this
  policy, especially for jobs that might have one thread reading in the
  data set, the memory allocation across the nodes in the job's cpuset
  can become very uneven.
  
  
  1.8 How do I use cpusets ?
  --------------------------
  
  In order to minimize the impact of cpusets on critical kernel
  code, such as the scheduler, and due to the fact that the kernel
  does not support one task updating the memory placement of another
  task directly, the impact on a task of changing its cpuset CPU
  or Memory Node placement, or of changing to which cpuset a task
  is attached, is subtle.
  
  If a cpuset has its Memory Nodes modified, then for each task attached
  to that cpuset, the next time that the kernel attempts to allocate
  a page of memory for that task, the kernel will notice the change
  in the task's cpuset, and update its per-task memory placement to
  remain within the new cpuset's memory placement.  If the task was using
  mempolicy MPOL_BIND, and the nodes to which it was bound overlap with
  its new cpuset, then the task will continue to use whatever subset
  of MPOL_BIND nodes are still allowed in the new cpuset.  If the task
  was using MPOL_BIND and now none of its MPOL_BIND nodes are allowed
  in the new cpuset, then the task will be essentially treated as if it
  was MPOL_BIND bound to the new cpuset (even though its NUMA placement,
  as queried by get_mempolicy(), doesn't change).  If a task is moved
  from one cpuset to another, then the kernel will adjust the task's
  memory placement, as above, the next time that the kernel attempts
  to allocate a page of memory for that task.
  
  If a cpuset has its CPUs modified, then each task using that
  cpuset does _not_ change its behavior automatically.  In order to
  minimize the impact on the critical scheduling code in the kernel,
  tasks will continue to use their prior CPU placement until they
  are rebound to their cpuset, by rewriting their pid to the 'tasks'
  file of their cpuset.  If a task had been bound to some subset of its
  cpuset using the sched_setaffinity() call, and if any of that subset
  is still allowed in its new cpuset settings, then the task will be
  restricted to the intersection of the CPUs it was allowed on before,
  and its new cpuset CPU placement.  If, on the other hand, there is
  no overlap between a task's prior placement and its new cpuset CPU
  placement, then the task will be allowed to run on any CPU allowed
  in its new cpuset.  If a task is moved from one cpuset to another,
  its CPU placement is updated in the same way as if the task's pid is
  rewritten to the 'tasks' file of its current cpuset.
  
  In summary, the memory placement of a task whose cpuset is changed is
  updated by the kernel, on the next allocation of a page for that task,
  but the processor placement is not updated until that task's pid is
  rewritten to the 'tasks' file of its cpuset.  This is done to avoid
  impacting the scheduler code in the kernel with a check for changes
  in a task's processor placement.
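
  For example, after shrinking the CPUs of the cpuset 'Charlie' from
  section 1.8, each task can be rebound so the new placement takes
  effect (a sketch):

    /bin/echo 2 > /dev/cpuset/Charlie/cpus    # drop CPU 3, keep CPU 2
    for pid in `cat /dev/cpuset/Charlie/tasks`; do
        /bin/echo $pid > /dev/cpuset/Charlie/tasks   # rebind each task
    done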
  Normally, once a page is allocated (given a physical page
  of main memory), that page stays on whatever node it
  was allocated on, so long as it remains allocated, even if the
  cpuset's memory placement policy 'mems' subsequently changes.
  If the cpuset flag file 'memory_migrate' is set true, then when
  tasks are attached to that cpuset, any pages that task had
  allocated to it on nodes in its previous cpuset are migrated
  to the task's new cpuset.  The relative placement of the page within
  the cpuset is preserved during these migration operations if possible.
  For example, if the page was on the second valid node of the prior
  cpuset then the page will be placed on the second valid node of the
  new cpuset.
  Also if 'memory_migrate' is set true, then if that cpuset's
  'mems' file is modified, pages allocated to tasks in that
  cpuset, that were on nodes in the previous setting of 'mems',
  will be moved to nodes in the new setting of 'mems.'
  Pages that were not in the task's prior cpuset, or in the cpuset's
  prior 'mems' setting, will not be moved.
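
  A sketch of moving a task and its pages together (the pid variable
  and cpuset name are hypothetical):

    cd /dev/cpuset/Charlie
    /bin/echo 1 > memory_migrate
    /bin/echo $PID > tasks    # pages on the old cpuset's nodes move here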

  There is an exception to the above.  If hotplug functionality is used
  to remove all the CPUs that are currently assigned to a cpuset,
  then the kernel will automatically update the cpus_allowed of all
  tasks attached to that cpuset to allow all CPUs.  When memory
  hotplug functionality for removing Memory Nodes is available, a
  similar exception is expected to apply there as well.  In general,
  the kernel prefers to violate cpuset placement, over starving a task
  that has had all its allowed CPUs or Memory Nodes taken offline.  User
  code should reconfigure cpusets to only refer to online CPUs and Memory
  Nodes when using hotplug to add or remove such resources.
  
  There is a second exception to the above.  GFP_ATOMIC requests are
  kernel internal allocations that must be satisfied immediately.
  The kernel may drop some request, or in rare cases even panic, if a
  GFP_ATOMIC alloc fails.  If the request cannot be satisfied within
  the current task's cpuset, then we relax the cpuset, and look for
  memory anywhere we can find it.  It's better to violate the cpuset
  than stress the kernel.
  
  To start a new job that is to be contained within a cpuset, the steps are:
  
   1) mkdir /dev/cpuset
   2) mount -t cpuset none /dev/cpuset
   3) Create the new cpuset by doing mkdir's and write's (or echo's) in
      the /dev/cpuset virtual file system.
   4) Start a task that will be the "founding father" of the new job.
   5) Attach that task to the new cpuset by writing its pid to the
      /dev/cpuset tasks file for that cpuset.
   6) fork, exec or clone the job tasks from this founding father task.
  
  For example, the following sequence of commands will set up a cpuset
  named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
  and then start a subshell 'sh' in that cpuset:
  
    mount -t cpuset none /dev/cpuset
    cd /dev/cpuset
    mkdir Charlie
    cd Charlie
    /bin/echo 2-3 > cpus
    /bin/echo 1 > mems
    /bin/echo $$ > tasks
    sh
    # The subshell 'sh' is now running in cpuset Charlie
    # The next line should display '/Charlie'
    cat /proc/self/cpuset
  In the future, a C library interface to cpusets will likely be
  available.  For now, the only way to query or modify cpusets is
  via the cpuset file system, using the various cd, mkdir, echo, cat,
  rmdir commands from the shell, or their equivalent from C.
  
  The sched_setaffinity calls can also be done at the shell prompt using
  SGI's runon or Robert Love's taskset.  The mbind and set_mempolicy
  calls can be done at the shell prompt using the numactl command
  (part of Andi Kleen's numa package).
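
  For instance (the program name is hypothetical):

    taskset 0x4 ./my_app            # mask 0x4: run my_app on CPU 2 only
    numactl --membind=1 ./my_app    # allocate its memory on node 1 only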
  
  2. Usage Examples and Syntax
  ============================
  
  2.1 Basic Usage
  ---------------
  
  Creating, modifying, and using cpusets can be done through the cpuset
  virtual filesystem.
  
  To mount it, type:
  # mount -t cpuset none /dev/cpuset
  
  Then under /dev/cpuset you can find a tree that corresponds to the
  tree of the cpusets in the system. For instance, /dev/cpuset
  is the cpuset that holds the whole system.
  
  If you want to create a new cpuset under /dev/cpuset:
  # cd /dev/cpuset
  # mkdir my_cpuset
  
  Now you want to do something with this cpuset.
  # cd my_cpuset
  
  In this directory you can find several files:
  # ls
  cpus  cpu_exclusive  mems  mem_exclusive  tasks
  
  Reading them will give you information about the state of this cpuset:
  the CPUs and Memory Nodes it can use, the processes that are using
  it, its properties.  By writing to these files you can manipulate
  the cpuset.
  
  Set some flags:
  # /bin/echo 1 > cpu_exclusive
  
  Add some cpus:
  # /bin/echo 0-7 > cpus
  
  Now attach your shell to this cpuset:
  # /bin/echo $$ > tasks
  
  You can also create cpusets inside your cpuset by using mkdir in this
  directory.
  # mkdir my_sub_cs
  
  To remove a cpuset, just use rmdir:
  # rmdir my_sub_cs
  This will fail if the cpuset is in use (has cpusets inside, or has
  processes attached).
  
  2.2 Adding/removing cpus
  ------------------------
  
  This is the syntax to use when writing in the cpus or mems files
  in cpuset directories:
  
  # /bin/echo 1-4 > cpus		-> set cpus list to cpus 1,2,3,4
  # /bin/echo 1,2,3,4 > cpus	-> set cpus list to cpus 1,2,3,4
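
  Reading the file back shows the resulting list, with contiguous cpus
  collapsed into a range:

  # cat cpus
  1-4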
  
  2.3 Setting flags
  -----------------
  
  The syntax is very simple:
  
  # /bin/echo 1 > cpu_exclusive 	-> set flag 'cpu_exclusive'
  # /bin/echo 0 > cpu_exclusive 	-> unset flag 'cpu_exclusive'
  
  2.4 Attaching processes
  -----------------------
  
  # /bin/echo PID > tasks
  
  Note that it is PID, not PIDs. You can only attach ONE task at a time.
  If you have several tasks to attach, you have to do it one after another:
  
  # /bin/echo PID1 > tasks
  # /bin/echo PID2 > tasks
  	...
  # /bin/echo PIDn > tasks
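
  For example, a small loop handles a whole job (pidof and the job
  name are illustrative):

  # for PID in $(pidof my_job); do /bin/echo $PID > tasks; done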
  
  
  3. Questions
  ============
  
  Q: what's up with this '/bin/echo' ?
  A: bash's builtin 'echo' command does not check its calls to write()
     for errors.  If you use it in the cpuset file system, you won't be
     able to tell whether a command succeeded or failed.
  
  Q: When I attach processes, only the first of the line gets really attached !
  A: We can only return one error code per call to write(). So you should also
     put only ONE pid.
  
  4. Contact
  ==========
  
  Web: http://www.bullopensource.org/cpuset