Commit 2eec9ad91f71a3dbacece5c4fb5adc09fad53a96
Committed by Linus Torvalds
1 parent 0771dfefc9
[PATCH] lightweight robust futexes: docs
Add robust-futex documentation.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Showing 2 changed files with 402 additions and 0 deletions
Documentation/robust-futex-ABI.txt
Started by Paul Jackson <pj@sgi.com>

The robust futex ABI
--------------------

Robust_futexes provide a mechanism, used in addition to normal futexes,
by which the kernel assists in cleaning up held locks on task exit.

The interesting data as to what futexes a thread is holding is kept on a
linked list in user space, where it can be updated efficiently as locks
are taken and dropped, without kernel intervention. The only additional
kernel intervention required for robust_futexes above and beyond what is
required for futexes is:

 1) a one time call, per thread, to tell the kernel where its list of
    held robust_futexes begins, and
 2) internal kernel code at exit, to handle any listed locks held
    by the exiting thread.

The existing normal futexes already provide a "Fast Userspace Locking"
mechanism, which handles uncontested locking without needing a system
call, and handles contested locking by maintaining a list of waiting
threads in the kernel. Options on the sys_futex(2) system call support
waiting on a particular futex, and waking up the next waiter on a
particular futex.

For robust_futexes to work, the user code (typically in a library such
as glibc linked with the application) has to manage and place the
necessary list elements exactly as the kernel expects them. If it fails
to do so, then improperly listed locks will not be cleaned up on exit,
probably causing deadlock or other such failure of the other threads
waiting on the same locks.

A thread that anticipates possibly using robust_futexes should first
issue the system call:

    asmlinkage long
    sys_set_robust_list(struct robust_list_head __user *head, size_t len);

The pointer 'head' points to a structure in the thread's address space
consisting of three words. Each word is 32 bits on 32 bit architectures,
or 64 bits on 64 bit architectures, and in local byte order. Each thread
should have its own thread-private 'head'.

If a thread is running in 32 bit compatibility mode on a 64 bit native
arch kernel, then it can actually have two such structures - one using
32 bit words for 32 bit compatibility mode, and one using 64 bit words
for 64 bit native mode. The kernel, if it is a 64 bit kernel supporting
32 bit compatibility mode, will attempt to process both lists on each
task exit, if the corresponding sys_set_robust_list() call has been made
to set up that list.

  The first word in the memory structure at 'head' contains a
  pointer to a singly linked list of 'lock entries', one per lock,
  as described below. If the list is empty, the pointer will point
  to itself, 'head'. The last 'lock entry' points back to the 'head'.

  The second word, called 'offset', specifies the offset, plus or
  minus, from the address of each 'lock entry' to its associated
  'lock word'. The 'lock word' is always a 32 bit word, unlike the
  other words above. The 'lock word' holds 3 flag bits in the upper
  3 bits, and the thread id (TID) of the thread holding the lock in
  the bottom 29 bits. See further below for a description of the
  flag bits.

  The third word, called 'list_op_pending', contains a transient
  copy of the address of the 'lock entry', during list insertion and
  removal, and is needed to correctly resolve races should a thread
  exit while in the middle of a locking or unlocking operation.

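As a concrete sketch, the three words map onto C structures like the
following (the kernel's own header additionally tags the pointers with
the kernel-side __user annotation; comments here are illustrative):

    struct robust_list {
        struct robust_list *next;
    };

    struct robust_list_head {
        struct robust_list list;          /* word 1: list of lock entries */
        long futex_offset;                /* word 2: entry -> lock word   */
        struct robust_list *list_op_pending;  /* word 3: in-flight entry  */
    };
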
Each 'lock entry' on the singly linked list starting at 'head' consists
of just a single word, pointing to the next 'lock entry', or back to
'head' if there are no more entries. In addition, nearby to each 'lock
entry', at an offset from the 'lock entry' specified by the 'offset'
word, is one 'lock word'.

The 'lock word' is always 32 bits, and is intended to be the same 32 bit
lock variable used by the futex mechanism, in conjunction with
robust_futexes. The kernel will only be able to wake up the next thread
waiting for a lock on a thread's exit if that next thread used the futex
mechanism to register the address of that 'lock word' with the kernel.

For each futex lock currently held by a thread, if it wants this
robust_futex support for exit cleanup of that lock, it should have one
'lock entry' on this list, with its associated 'lock word' at the
specified 'offset'. Should a thread die while holding any such locks,
the kernel will walk this list, mark any such locks with a bit
indicating their holder died, and wake up the next thread waiting for
that lock using the futex mechanism.

When a thread has invoked the above system call to indicate it
anticipates using robust_futexes, the kernel stores the passed-in 'head'
pointer for that task. The task may retrieve that value later on by
using the system call:

    asmlinkage long
    sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                        size_t __user *len_ptr);

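Since glibc does not wrap these system calls, user code would invoke
them through syscall(2). A minimal sketch, reusing the structures from
the sketch above (head initialization and error handling simplified):

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static __thread struct robust_list_head head;

    void register_robust_list(void)
    {
        head.list.next = &head.list;    /* empty list points to 'head'  */
        head.futex_offset = 0;          /* illustrative; set per layout */
        head.list_op_pending = NULL;

        if (syscall(SYS_set_robust_list, &head, sizeof(head)) != 0)
            perror("set_robust_list");
    }

    void query_robust_list(void)
    {
        struct robust_list_head *h;
        size_t len;

        /* pid 0 queries the calling thread */
        if (syscall(SYS_get_robust_list, 0, &h, &len) == 0)
            printf("head=%p len=%zu\n", (void *)h, len);
    }
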
It is anticipated that threads will use robust_futexes embedded in
larger, user level locking structures, one per lock. The kernel
robust_futex mechanism doesn't care what else is in that structure, so
long as the 'offset' to the 'lock word' is the same for all
robust_futexes used by that thread. The thread should link those locks
it currently holds using the 'lock entry' pointers. It may also have
other links between the locks, such as the reverse side of a doubly
linked list, but that doesn't matter to the kernel.

By keeping its locks linked this way, on a list starting with a 'head'
pointer known to the kernel, the kernel can provide to a thread the
essential service available for robust_futexes, which is to help clean
up locks held at the time of a (perhaps unexpected) exit.

Actual locking and unlocking, during normal operations, is handled
entirely by user level code in the contending threads, and by the
existing futex mechanism to wait for, and wake up, locks. The kernel's
only essential involvement in robust_futexes is to remember where the
list 'head' is, and to walk the list on thread exit, handling locks
still held by the departing thread, as described below.

There may exist thousands of futex lock structures in a thread's shared
memory, on various data structures, at a given point in time. Only those
lock structures for locks currently held by that thread should be on
that thread's robust_futex linked lock list at a given time.

A given futex lock structure in a user shared memory region may be held
at different times by any of the threads with access to that region. The
thread currently holding such a lock, if any, is marked with the
thread's TID in the lower 29 bits of the 'lock word'.

When adding or removing a lock from its list of held locks, in order for
the kernel to correctly handle lock cleanup regardless of when the task
exits (perhaps it gets an unexpected signal 9 in the middle of
manipulating this list), the user code must observe the following
protocol on 'lock entry' insertion and removal:

On insertion:
 1) set the 'list_op_pending' word to the address of the 'lock entry'
    to be inserted,
 2) acquire the futex lock,
 3) add the lock entry, with its thread id (TID) in the bottom 29 bits
    of the 'lock word', to the linked list starting at 'head', and
 4) clear the 'list_op_pending' word.

 XXX I am particularly unsure of the following -pj XXX

On removal:
 1) set the 'list_op_pending' word to the address of the 'lock entry'
    to be removed,
 2) remove the lock entry for this lock from the 'head' list,
 3) release the futex lock, and
 4) clear the 'list_op_pending' word.

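A hedged C sketch of this protocol, reusing the structures sketched
earlier. The acquire_futex_lock(), release_futex_lock() and
list_remove() helpers are hypothetical stand-ins for the usual
CAS/FUTEX_WAIT and FUTEX_WAKE logic, and memory-ordering details are
omitted:

    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical helpers, not part of any real API: */
    extern void acquire_futex_lock(uint32_t *lock_word, uint32_t tid);
    extern void release_futex_lock(uint32_t *lock_word);
    extern void list_remove(struct robust_list *list,
                            struct robust_list *entry);

    void robust_lock(struct robust_list_head *head,
                     struct robust_list *entry, uint32_t tid)
    {
        uint32_t *lock_word =
            (uint32_t *)((char *)entry + head->futex_offset);

        head->list_op_pending = entry;       /* 1) announce pending op  */
        acquire_futex_lock(lock_word, tid);  /* 2) CAS tid into word    */
        entry->next = head->list.next;       /* 3) link after 'head'    */
        head->list.next = entry;
        head->list_op_pending = NULL;        /* 4) clear pending marker */
    }

    void robust_unlock(struct robust_list_head *head,
                       struct robust_list *entry)
    {
        uint32_t *lock_word =
            (uint32_t *)((char *)entry + head->futex_offset);

        head->list_op_pending = entry;       /* 1) announce pending op  */
        list_remove(&head->list, entry);     /* 2) unlink from list     */
        release_futex_lock(lock_word);       /* 3) clear TID, wake next */
        head->list_op_pending = NULL;        /* 4) clear pending marker */
    }
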
On exit, the kernel will consider the address stored in
'list_op_pending' and the address of each 'lock entry' found by walking
the list starting at 'head'. For each such address, if the bottom 29
bits of the 'lock word' at offset 'offset' from that address equal the
exiting thread's TID, then the kernel will do two things:

 1) if bit 31 (0x80000000) is set in that word, then attempt a futex
    wakeup on that address, which will wake the next thread that has
    used the futex mechanism to wait on that address, and
 2) atomically set bit 30 (0x40000000) in the 'lock word'.

In the above, bit 31 was set by futex waiters on that lock to indicate
they were waiting, and bit 30 is set by the kernel to indicate that the
lock owner died holding the lock.

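In C terms, the exit-time walk amounts to something like the following
simplified sketch. The real kernel code also validates every user-space
pointer, sets bit 30 with an atomic cmpxchg, and processes the
'list_op_pending' address the same way; futex_wake() stands in for the
in-kernel wakeup, and the mask values follow this document's bit layout:

    #include <stdint.h>

    #define FUTEX_WAITERS    0x80000000  /* bit 31: waiters queued      */
    #define FUTEX_OWNER_DIED 0x40000000  /* bit 30: holder died         */
    #define FUTEX_TID_MASK   0x1fffffff  /* bottom 29 bits: holder TID  */

    extern void futex_wake(uint32_t *lock_word, int nr);  /* stand-in   */

    void exit_robust_list(struct robust_list_head *head, uint32_t dead_tid)
    {
        struct robust_list *entry = head->list.next;
        int limit = 1000000;             /* list-length cap, see below  */

        while (entry != &head->list && limit-- > 0) {
            uint32_t *lock_word =
                (uint32_t *)((char *)entry + head->futex_offset);
            uint32_t val = *lock_word;

            if ((val & FUTEX_TID_MASK) == dead_tid) {
                *lock_word = val | FUTEX_OWNER_DIED;  /* set bit 30...  */
                if (val & FUTEX_WAITERS)
                    futex_wake(lock_word, 1);  /* ...then wake a waiter */
            }
            entry = entry->next;
        }
    }
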
The kernel exit code will silently stop scanning the list further if at
any point:

 1) the 'head' pointer or any subsequent linked list pointer
    is not a valid address of a user space word,
 2) the calculated location of the 'lock word' (address plus
    'offset') is not the valid address of a 32 bit user space
    word, or
 3) the list contains more than 1 million (subject to
    future kernel configuration changes) elements.

When the kernel sees a list entry whose 'lock word' doesn't have the
current thread's TID in the lower 29 bits, it does nothing with that
entry, and goes on to the next entry.

Bit 29 (0x20000000) of the 'lock word' is reserved for future use.
Documentation/robust-futexes.txt
Started by: Ingo Molnar <mingo@redhat.com>

Background
----------

What are robust futexes? To answer that, we first need to understand
what futexes are: normal futexes are special types of locks that in the
noncontended case can be acquired/released from userspace without having
to enter the kernel.

A futex is in essence a user-space address, e.g. a 32-bit lock variable
field. If userspace notices contention (the lock is already owned and
someone else wants to grab it too) then the lock is marked with a value
that says "there's a waiter pending", and the sys_futex(FUTEX_WAIT)
syscall is used to wait for the other guy to release it. The kernel
creates a 'futex queue' internally, so that it can later on match up the
waiter with the waker - without them having to know about each other.
When the owner thread releases the futex, it notices (via the variable
value) that there were waiter(s) pending, and does the
sys_futex(FUTEX_WAKE) syscall to wake them up. Once all waiters have
taken and released the lock, the futex is again back to 'uncontended'
state, and there's no in-kernel state associated with it. The kernel
completely forgets that there ever was a futex at that address. This
method makes futexes very lightweight and scalable.

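To make that flow concrete, here is a minimal three-state futex mutex
in the style of Ulrich Drepper's "Futexes Are Tricky" paper (0 = free,
1 = locked, 2 = locked with waiters) - an illustrative sketch, not the
glibc implementation:

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long futex(_Atomic uint32_t *uaddr, int op, uint32_t val)
    {
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
    }

    void futex_lock(_Atomic uint32_t *f)
    {
        uint32_t c = 0;
        if (atomic_compare_exchange_strong(f, &c, 1))
            return;                       /* uncontended: no syscall    */
        if (c != 2)
            c = atomic_exchange(f, 2);    /* mark the lock contended    */
        while (c != 0) {
            futex(f, FUTEX_WAIT, 2);      /* sleep while value is 2     */
            c = atomic_exchange(f, 2);
        }
    }

    void futex_unlock(_Atomic uint32_t *f)
    {
        if (atomic_exchange(f, 0) == 2)   /* were there waiters?        */
            futex(f, FUTEX_WAKE, 1);      /* wake up one of them        */
    }
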
26 | +"Robustness" is about dealing with crashes while holding a lock: if a | |
27 | +process exits prematurely while holding a pthread_mutex_t lock that is | |
28 | +also shared with some other process (e.g. yum segfaults while holding a | |
29 | +pthread_mutex_t, or yum is kill -9-ed), then waiters for that lock need | |
30 | +to be notified that the last owner of the lock exited in some irregular | |
31 | +way. | |
32 | + | |
33 | +To solve such types of problems, "robust mutex" userspace APIs were | |
34 | +created: pthread_mutex_lock() returns an error value if the owner exits | |
35 | +prematurely - and the new owner can decide whether the data protected by | |
36 | +the lock can be recovered safely. | |
37 | + | |
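For reference, this is how that API looks from application code (shown
with the names later standardized by POSIX; at the time of this patch
glibc shipped them with _np suffixes):

    #include <errno.h>
    #include <pthread.h>

    pthread_mutex_t m;

    void init_robust_mutex(void)
    {
        pthread_mutexattr_t a;

        pthread_mutexattr_init(&a);
        pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
        pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
        pthread_mutex_init(&m, &a);
        pthread_mutexattr_destroy(&a);
    }

    void with_lock(void)
    {
        int r = pthread_mutex_lock(&m);

        if (r == EOWNERDEAD) {
            /* the previous owner died holding the lock: recover the
               protected data, then mark the mutex consistent again */
            pthread_mutex_consistent(&m);
        }
        /* ... critical section ... */
        pthread_mutex_unlock(&m);
    }
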
There is a big conceptual problem with futex based mutexes though: it is
the kernel that destroys the owner task (e.g. due to a SEGFAULT), but
the kernel cannot help with the cleanup: if there is no 'futex queue'
(and in most cases there is none, futexes being fast lightweight locks)
then the kernel has no information to clean up after the held lock!
Userspace has no chance to clean up after the lock either - userspace is
the one that crashes, so it has no opportunity to clean up. Catch-22.

In practice, when e.g. yum is kill -9-ed (or segfaults), a system reboot
is needed to release that futex based lock. This is one of the leading
bug reports against yum.

To solve this problem, the traditional approach was to extend the vma
(virtual memory area descriptor) concept to have a notion of 'pending
robust futexes attached to this area'. This approach requires 3 new
syscall variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and
FUTEX_RECOVER. At do_exit() time, all vmas are searched to see whether
they have a robust_head set. This approach has two fundamental problems
left:

 - it has quite complex locking and race scenarios. The vma-based
   patches have been pending for years, but they are still not
   completely reliable.

 - they have to scan _every_ vma at sys_exit() time, per thread!

The second disadvantage is a real killer: pthread_exit() takes around 1
microsecond on Linux, but with thousands (or tens of thousands) of vmas
every pthread_exit() takes a millisecond or more, also totally
destroying the CPU's L1 and L2 caches!

This is very much noticeable even for normal process sys_exit_group()
calls: the kernel has to do the vma scanning unconditionally! (this is
because the kernel has no knowledge about how many robust futexes there
are to be cleaned up, because a robust futex might have been registered
in another task, and the futex variable might have been simply mmap()-ed
into this process's address space).

This huge overhead forced the creation of CONFIG_FUTEX_ROBUST so that
normal kernels can turn it off, but worse than that: the overhead makes
robust futexes impractical for any type of generic Linux distribution.

So something had to be done.

New approach to robust futexes
------------------------------

At the heart of this new approach there is a per-thread private list of
robust locks that userspace is holding (maintained by glibc) - which
userspace list is registered with the kernel via a new syscall [this
registration happens at most once per thread lifetime]. At do_exit()
time, the kernel checks this user-space list: are there any robust futex
locks to be cleaned up?

In the common case, at do_exit() time, there is no list registered, so
the cost of robust futexes is just a simple current->robust_list != NULL
comparison. If the thread has registered a list, then normally the list
is empty. If the thread/process crashed or terminated in some incorrect
way then the list might be non-empty: in this case the kernel carefully
walks the list [not trusting it], and marks all locks that are owned by
this thread with the FUTEX_OWNER_DIED bit, and wakes up one waiter (if
any).

The list is guaranteed to be private and per-thread at do_exit() time,
so it can be accessed by the kernel in a lockless way.

There is one race possible though: since adding to and removing from the
list is done after the futex is acquired by glibc, there is a window of
a few instructions in which the thread (or process) could die, leaving
the futex hung. To protect against this possibility, userspace (glibc)
also maintains a simple per-thread 'list_op_pending' field, to allow the
kernel to clean up if the thread dies after acquiring the lock, but just
before it could have added itself to the list. Glibc sets this
list_op_pending field before it tries to acquire the futex, and clears
it after the list-add (or list-remove) has finished.

That's all that is needed - all the rest of robust-futex cleanup is done
in userspace [just like with the previous patches].

Ulrich Drepper has implemented the necessary glibc support for this new
mechanism, which fully enables robust mutexes.

Key differences of this userspace-list based approach, compared to the
vma based method:

 - it's much, much faster: at thread exit time, there's no need to loop
   over every vma (!), which the VM-based method has to do. Only a very
   simple 'is the list empty' op is done.

 - no VM changes are needed - 'struct address_space' is left alone.

 - no registration of individual locks is needed: robust mutexes don't
   need any extra per-lock syscalls. Robust mutexes thus become a very
   lightweight primitive - so they don't force the application designer
   to make a hard choice between performance and robustness - robust
   mutexes are just as fast.

 - no per-lock kernel allocation happens.

 - no resource limits are needed.

 - no kernel-space recovery call (FUTEX_RECOVER) is needed.

 - the implementation and the locking is "obvious", and there are no
   interactions with the VM.

Performance
-----------

I have benchmarked the time needed for the kernel to process a list of 1
million (!) held locks, using the new method [on a 2GHz CPU]:

 - with FUTEX_WAIT set [contended mutex]: 130 msecs
 - without FUTEX_WAIT set [uncontended mutex]: 30 msecs

I have also measured an approach where glibc does the lock notification
[which it currently does for !pshared robust mutexes], and that took 256
msecs - clearly slower, due to the 1 million FUTEX_WAKE syscalls
userspace had to do.

(1 million held locks are unheard of - we expect at most a handful of
locks to be held at a time. Nevertheless it's nice to know that this
approach scales nicely.)

Implementation details
----------------------

The patch adds two new syscalls: one to register the userspace list, and
one to query the registered list pointer:

    asmlinkage long
    sys_set_robust_list(struct robust_list_head __user *head,
                        size_t len);

    asmlinkage long
    sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                        size_t __user *len_ptr);

List registration is very fast: the pointer is simply stored in
current->robust_list. [Note that in the future, if robust futexes become
widespread, we could extend sys_clone() to register a robust-list head
for new threads, without the need of another syscall.]

So there is virtually zero overhead for tasks not using robust futexes,
and even for robust futex users, there is only one extra syscall per
thread lifetime, and the cleanup operation, if it happens, is fast and
straightforward. The kernel doesn't have any internal distinction
between robust and normal futexes.

If a futex is found to be held at exit time, the kernel sets the
following bit of the futex word:

    #define FUTEX_OWNER_DIED 0x40000000

and wakes up the next futex waiter (if any). User-space does the rest of
the cleanup.

Otherwise, robust futexes are acquired by glibc by putting the TID into
the futex field atomically. Waiters set the FUTEX_WAITERS bit:

    #define FUTEX_WAITERS 0x80000000

and the remaining bits are for the TID.

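Putting these two bits together, the glibc fast path boils down to a
compare-and-swap of the TID into the futex word. A hedged illustration
(trylock only, waiter queueing omitted - not glibc's actual code):

    #include <errno.h>
    #include <stdatomic.h>
    #include <stdint.h>

    #define FUTEX_WAITERS    0x80000000
    #define FUTEX_OWNER_DIED 0x40000000

    int robust_trylock(_Atomic uint32_t *futex, uint32_t tid)
    {
        uint32_t old = 0;

        /* uncontended: atomically install our TID */
        if (atomic_compare_exchange_strong(futex, &old, tid))
            return 0;

        if (old & FUTEX_OWNER_DIED) {
            /* owner died: take over the lock, clearing OWNER_DIED but
               preserving the waiter bit; the caller must then check
               whether the protected data is consistent */
            uint32_t newval = (old & FUTEX_WAITERS) | tid;
            if (atomic_compare_exchange_strong(futex, &old, newval))
                return EOWNERDEAD;
        }
        return EBUSY;   /* held by a live owner */
    }
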
Testing, architecture support
-----------------------------

I've tested the new syscalls on x86 and x86_64, and have made sure the
parsing of the userspace list is robust [ ;-) ] even if the list is
deliberately corrupted.

i386 and x86_64 syscalls are wired up at the moment, and Ulrich has
tested the new glibc code (on x86_64 and i386), and it works for his
robust-mutex testcases.

All other architectures should build just fine too - but they won't have
the new syscalls yet.

Architectures need to implement the new futex_atomic_cmpxchg_inuser()
inline function before wiring up the syscalls (that function returns
-ENOSYS right now).