Commit 26cdd1f76a889a21faa851bcb260782db2c7f0a9
Exists in ti-lsk-linux-4.1.y and in 10 other branches
Merge branches 'timers-urgent-for-linus' and 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer and x86 fixes from Ingo Molnar:
 "A CLOCK_TAI early expiry fix and an x86 microcode driver oops fix"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  hrtimer: Fix incorrect tai offset calculation for non high-res timer systems

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86, microcode: Return error from driver init code when loader is disabled
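Both fixes are one-line changes, visible in the diffs below: in arch/x86/kernel/cpu/microcode/core.c, microcode_init() now returns -EINVAL instead of 0 when the loader is disabled (or paravirt is enabled), so a disabled driver never reports a successful load; in kernel/time/hrtimer.c, hrtimer_get_softirq_time() now computes the CLOCK_TAI base time from mono rather than xtim, since xtim already includes the realtime offset. Short illustrative sketches of each fix follow the respective file's diff.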
Showing 2 changed files
arch/x86/kernel/cpu/microcode/core.c
1 | /* | 1 | /* |
2 | * Intel CPU Microcode Update Driver for Linux | 2 | * Intel CPU Microcode Update Driver for Linux |
3 | * | 3 | * |
4 | * Copyright (C) 2000-2006 Tigran Aivazian <tigran@aivazian.fsnet.co.uk> | 4 | * Copyright (C) 2000-2006 Tigran Aivazian <tigran@aivazian.fsnet.co.uk> |
5 | * 2006 Shaohua Li <shaohua.li@intel.com> | 5 | * 2006 Shaohua Li <shaohua.li@intel.com> |
6 | * | 6 | * |
7 | * This driver allows to upgrade microcode on Intel processors | 7 | * This driver allows to upgrade microcode on Intel processors |
8 | * belonging to IA-32 family - PentiumPro, Pentium II, | 8 | * belonging to IA-32 family - PentiumPro, Pentium II, |
9 | * Pentium III, Xeon, Pentium 4, etc. | 9 | * Pentium III, Xeon, Pentium 4, etc. |
10 | * | 10 | * |
11 | * Reference: Section 8.11 of Volume 3a, IA-32 Intel® Architecture | 11 | * Reference: Section 8.11 of Volume 3a, IA-32 Intel® Architecture |
12 | * Software Developer's Manual | 12 | * Software Developer's Manual |
13 | * Order Number 253668 or free download from: | 13 | * Order Number 253668 or free download from: |
14 | * | 14 | * |
15 | * http://developer.intel.com/Assets/PDF/manual/253668.pdf | 15 | * http://developer.intel.com/Assets/PDF/manual/253668.pdf |
16 | * | 16 | * |
17 | * For more information, go to http://www.urbanmyth.org/microcode | 17 | * For more information, go to http://www.urbanmyth.org/microcode |
18 | * | 18 | * |
19 | * This program is free software; you can redistribute it and/or | 19 | * This program is free software; you can redistribute it and/or |
20 | * modify it under the terms of the GNU General Public License | 20 | * modify it under the terms of the GNU General Public License |
21 | * as published by the Free Software Foundation; either version | 21 | * as published by the Free Software Foundation; either version |
22 | * 2 of the License, or (at your option) any later version. | 22 | * 2 of the License, or (at your option) any later version. |
23 | * | 23 | * |
24 | * 1.0 16 Feb 2000, Tigran Aivazian <tigran@sco.com> | 24 | * 1.0 16 Feb 2000, Tigran Aivazian <tigran@sco.com> |
25 | * Initial release. | 25 | * Initial release. |
26 | * 1.01 18 Feb 2000, Tigran Aivazian <tigran@sco.com> | 26 | * 1.01 18 Feb 2000, Tigran Aivazian <tigran@sco.com> |
27 | * Added read() support + cleanups. | 27 | * Added read() support + cleanups. |
28 | * 1.02 21 Feb 2000, Tigran Aivazian <tigran@sco.com> | 28 | * 1.02 21 Feb 2000, Tigran Aivazian <tigran@sco.com> |
29 | * Added 'device trimming' support. open(O_WRONLY) zeroes | 29 | * Added 'device trimming' support. open(O_WRONLY) zeroes |
30 | * and frees the saved copy of applied microcode. | 30 | * and frees the saved copy of applied microcode. |
31 | * 1.03 29 Feb 2000, Tigran Aivazian <tigran@sco.com> | 31 | * 1.03 29 Feb 2000, Tigran Aivazian <tigran@sco.com> |
32 | * Made to use devfs (/dev/cpu/microcode) + cleanups. | 32 | * Made to use devfs (/dev/cpu/microcode) + cleanups. |
33 | * 1.04 06 Jun 2000, Simon Trimmer <simon@veritas.com> | 33 | * 1.04 06 Jun 2000, Simon Trimmer <simon@veritas.com> |
34 | * Added misc device support (now uses both devfs and misc). | 34 | * Added misc device support (now uses both devfs and misc). |
35 | * Added MICROCODE_IOCFREE ioctl to clear memory. | 35 | * Added MICROCODE_IOCFREE ioctl to clear memory. |
36 | * 1.05 09 Jun 2000, Simon Trimmer <simon@veritas.com> | 36 | * 1.05 09 Jun 2000, Simon Trimmer <simon@veritas.com> |
37 | * Messages for error cases (non Intel & no suitable microcode). | 37 | * Messages for error cases (non Intel & no suitable microcode). |
38 | * 1.06 03 Aug 2000, Tigran Aivazian <tigran@veritas.com> | 38 | * 1.06 03 Aug 2000, Tigran Aivazian <tigran@veritas.com> |
39 | * Removed ->release(). Removed exclusive open and status bitmap. | 39 | * Removed ->release(). Removed exclusive open and status bitmap. |
40 | * Added microcode_rwsem to serialize read()/write()/ioctl(). | 40 | * Added microcode_rwsem to serialize read()/write()/ioctl(). |
41 | * Removed global kernel lock usage. | 41 | * Removed global kernel lock usage. |
42 | * 1.07 07 Sep 2000, Tigran Aivazian <tigran@veritas.com> | 42 | * 1.07 07 Sep 2000, Tigran Aivazian <tigran@veritas.com> |
43 | * Write 0 to 0x8B msr and then cpuid before reading revision, | 43 | * Write 0 to 0x8B msr and then cpuid before reading revision, |
44 | * so that it works even if there were no update done by the | 44 | * so that it works even if there were no update done by the |
45 | * BIOS. Otherwise, reading from 0x8B gives junk (which happened | 45 | * BIOS. Otherwise, reading from 0x8B gives junk (which happened |
46 | * to be 0 on my machine which is why it worked even when I | 46 | * to be 0 on my machine which is why it worked even when I |
47 | * disabled update by the BIOS) | 47 | * disabled update by the BIOS) |
48 | * Thanks to Eric W. Biederman <ebiederman@lnxi.com> for the fix. | 48 | * Thanks to Eric W. Biederman <ebiederman@lnxi.com> for the fix. |
49 | * 1.08 11 Dec 2000, Richard Schaal <richard.schaal@intel.com> and | 49 | * 1.08 11 Dec 2000, Richard Schaal <richard.schaal@intel.com> and |
50 | * Tigran Aivazian <tigran@veritas.com> | 50 | * Tigran Aivazian <tigran@veritas.com> |
51 | * Intel Pentium 4 processor support and bugfixes. | 51 | * Intel Pentium 4 processor support and bugfixes. |
52 | * 1.09 30 Oct 2001, Tigran Aivazian <tigran@veritas.com> | 52 | * 1.09 30 Oct 2001, Tigran Aivazian <tigran@veritas.com> |
53 | * Bugfix for HT (Hyper-Threading) enabled processors | 53 | * Bugfix for HT (Hyper-Threading) enabled processors |
54 | * whereby processor resources are shared by all logical processors | 54 | * whereby processor resources are shared by all logical processors |
55 | * in a single CPU package. | 55 | * in a single CPU package. |
56 | * 1.10 28 Feb 2002 Asit K Mallick <asit.k.mallick@intel.com> and | 56 | * 1.10 28 Feb 2002 Asit K Mallick <asit.k.mallick@intel.com> and |
57 | * Tigran Aivazian <tigran@veritas.com>, | 57 | * Tigran Aivazian <tigran@veritas.com>, |
58 | * Serialize updates as required on HT processors due to | 58 | * Serialize updates as required on HT processors due to |
59 | * speculative nature of implementation. | 59 | * speculative nature of implementation. |
60 | * 1.11 22 Mar 2002 Tigran Aivazian <tigran@veritas.com> | 60 | * 1.11 22 Mar 2002 Tigran Aivazian <tigran@veritas.com> |
61 | * Fix the panic when writing zero-length microcode chunk. | 61 | * Fix the panic when writing zero-length microcode chunk. |
62 | * 1.12 29 Sep 2003 Nitin Kamble <nitin.a.kamble@intel.com>, | 62 | * 1.12 29 Sep 2003 Nitin Kamble <nitin.a.kamble@intel.com>, |
63 | * Jun Nakajima <jun.nakajima@intel.com> | 63 | * Jun Nakajima <jun.nakajima@intel.com> |
64 | * Support for the microcode updates in the new format. | 64 | * Support for the microcode updates in the new format. |
65 | * 1.13 10 Oct 2003 Tigran Aivazian <tigran@veritas.com> | 65 | * 1.13 10 Oct 2003 Tigran Aivazian <tigran@veritas.com> |
66 | * Removed ->read() method and obsoleted MICROCODE_IOCFREE ioctl | 66 | * Removed ->read() method and obsoleted MICROCODE_IOCFREE ioctl |
67 | * because we no longer hold a copy of applied microcode | 67 | * because we no longer hold a copy of applied microcode |
68 | * in kernel memory. | 68 | * in kernel memory. |
69 | * 1.14 25 Jun 2004 Tigran Aivazian <tigran@veritas.com> | 69 | * 1.14 25 Jun 2004 Tigran Aivazian <tigran@veritas.com> |
70 | * Fix sigmatch() macro to handle old CPUs with pf == 0. | 70 | * Fix sigmatch() macro to handle old CPUs with pf == 0. |
71 | * Thanks to Stuart Swales for pointing out this bug. | 71 | * Thanks to Stuart Swales for pointing out this bug. |
72 | */ | 72 | */ |
73 | 73 | ||
74 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt | 74 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt |
75 | 75 | ||
76 | #include <linux/platform_device.h> | 76 | #include <linux/platform_device.h> |
77 | #include <linux/miscdevice.h> | 77 | #include <linux/miscdevice.h> |
78 | #include <linux/capability.h> | 78 | #include <linux/capability.h> |
79 | #include <linux/kernel.h> | 79 | #include <linux/kernel.h> |
80 | #include <linux/module.h> | 80 | #include <linux/module.h> |
81 | #include <linux/mutex.h> | 81 | #include <linux/mutex.h> |
82 | #include <linux/cpu.h> | 82 | #include <linux/cpu.h> |
83 | #include <linux/fs.h> | 83 | #include <linux/fs.h> |
84 | #include <linux/mm.h> | 84 | #include <linux/mm.h> |
85 | #include <linux/syscore_ops.h> | 85 | #include <linux/syscore_ops.h> |
86 | 86 | ||
87 | #include <asm/microcode.h> | 87 | #include <asm/microcode.h> |
88 | #include <asm/processor.h> | 88 | #include <asm/processor.h> |
89 | #include <asm/cpu_device_id.h> | 89 | #include <asm/cpu_device_id.h> |
90 | #include <asm/perf_event.h> | 90 | #include <asm/perf_event.h> |
91 | 91 | ||
92 | MODULE_DESCRIPTION("Microcode Update Driver"); | 92 | MODULE_DESCRIPTION("Microcode Update Driver"); |
93 | MODULE_AUTHOR("Tigran Aivazian <tigran@aivazian.fsnet.co.uk>"); | 93 | MODULE_AUTHOR("Tigran Aivazian <tigran@aivazian.fsnet.co.uk>"); |
94 | MODULE_LICENSE("GPL"); | 94 | MODULE_LICENSE("GPL"); |
95 | 95 | ||
96 | #define MICROCODE_VERSION "2.00" | 96 | #define MICROCODE_VERSION "2.00" |
97 | 97 | ||
98 | static struct microcode_ops *microcode_ops; | 98 | static struct microcode_ops *microcode_ops; |
99 | 99 | ||
100 | bool dis_ucode_ldr; | 100 | bool dis_ucode_ldr; |
101 | module_param(dis_ucode_ldr, bool, 0); | 101 | module_param(dis_ucode_ldr, bool, 0); |
102 | 102 | ||
103 | /* | 103 | /* |
104 | * Synchronization. | 104 | * Synchronization. |
105 | * | 105 | * |
106 | * All non cpu-hotplug-callback call sites use: | 106 | * All non cpu-hotplug-callback call sites use: |
107 | * | 107 | * |
108 | * - microcode_mutex to synchronize with each other; | 108 | * - microcode_mutex to synchronize with each other; |
109 | * - get/put_online_cpus() to synchronize with | 109 | * - get/put_online_cpus() to synchronize with |
110 | * the cpu-hotplug-callback call sites. | 110 | * the cpu-hotplug-callback call sites. |
111 | * | 111 | * |
112 | * We guarantee that only a single cpu is being | 112 | * We guarantee that only a single cpu is being |
113 | * updated at any particular moment of time. | 113 | * updated at any particular moment of time. |
114 | */ | 114 | */ |
115 | static DEFINE_MUTEX(microcode_mutex); | 115 | static DEFINE_MUTEX(microcode_mutex); |
116 | 116 | ||
117 | struct ucode_cpu_info ucode_cpu_info[NR_CPUS]; | 117 | struct ucode_cpu_info ucode_cpu_info[NR_CPUS]; |
118 | EXPORT_SYMBOL_GPL(ucode_cpu_info); | 118 | EXPORT_SYMBOL_GPL(ucode_cpu_info); |
119 | 119 | ||
120 | /* | 120 | /* |
121 | * Operations that are run on a target cpu: | 121 | * Operations that are run on a target cpu: |
122 | */ | 122 | */ |
123 | 123 | ||
124 | struct cpu_info_ctx { | 124 | struct cpu_info_ctx { |
125 | struct cpu_signature *cpu_sig; | 125 | struct cpu_signature *cpu_sig; |
126 | int err; | 126 | int err; |
127 | }; | 127 | }; |
128 | 128 | ||
129 | static void collect_cpu_info_local(void *arg) | 129 | static void collect_cpu_info_local(void *arg) |
130 | { | 130 | { |
131 | struct cpu_info_ctx *ctx = arg; | 131 | struct cpu_info_ctx *ctx = arg; |
132 | 132 | ||
133 | ctx->err = microcode_ops->collect_cpu_info(smp_processor_id(), | 133 | ctx->err = microcode_ops->collect_cpu_info(smp_processor_id(), |
134 | ctx->cpu_sig); | 134 | ctx->cpu_sig); |
135 | } | 135 | } |
136 | 136 | ||
137 | static int collect_cpu_info_on_target(int cpu, struct cpu_signature *cpu_sig) | 137 | static int collect_cpu_info_on_target(int cpu, struct cpu_signature *cpu_sig) |
138 | { | 138 | { |
139 | struct cpu_info_ctx ctx = { .cpu_sig = cpu_sig, .err = 0 }; | 139 | struct cpu_info_ctx ctx = { .cpu_sig = cpu_sig, .err = 0 }; |
140 | int ret; | 140 | int ret; |
141 | 141 | ||
142 | ret = smp_call_function_single(cpu, collect_cpu_info_local, &ctx, 1); | 142 | ret = smp_call_function_single(cpu, collect_cpu_info_local, &ctx, 1); |
143 | if (!ret) | 143 | if (!ret) |
144 | ret = ctx.err; | 144 | ret = ctx.err; |
145 | 145 | ||
146 | return ret; | 146 | return ret; |
147 | } | 147 | } |
148 | 148 | ||
149 | static int collect_cpu_info(int cpu) | 149 | static int collect_cpu_info(int cpu) |
150 | { | 150 | { |
151 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; | 151 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; |
152 | int ret; | 152 | int ret; |
153 | 153 | ||
154 | memset(uci, 0, sizeof(*uci)); | 154 | memset(uci, 0, sizeof(*uci)); |
155 | 155 | ||
156 | ret = collect_cpu_info_on_target(cpu, &uci->cpu_sig); | 156 | ret = collect_cpu_info_on_target(cpu, &uci->cpu_sig); |
157 | if (!ret) | 157 | if (!ret) |
158 | uci->valid = 1; | 158 | uci->valid = 1; |
159 | 159 | ||
160 | return ret; | 160 | return ret; |
161 | } | 161 | } |
162 | 162 | ||
163 | struct apply_microcode_ctx { | 163 | struct apply_microcode_ctx { |
164 | int err; | 164 | int err; |
165 | }; | 165 | }; |
166 | 166 | ||
167 | static void apply_microcode_local(void *arg) | 167 | static void apply_microcode_local(void *arg) |
168 | { | 168 | { |
169 | struct apply_microcode_ctx *ctx = arg; | 169 | struct apply_microcode_ctx *ctx = arg; |
170 | 170 | ||
171 | ctx->err = microcode_ops->apply_microcode(smp_processor_id()); | 171 | ctx->err = microcode_ops->apply_microcode(smp_processor_id()); |
172 | } | 172 | } |
173 | 173 | ||
174 | static int apply_microcode_on_target(int cpu) | 174 | static int apply_microcode_on_target(int cpu) |
175 | { | 175 | { |
176 | struct apply_microcode_ctx ctx = { .err = 0 }; | 176 | struct apply_microcode_ctx ctx = { .err = 0 }; |
177 | int ret; | 177 | int ret; |
178 | 178 | ||
179 | ret = smp_call_function_single(cpu, apply_microcode_local, &ctx, 1); | 179 | ret = smp_call_function_single(cpu, apply_microcode_local, &ctx, 1); |
180 | if (!ret) | 180 | if (!ret) |
181 | ret = ctx.err; | 181 | ret = ctx.err; |
182 | 182 | ||
183 | return ret; | 183 | return ret; |
184 | } | 184 | } |
185 | 185 | ||
186 | #ifdef CONFIG_MICROCODE_OLD_INTERFACE | 186 | #ifdef CONFIG_MICROCODE_OLD_INTERFACE |
187 | static int do_microcode_update(const void __user *buf, size_t size) | 187 | static int do_microcode_update(const void __user *buf, size_t size) |
188 | { | 188 | { |
189 | int error = 0; | 189 | int error = 0; |
190 | int cpu; | 190 | int cpu; |
191 | 191 | ||
192 | for_each_online_cpu(cpu) { | 192 | for_each_online_cpu(cpu) { |
193 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; | 193 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; |
194 | enum ucode_state ustate; | 194 | enum ucode_state ustate; |
195 | 195 | ||
196 | if (!uci->valid) | 196 | if (!uci->valid) |
197 | continue; | 197 | continue; |
198 | 198 | ||
199 | ustate = microcode_ops->request_microcode_user(cpu, buf, size); | 199 | ustate = microcode_ops->request_microcode_user(cpu, buf, size); |
200 | if (ustate == UCODE_ERROR) { | 200 | if (ustate == UCODE_ERROR) { |
201 | error = -1; | 201 | error = -1; |
202 | break; | 202 | break; |
203 | } else if (ustate == UCODE_OK) | 203 | } else if (ustate == UCODE_OK) |
204 | apply_microcode_on_target(cpu); | 204 | apply_microcode_on_target(cpu); |
205 | } | 205 | } |
206 | 206 | ||
207 | return error; | 207 | return error; |
208 | } | 208 | } |
209 | 209 | ||
210 | static int microcode_open(struct inode *inode, struct file *file) | 210 | static int microcode_open(struct inode *inode, struct file *file) |
211 | { | 211 | { |
212 | return capable(CAP_SYS_RAWIO) ? nonseekable_open(inode, file) : -EPERM; | 212 | return capable(CAP_SYS_RAWIO) ? nonseekable_open(inode, file) : -EPERM; |
213 | } | 213 | } |
214 | 214 | ||
215 | static ssize_t microcode_write(struct file *file, const char __user *buf, | 215 | static ssize_t microcode_write(struct file *file, const char __user *buf, |
216 | size_t len, loff_t *ppos) | 216 | size_t len, loff_t *ppos) |
217 | { | 217 | { |
218 | ssize_t ret = -EINVAL; | 218 | ssize_t ret = -EINVAL; |
219 | 219 | ||
220 | if ((len >> PAGE_SHIFT) > totalram_pages) { | 220 | if ((len >> PAGE_SHIFT) > totalram_pages) { |
221 | pr_err("too much data (max %ld pages)\n", totalram_pages); | 221 | pr_err("too much data (max %ld pages)\n", totalram_pages); |
222 | return ret; | 222 | return ret; |
223 | } | 223 | } |
224 | 224 | ||
225 | get_online_cpus(); | 225 | get_online_cpus(); |
226 | mutex_lock(&microcode_mutex); | 226 | mutex_lock(&microcode_mutex); |
227 | 227 | ||
228 | if (do_microcode_update(buf, len) == 0) | 228 | if (do_microcode_update(buf, len) == 0) |
229 | ret = (ssize_t)len; | 229 | ret = (ssize_t)len; |
230 | 230 | ||
231 | if (ret > 0) | 231 | if (ret > 0) |
232 | perf_check_microcode(); | 232 | perf_check_microcode(); |
233 | 233 | ||
234 | mutex_unlock(&microcode_mutex); | 234 | mutex_unlock(&microcode_mutex); |
235 | put_online_cpus(); | 235 | put_online_cpus(); |
236 | 236 | ||
237 | return ret; | 237 | return ret; |
238 | } | 238 | } |
239 | 239 | ||
240 | static const struct file_operations microcode_fops = { | 240 | static const struct file_operations microcode_fops = { |
241 | .owner = THIS_MODULE, | 241 | .owner = THIS_MODULE, |
242 | .write = microcode_write, | 242 | .write = microcode_write, |
243 | .open = microcode_open, | 243 | .open = microcode_open, |
244 | .llseek = no_llseek, | 244 | .llseek = no_llseek, |
245 | }; | 245 | }; |
246 | 246 | ||
247 | static struct miscdevice microcode_dev = { | 247 | static struct miscdevice microcode_dev = { |
248 | .minor = MICROCODE_MINOR, | 248 | .minor = MICROCODE_MINOR, |
249 | .name = "microcode", | 249 | .name = "microcode", |
250 | .nodename = "cpu/microcode", | 250 | .nodename = "cpu/microcode", |
251 | .fops = &microcode_fops, | 251 | .fops = &microcode_fops, |
252 | }; | 252 | }; |
253 | 253 | ||
254 | static int __init microcode_dev_init(void) | 254 | static int __init microcode_dev_init(void) |
255 | { | 255 | { |
256 | int error; | 256 | int error; |
257 | 257 | ||
258 | error = misc_register(&microcode_dev); | 258 | error = misc_register(&microcode_dev); |
259 | if (error) { | 259 | if (error) { |
260 | pr_err("can't misc_register on minor=%d\n", MICROCODE_MINOR); | 260 | pr_err("can't misc_register on minor=%d\n", MICROCODE_MINOR); |
261 | return error; | 261 | return error; |
262 | } | 262 | } |
263 | 263 | ||
264 | return 0; | 264 | return 0; |
265 | } | 265 | } |
266 | 266 | ||
267 | static void __exit microcode_dev_exit(void) | 267 | static void __exit microcode_dev_exit(void) |
268 | { | 268 | { |
269 | misc_deregister(&microcode_dev); | 269 | misc_deregister(&microcode_dev); |
270 | } | 270 | } |
271 | 271 | ||
272 | MODULE_ALIAS_MISCDEV(MICROCODE_MINOR); | 272 | MODULE_ALIAS_MISCDEV(MICROCODE_MINOR); |
273 | MODULE_ALIAS("devname:cpu/microcode"); | 273 | MODULE_ALIAS("devname:cpu/microcode"); |
274 | #else | 274 | #else |
275 | #define microcode_dev_init() 0 | 275 | #define microcode_dev_init() 0 |
276 | #define microcode_dev_exit() do { } while (0) | 276 | #define microcode_dev_exit() do { } while (0) |
277 | #endif | 277 | #endif |
278 | 278 | ||
279 | /* fake device for request_firmware */ | 279 | /* fake device for request_firmware */ |
280 | static struct platform_device *microcode_pdev; | 280 | static struct platform_device *microcode_pdev; |
281 | 281 | ||
282 | static int reload_for_cpu(int cpu) | 282 | static int reload_for_cpu(int cpu) |
283 | { | 283 | { |
284 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; | 284 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; |
285 | enum ucode_state ustate; | 285 | enum ucode_state ustate; |
286 | int err = 0; | 286 | int err = 0; |
287 | 287 | ||
288 | if (!uci->valid) | 288 | if (!uci->valid) |
289 | return err; | 289 | return err; |
290 | 290 | ||
291 | ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, true); | 291 | ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, true); |
292 | if (ustate == UCODE_OK) | 292 | if (ustate == UCODE_OK) |
293 | apply_microcode_on_target(cpu); | 293 | apply_microcode_on_target(cpu); |
294 | else | 294 | else |
295 | if (ustate == UCODE_ERROR) | 295 | if (ustate == UCODE_ERROR) |
296 | err = -EINVAL; | 296 | err = -EINVAL; |
297 | return err; | 297 | return err; |
298 | } | 298 | } |
299 | 299 | ||
300 | static ssize_t reload_store(struct device *dev, | 300 | static ssize_t reload_store(struct device *dev, |
301 | struct device_attribute *attr, | 301 | struct device_attribute *attr, |
302 | const char *buf, size_t size) | 302 | const char *buf, size_t size) |
303 | { | 303 | { |
304 | unsigned long val; | 304 | unsigned long val; |
305 | int cpu; | 305 | int cpu; |
306 | ssize_t ret = 0, tmp_ret; | 306 | ssize_t ret = 0, tmp_ret; |
307 | 307 | ||
308 | ret = kstrtoul(buf, 0, &val); | 308 | ret = kstrtoul(buf, 0, &val); |
309 | if (ret) | 309 | if (ret) |
310 | return ret; | 310 | return ret; |
311 | 311 | ||
312 | if (val != 1) | 312 | if (val != 1) |
313 | return size; | 313 | return size; |
314 | 314 | ||
315 | get_online_cpus(); | 315 | get_online_cpus(); |
316 | mutex_lock(&microcode_mutex); | 316 | mutex_lock(&microcode_mutex); |
317 | for_each_online_cpu(cpu) { | 317 | for_each_online_cpu(cpu) { |
318 | tmp_ret = reload_for_cpu(cpu); | 318 | tmp_ret = reload_for_cpu(cpu); |
319 | if (tmp_ret != 0) | 319 | if (tmp_ret != 0) |
320 | pr_warn("Error reloading microcode on CPU %d\n", cpu); | 320 | pr_warn("Error reloading microcode on CPU %d\n", cpu); |
321 | 321 | ||
322 | /* save retval of the first encountered reload error */ | 322 | /* save retval of the first encountered reload error */ |
323 | if (!ret) | 323 | if (!ret) |
324 | ret = tmp_ret; | 324 | ret = tmp_ret; |
325 | } | 325 | } |
326 | if (!ret) | 326 | if (!ret) |
327 | perf_check_microcode(); | 327 | perf_check_microcode(); |
328 | mutex_unlock(&microcode_mutex); | 328 | mutex_unlock(&microcode_mutex); |
329 | put_online_cpus(); | 329 | put_online_cpus(); |
330 | 330 | ||
331 | if (!ret) | 331 | if (!ret) |
332 | ret = size; | 332 | ret = size; |
333 | 333 | ||
334 | return ret; | 334 | return ret; |
335 | } | 335 | } |
336 | 336 | ||
337 | static ssize_t version_show(struct device *dev, | 337 | static ssize_t version_show(struct device *dev, |
338 | struct device_attribute *attr, char *buf) | 338 | struct device_attribute *attr, char *buf) |
339 | { | 339 | { |
340 | struct ucode_cpu_info *uci = ucode_cpu_info + dev->id; | 340 | struct ucode_cpu_info *uci = ucode_cpu_info + dev->id; |
341 | 341 | ||
342 | return sprintf(buf, "0x%x\n", uci->cpu_sig.rev); | 342 | return sprintf(buf, "0x%x\n", uci->cpu_sig.rev); |
343 | } | 343 | } |
344 | 344 | ||
345 | static ssize_t pf_show(struct device *dev, | 345 | static ssize_t pf_show(struct device *dev, |
346 | struct device_attribute *attr, char *buf) | 346 | struct device_attribute *attr, char *buf) |
347 | { | 347 | { |
348 | struct ucode_cpu_info *uci = ucode_cpu_info + dev->id; | 348 | struct ucode_cpu_info *uci = ucode_cpu_info + dev->id; |
349 | 349 | ||
350 | return sprintf(buf, "0x%x\n", uci->cpu_sig.pf); | 350 | return sprintf(buf, "0x%x\n", uci->cpu_sig.pf); |
351 | } | 351 | } |
352 | 352 | ||
353 | static DEVICE_ATTR(reload, 0200, NULL, reload_store); | 353 | static DEVICE_ATTR(reload, 0200, NULL, reload_store); |
354 | static DEVICE_ATTR(version, 0400, version_show, NULL); | 354 | static DEVICE_ATTR(version, 0400, version_show, NULL); |
355 | static DEVICE_ATTR(processor_flags, 0400, pf_show, NULL); | 355 | static DEVICE_ATTR(processor_flags, 0400, pf_show, NULL); |
356 | 356 | ||
357 | static struct attribute *mc_default_attrs[] = { | 357 | static struct attribute *mc_default_attrs[] = { |
358 | &dev_attr_version.attr, | 358 | &dev_attr_version.attr, |
359 | &dev_attr_processor_flags.attr, | 359 | &dev_attr_processor_flags.attr, |
360 | NULL | 360 | NULL |
361 | }; | 361 | }; |
362 | 362 | ||
363 | static struct attribute_group mc_attr_group = { | 363 | static struct attribute_group mc_attr_group = { |
364 | .attrs = mc_default_attrs, | 364 | .attrs = mc_default_attrs, |
365 | .name = "microcode", | 365 | .name = "microcode", |
366 | }; | 366 | }; |
367 | 367 | ||
368 | static void microcode_fini_cpu(int cpu) | 368 | static void microcode_fini_cpu(int cpu) |
369 | { | 369 | { |
370 | microcode_ops->microcode_fini_cpu(cpu); | 370 | microcode_ops->microcode_fini_cpu(cpu); |
371 | } | 371 | } |
372 | 372 | ||
373 | static enum ucode_state microcode_resume_cpu(int cpu) | 373 | static enum ucode_state microcode_resume_cpu(int cpu) |
374 | { | 374 | { |
375 | pr_debug("CPU%d updated upon resume\n", cpu); | 375 | pr_debug("CPU%d updated upon resume\n", cpu); |
376 | 376 | ||
377 | if (apply_microcode_on_target(cpu)) | 377 | if (apply_microcode_on_target(cpu)) |
378 | return UCODE_ERROR; | 378 | return UCODE_ERROR; |
379 | 379 | ||
380 | return UCODE_OK; | 380 | return UCODE_OK; |
381 | } | 381 | } |
382 | 382 | ||
383 | static enum ucode_state microcode_init_cpu(int cpu, bool refresh_fw) | 383 | static enum ucode_state microcode_init_cpu(int cpu, bool refresh_fw) |
384 | { | 384 | { |
385 | enum ucode_state ustate; | 385 | enum ucode_state ustate; |
386 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; | 386 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; |
387 | 387 | ||
388 | if (uci && uci->valid) | 388 | if (uci && uci->valid) |
389 | return UCODE_OK; | 389 | return UCODE_OK; |
390 | 390 | ||
391 | if (collect_cpu_info(cpu)) | 391 | if (collect_cpu_info(cpu)) |
392 | return UCODE_ERROR; | 392 | return UCODE_ERROR; |
393 | 393 | ||
394 | /* --dimm. Trigger a delayed update? */ | 394 | /* --dimm. Trigger a delayed update? */ |
395 | if (system_state != SYSTEM_RUNNING) | 395 | if (system_state != SYSTEM_RUNNING) |
396 | return UCODE_NFOUND; | 396 | return UCODE_NFOUND; |
397 | 397 | ||
398 | ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, | 398 | ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, |
399 | refresh_fw); | 399 | refresh_fw); |
400 | 400 | ||
401 | if (ustate == UCODE_OK) { | 401 | if (ustate == UCODE_OK) { |
402 | pr_debug("CPU%d updated upon init\n", cpu); | 402 | pr_debug("CPU%d updated upon init\n", cpu); |
403 | apply_microcode_on_target(cpu); | 403 | apply_microcode_on_target(cpu); |
404 | } | 404 | } |
405 | 405 | ||
406 | return ustate; | 406 | return ustate; |
407 | } | 407 | } |
408 | 408 | ||
409 | static enum ucode_state microcode_update_cpu(int cpu) | 409 | static enum ucode_state microcode_update_cpu(int cpu) |
410 | { | 410 | { |
411 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; | 411 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; |
412 | 412 | ||
413 | if (uci->valid) | 413 | if (uci->valid) |
414 | return microcode_resume_cpu(cpu); | 414 | return microcode_resume_cpu(cpu); |
415 | 415 | ||
416 | return microcode_init_cpu(cpu, false); | 416 | return microcode_init_cpu(cpu, false); |
417 | } | 417 | } |
418 | 418 | ||
419 | static int mc_device_add(struct device *dev, struct subsys_interface *sif) | 419 | static int mc_device_add(struct device *dev, struct subsys_interface *sif) |
420 | { | 420 | { |
421 | int err, cpu = dev->id; | 421 | int err, cpu = dev->id; |
422 | 422 | ||
423 | if (!cpu_online(cpu)) | 423 | if (!cpu_online(cpu)) |
424 | return 0; | 424 | return 0; |
425 | 425 | ||
426 | pr_debug("CPU%d added\n", cpu); | 426 | pr_debug("CPU%d added\n", cpu); |
427 | 427 | ||
428 | err = sysfs_create_group(&dev->kobj, &mc_attr_group); | 428 | err = sysfs_create_group(&dev->kobj, &mc_attr_group); |
429 | if (err) | 429 | if (err) |
430 | return err; | 430 | return err; |
431 | 431 | ||
432 | if (microcode_init_cpu(cpu, true) == UCODE_ERROR) | 432 | if (microcode_init_cpu(cpu, true) == UCODE_ERROR) |
433 | return -EINVAL; | 433 | return -EINVAL; |
434 | 434 | ||
435 | return err; | 435 | return err; |
436 | } | 436 | } |
437 | 437 | ||
438 | static int mc_device_remove(struct device *dev, struct subsys_interface *sif) | 438 | static int mc_device_remove(struct device *dev, struct subsys_interface *sif) |
439 | { | 439 | { |
440 | int cpu = dev->id; | 440 | int cpu = dev->id; |
441 | 441 | ||
442 | if (!cpu_online(cpu)) | 442 | if (!cpu_online(cpu)) |
443 | return 0; | 443 | return 0; |
444 | 444 | ||
445 | pr_debug("CPU%d removed\n", cpu); | 445 | pr_debug("CPU%d removed\n", cpu); |
446 | microcode_fini_cpu(cpu); | 446 | microcode_fini_cpu(cpu); |
447 | sysfs_remove_group(&dev->kobj, &mc_attr_group); | 447 | sysfs_remove_group(&dev->kobj, &mc_attr_group); |
448 | return 0; | 448 | return 0; |
449 | } | 449 | } |
450 | 450 | ||
451 | static struct subsys_interface mc_cpu_interface = { | 451 | static struct subsys_interface mc_cpu_interface = { |
452 | .name = "microcode", | 452 | .name = "microcode", |
453 | .subsys = &cpu_subsys, | 453 | .subsys = &cpu_subsys, |
454 | .add_dev = mc_device_add, | 454 | .add_dev = mc_device_add, |
455 | .remove_dev = mc_device_remove, | 455 | .remove_dev = mc_device_remove, |
456 | }; | 456 | }; |
457 | 457 | ||
458 | /** | 458 | /** |
459 | * mc_bp_resume - Update boot CPU microcode during resume. | 459 | * mc_bp_resume - Update boot CPU microcode during resume. |
460 | */ | 460 | */ |
461 | static void mc_bp_resume(void) | 461 | static void mc_bp_resume(void) |
462 | { | 462 | { |
463 | int cpu = smp_processor_id(); | 463 | int cpu = smp_processor_id(); |
464 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; | 464 | struct ucode_cpu_info *uci = ucode_cpu_info + cpu; |
465 | 465 | ||
466 | if (uci->valid && uci->mc) | 466 | if (uci->valid && uci->mc) |
467 | microcode_ops->apply_microcode(cpu); | 467 | microcode_ops->apply_microcode(cpu); |
468 | else if (!uci->mc) | 468 | else if (!uci->mc) |
469 | reload_early_microcode(); | 469 | reload_early_microcode(); |
470 | } | 470 | } |
471 | 471 | ||
472 | static struct syscore_ops mc_syscore_ops = { | 472 | static struct syscore_ops mc_syscore_ops = { |
473 | .resume = mc_bp_resume, | 473 | .resume = mc_bp_resume, |
474 | }; | 474 | }; |
475 | 475 | ||
476 | static int | 476 | static int |
477 | mc_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu) | 477 | mc_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu) |
478 | { | 478 | { |
479 | unsigned int cpu = (unsigned long)hcpu; | 479 | unsigned int cpu = (unsigned long)hcpu; |
480 | struct device *dev; | 480 | struct device *dev; |
481 | 481 | ||
482 | dev = get_cpu_device(cpu); | 482 | dev = get_cpu_device(cpu); |
483 | 483 | ||
484 | switch (action & ~CPU_TASKS_FROZEN) { | 484 | switch (action & ~CPU_TASKS_FROZEN) { |
485 | case CPU_ONLINE: | 485 | case CPU_ONLINE: |
486 | microcode_update_cpu(cpu); | 486 | microcode_update_cpu(cpu); |
487 | pr_debug("CPU%d added\n", cpu); | 487 | pr_debug("CPU%d added\n", cpu); |
488 | /* | 488 | /* |
489 | * "break" is missing on purpose here because we want to fall | 489 | * "break" is missing on purpose here because we want to fall |
490 | * through in order to create the sysfs group. | 490 | * through in order to create the sysfs group. |
491 | */ | 491 | */ |
492 | 492 | ||
493 | case CPU_DOWN_FAILED: | 493 | case CPU_DOWN_FAILED: |
494 | if (sysfs_create_group(&dev->kobj, &mc_attr_group)) | 494 | if (sysfs_create_group(&dev->kobj, &mc_attr_group)) |
495 | pr_err("Failed to create group for CPU%d\n", cpu); | 495 | pr_err("Failed to create group for CPU%d\n", cpu); |
496 | break; | 496 | break; |
497 | 497 | ||
498 | case CPU_DOWN_PREPARE: | 498 | case CPU_DOWN_PREPARE: |
499 | /* Suspend is in progress, only remove the interface */ | 499 | /* Suspend is in progress, only remove the interface */ |
500 | sysfs_remove_group(&dev->kobj, &mc_attr_group); | 500 | sysfs_remove_group(&dev->kobj, &mc_attr_group); |
501 | pr_debug("CPU%d removed\n", cpu); | 501 | pr_debug("CPU%d removed\n", cpu); |
502 | break; | 502 | break; |
503 | 503 | ||
504 | /* | 504 | /* |
505 | * case CPU_DEAD: | 505 | * case CPU_DEAD: |
506 | * | 506 | * |
507 | * When a CPU goes offline, don't free up or invalidate the copy of | 507 | * When a CPU goes offline, don't free up or invalidate the copy of |
508 | * the microcode in kernel memory, so that we can reuse it when the | 508 | * the microcode in kernel memory, so that we can reuse it when the |
509 | * CPU comes back online without unnecessarily requesting the userspace | 509 | * CPU comes back online without unnecessarily requesting the userspace |
510 | * for it again. | 510 | * for it again. |
511 | */ | 511 | */ |
512 | } | 512 | } |
513 | 513 | ||
514 | /* The CPU refused to come up during a system resume */ | 514 | /* The CPU refused to come up during a system resume */ |
515 | if (action == CPU_UP_CANCELED_FROZEN) | 515 | if (action == CPU_UP_CANCELED_FROZEN) |
516 | microcode_fini_cpu(cpu); | 516 | microcode_fini_cpu(cpu); |
517 | 517 | ||
518 | return NOTIFY_OK; | 518 | return NOTIFY_OK; |
519 | } | 519 | } |
520 | 520 | ||
521 | static struct notifier_block __refdata mc_cpu_notifier = { | 521 | static struct notifier_block __refdata mc_cpu_notifier = { |
522 | .notifier_call = mc_cpu_callback, | 522 | .notifier_call = mc_cpu_callback, |
523 | }; | 523 | }; |
524 | 524 | ||
525 | #ifdef MODULE | 525 | #ifdef MODULE |
526 | /* Autoload on Intel and AMD systems */ | 526 | /* Autoload on Intel and AMD systems */ |
527 | static const struct x86_cpu_id __initconst microcode_id[] = { | 527 | static const struct x86_cpu_id __initconst microcode_id[] = { |
528 | #ifdef CONFIG_MICROCODE_INTEL | 528 | #ifdef CONFIG_MICROCODE_INTEL |
529 | { X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, }, | 529 | { X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, }, |
530 | #endif | 530 | #endif |
531 | #ifdef CONFIG_MICROCODE_AMD | 531 | #ifdef CONFIG_MICROCODE_AMD |
532 | { X86_VENDOR_AMD, X86_FAMILY_ANY, X86_MODEL_ANY, }, | 532 | { X86_VENDOR_AMD, X86_FAMILY_ANY, X86_MODEL_ANY, }, |
533 | #endif | 533 | #endif |
534 | {} | 534 | {} |
535 | }; | 535 | }; |
536 | MODULE_DEVICE_TABLE(x86cpu, microcode_id); | 536 | MODULE_DEVICE_TABLE(x86cpu, microcode_id); |
537 | #endif | 537 | #endif |
538 | 538 | ||
539 | static struct attribute *cpu_root_microcode_attrs[] = { | 539 | static struct attribute *cpu_root_microcode_attrs[] = { |
540 | &dev_attr_reload.attr, | 540 | &dev_attr_reload.attr, |
541 | NULL | 541 | NULL |
542 | }; | 542 | }; |
543 | 543 | ||
544 | static struct attribute_group cpu_root_microcode_group = { | 544 | static struct attribute_group cpu_root_microcode_group = { |
545 | .name = "microcode", | 545 | .name = "microcode", |
546 | .attrs = cpu_root_microcode_attrs, | 546 | .attrs = cpu_root_microcode_attrs, |
547 | }; | 547 | }; |
548 | 548 | ||
549 | static int __init microcode_init(void) | 549 | static int __init microcode_init(void) |
550 | { | 550 | { |
551 | struct cpuinfo_x86 *c = &cpu_data(0); | 551 | struct cpuinfo_x86 *c = &cpu_data(0); |
552 | int error; | 552 | int error; |
553 | 553 | ||
554 | if (paravirt_enabled() || dis_ucode_ldr) | 554 | if (paravirt_enabled() || dis_ucode_ldr) |
555 | return 0; | 555 | return -EINVAL; |
556 | 556 | ||
557 | if (c->x86_vendor == X86_VENDOR_INTEL) | 557 | if (c->x86_vendor == X86_VENDOR_INTEL) |
558 | microcode_ops = init_intel_microcode(); | 558 | microcode_ops = init_intel_microcode(); |
559 | else if (c->x86_vendor == X86_VENDOR_AMD) | 559 | else if (c->x86_vendor == X86_VENDOR_AMD) |
560 | microcode_ops = init_amd_microcode(); | 560 | microcode_ops = init_amd_microcode(); |
561 | else | 561 | else |
562 | pr_err("no support for this CPU vendor\n"); | 562 | pr_err("no support for this CPU vendor\n"); |
563 | 563 | ||
564 | if (!microcode_ops) | 564 | if (!microcode_ops) |
565 | return -ENODEV; | 565 | return -ENODEV; |
566 | 566 | ||
567 | microcode_pdev = platform_device_register_simple("microcode", -1, | 567 | microcode_pdev = platform_device_register_simple("microcode", -1, |
568 | NULL, 0); | 568 | NULL, 0); |
569 | if (IS_ERR(microcode_pdev)) | 569 | if (IS_ERR(microcode_pdev)) |
570 | return PTR_ERR(microcode_pdev); | 570 | return PTR_ERR(microcode_pdev); |
571 | 571 | ||
572 | get_online_cpus(); | 572 | get_online_cpus(); |
573 | mutex_lock(&microcode_mutex); | 573 | mutex_lock(&microcode_mutex); |
574 | 574 | ||
575 | error = subsys_interface_register(&mc_cpu_interface); | 575 | error = subsys_interface_register(&mc_cpu_interface); |
576 | if (!error) | 576 | if (!error) |
577 | perf_check_microcode(); | 577 | perf_check_microcode(); |
578 | mutex_unlock(&microcode_mutex); | 578 | mutex_unlock(&microcode_mutex); |
579 | put_online_cpus(); | 579 | put_online_cpus(); |
580 | 580 | ||
581 | if (error) | 581 | if (error) |
582 | goto out_pdev; | 582 | goto out_pdev; |
583 | 583 | ||
584 | error = sysfs_create_group(&cpu_subsys.dev_root->kobj, | 584 | error = sysfs_create_group(&cpu_subsys.dev_root->kobj, |
585 | &cpu_root_microcode_group); | 585 | &cpu_root_microcode_group); |
586 | 586 | ||
587 | if (error) { | 587 | if (error) { |
588 | pr_err("Error creating microcode group!\n"); | 588 | pr_err("Error creating microcode group!\n"); |
589 | goto out_driver; | 589 | goto out_driver; |
590 | } | 590 | } |
591 | 591 | ||
592 | error = microcode_dev_init(); | 592 | error = microcode_dev_init(); |
593 | if (error) | 593 | if (error) |
594 | goto out_ucode_group; | 594 | goto out_ucode_group; |
595 | 595 | ||
596 | register_syscore_ops(&mc_syscore_ops); | 596 | register_syscore_ops(&mc_syscore_ops); |
597 | register_hotcpu_notifier(&mc_cpu_notifier); | 597 | register_hotcpu_notifier(&mc_cpu_notifier); |
598 | 598 | ||
599 | pr_info("Microcode Update Driver: v" MICROCODE_VERSION | 599 | pr_info("Microcode Update Driver: v" MICROCODE_VERSION |
600 | " <tigran@aivazian.fsnet.co.uk>, Peter Oruba\n"); | 600 | " <tigran@aivazian.fsnet.co.uk>, Peter Oruba\n"); |
601 | 601 | ||
602 | return 0; | 602 | return 0; |
603 | 603 | ||
604 | out_ucode_group: | 604 | out_ucode_group: |
605 | sysfs_remove_group(&cpu_subsys.dev_root->kobj, | 605 | sysfs_remove_group(&cpu_subsys.dev_root->kobj, |
606 | &cpu_root_microcode_group); | 606 | &cpu_root_microcode_group); |
607 | 607 | ||
608 | out_driver: | 608 | out_driver: |
609 | get_online_cpus(); | 609 | get_online_cpus(); |
610 | mutex_lock(&microcode_mutex); | 610 | mutex_lock(&microcode_mutex); |
611 | 611 | ||
612 | subsys_interface_unregister(&mc_cpu_interface); | 612 | subsys_interface_unregister(&mc_cpu_interface); |
613 | 613 | ||
614 | mutex_unlock(&microcode_mutex); | 614 | mutex_unlock(&microcode_mutex); |
615 | put_online_cpus(); | 615 | put_online_cpus(); |
616 | 616 | ||
617 | out_pdev: | 617 | out_pdev: |
618 | platform_device_unregister(microcode_pdev); | 618 | platform_device_unregister(microcode_pdev); |
619 | return error; | 619 | return error; |
620 | 620 | ||
621 | } | 621 | } |
622 | module_init(microcode_init); | 622 | module_init(microcode_init); |
623 | 623 | ||
624 | static void __exit microcode_exit(void) | 624 | static void __exit microcode_exit(void) |
625 | { | 625 | { |
626 | struct cpuinfo_x86 *c = &cpu_data(0); | 626 | struct cpuinfo_x86 *c = &cpu_data(0); |
627 | 627 | ||
628 | microcode_dev_exit(); | 628 | microcode_dev_exit(); |
629 | 629 | ||
630 | unregister_hotcpu_notifier(&mc_cpu_notifier); | 630 | unregister_hotcpu_notifier(&mc_cpu_notifier); |
631 | unregister_syscore_ops(&mc_syscore_ops); | 631 | unregister_syscore_ops(&mc_syscore_ops); |
632 | 632 | ||
633 | sysfs_remove_group(&cpu_subsys.dev_root->kobj, | 633 | sysfs_remove_group(&cpu_subsys.dev_root->kobj, |
634 | &cpu_root_microcode_group); | 634 | &cpu_root_microcode_group); |
635 | 635 | ||
636 | get_online_cpus(); | 636 | get_online_cpus(); |
637 | mutex_lock(&microcode_mutex); | 637 | mutex_lock(&microcode_mutex); |
638 | 638 | ||
639 | subsys_interface_unregister(&mc_cpu_interface); | 639 | subsys_interface_unregister(&mc_cpu_interface); |
640 | 640 | ||
641 | mutex_unlock(&microcode_mutex); | 641 | mutex_unlock(&microcode_mutex); |
642 | put_online_cpus(); | 642 | put_online_cpus(); |
643 | 643 | ||
644 | platform_device_unregister(microcode_pdev); | 644 | platform_device_unregister(microcode_pdev); |
645 | 645 | ||
646 | microcode_ops = NULL; | 646 | microcode_ops = NULL; |
647 | 647 | ||
648 | if (c->x86_vendor == X86_VENDOR_AMD) | 648 | if (c->x86_vendor == X86_VENDOR_AMD) |
649 | exit_amd_microcode(); | 649 | exit_amd_microcode(); |
650 | 650 | ||
651 | pr_info("Microcode Update Driver: v" MICROCODE_VERSION " removed.\n"); | 651 | pr_info("Microcode Update Driver: v" MICROCODE_VERSION " removed.\n"); |
652 | } | 652 | } |
653 | module_exit(microcode_exit); | 653 | module_exit(microcode_exit); |
654 | 654 |
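Why the one-line change at core.c line 555 matters: returning 0 with the loader disabled lets module_init() report success even though microcode_ops, the platform device, the sysfs groups, and the notifiers were never set up, so later paths that assume a successful init (module unload, the sysfs interface) can dereference unregistered state and oops. Returning -EINVAL instead makes the load fail cleanly. A minimal sketch of the pattern (illustrative only, not the driver's exact code; the example_* names are hypothetical):

static int __init example_init(void)
{
	if (dis_ucode_ldr)		/* loader disabled on the command line */
		return -EINVAL;		/* fail init: nothing was registered */
	/* ... register the platform device, sysfs groups, notifiers ... */
	return 0;
}

static void __exit example_exit(void)
{
	/*
	 * Runs only if example_init() succeeded, so every unregister here
	 * has a matching register above and no state is touched before it
	 * exists.
	 */
	/* ... unregister in reverse order ... */
}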
kernel/time/hrtimer.c
1 | /* | 1 | /* |
2 | * linux/kernel/hrtimer.c | 2 | * linux/kernel/hrtimer.c |
3 | * | 3 | * |
4 | * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> | 4 | * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> |
5 | * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar | 5 | * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar |
6 | * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner | 6 | * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner |
7 | * | 7 | * |
8 | * High-resolution kernel timers | 8 | * High-resolution kernel timers |
9 | * | 9 | * |
10 | * In contrast to the low-resolution timeout API implemented in | 10 | * In contrast to the low-resolution timeout API implemented in |
11 | * kernel/timer.c, hrtimers provide finer resolution and accuracy | 11 | * kernel/timer.c, hrtimers provide finer resolution and accuracy |
12 | * depending on system configuration and capabilities. | 12 | * depending on system configuration and capabilities. |
13 | * | 13 | * |
14 | * These timers are currently used for: | 14 | * These timers are currently used for: |
15 | * - itimers | 15 | * - itimers |
16 | * - POSIX timers | 16 | * - POSIX timers |
17 | * - nanosleep | 17 | * - nanosleep |
18 | * - precise in-kernel timing | 18 | * - precise in-kernel timing |
19 | * | 19 | * |
20 | * Started by: Thomas Gleixner and Ingo Molnar | 20 | * Started by: Thomas Gleixner and Ingo Molnar |
21 | * | 21 | * |
22 | * Credits: | 22 | * Credits: |
23 | * based on kernel/timer.c | 23 | * based on kernel/timer.c |
24 | * | 24 | * |
25 | * Help, testing, suggestions, bugfixes, improvements were | 25 | * Help, testing, suggestions, bugfixes, improvements were |
26 | * provided by: | 26 | * provided by: |
27 | * | 27 | * |
28 | * George Anzinger, Andrew Morton, Steven Rostedt, Roman Zippel | 28 | * George Anzinger, Andrew Morton, Steven Rostedt, Roman Zippel |
29 | * et. al. | 29 | * et. al. |
30 | * | 30 | * |
31 | * For licencing details see kernel-base/COPYING | 31 | * For licencing details see kernel-base/COPYING |
32 | */ | 32 | */ |
33 | 33 | ||
34 | #include <linux/cpu.h> | 34 | #include <linux/cpu.h> |
35 | #include <linux/export.h> | 35 | #include <linux/export.h> |
36 | #include <linux/percpu.h> | 36 | #include <linux/percpu.h> |
37 | #include <linux/hrtimer.h> | 37 | #include <linux/hrtimer.h> |
38 | #include <linux/notifier.h> | 38 | #include <linux/notifier.h> |
39 | #include <linux/syscalls.h> | 39 | #include <linux/syscalls.h> |
40 | #include <linux/kallsyms.h> | 40 | #include <linux/kallsyms.h> |
41 | #include <linux/interrupt.h> | 41 | #include <linux/interrupt.h> |
42 | #include <linux/tick.h> | 42 | #include <linux/tick.h> |
43 | #include <linux/seq_file.h> | 43 | #include <linux/seq_file.h> |
44 | #include <linux/err.h> | 44 | #include <linux/err.h> |
45 | #include <linux/debugobjects.h> | 45 | #include <linux/debugobjects.h> |
46 | #include <linux/sched.h> | 46 | #include <linux/sched.h> |
47 | #include <linux/sched/sysctl.h> | 47 | #include <linux/sched/sysctl.h> |
48 | #include <linux/sched/rt.h> | 48 | #include <linux/sched/rt.h> |
49 | #include <linux/sched/deadline.h> | 49 | #include <linux/sched/deadline.h> |
50 | #include <linux/timer.h> | 50 | #include <linux/timer.h> |
51 | #include <linux/freezer.h> | 51 | #include <linux/freezer.h> |
52 | 52 | ||
53 | #include <asm/uaccess.h> | 53 | #include <asm/uaccess.h> |
54 | 54 | ||
55 | #include <trace/events/timer.h> | 55 | #include <trace/events/timer.h> |
56 | 56 | ||
57 | #include "timekeeping.h" | 57 | #include "timekeeping.h" |
58 | 58 | ||
59 | /* | 59 | /* |
60 | * The timer bases: | 60 | * The timer bases: |
61 | * | 61 | * |
62 | * There are more clockids than hrtimer bases. Thus, we index | 62 | * There are more clockids than hrtimer bases. Thus, we index |
63 | * into the timer bases by the hrtimer_base_type enum. When trying | 63 | * into the timer bases by the hrtimer_base_type enum. When trying |
64 | * to reach a base using a clockid, hrtimer_clockid_to_base() | 64 | * to reach a base using a clockid, hrtimer_clockid_to_base() |
65 | * is used to convert from clockid to the proper hrtimer_base_type. | 65 | * is used to convert from clockid to the proper hrtimer_base_type. |
66 | */ | 66 | */ |
67 | DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) = | 67 | DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) = |
68 | { | 68 | { |
69 | 69 | ||
70 | .lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock), | 70 | .lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock), |
71 | .clock_base = | 71 | .clock_base = |
72 | { | 72 | { |
73 | { | 73 | { |
74 | .index = HRTIMER_BASE_MONOTONIC, | 74 | .index = HRTIMER_BASE_MONOTONIC, |
75 | .clockid = CLOCK_MONOTONIC, | 75 | .clockid = CLOCK_MONOTONIC, |
76 | .get_time = &ktime_get, | 76 | .get_time = &ktime_get, |
77 | .resolution = KTIME_LOW_RES, | 77 | .resolution = KTIME_LOW_RES, |
78 | }, | 78 | }, |
79 | { | 79 | { |
80 | .index = HRTIMER_BASE_REALTIME, | 80 | .index = HRTIMER_BASE_REALTIME, |
81 | .clockid = CLOCK_REALTIME, | 81 | .clockid = CLOCK_REALTIME, |
82 | .get_time = &ktime_get_real, | 82 | .get_time = &ktime_get_real, |
83 | .resolution = KTIME_LOW_RES, | 83 | .resolution = KTIME_LOW_RES, |
84 | }, | 84 | }, |
85 | { | 85 | { |
86 | .index = HRTIMER_BASE_BOOTTIME, | 86 | .index = HRTIMER_BASE_BOOTTIME, |
87 | .clockid = CLOCK_BOOTTIME, | 87 | .clockid = CLOCK_BOOTTIME, |
88 | .get_time = &ktime_get_boottime, | 88 | .get_time = &ktime_get_boottime, |
89 | .resolution = KTIME_LOW_RES, | 89 | .resolution = KTIME_LOW_RES, |
90 | }, | 90 | }, |
91 | { | 91 | { |
92 | .index = HRTIMER_BASE_TAI, | 92 | .index = HRTIMER_BASE_TAI, |
93 | .clockid = CLOCK_TAI, | 93 | .clockid = CLOCK_TAI, |
94 | .get_time = &ktime_get_clocktai, | 94 | .get_time = &ktime_get_clocktai, |
95 | .resolution = KTIME_LOW_RES, | 95 | .resolution = KTIME_LOW_RES, |
96 | }, | 96 | }, |
97 | } | 97 | } |
98 | }; | 98 | }; |
99 | 99 | ||
100 | static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = { | 100 | static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = { |
101 | [CLOCK_REALTIME] = HRTIMER_BASE_REALTIME, | 101 | [CLOCK_REALTIME] = HRTIMER_BASE_REALTIME, |
102 | [CLOCK_MONOTONIC] = HRTIMER_BASE_MONOTONIC, | 102 | [CLOCK_MONOTONIC] = HRTIMER_BASE_MONOTONIC, |
103 | [CLOCK_BOOTTIME] = HRTIMER_BASE_BOOTTIME, | 103 | [CLOCK_BOOTTIME] = HRTIMER_BASE_BOOTTIME, |
104 | [CLOCK_TAI] = HRTIMER_BASE_TAI, | 104 | [CLOCK_TAI] = HRTIMER_BASE_TAI, |
105 | }; | 105 | }; |
106 | 106 | ||
107 | static inline int hrtimer_clockid_to_base(clockid_t clock_id) | 107 | static inline int hrtimer_clockid_to_base(clockid_t clock_id) |
108 | { | 108 | { |
109 | return hrtimer_clock_to_base_table[clock_id]; | 109 | return hrtimer_clock_to_base_table[clock_id]; |
110 | } | 110 | } |
111 | 111 | ||
112 | 112 | ||
113 | /* | 113 | /* |
114 | * Get the coarse grained time at the softirq based on xtime and | 114 | * Get the coarse grained time at the softirq based on xtime and |
115 | * wall_to_monotonic. | 115 | * wall_to_monotonic. |
116 | */ | 116 | */ |
117 | static void hrtimer_get_softirq_time(struct hrtimer_cpu_base *base) | 117 | static void hrtimer_get_softirq_time(struct hrtimer_cpu_base *base) |
118 | { | 118 | { |
119 | ktime_t xtim, mono, boot, tai; | 119 | ktime_t xtim, mono, boot, tai; |
120 | ktime_t off_real, off_boot, off_tai; | 120 | ktime_t off_real, off_boot, off_tai; |
121 | 121 | ||
122 | mono = ktime_get_update_offsets_tick(&off_real, &off_boot, &off_tai); | 122 | mono = ktime_get_update_offsets_tick(&off_real, &off_boot, &off_tai); |
123 | boot = ktime_add(mono, off_boot); | 123 | boot = ktime_add(mono, off_boot); |
124 | xtim = ktime_add(mono, off_real); | 124 | xtim = ktime_add(mono, off_real); |
125 | tai = ktime_add(xtim, off_tai); | 125 | tai = ktime_add(mono, off_tai); |
126 | 126 | ||
127 | base->clock_base[HRTIMER_BASE_REALTIME].softirq_time = xtim; | 127 | base->clock_base[HRTIMER_BASE_REALTIME].softirq_time = xtim; |
128 | base->clock_base[HRTIMER_BASE_MONOTONIC].softirq_time = mono; | 128 | base->clock_base[HRTIMER_BASE_MONOTONIC].softirq_time = mono; |
129 | base->clock_base[HRTIMER_BASE_BOOTTIME].softirq_time = boot; | 129 | base->clock_base[HRTIMER_BASE_BOOTTIME].softirq_time = boot; |
130 | base->clock_base[HRTIMER_BASE_TAI].softirq_time = tai; | 130 | base->clock_base[HRTIMER_BASE_TAI].softirq_time = tai; |
131 | } | 131 | } |
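The fix at hrtimer.c line 125 is visible above: the CLOCK_TAI base must be derived from mono, not xtim, because xtim is already mono plus the realtime offset, so the old code applied off_real twice and the TAI base ran ahead of the true TAI time. On systems without high-resolution timers, CLOCK_TAI hrtimers compared against that inflated base appeared to expire early. A small stand-alone C sketch of the arithmetic, with hypothetical offset values:

#include <stdio.h>

int main(void)
{
	long long mono     = 1000;	/* monotonic "now" in ns (hypothetical) */
	long long off_real = 500;	/* CLOCK_REALTIME minus CLOCK_MONOTONIC */
	long long off_tai  = 535;	/* CLOCK_TAI minus CLOCK_MONOTONIC */

	long long xtim      = mono + off_real;	/* realtime softirq base */
	long long tai_buggy = xtim + off_tai;	/* old code: off_real counted twice */
	long long tai_fixed = mono + off_tai;	/* fixed code: correct TAI base */

	/* The buggy base is ahead by off_real, which is why CLOCK_TAI
	 * timers expired early on non-high-res systems. */
	printf("buggy base ahead by %lld ns\n", tai_buggy - tai_fixed);
	return 0;
}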
132 | 132 | ||
133 | /* | 133 | /* |
134 | * Functions and macros which are different for UP/SMP systems are kept in a | 134 | * Functions and macros which are different for UP/SMP systems are kept in a |
135 | * single place | 135 | * single place |
136 | */ | 136 | */ |
137 | #ifdef CONFIG_SMP | 137 | #ifdef CONFIG_SMP |
138 | 138 | ||
139 | /* | 139 | /* |
140 | * We are using hashed locking: holding per_cpu(hrtimer_bases)[n].lock | 140 | * We are using hashed locking: holding per_cpu(hrtimer_bases)[n].lock |
141 | * means that all timers which are tied to this base via timer->base are | 141 | * means that all timers which are tied to this base via timer->base are |
142 | * locked, and the base itself is locked too. | 142 | * locked, and the base itself is locked too. |
143 | * | 143 | * |
144 | * So __run_timers/migrate_timers can safely modify all timers which could | 144 | * So __run_timers/migrate_timers can safely modify all timers which could |
145 | * be found on the lists/queues. | 145 | * be found on the lists/queues. |
146 | * | 146 | * |
147 | * When the timer's base is locked, and the timer removed from list, it is | 147 | * When the timer's base is locked, and the timer removed from list, it is |
148 | * possible to set timer->base = NULL and drop the lock: the timer remains | 148 | * possible to set timer->base = NULL and drop the lock: the timer remains |
149 | * locked. | 149 | * locked. |
150 | */ | 150 | */ |
151 | static | 151 | static |
152 | struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer, | 152 | struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer, |
153 | unsigned long *flags) | 153 | unsigned long *flags) |
154 | { | 154 | { |
155 | struct hrtimer_clock_base *base; | 155 | struct hrtimer_clock_base *base; |
156 | 156 | ||
157 | for (;;) { | 157 | for (;;) { |
158 | base = timer->base; | 158 | base = timer->base; |
159 | if (likely(base != NULL)) { | 159 | if (likely(base != NULL)) { |
160 | raw_spin_lock_irqsave(&base->cpu_base->lock, *flags); | 160 | raw_spin_lock_irqsave(&base->cpu_base->lock, *flags); |
161 | if (likely(base == timer->base)) | 161 | if (likely(base == timer->base)) |
162 | return base; | 162 | return base; |
163 | /* The timer has migrated to another CPU: */ | 163 | /* The timer has migrated to another CPU: */ |
164 | raw_spin_unlock_irqrestore(&base->cpu_base->lock, *flags); | 164 | raw_spin_unlock_irqrestore(&base->cpu_base->lock, *flags); |
165 | } | 165 | } |
166 | cpu_relax(); | 166 | cpu_relax(); |
167 | } | 167 | } |
168 | } | 168 | } |
169 | 169 | ||
170 | /* | 170 | /* |
171 | * With HIGHRES=y we do not migrate the timer when it is expiring | 171 | * With HIGHRES=y we do not migrate the timer when it is expiring |
172 | * before the next event on the target cpu because we cannot reprogram | 172 | * before the next event on the target cpu because we cannot reprogram |
173 | * the target cpu hardware and we would cause it to fire late. | 173 | * the target cpu hardware and we would cause it to fire late. |
174 | * | 174 | * |
175 | * Called with cpu_base->lock of target cpu held. | 175 | * Called with cpu_base->lock of target cpu held. |
176 | */ | 176 | */ |
177 | static int | 177 | static int |
178 | hrtimer_check_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base) | 178 | hrtimer_check_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base) |
179 | { | 179 | { |
180 | #ifdef CONFIG_HIGH_RES_TIMERS | 180 | #ifdef CONFIG_HIGH_RES_TIMERS |
181 | ktime_t expires; | 181 | ktime_t expires; |
182 | 182 | ||
183 | if (!new_base->cpu_base->hres_active) | 183 | if (!new_base->cpu_base->hres_active) |
184 | return 0; | 184 | return 0; |
185 | 185 | ||
186 | expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset); | 186 | expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset); |
187 | return expires.tv64 <= new_base->cpu_base->expires_next.tv64; | 187 | return expires.tv64 <= new_base->cpu_base->expires_next.tv64; |
188 | #else | 188 | #else |
189 | return 0; | 189 | return 0; |
190 | #endif | 190 | #endif |
191 | } | 191 | } |
192 | 192 | ||
/*
 * Switch the timer base to the current CPU when possible.
 */
static inline struct hrtimer_clock_base *
switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
		    int pinned)
{
	struct hrtimer_clock_base *new_base;
	struct hrtimer_cpu_base *new_cpu_base;
	int this_cpu = smp_processor_id();
	int cpu = get_nohz_timer_target(pinned);
	int basenum = base->index;

again:
	new_cpu_base = &per_cpu(hrtimer_bases, cpu);
	new_base = &new_cpu_base->clock_base[basenum];

	if (base != new_base) {
		/*
		 * We are trying to move the timer to new_base.
		 * However we can't change the timer's base while it is
		 * running, so we keep it on the same CPU. No hassle vs.
		 * reprogramming the event source in the high resolution
		 * case. The softirq code will take care of this when the
		 * timer function has completed. There is no conflict as
		 * we hold the lock until the timer is enqueued.
		 */
		if (unlikely(hrtimer_callback_running(timer)))
			return base;

		/* See the comment in lock_timer_base() */
		timer->base = NULL;
		raw_spin_unlock(&base->cpu_base->lock);
		raw_spin_lock(&new_base->cpu_base->lock);

		if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) {
			cpu = this_cpu;
			raw_spin_unlock(&new_base->cpu_base->lock);
			raw_spin_lock(&base->cpu_base->lock);
			timer->base = base;
			goto again;
		}
		timer->base = new_base;
	} else {
		if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) {
			cpu = this_cpu;
			goto again;
		}
	}
	return new_base;
}

#else /* CONFIG_SMP */

static inline struct hrtimer_clock_base *
lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
{
	struct hrtimer_clock_base *base = timer->base;

	raw_spin_lock_irqsave(&base->cpu_base->lock, *flags);

	return base;
}

# define switch_hrtimer_base(t, b, p) (b)

#endif /* !CONFIG_SMP */

/*
 * Functions for the union type storage format of ktime_t which are
 * too large for inlining:
 */
#if BITS_PER_LONG < 64
/*
 * Divide a ktime value by a nanosecond value
 */
u64 ktime_divns(const ktime_t kt, s64 div)
{
	u64 dclc;
	int sft = 0;

	dclc = ktime_to_ns(kt);
	/* Make sure the divisor is less than 2^32: */
	while (div >> 32) {
		sft++;
		div >>= 1;
	}
	dclc >>= sft;
	do_div(dclc, (unsigned long) div);

	return dclc;
}
EXPORT_SYMBOL_GPL(ktime_divns);
#endif /* BITS_PER_LONG < 64 */

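/*
 * Worked example for ktime_divns() (illustration, not part of the
 * kernel source): do_div() needs a 32-bit divisor, so both operands
 * are pre-shifted by the same amount, trading a little precision for
 * a valid division.
 *
 *	kt  = 10 s = 10,000,000,000 ns
 *	div =  5 s =  5,000,000,000 ns	(>= 2^32, needs one shift)
 *
 *	div  >>= 1  ->  2,500,000,000	(now fits in 32 bits)
 *	dclc >>= 1  ->  5,000,000,000
 *
 *	5,000,000,000 / 2,500,000,000 = 2, the expected quotient.
 */
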
/*
 * Add two ktime values and do a safety check for overflow:
 */
ktime_t ktime_add_safe(const ktime_t lhs, const ktime_t rhs)
{
	ktime_t res = ktime_add(lhs, rhs);

	/*
	 * We use KTIME_SEC_MAX here, the maximum timeout which we can
	 * return to user space in a timespec:
	 */
	if (res.tv64 < 0 || res.tv64 < lhs.tv64 || res.tv64 < rhs.tv64)
		res = ktime_set(KTIME_SEC_MAX, 0);

	return res;
}
EXPORT_SYMBOL_GPL(ktime_add_safe);

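/*
 * Illustrative sketch (not in the kernel tree): the clamping behaviour
 * of ktime_add_safe(). A plain ktime_add() of these values would
 * overflow tv64 and wrap negative; the safe variant saturates at
 * KTIME_SEC_MAX instead.
 */
static inline ktime_t example_saturating_add(void)
{
	ktime_t huge = ktime_set(KTIME_SEC_MAX - 1, 0);

	/* Result is ktime_set(KTIME_SEC_MAX, 0), not a wrapped value */
	return ktime_add_safe(huge, ktime_set(10, 0));
}
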
#ifdef CONFIG_DEBUG_OBJECTS_TIMERS

static struct debug_obj_descr hrtimer_debug_descr;

static void *hrtimer_debug_hint(void *addr)
{
	return ((struct hrtimer *) addr)->function;
}

/*
 * fixup_init is called when:
 * - an active object is initialized
 */
static int hrtimer_fixup_init(void *addr, enum debug_obj_state state)
{
	struct hrtimer *timer = addr;

	switch (state) {
	case ODEBUG_STATE_ACTIVE:
		hrtimer_cancel(timer);
		debug_object_init(timer, &hrtimer_debug_descr);
		return 1;
	default:
		return 0;
	}
}

/*
 * fixup_activate is called when:
 * - an active object is activated
 * - an unknown object is activated (might be a statically initialized object)
 */
static int hrtimer_fixup_activate(void *addr, enum debug_obj_state state)
{
	switch (state) {

	case ODEBUG_STATE_NOTAVAILABLE:
		WARN_ON_ONCE(1);
		return 0;

	case ODEBUG_STATE_ACTIVE:
		WARN_ON(1);

	default:
		return 0;
	}
}

/*
 * fixup_free is called when:
 * - an active object is freed
 */
static int hrtimer_fixup_free(void *addr, enum debug_obj_state state)
{
	struct hrtimer *timer = addr;

	switch (state) {
	case ODEBUG_STATE_ACTIVE:
		hrtimer_cancel(timer);
		debug_object_free(timer, &hrtimer_debug_descr);
		return 1;
	default:
		return 0;
	}
}

static struct debug_obj_descr hrtimer_debug_descr = {
	.name		= "hrtimer",
	.debug_hint	= hrtimer_debug_hint,
	.fixup_init	= hrtimer_fixup_init,
	.fixup_activate	= hrtimer_fixup_activate,
	.fixup_free	= hrtimer_fixup_free,
};

static inline void debug_hrtimer_init(struct hrtimer *timer)
{
	debug_object_init(timer, &hrtimer_debug_descr);
}

static inline void debug_hrtimer_activate(struct hrtimer *timer)
{
	debug_object_activate(timer, &hrtimer_debug_descr);
}

static inline void debug_hrtimer_deactivate(struct hrtimer *timer)
{
	debug_object_deactivate(timer, &hrtimer_debug_descr);
}

static inline void debug_hrtimer_free(struct hrtimer *timer)
{
	debug_object_free(timer, &hrtimer_debug_descr);
}

static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
			   enum hrtimer_mode mode);

void hrtimer_init_on_stack(struct hrtimer *timer, clockid_t clock_id,
			   enum hrtimer_mode mode)
{
	debug_object_init_on_stack(timer, &hrtimer_debug_descr);
	__hrtimer_init(timer, clock_id, mode);
}
EXPORT_SYMBOL_GPL(hrtimer_init_on_stack);

void destroy_hrtimer_on_stack(struct hrtimer *timer)
{
	debug_object_free(timer, &hrtimer_debug_descr);
}

#else
static inline void debug_hrtimer_init(struct hrtimer *timer) { }
static inline void debug_hrtimer_activate(struct hrtimer *timer) { }
static inline void debug_hrtimer_deactivate(struct hrtimer *timer) { }
#endif

static inline void
debug_init(struct hrtimer *timer, clockid_t clockid,
	   enum hrtimer_mode mode)
{
	debug_hrtimer_init(timer);
	trace_hrtimer_init(timer, clockid, mode);
}

static inline void debug_activate(struct hrtimer *timer)
{
	debug_hrtimer_activate(timer);
	trace_hrtimer_start(timer);
}

static inline void debug_deactivate(struct hrtimer *timer)
{
	debug_hrtimer_deactivate(timer);
	trace_hrtimer_cancel(timer);
}

/* High resolution timer related functions */
#ifdef CONFIG_HIGH_RES_TIMERS

/*
 * High resolution timer enabled?
 */
static int hrtimer_hres_enabled __read_mostly = 1;

/*
 * Enable / disable high resolution mode
 */
static int __init setup_hrtimer_hres(char *str)
{
	if (!strcmp(str, "off"))
		hrtimer_hres_enabled = 0;
	else if (!strcmp(str, "on"))
		hrtimer_hres_enabled = 1;
	else
		return 0;
	return 1;
}

__setup("highres=", setup_hrtimer_hres);

/*
 * hrtimer_is_hres_enabled - query whether high resolution mode is enabled
 */
static inline int hrtimer_is_hres_enabled(void)
{
	return hrtimer_hres_enabled;
}

/*
 * Is high resolution mode active?
 */
static inline int hrtimer_hres_active(void)
{
	return __this_cpu_read(hrtimer_bases.hres_active);
}

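/*
 * Usage note (illustration): high resolution mode can be disabled on
 * the kernel command line, which is parsed by setup_hrtimer_hres()
 * above:
 *
 *	linux ... highres=off
 */
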
/*
 * Reprogram the event source after checking all clock bases for the
 * next expiring timer.
 * Called with interrupts disabled and base->lock held
 */
static void
hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
{
	int i;
	struct hrtimer_clock_base *base = cpu_base->clock_base;
	ktime_t expires, expires_next;

	expires_next.tv64 = KTIME_MAX;

	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) {
		struct hrtimer *timer;
		struct timerqueue_node *next;

		next = timerqueue_getnext(&base->active);
		if (!next)
			continue;
		timer = container_of(next, struct hrtimer, node);

		expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
		/*
		 * clock_was_set() has changed base->offset so the
		 * result might be negative. Fix it up to prevent a
		 * false positive in clockevents_program_event()
		 */
		if (expires.tv64 < 0)
			expires.tv64 = 0;
		if (expires.tv64 < expires_next.tv64)
			expires_next = expires;
	}

	if (skip_equal && expires_next.tv64 == cpu_base->expires_next.tv64)
		return;

	cpu_base->expires_next.tv64 = expires_next.tv64;

	/*
	 * If a hang was detected in the last timer interrupt then we
	 * leave the hang delay active in the hardware. We want the
	 * system to make progress. That also prevents the following
	 * scenario:
	 * T1 expires 50ms from now
	 * T2 expires 5s from now
	 *
	 * T1 is removed, so this code is called and would reprogram
	 * the hardware to 5s from now. Any hrtimer_start after that
	 * will not reprogram the hardware due to hang_detected being
	 * set. So we'd effectively block all timers until the T2 event
	 * fires.
	 */
	if (cpu_base->hang_detected)
		return;

	if (cpu_base->expires_next.tv64 != KTIME_MAX)
		tick_program_event(cpu_base->expires_next, 1);
}

/*
 * Shared reprogramming for clock_realtime and clock_monotonic
 *
 * When a timer is enqueued and expires earlier than the already enqueued
 * timers, we have to check whether it expires earlier than the timer for
 * which the clock event device was armed.
 *
 * Note that in case the state has HRTIMER_STATE_CALLBACK set, no reprogramming
 * and no expiry check happens. The timer gets enqueued into the rbtree. The
 * reprogramming and expiry check is done in the hrtimer_interrupt or in the
 * softirq.
 *
 * Called with interrupts disabled and base->cpu_base.lock held
 */
static int hrtimer_reprogram(struct hrtimer *timer,
			     struct hrtimer_clock_base *base)
{
	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
	ktime_t expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
	int res;

	WARN_ON_ONCE(hrtimer_get_expires_tv64(timer) < 0);

	/*
	 * When the callback is running, we do not reprogram the clock event
	 * device. The timer callback is either running on a different CPU or
	 * the callback is executed in the hrtimer_interrupt context. The
	 * reprogramming is handled either by the softirq, which called the
	 * callback, or at the end of the hrtimer_interrupt.
	 */
	if (hrtimer_callback_running(timer))
		return 0;

	/*
	 * A CLOCK_REALTIME timer might be requested with an absolute
	 * expiry time which is less than base->offset. Nothing wrong
	 * with that, just avoid calling into the tick code, which now
	 * has objections against negative expiry values.
	 */
	if (expires.tv64 < 0)
		return -ETIME;

	if (expires.tv64 >= cpu_base->expires_next.tv64)
		return 0;

	/*
	 * If a hang was detected in the last timer interrupt then we
	 * do not schedule a timer which is earlier than the expiry
	 * which we enforced in the hang detection. We want the system
	 * to make progress.
	 */
	if (cpu_base->hang_detected)
		return 0;

	/*
	 * Clockevents returns -ETIME when the event was in the past.
	 */
	res = tick_program_event(expires, 0);
	if (!IS_ERR_VALUE(res))
		cpu_base->expires_next = expires;
	return res;
}

/*
 * Initialize the high resolution related parts of cpu_base
 */
static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base)
{
	base->expires_next.tv64 = KTIME_MAX;
	base->hres_active = 0;
}

static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)
{
	ktime_t *offs_real = &base->clock_base[HRTIMER_BASE_REALTIME].offset;
	ktime_t *offs_boot = &base->clock_base[HRTIMER_BASE_BOOTTIME].offset;
	ktime_t *offs_tai = &base->clock_base[HRTIMER_BASE_TAI].offset;

	return ktime_get_update_offsets_now(offs_real, offs_boot, offs_tai);
}

/*
 * retrigger_next_event() is called after the clock was set
 *
 * Called with interrupts disabled via on_each_cpu()
 */
static void retrigger_next_event(void *arg)
{
	struct hrtimer_cpu_base *base = this_cpu_ptr(&hrtimer_bases);

	if (!hrtimer_hres_active())
		return;

	raw_spin_lock(&base->lock);
	hrtimer_update_base(base);
	hrtimer_force_reprogram(base, 0);
	raw_spin_unlock(&base->lock);
}

/*
 * Switch to high resolution mode
 */
static int hrtimer_switch_to_hres(void)
{
	int i, cpu = smp_processor_id();
	struct hrtimer_cpu_base *base = &per_cpu(hrtimer_bases, cpu);
	unsigned long flags;

	if (base->hres_active)
		return 1;

	local_irq_save(flags);

	if (tick_init_highres()) {
		local_irq_restore(flags);
		printk(KERN_WARNING "Could not switch to high resolution mode on CPU %d\n",
		       cpu);
		return 0;
	}
	base->hres_active = 1;
	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++)
		base->clock_base[i].resolution = KTIME_HIGH_RES;

	tick_setup_sched_timer();
	/* "Retrigger" the interrupt to get things going */
	retrigger_next_event(NULL);
	local_irq_restore(flags);
	return 1;
}

static void clock_was_set_work(struct work_struct *work)
{
	clock_was_set();
}

static DECLARE_WORK(hrtimer_work, clock_was_set_work);

/*
 * Called from timekeeping and resume code to reprogram the hrtimer
 * interrupt device on all CPUs.
 */
void clock_was_set_delayed(void)
{
	schedule_work(&hrtimer_work);
}

#else

static inline int hrtimer_hres_active(void) { return 0; }
static inline int hrtimer_is_hres_enabled(void) { return 0; }
static inline int hrtimer_switch_to_hres(void) { return 0; }
static inline void
hrtimer_force_reprogram(struct hrtimer_cpu_base *base, int skip_equal) { }
static inline int hrtimer_reprogram(struct hrtimer *timer,
				    struct hrtimer_clock_base *base)
{
	return 0;
}
static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base) { }
static inline void retrigger_next_event(void *arg) { }

#endif /* CONFIG_HIGH_RES_TIMERS */

/*
 * Clock realtime was set
 *
 * Change the offset of the realtime clock vs. the monotonic
 * clock.
 *
 * We might have to reprogram the high resolution timer interrupt. On
 * SMP we call the architecture specific code to retrigger _all_ high
 * resolution timer interrupts. On UP we just disable interrupts and
 * call the high resolution interrupt code.
 */
void clock_was_set(void)
{
#ifdef CONFIG_HIGH_RES_TIMERS
	/* Retrigger the CPU local events everywhere */
	on_each_cpu(retrigger_next_event, NULL, 1);
#endif
	timerfd_clock_was_set();
}

/*
 * During resume we might have to reprogram the high resolution timer
 * interrupt on all online CPUs. However, all other CPUs will be
 * stopped with interrupts disabled so the clock_was_set() call
 * must be deferred.
 */
void hrtimers_resume(void)
{
	WARN_ONCE(!irqs_disabled(),
		  KERN_INFO "hrtimers_resume() called with IRQs enabled!");

	/* Retrigger on the local CPU */
	retrigger_next_event(NULL);
	/* And schedule a retrigger for all others */
	clock_was_set_delayed();
}

static inline void timer_stats_hrtimer_set_start_info(struct hrtimer *timer)
{
#ifdef CONFIG_TIMER_STATS
	if (timer->start_site)
		return;
	timer->start_site = __builtin_return_address(0);
	memcpy(timer->start_comm, current->comm, TASK_COMM_LEN);
	timer->start_pid = current->pid;
#endif
}

static inline void timer_stats_hrtimer_clear_start_info(struct hrtimer *timer)
{
#ifdef CONFIG_TIMER_STATS
	timer->start_site = NULL;
#endif
}

static inline void timer_stats_account_hrtimer(struct hrtimer *timer)
{
#ifdef CONFIG_TIMER_STATS
	if (likely(!timer_stats_active))
		return;
	timer_stats_update_stats(timer, timer->start_pid, timer->start_site,
				 timer->function, timer->start_comm, 0);
#endif
}

/*
 * Counterpart to lock_hrtimer_base above:
 */
static inline
void unlock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
{
	raw_spin_unlock_irqrestore(&timer->base->cpu_base->lock, *flags);
}

/**
 * hrtimer_forward - forward the timer expiry
 * @timer: hrtimer to forward
 * @now: forward past this time
 * @interval: the interval to forward
 *
 * Forward the timer expiry so it will expire in the future.
 * Returns the number of overruns.
 */
u64 hrtimer_forward(struct hrtimer *timer, ktime_t now, ktime_t interval)
{
	u64 orun = 1;
	ktime_t delta;

	delta = ktime_sub(now, hrtimer_get_expires(timer));

	if (delta.tv64 < 0)
		return 0;

	if (interval.tv64 < timer->base->resolution.tv64)
		interval.tv64 = timer->base->resolution.tv64;

	if (unlikely(delta.tv64 >= interval.tv64)) {
		s64 incr = ktime_to_ns(interval);

		orun = ktime_divns(delta, incr);
		hrtimer_add_expires_ns(timer, incr * orun);
		if (hrtimer_get_expires_tv64(timer) > now.tv64)
			return orun;
		/*
		 * This (and the ktime_add() below) is the correction
		 * needed to push the expiry exactly one interval past
		 * @now:
		 */
		orun++;
	}
	hrtimer_add_expires(timer, interval);

	return orun;
}
EXPORT_SYMBOL_GPL(hrtimer_forward);

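/*
 * Illustrative sketch (not in the kernel tree): a periodic callback
 * that uses hrtimer_forward() to push its expiry past 'now' and rearm
 * itself. 'example_periodic_cb' and the 100ms period are made-up
 * example names/values.
 */
static enum hrtimer_restart example_periodic_cb(struct hrtimer *timer)
{
	ktime_t now = hrtimer_cb_get_time(timer);
	u64 overruns = hrtimer_forward(timer, now, ms_to_ktime(100));

	if (overruns > 1)
		pr_debug("example: missed %llu period(s)\n",
			 (unsigned long long)(overruns - 1));

	return HRTIMER_RESTART; /* keep the timer running */
}
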
/*
 * enqueue_hrtimer - internal function to (re)start a timer
 *
 * The timer is inserted in expiry order. Insertion into the
 * red black tree is O(log(n)). Must hold the base lock.
 *
 * Returns 1 when the new timer is the leftmost timer in the tree.
 */
static int enqueue_hrtimer(struct hrtimer *timer,
			   struct hrtimer_clock_base *base)
{
	debug_activate(timer);

	timerqueue_add(&base->active, &timer->node);
	base->cpu_base->active_bases |= 1 << base->index;

	/*
	 * HRTIMER_STATE_ENQUEUED is or'ed to the current state to preserve the
	 * state of a possibly running callback.
	 */
	timer->state |= HRTIMER_STATE_ENQUEUED;

	return (&timer->node == base->active.next);
}

/*
 * __remove_hrtimer - internal function to remove a timer
 *
 * Caller must hold the base lock.
 *
 * High resolution timer mode reprograms the clock event device when the
 * timer is the one which expires next. The caller can disable this by setting
 * reprogram to zero. This is useful when the context does a reprogramming
 * anyway (e.g. timer interrupt)
 */
static void __remove_hrtimer(struct hrtimer *timer,
			     struct hrtimer_clock_base *base,
			     unsigned long newstate, int reprogram)
{
	struct timerqueue_node *next_timer;
	if (!(timer->state & HRTIMER_STATE_ENQUEUED))
		goto out;

	next_timer = timerqueue_getnext(&base->active);
	timerqueue_del(&base->active, &timer->node);
	if (&timer->node == next_timer) {
#ifdef CONFIG_HIGH_RES_TIMERS
		/* Reprogram the clock event device, if enabled */
		if (reprogram && hrtimer_hres_active()) {
			ktime_t expires;

			expires = ktime_sub(hrtimer_get_expires(timer),
					    base->offset);
			if (base->cpu_base->expires_next.tv64 == expires.tv64)
				hrtimer_force_reprogram(base->cpu_base, 1);
		}
#endif
	}
	if (!timerqueue_getnext(&base->active))
		base->cpu_base->active_bases &= ~(1 << base->index);
out:
	timer->state = newstate;
}

/*
 * remove hrtimer, called with base lock held
 */
static inline int
remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base)
{
	if (hrtimer_is_queued(timer)) {
		unsigned long state;
		int reprogram;

		/*
		 * Remove the timer and force reprogramming when high
		 * resolution mode is active and the timer is on the current
		 * CPU. If we remove a timer on another CPU, reprogramming is
		 * skipped. The interrupt event on this CPU is fired and
		 * reprogramming happens in the interrupt handler. This is a
		 * rare case and less expensive than an SMP function call.
		 */
		debug_deactivate(timer);
		timer_stats_hrtimer_clear_start_info(timer);
		reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
		/*
		 * We must preserve the CALLBACK state flag here,
		 * otherwise we could move the timer base in
		 * switch_hrtimer_base.
		 */
		state = timer->state & HRTIMER_STATE_CALLBACK;
		__remove_hrtimer(timer, base, state, reprogram);
		return 1;
	}
	return 0;
}

int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
			     unsigned long delta_ns, const enum hrtimer_mode mode,
			     int wakeup)
{
	struct hrtimer_clock_base *base, *new_base;
	unsigned long flags;
	int ret, leftmost;

	base = lock_hrtimer_base(timer, &flags);

	/* Remove an active timer from the queue: */
	ret = remove_hrtimer(timer, base);

	if (mode & HRTIMER_MODE_REL) {
		tim = ktime_add_safe(tim, base->get_time());
		/*
		 * CONFIG_TIME_LOW_RES is a temporary way for architectures
		 * to signal that they simply return xtime in
		 * do_gettimeoffset(). In this case we want to round up by
		 * resolution when starting a relative timer, to avoid short
		 * timeouts. This will go away with the GTOD framework.
		 */
#ifdef CONFIG_TIME_LOW_RES
		tim = ktime_add_safe(tim, base->resolution);
#endif
	}

	hrtimer_set_expires_range_ns(timer, tim, delta_ns);

	/* Switch the timer base, if necessary: */
	new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);

	timer_stats_hrtimer_set_start_info(timer);

	leftmost = enqueue_hrtimer(timer, new_base);

	if (!leftmost) {
		unlock_hrtimer_base(timer, &flags);
		return ret;
	}

	if (!hrtimer_is_hres_active(timer)) {
		/*
		 * Kick to reschedule the next tick to handle the new timer
		 * on the dynticks target.
		 */
		wake_up_nohz_cpu(new_base->cpu_base->cpu);
	} else if (new_base->cpu_base == this_cpu_ptr(&hrtimer_bases) &&
		   hrtimer_reprogram(timer, new_base)) {
		/*
		 * Only allow reprogramming if the new base is on this CPU.
		 * (it might still be on another CPU if the timer was pending)
		 *
		 * XXX send_remote_softirq() ?
		 */
		if (wakeup) {
			/*
			 * We need to drop cpu_base->lock to avoid a
			 * lock ordering issue vs. rq->lock.
			 */
			raw_spin_unlock(&new_base->cpu_base->lock);
			raise_softirq_irqoff(HRTIMER_SOFTIRQ);
			local_irq_restore(flags);
			return ret;
		} else {
			__raise_softirq_irqoff(HRTIMER_SOFTIRQ);
		}
	}

	unlock_hrtimer_base(timer, &flags);

	return ret;
}
EXPORT_SYMBOL_GPL(__hrtimer_start_range_ns);

/**
 * hrtimer_start_range_ns - (re)start an hrtimer on the current CPU
 * @timer: the timer to be added
 * @tim: expiry time
 * @delta_ns: "slack" range for the timer
 * @mode: expiry mode: absolute (HRTIMER_MODE_ABS) or
 *	  relative (HRTIMER_MODE_REL)
 *
 * Returns:
 *  0 on success
 *  1 when the timer was active
 */
int hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
			   unsigned long delta_ns, const enum hrtimer_mode mode)
{
	return __hrtimer_start_range_ns(timer, tim, delta_ns, mode, 1);
}
EXPORT_SYMBOL_GPL(hrtimer_start_range_ns);

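/*
 * Illustrative sketch (not in the kernel tree): arming a timer with
 * slack. The 500us expiry and 100us slack are arbitrary example
 * values; the slack lets the core coalesce this expiry with others.
 */
static void example_arm_with_slack(struct hrtimer *timer)
{
	hrtimer_start_range_ns(timer, ns_to_ktime(500 * NSEC_PER_USEC),
			       100 * NSEC_PER_USEC, HRTIMER_MODE_REL);
}
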
/**
 * hrtimer_start - (re)start an hrtimer on the current CPU
 * @timer: the timer to be added
 * @tim: expiry time
 * @mode: expiry mode: absolute (HRTIMER_MODE_ABS) or
 *	  relative (HRTIMER_MODE_REL)
 *
 * Returns:
 *  0 on success
 *  1 when the timer was active
 */
int
hrtimer_start(struct hrtimer *timer, ktime_t tim, const enum hrtimer_mode mode)
{
	return __hrtimer_start_range_ns(timer, tim, 0, mode, 1);
}
EXPORT_SYMBOL_GPL(hrtimer_start);

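/*
 * Illustrative sketch (not in the kernel tree): initialising and
 * starting a timer against CLOCK_MONOTONIC. 'example_timer' and
 * 'example_setup' are made-up names; 'example_periodic_cb' is the
 * sample callback sketched after hrtimer_forward() above.
 */
static struct hrtimer example_timer;

static void example_setup(void)
{
	hrtimer_init(&example_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	example_timer.function = example_periodic_cb;
	hrtimer_start(&example_timer, ms_to_ktime(100), HRTIMER_MODE_REL);
}
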
/**
 * hrtimer_try_to_cancel - try to deactivate a timer
 * @timer: hrtimer to stop
 *
 * Returns:
 *  0 when the timer was not active
 *  1 when the timer was active
 * -1 when the timer is currently executing the callback function and
 *    cannot be stopped
 */
int hrtimer_try_to_cancel(struct hrtimer *timer)
{
	struct hrtimer_clock_base *base;
	unsigned long flags;
	int ret = -1;

	base = lock_hrtimer_base(timer, &flags);

	if (!hrtimer_callback_running(timer))
		ret = remove_hrtimer(timer, base);

	unlock_hrtimer_base(timer, &flags);

	return ret;
}
EXPORT_SYMBOL_GPL(hrtimer_try_to_cancel);

/**
 * hrtimer_cancel - cancel a timer and wait for the handler to finish.
 * @timer: the timer to be cancelled
 *
 * Returns:
 *  0 when the timer was not active
 *  1 when the timer was active
 */
int hrtimer_cancel(struct hrtimer *timer)
{
	for (;;) {
		int ret = hrtimer_try_to_cancel(timer);

		if (ret >= 0)
			return ret;
		cpu_relax();
	}
}
EXPORT_SYMBOL_GPL(hrtimer_cancel);

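/*
 * Illustrative sketch (not in the kernel tree): tearing down the
 * example timer from above. hrtimer_cancel() spins until a
 * concurrently running callback has finished, so it must not be
 * called from the callback itself or while holding locks the
 * callback takes.
 */
static void example_teardown(void)
{
	if (hrtimer_cancel(&example_timer))
		pr_debug("example: timer was still active\n");
}
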
/**
 * hrtimer_get_remaining - get remaining time for the timer
 * @timer: the timer to read
 */
ktime_t hrtimer_get_remaining(const struct hrtimer *timer)
{
	unsigned long flags;
	ktime_t rem;

	lock_hrtimer_base(timer, &flags);
	rem = hrtimer_expires_remaining(timer);
	unlock_hrtimer_base(timer, &flags);

	return rem;
}
EXPORT_SYMBOL_GPL(hrtimer_get_remaining);

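/*
 * Illustrative sketch (not in the kernel tree): querying how long
 * until the example timer fires. A negative remainder means the
 * expiry is already in the past.
 */
static s64 example_ns_remaining(void)
{
	return ktime_to_ns(hrtimer_get_remaining(&example_timer));
}
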
1097 | #ifdef CONFIG_NO_HZ_COMMON | 1097 | #ifdef CONFIG_NO_HZ_COMMON |
1098 | /** | 1098 | /** |
1099 | * hrtimer_get_next_event - get the time until next expiry event | 1099 | * hrtimer_get_next_event - get the time until next expiry event |
1100 | * | 1100 | * |
1101 | * Returns the delta to the next expiry event or KTIME_MAX if no timer | 1101 | * Returns the delta to the next expiry event or KTIME_MAX if no timer |
1102 | * is pending. | 1102 | * is pending. |
1103 | */ | 1103 | */ |
1104 | ktime_t hrtimer_get_next_event(void) | 1104 | ktime_t hrtimer_get_next_event(void) |
1105 | { | 1105 | { |
1106 | struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases); | 1106 | struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases); |
1107 | struct hrtimer_clock_base *base = cpu_base->clock_base; | 1107 | struct hrtimer_clock_base *base = cpu_base->clock_base; |
1108 | ktime_t delta, mindelta = { .tv64 = KTIME_MAX }; | 1108 | ktime_t delta, mindelta = { .tv64 = KTIME_MAX }; |
1109 | unsigned long flags; | 1109 | unsigned long flags; |
1110 | int i; | 1110 | int i; |
1111 | 1111 | ||
1112 | raw_spin_lock_irqsave(&cpu_base->lock, flags); | 1112 | raw_spin_lock_irqsave(&cpu_base->lock, flags); |
1113 | 1113 | ||
1114 | if (!hrtimer_hres_active()) { | 1114 | if (!hrtimer_hres_active()) { |
1115 | for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) { | 1115 | for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++, base++) { |
1116 | struct hrtimer *timer; | 1116 | struct hrtimer *timer; |
1117 | struct timerqueue_node *next; | 1117 | struct timerqueue_node *next; |
1118 | 1118 | ||
1119 | next = timerqueue_getnext(&base->active); | 1119 | next = timerqueue_getnext(&base->active); |
1120 | if (!next) | 1120 | if (!next) |
1121 | continue; | 1121 | continue; |
1122 | 1122 | ||
1123 | timer = container_of(next, struct hrtimer, node); | 1123 | timer = container_of(next, struct hrtimer, node); |
1124 | delta.tv64 = hrtimer_get_expires_tv64(timer); | 1124 | delta.tv64 = hrtimer_get_expires_tv64(timer); |
1125 | delta = ktime_sub(delta, base->get_time()); | 1125 | delta = ktime_sub(delta, base->get_time()); |
1126 | if (delta.tv64 < mindelta.tv64) | 1126 | if (delta.tv64 < mindelta.tv64) |
1127 | mindelta.tv64 = delta.tv64; | 1127 | mindelta.tv64 = delta.tv64; |
1128 | } | 1128 | } |
1129 | } | 1129 | } |
1130 | 1130 | ||
1131 | raw_spin_unlock_irqrestore(&cpu_base->lock, flags); | 1131 | raw_spin_unlock_irqrestore(&cpu_base->lock, flags); |
1132 | 1132 | ||
1133 | if (mindelta.tv64 < 0) | 1133 | if (mindelta.tv64 < 0) |
1134 | mindelta.tv64 = 0; | 1134 | mindelta.tv64 = 0; |
1135 | return mindelta; | 1135 | return mindelta; |
1136 | } | 1136 | } |
1137 | #endif | 1137 | #endif |

static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
			   enum hrtimer_mode mode)
{
	struct hrtimer_cpu_base *cpu_base;
	int base;

	memset(timer, 0, sizeof(struct hrtimer));

	cpu_base = raw_cpu_ptr(&hrtimer_bases);

	if (clock_id == CLOCK_REALTIME && mode != HRTIMER_MODE_ABS)
		clock_id = CLOCK_MONOTONIC;

	base = hrtimer_clockid_to_base(clock_id);
	timer->base = &cpu_base->clock_base[base];
	timerqueue_init(&timer->node);

#ifdef CONFIG_TIMER_STATS
	timer->start_site = NULL;
	timer->start_pid = -1;
	memset(timer->start_comm, 0, TASK_COMM_LEN);
#endif
}

/**
 * hrtimer_init - initialize a timer to the given clock
 * @timer:	the timer to be initialized
 * @clock_id:	the clock to be used
 * @mode:	timer mode abs/rel
 */
void hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
		  enum hrtimer_mode mode)
{
	debug_init(timer, clock_id, mode);
	__hrtimer_init(timer, clock_id, mode);
}
EXPORT_SYMBOL_GPL(hrtimer_init);
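
/*
 * Illustrative sketch (editor's addition, not part of hrtimer.c): the
 * canonical init/start sequence for the API documented above. All names
 * here are hypothetical.
 */
static struct hrtimer example_tick;

static enum hrtimer_restart example_fn(struct hrtimer *timer)
{
	/* Runs in hardirq context with interrupts disabled: keep it short. */
	return HRTIMER_NORESTART;
}

static void example_arm(void)
{
	/* CLOCK_MONOTONIC plus relative mode is the common default. */
	hrtimer_init(&example_tick, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	example_tick.function = example_fn;
	hrtimer_start(&example_tick, ms_to_ktime(100), HRTIMER_MODE_REL);
}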

/**
 * hrtimer_get_res - get the timer resolution for a clock
 * @which_clock: which clock to query
 * @tp:		 pointer to timespec variable to store the resolution
 *
 * Store the resolution of the clock selected by @which_clock in the
 * variable pointed to by @tp.
 */
int hrtimer_get_res(const clockid_t which_clock, struct timespec *tp)
{
	struct hrtimer_cpu_base *cpu_base;
	int base = hrtimer_clockid_to_base(which_clock);

	cpu_base = raw_cpu_ptr(&hrtimer_bases);
	*tp = ktime_to_timespec(cpu_base->clock_base[base].resolution);

	return 0;
}
EXPORT_SYMBOL_GPL(hrtimer_get_res);

static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
{
	struct hrtimer_clock_base *base = timer->base;
	struct hrtimer_cpu_base *cpu_base = base->cpu_base;
	enum hrtimer_restart (*fn)(struct hrtimer *);
	int restart;

	WARN_ON(!irqs_disabled());

	debug_deactivate(timer);
	__remove_hrtimer(timer, base, HRTIMER_STATE_CALLBACK, 0);
	timer_stats_account_hrtimer(timer);
	fn = timer->function;

	/*
	 * Because we run timers from hardirq context, there is no chance
	 * they get migrated to another cpu, therefore it's safe to unlock
	 * the timer base.
	 */
	raw_spin_unlock(&cpu_base->lock);
	trace_hrtimer_expire_entry(timer, now);
	restart = fn(timer);
	trace_hrtimer_expire_exit(timer);
	raw_spin_lock(&cpu_base->lock);

	/*
	 * Note: We clear the CALLBACK bit after enqueue_hrtimer and
	 * we do not reprogram the event hardware. Happens either in
	 * hrtimer_start_range_ns() or in hrtimer_interrupt().
	 */
	if (restart != HRTIMER_NORESTART) {
		BUG_ON(timer->state != HRTIMER_STATE_CALLBACK);
		enqueue_hrtimer(timer, base);
	}

	WARN_ON_ONCE(!(timer->state & HRTIMER_STATE_CALLBACK));

	timer->state &= ~HRTIMER_STATE_CALLBACK;
}
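
/*
 * Illustrative sketch (editor's addition, not part of hrtimer.c): how a
 * callback requests the requeue path taken in __run_hrtimer() above.
 * Returning HRTIMER_RESTART after advancing the expiry yields a periodic
 * timer; the name and 10ms period are hypothetical.
 */
static enum hrtimer_restart example_periodic(struct hrtimer *timer)
{
	/* Push the expiry forward by one period past "now". */
	hrtimer_forward_now(timer, ms_to_ktime(10));
	return HRTIMER_RESTART;	/* takes the enqueue_hrtimer() branch */
}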

#ifdef CONFIG_HIGH_RES_TIMERS

/*
 * High resolution timer interrupt
 * Called with interrupts disabled
 */
void hrtimer_interrupt(struct clock_event_device *dev)
{
	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
	ktime_t expires_next, now, entry_time, delta;
	int i, retries = 0;

	BUG_ON(!cpu_base->hres_active);
	cpu_base->nr_events++;
	dev->next_event.tv64 = KTIME_MAX;

	raw_spin_lock(&cpu_base->lock);
	entry_time = now = hrtimer_update_base(cpu_base);
retry:
	expires_next.tv64 = KTIME_MAX;
	/*
	 * We set expires_next to KTIME_MAX here with cpu_base->lock
	 * held to prevent that a timer is enqueued in our queue via
	 * the migration code. This does not affect enqueueing of
	 * timers which run their callback and need to be requeued on
	 * this CPU.
	 */
	cpu_base->expires_next.tv64 = KTIME_MAX;

	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
		struct hrtimer_clock_base *base;
		struct timerqueue_node *node;
		ktime_t basenow;

		if (!(cpu_base->active_bases & (1 << i)))
			continue;

		base = cpu_base->clock_base + i;
		basenow = ktime_add(now, base->offset);

		while ((node = timerqueue_getnext(&base->active))) {
			struct hrtimer *timer;

			timer = container_of(node, struct hrtimer, node);

			/*
			 * The immediate goal for using the softexpires is
			 * minimizing wakeups, not running timers at the
			 * earliest interrupt after their soft expiration.
			 * This allows us to avoid using a Priority Search
			 * Tree, which can answer a stabbing query for
			 * overlapping intervals, and instead use the simple
			 * BST we already have.
			 * We don't add extra wakeups by delaying timers that
			 * are right-of a not yet expired timer, because that
			 * timer will have to trigger a wakeup anyway.
			 */

			if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) {
				ktime_t expires;

				expires = ktime_sub(hrtimer_get_expires(timer),
						    base->offset);
				if (expires.tv64 < 0)
					expires.tv64 = KTIME_MAX;
				if (expires.tv64 < expires_next.tv64)
					expires_next = expires;
				break;
			}

			__run_hrtimer(timer, &basenow);
		}
	}

	/*
	 * Store the new expiry value so the migration code can verify
	 * against it.
	 */
	cpu_base->expires_next = expires_next;
	raw_spin_unlock(&cpu_base->lock);

	/* Reprogramming necessary ? */
	if (expires_next.tv64 == KTIME_MAX ||
	    !tick_program_event(expires_next, 0)) {
		cpu_base->hang_detected = 0;
		return;
	}

	/*
	 * The next timer was already expired due to:
	 * - tracing
	 * - long lasting callbacks
	 * - being scheduled away when running in a VM
	 *
	 * We need to prevent that we loop forever in the hrtimer
	 * interrupt routine. We give it 3 attempts to avoid
	 * overreacting on some spurious event.
	 *
	 * Acquire base lock for updating the offsets and retrieving
	 * the current time.
	 */
	raw_spin_lock(&cpu_base->lock);
	now = hrtimer_update_base(cpu_base);
	cpu_base->nr_retries++;
	if (++retries < 3)
		goto retry;
	/*
	 * Give the system a chance to do something other than looping
	 * here. We stored the entry time, so we know exactly how long
	 * we spent here. We schedule the next event this amount of
	 * time away.
	 */
	cpu_base->nr_hangs++;
	cpu_base->hang_detected = 1;
	raw_spin_unlock(&cpu_base->lock);
	delta = ktime_sub(now, entry_time);
	if (delta.tv64 > cpu_base->max_hang_time.tv64)
		cpu_base->max_hang_time = delta;
	/*
	 * Limit it to a sensible value as we enforce a longer
	 * delay. Give the CPU at least 100ms to catch up.
	 */
	if (delta.tv64 > 100 * NSEC_PER_MSEC)
		expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
	else
		expires_next = ktime_add(now, delta);
	tick_program_event(expires_next, 1);
	printk_once(KERN_WARNING "hrtimer: interrupt took %llu ns\n",
		    ktime_to_ns(delta));
}

/*
 * local version of hrtimer_peek_ahead_timers() called with interrupts
 * disabled.
 */
static void __hrtimer_peek_ahead_timers(void)
{
	struct tick_device *td;

	if (!hrtimer_hres_active())
		return;

	td = this_cpu_ptr(&tick_cpu_device);
	if (td && td->evtdev)
		hrtimer_interrupt(td->evtdev);
}

/**
 * hrtimer_peek_ahead_timers -- run soft-expired timers now
 *
 * hrtimer_peek_ahead_timers will peek at the timer queue of
 * the current cpu and check if there are any timers for which
 * the soft expires time has passed. If any such timers exist,
 * they are run immediately and then removed from the timer queue.
 *
 */
void hrtimer_peek_ahead_timers(void)
{
	unsigned long flags;

	local_irq_save(flags);
	__hrtimer_peek_ahead_timers();
	local_irq_restore(flags);
}

static void run_hrtimer_softirq(struct softirq_action *h)
{
	hrtimer_peek_ahead_timers();
}

#else /* CONFIG_HIGH_RES_TIMERS */

static inline void __hrtimer_peek_ahead_timers(void) { }

#endif /* !CONFIG_HIGH_RES_TIMERS */

/*
 * Called from timer softirq every jiffy, expire hrtimers:
 *
 * For HRT it's the fallback code to run the softirq in the timer
 * softirq context in case the hrtimer initialization failed or has
 * not been done yet.
 */
void hrtimer_run_pending(void)
{
	if (hrtimer_hres_active())
		return;

	/*
	 * This _is_ ugly: We have to check in the softirq context
	 * whether we can switch to highres and / or nohz mode. The
	 * clocksource switch happens in the timer interrupt with
	 * xtime_lock held. Notification from there only sets the
	 * check bit in the tick_oneshot code, otherwise we might
	 * deadlock vs. xtime_lock.
	 */
	if (tick_check_oneshot_change(!hrtimer_is_hres_enabled()))
		hrtimer_switch_to_hres();
}

/*
 * Called from hardirq context every jiffy
 */
void hrtimer_run_queues(void)
{
	struct timerqueue_node *node;
	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
	struct hrtimer_clock_base *base;
	int index, gettime = 1;

	if (hrtimer_hres_active())
		return;

	for (index = 0; index < HRTIMER_MAX_CLOCK_BASES; index++) {
		base = &cpu_base->clock_base[index];
		if (!timerqueue_getnext(&base->active))
			continue;

		if (gettime) {
			hrtimer_get_softirq_time(cpu_base);
			gettime = 0;
		}

		raw_spin_lock(&cpu_base->lock);

		while ((node = timerqueue_getnext(&base->active))) {
			struct hrtimer *timer;

			timer = container_of(node, struct hrtimer, node);
			if (base->softirq_time.tv64 <=
			    hrtimer_get_expires_tv64(timer))
				break;

			__run_hrtimer(timer, &base->softirq_time);
		}
		raw_spin_unlock(&cpu_base->lock);
	}
}

/*
 * Sleep related functions:
 */
static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer)
{
	struct hrtimer_sleeper *t =
		container_of(timer, struct hrtimer_sleeper, timer);
	struct task_struct *task = t->task;

	t->task = NULL;
	if (task)
		wake_up_process(task);

	return HRTIMER_NORESTART;
}

void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
{
	sl->timer.function = hrtimer_wakeup;
	sl->task = task;
}
EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);

static int __sched do_nanosleep(struct hrtimer_sleeper *t, enum hrtimer_mode mode)
{
	hrtimer_init_sleeper(t, current);

	do {
		set_current_state(TASK_INTERRUPTIBLE);
		hrtimer_start_expires(&t->timer, mode);
		if (!hrtimer_active(&t->timer))
			t->task = NULL;

		if (likely(t->task))
			freezable_schedule();

		hrtimer_cancel(&t->timer);
		mode = HRTIMER_MODE_ABS;

	} while (t->task && !signal_pending(current));

	__set_current_state(TASK_RUNNING);

	return t->task == NULL;
}

static int update_rmtp(struct hrtimer *timer, struct timespec __user *rmtp)
{
	struct timespec rmt;
	ktime_t rem;

	rem = hrtimer_expires_remaining(timer);
	if (rem.tv64 <= 0)
		return 0;
	rmt = ktime_to_timespec(rem);

	if (copy_to_user(rmtp, &rmt, sizeof(*rmtp)))
		return -EFAULT;

	return 1;
}

long __sched hrtimer_nanosleep_restart(struct restart_block *restart)
{
	struct hrtimer_sleeper t;
	struct timespec __user *rmtp;
	int ret = 0;

	hrtimer_init_on_stack(&t.timer, restart->nanosleep.clockid,
			      HRTIMER_MODE_ABS);
	hrtimer_set_expires_tv64(&t.timer, restart->nanosleep.expires);

	if (do_nanosleep(&t, HRTIMER_MODE_ABS))
		goto out;

	rmtp = restart->nanosleep.rmtp;
	if (rmtp) {
		ret = update_rmtp(&t.timer, rmtp);
		if (ret <= 0)
			goto out;
	}

	/* The other values in restart are already filled in */
	ret = -ERESTART_RESTARTBLOCK;
out:
	destroy_hrtimer_on_stack(&t.timer);
	return ret;
}

long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
		       const enum hrtimer_mode mode, const clockid_t clockid)
{
	struct restart_block *restart;
	struct hrtimer_sleeper t;
	int ret = 0;
	unsigned long slack;

	slack = current->timer_slack_ns;
	if (dl_task(current) || rt_task(current))
		slack = 0;

	hrtimer_init_on_stack(&t.timer, clockid, mode);
	hrtimer_set_expires_range_ns(&t.timer, timespec_to_ktime(*rqtp), slack);
	if (do_nanosleep(&t, mode))
		goto out;

	/* Absolute timers do not update the rmtp value and restart: */
	if (mode == HRTIMER_MODE_ABS) {
		ret = -ERESTARTNOHAND;
		goto out;
	}

	if (rmtp) {
		ret = update_rmtp(&t.timer, rmtp);
		if (ret <= 0)
			goto out;
	}

	restart = &current_thread_info()->restart_block;
	restart->fn = hrtimer_nanosleep_restart;
	restart->nanosleep.clockid = t.timer.base->clockid;
	restart->nanosleep.rmtp = rmtp;
	restart->nanosleep.expires = hrtimer_get_expires_tv64(&t.timer);

	ret = -ERESTART_RESTARTBLOCK;
out:
	destroy_hrtimer_on_stack(&t.timer);
	return ret;
}
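
/*
 * Illustrative sketch (editor's addition, not part of hrtimer.c): the
 * userspace-visible contract implemented above. When nanosleep() is
 * interrupted by a signal, the remaining time written via update_rmtp()
 * lets the caller resume:
 *
 *	struct timespec req = { .tv_sec = 1, .tv_nsec = 0 }, rem;
 *
 *	while (nanosleep(&req, &rem) == -1 && errno == EINTR)
 *		req = rem;	// restart with the unslept remainder
 */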

SYSCALL_DEFINE2(nanosleep, struct timespec __user *, rqtp,
		struct timespec __user *, rmtp)
{
	struct timespec tu;

	if (copy_from_user(&tu, rqtp, sizeof(tu)))
		return -EFAULT;

	if (!timespec_valid(&tu))
		return -EINVAL;

	return hrtimer_nanosleep(&tu, rmtp, HRTIMER_MODE_REL, CLOCK_MONOTONIC);
}

/*
 * Functions related to boot-time initialization:
 */
static void init_hrtimers_cpu(int cpu)
{
	struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
	int i;

	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
		cpu_base->clock_base[i].cpu_base = cpu_base;
		timerqueue_init_head(&cpu_base->clock_base[i].active);
	}

	cpu_base->cpu = cpu;
	hrtimer_init_hres(cpu_base);
}

#ifdef CONFIG_HOTPLUG_CPU

static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
				 struct hrtimer_clock_base *new_base)
{
	struct hrtimer *timer;
	struct timerqueue_node *node;

	while ((node = timerqueue_getnext(&old_base->active))) {
		timer = container_of(node, struct hrtimer, node);
		BUG_ON(hrtimer_callback_running(timer));
		debug_deactivate(timer);

		/*
		 * Mark it as STATE_MIGRATE not INACTIVE, otherwise the
		 * timer could be seen as !active and just vanish away
		 * under us on another CPU.
		 */
		__remove_hrtimer(timer, old_base, HRTIMER_STATE_MIGRATE, 0);
		timer->base = new_base;
		/*
		 * Enqueue the timers on the new cpu. This does not
		 * reprogram the event device in case the timer
		 * expires before the earliest on this CPU, but we run
		 * hrtimer_interrupt after we migrated everything to
		 * sort out already expired timers and reprogram the
		 * event device.
		 */
		enqueue_hrtimer(timer, new_base);

		/* Clear the migration state bit */
		timer->state &= ~HRTIMER_STATE_MIGRATE;
	}
}

static void migrate_hrtimers(int scpu)
{
	struct hrtimer_cpu_base *old_base, *new_base;
	int i;

	BUG_ON(cpu_online(scpu));
	tick_cancel_sched_timer(scpu);

	local_irq_disable();
	old_base = &per_cpu(hrtimer_bases, scpu);
	new_base = this_cpu_ptr(&hrtimer_bases);
	/*
	 * The caller is globally serialized and nobody else
	 * takes two locks at once, so deadlock is not possible.
	 */
	raw_spin_lock(&new_base->lock);
	raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);

	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
		migrate_hrtimer_list(&old_base->clock_base[i],
				     &new_base->clock_base[i]);
	}

	raw_spin_unlock(&old_base->lock);
	raw_spin_unlock(&new_base->lock);

	/* Check if we got expired work to do */
	__hrtimer_peek_ahead_timers();
	local_irq_enable();
}

#endif /* CONFIG_HOTPLUG_CPU */

static int hrtimer_cpu_notify(struct notifier_block *self,
			      unsigned long action, void *hcpu)
{
	int scpu = (long)hcpu;

	switch (action) {

	case CPU_UP_PREPARE:
	case CPU_UP_PREPARE_FROZEN:
		init_hrtimers_cpu(scpu);
		break;

#ifdef CONFIG_HOTPLUG_CPU
	case CPU_DYING:
	case CPU_DYING_FROZEN:
		clockevents_notify(CLOCK_EVT_NOTIFY_CPU_DYING, &scpu);
		break;
	case CPU_DEAD:
	case CPU_DEAD_FROZEN:
	{
		clockevents_notify(CLOCK_EVT_NOTIFY_CPU_DEAD, &scpu);
		migrate_hrtimers(scpu);
		break;
	}
#endif

	default:
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block hrtimers_nb = {
	.notifier_call = hrtimer_cpu_notify,
};

void __init hrtimers_init(void)
{
	hrtimer_cpu_notify(&hrtimers_nb, (unsigned long)CPU_UP_PREPARE,
			   (void *)(long)smp_processor_id());
	register_cpu_notifier(&hrtimers_nb);
#ifdef CONFIG_HIGH_RES_TIMERS
	open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq);
#endif
}

/**
 * schedule_hrtimeout_range_clock - sleep until timeout
 * @expires:	timeout value (ktime_t)
 * @delta:	slack in expires timeout (ktime_t)
 * @mode:	timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL
 * @clock:	timer clock, CLOCK_MONOTONIC or CLOCK_REALTIME
 */
int __sched
schedule_hrtimeout_range_clock(ktime_t *expires, unsigned long delta,
			       const enum hrtimer_mode mode, int clock)
{
	struct hrtimer_sleeper t;

	/*
	 * Optimize when a zero timeout value is given. It does not
	 * matter whether this is an absolute or a relative time.
	 */
	if (expires && !expires->tv64) {
		__set_current_state(TASK_RUNNING);
		return 0;
	}

	/*
	 * A NULL parameter means "infinite"
	 */
	if (!expires) {
		schedule();
		return -EINTR;
	}

	hrtimer_init_on_stack(&t.timer, clock, mode);
	hrtimer_set_expires_range_ns(&t.timer, *expires, delta);

	hrtimer_init_sleeper(&t, current);

	hrtimer_start_expires(&t.timer, mode);
	if (!hrtimer_active(&t.timer))
		t.task = NULL;

	if (likely(t.task))
		schedule();

	hrtimer_cancel(&t.timer);
	destroy_hrtimer_on_stack(&t.timer);

	__set_current_state(TASK_RUNNING);

	return !t.task ? 0 : -EINTR;
}

/**
 * schedule_hrtimeout_range - sleep until timeout
 * @expires:	timeout value (ktime_t)
 * @delta:	slack in expires timeout (ktime_t)
 * @mode:	timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL
 *
 * Make the current task sleep until the given expiry time has
 * elapsed. The routine will return immediately unless
 * the current task state has been set (see set_current_state()).
 *
 * The @delta argument gives the kernel the freedom to schedule the
 * actual wakeup to a time that is both power and performance friendly.
 * The kernel gives the normal best effort behavior for "@expires+@delta",
 * and may decide to fire the timer earlier, but no earlier than @expires.
 *
 * You can set the task state as follows -
 *
 * %TASK_UNINTERRUPTIBLE - at least @timeout time is guaranteed to
 * pass before the routine returns.
 *
 * %TASK_INTERRUPTIBLE - the routine may return early if a signal is
 * delivered to the current task.
 *
 * The current task state is guaranteed to be TASK_RUNNING when this
 * routine returns.
 *
 * Returns 0 when the timer has expired, otherwise -EINTR.
 */
int __sched schedule_hrtimeout_range(ktime_t *expires, unsigned long delta,
				     const enum hrtimer_mode mode)
{
	return schedule_hrtimeout_range_clock(expires, delta, mode,
					      CLOCK_MONOTONIC);
}
EXPORT_SYMBOL_GPL(schedule_hrtimeout_range);
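
/*
 * Illustrative sketch (editor's addition, not part of hrtimer.c): the
 * task-state protocol the kernel-doc above requires. Without
 * set_current_state() the call returns immediately. The helper name and
 * the 50us slack value are arbitrary examples.
 */
static int example_wait(ktime_t *timeout)
{
	set_current_state(TASK_INTERRUPTIBLE);
	/* Sleeps until *timeout (+ up to 50us of slack), or a signal. */
	return schedule_hrtimeout_range(timeout, 50 * NSEC_PER_USEC,
					HRTIMER_MODE_REL);
}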

/**
 * schedule_hrtimeout - sleep until timeout
 * @expires:	timeout value (ktime_t)
 * @mode:	timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL
 *
 * Make the current task sleep until the given expiry time has
 * elapsed. The routine will return immediately unless
 * the current task state has been set (see set_current_state()).
 *
 * You can set the task state as follows -
 *
 * %TASK_UNINTERRUPTIBLE - at least @timeout time is guaranteed to
 * pass before the routine returns.
 *
 * %TASK_INTERRUPTIBLE - the routine may return early if a signal is
 * delivered to the current task.
 *
 * The current task state is guaranteed to be TASK_RUNNING when this
 * routine returns.
 *
 * Returns 0 when the timer has expired, otherwise -EINTR.
 */
int __sched schedule_hrtimeout(ktime_t *expires,
			       const enum hrtimer_mode mode)
{
	return schedule_hrtimeout_range(expires, 0, mode);
}
EXPORT_SYMBOL_GPL(schedule_hrtimeout);