Commit 226528c6100e4191842e61997110c8ace40605f7
Committed by: Dave Jones
1 parent: 00e299fff3
Exists in: master and in 7 other branches
[CPUFREQ] unexport (un)lock_policy_rwsem* functions
The lock_policy_rwsem_* and unlock_policy_rwsem_* functions were scheduled to be unexported in 2.6.33. Now that there are no callers of them outside cpufreq.c, unexport them and make them static.

Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
Showing 3 changed files with 3 additions and 22 deletions (inline diff view)
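For orientation before the diff: the cpufreq.c hunks are not fully reproduced below, so here is a minimal, illustrative sketch (simplified, not the verbatim kernel code) of what "unexport and make static" amounts to: the EXPORT_SYMBOL_GPL() lines go away and the helpers become local to cpufreq.c.

    #include <linux/percpu.h>
    #include <linux/rwsem.h>

    /* Illustrative only: the per-CPU policy rwsem, now private to cpufreq.c.
     * In the real file each semaphore is init_rwsem()'d during core init. */
    static DEFINE_PER_CPU(struct rw_semaphore, cpu_policy_rwsem);

    /* Previously exported with EXPORT_SYMBOL_GPL(); after this commit the
     * helpers are static, so no driver outside cpufreq.c can take the
     * policy rwsem directly. */
    static int lock_policy_rwsem_write(int cpu)
    {
            down_write(&per_cpu(cpu_policy_rwsem, cpu));
            return 0;
    }

    static void unlock_policy_rwsem_write(int cpu)
    {
            up_write(&per_cpu(cpu_policy_rwsem, cpu));
    }

Any out-of-tree caller of these helpers would now fail to build, which is why the corresponding feature-removal entry is deleted below.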
Documentation/feature-removal-schedule.txt
1 | The following is a list of files and features that are going to be | 1 | The following is a list of files and features that are going to be |
2 | removed in the kernel source tree. Every entry should contain what | 2 | removed in the kernel source tree. Every entry should contain what |
3 | exactly is going away, why it is happening, and who is going to be doing | 3 | exactly is going away, why it is happening, and who is going to be doing |
4 | the work. When the feature is removed from the kernel, it should also | 4 | the work. When the feature is removed from the kernel, it should also |
5 | be removed from this file. | 5 | be removed from this file. |
6 | 6 | ||
7 | --------------------------- | 7 | --------------------------- |
8 | 8 | ||
9 | What: PRISM54 | 9 | What: PRISM54 |
10 | When: 2.6.34 | 10 | When: 2.6.34 |
11 | 11 | ||
12 | Why: prism54 FullMAC PCI / Cardbus devices used to be supported only by the | 12 | Why: prism54 FullMAC PCI / Cardbus devices used to be supported only by the |
13 | prism54 wireless driver. After Intersil stopped selling these | 13 | prism54 wireless driver. After Intersil stopped selling these |
14 | devices in preference for the newer more flexible SoftMAC devices | 14 | devices in preference for the newer more flexible SoftMAC devices |
15 | a SoftMAC device driver was required and prism54 did not support | 15 | a SoftMAC device driver was required and prism54 did not support |
16 | them. The p54pci driver now exists and has been present in the kernel for | 16 | them. The p54pci driver now exists and has been present in the kernel for |
17 | a while. This driver supports both SoftMAC devices and FullMAC devices. | 17 | a while. This driver supports both SoftMAC devices and FullMAC devices. |
18 | The main difference between these devices was the amount of memory which | 18 | The main difference between these devices was the amount of memory which |
19 | could be used for the firmware. The SoftMAC devices support a smaller | 19 | could be used for the firmware. The SoftMAC devices support a smaller |
20 | amount of memory. Because of this the SoftMAC firmware fits into FullMAC | 20 | amount of memory. Because of this the SoftMAC firmware fits into FullMAC |
21 | devices' memory. p54pci supports not only PCI / Cardbus but also USB | 21 | devices' memory. p54pci supports not only PCI / Cardbus but also USB |
22 | and SPI. Since p54pci supports all devices prism54 supports, | 22 | and SPI. Since p54pci supports all devices prism54 supports, |
23 | you will have a conflict. I'm not quite sure how distributions are | 23 | you will have a conflict. I'm not quite sure how distributions are |
24 | handling this conflict right now. prism54 was kept around due to | 24 | handling this conflict right now. prism54 was kept around due to |
25 | claims users may experience issues when using the SoftMAC driver. | 25 | claims users may experience issues when using the SoftMAC driver. |
26 | Time has passed and users have not reported issues. If you use prism54 | 26 | Time has passed and users have not reported issues. If you use prism54 |
27 | and for whatever reason you cannot use p54pci please let us know! | 27 | and for whatever reason you cannot use p54pci please let us know! |
28 | E-mail us at: linux-wireless@vger.kernel.org | 28 | E-mail us at: linux-wireless@vger.kernel.org |
29 | 29 | ||
30 | For more information see the p54 wiki page: | 30 | For more information see the p54 wiki page: |
31 | 31 | ||
32 | http://wireless.kernel.org/en/users/Drivers/p54 | 32 | http://wireless.kernel.org/en/users/Drivers/p54 |
33 | 33 | ||
34 | Who: Luis R. Rodriguez <lrodriguez@atheros.com> | 34 | Who: Luis R. Rodriguez <lrodriguez@atheros.com> |
35 | 35 | ||
36 | --------------------------- | 36 | --------------------------- |
37 | 37 | ||
38 | What: IRQF_SAMPLE_RANDOM | 38 | What: IRQF_SAMPLE_RANDOM |
39 | Check: IRQF_SAMPLE_RANDOM | 39 | Check: IRQF_SAMPLE_RANDOM |
40 | When: July 2009 | 40 | When: July 2009 |
41 | 41 | ||
42 | Why: Many of the IRQF_SAMPLE_RANDOM users are technically bogus as entropy | 42 | Why: Many of the IRQF_SAMPLE_RANDOM users are technically bogus as entropy |
43 | sources in the kernel's current entropy model. To resolve this, every | 43 | sources in the kernel's current entropy model. To resolve this, every |
44 | input point to the kernel's entropy pool needs to better document the | 44 | input point to the kernel's entropy pool needs to better document the |
45 | type of entropy source it actually is. This will be replaced with | 45 | type of entropy source it actually is. This will be replaced with |
46 | additional add_*_randomness functions in drivers/char/random.c | 46 | additional add_*_randomness functions in drivers/char/random.c |
47 | 47 | ||
48 | Who: Robin Getz <rgetz@blackfin.uclinux.org> & Matt Mackall <mpm@selenic.com> | 48 | Who: Robin Getz <rgetz@blackfin.uclinux.org> & Matt Mackall <mpm@selenic.com> |
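As a hedged illustration of the direction this entry points in (the handler, device name and IRQF_SHARED flag are assumptions, not from this document), a driver simply stops passing IRQF_SAMPLE_RANDOM and leaves entropy accounting to explicit add_*_randomness hooks in drivers/char/random.c:

    #include <linux/interrupt.h>

    static irqreturn_t example_isr(int irq, void *dev_id)
    {
            /* handle the device interrupt */
            return IRQ_HANDLED;
    }

    static int example_setup_irq(unsigned int irq, void *dev)
    {
            /* No IRQF_SAMPLE_RANDOM here: the interrupt no longer claims to
             * be an entropy source; an explicit add_*_randomness call in
             * drivers/char/random.c would be used where truly appropriate. */
            return request_irq(irq, example_isr, IRQF_SHARED, "example-dev", dev);
    }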
49 | 49 | ||
50 | --------------------------- | 50 | --------------------------- |
51 | 51 | ||
52 | What: Deprecated snapshot ioctls | 52 | What: Deprecated snapshot ioctls |
53 | When: 2.6.36 | 53 | When: 2.6.36 |
54 | 54 | ||
55 | Why: The ioctls in kernel/power/user.c were marked as deprecated a long | 55 | Why: The ioctls in kernel/power/user.c were marked as deprecated a long |
56 | time ago. They now warn users that they need to update their | 56 | time ago. They now warn users that they need to update their |
57 | their userspace. After some more time, remove them completely. | 57 | their userspace. After some more time, remove them completely. |
58 | 58 | ||
59 | Who: Jiri Slaby <jirislaby@gmail.com> | 59 | Who: Jiri Slaby <jirislaby@gmail.com> |
60 | 60 | ||
61 | --------------------------- | 61 | --------------------------- |
62 | 62 | ||
63 | What: The ieee80211_regdom module parameter | 63 | What: The ieee80211_regdom module parameter |
64 | When: March 2010 / desktop catchup | 64 | When: March 2010 / desktop catchup |
65 | 65 | ||
66 | Why: This was inherited from the CONFIG_WIRELESS_OLD_REGULATORY code, | 66 | Why: This was inherited from the CONFIG_WIRELESS_OLD_REGULATORY code, |
67 | and currently serves as an option for users to define an | 67 | and currently serves as an option for users to define an |
68 | ISO / IEC 3166 alpha2 code for the country they are currently | 68 | ISO / IEC 3166 alpha2 code for the country they are currently |
69 | present in. Although there are userspace API replacements for this | 69 | present in. Although there are userspace API replacements for this |
70 | through nl80211, distributions haven't yet caught up with implementing | 70 | through nl80211, distributions haven't yet caught up with implementing |
71 | decent alternatives through standard GUIs. Although available as an | 71 | decent alternatives through standard GUIs. Although available as an |
72 | option through iw or wpa_supplicant, it's just a matter of time before | 72 | option through iw or wpa_supplicant, it's just a matter of time before |
73 | distributions pick up good GUI options for this. The ideal solution | 73 | distributions pick up good GUI options for this. The ideal solution |
74 | would actually consist of intelligent designs which would do this for | 74 | would actually consist of intelligent designs which would do this for |
75 | the user automatically even when travelling through different countries. | 75 | the user automatically even when travelling through different countries. |
76 | Until then we leave this module parameter as a compromise. | 76 | Until then we leave this module parameter as a compromise. |
77 | 77 | ||
78 | When userspace improves with reasonable widely-available alternatives for | 78 | When userspace improves with reasonable widely-available alternatives for |
79 | this we will no longer need this module parameter. This entry hopes that | 79 | this we will no longer need this module parameter. This entry hopes that |
80 | by the super-futuristically looking date of "March 2010" we will have | 80 | by the super-futuristically looking date of "March 2010" we will have |
81 | such replacements widely available. | 81 | such replacements widely available. |
82 | 82 | ||
83 | Who: Luis R. Rodriguez <lrodriguez@atheros.com> | 83 | Who: Luis R. Rodriguez <lrodriguez@atheros.com> |
84 | 84 | ||
85 | --------------------------- | 85 | --------------------------- |
86 | 86 | ||
87 | What: dev->power.power_state | 87 | What: dev->power.power_state |
88 | When: July 2007 | 88 | When: July 2007 |
89 | Why: Broken design for runtime control over driver power states, confusing | 89 | Why: Broken design for runtime control over driver power states, confusing |
90 | driver-internal runtime power management with: mechanisms to support | 90 | driver-internal runtime power management with: mechanisms to support |
91 | system-wide sleep state transitions; event codes that distinguish | 91 | system-wide sleep state transitions; event codes that distinguish |
92 | different phases of swsusp "sleep" transitions; and userspace policy | 92 | different phases of swsusp "sleep" transitions; and userspace policy |
93 | inputs. This framework was never widely used, and most attempts to | 93 | inputs. This framework was never widely used, and most attempts to |
94 | use it were broken. Drivers should instead be exposing domain-specific | 94 | use it were broken. Drivers should instead be exposing domain-specific |
95 | interfaces either to kernel or to userspace. | 95 | interfaces either to kernel or to userspace. |
96 | Who: Pavel Machek <pavel@suse.cz> | 96 | Who: Pavel Machek <pavel@suse.cz> |
97 | 97 | ||
98 | --------------------------- | 98 | --------------------------- |
99 | 99 | ||
100 | What: Video4Linux API 1 ioctls from Video devices. | 100 | What: Video4Linux API 1 ioctls from Video devices. |
101 | When: July 2009 | 101 | When: July 2009 |
102 | Files: include/linux/videodev.h | 102 | Files: include/linux/videodev.h |
103 | Check: include/linux/videodev.h | 103 | Check: include/linux/videodev.h |
104 | Why: The V4L1 API was replaced by the V4L2 API during migration from 2.4 to 2.6 | 104 | Why: The V4L1 API was replaced by the V4L2 API during migration from 2.4 to 2.6 |
105 | series. The old API has lots of drawbacks and doesn't provide enough | 105 | series. The old API has lots of drawbacks and doesn't provide enough |
106 | means to work with all video and audio standards. The newer API is | 106 | means to work with all video and audio standards. The newer API is |
107 | already available in the main drivers and should be used instead. | 107 | already available in the main drivers and should be used instead. |
108 | Newer drivers should use the v4l_compat_translate_ioctl function to handle | 108 | Newer drivers should use the v4l_compat_translate_ioctl function to handle |
109 | old calls, replacing them with newer ones. | 109 | old calls, replacing them with newer ones. |
110 | Decoder ioctls are used internally to allow video drivers to | 110 | Decoder ioctls are used internally to allow video drivers to |
111 | communicate with video decoders. This should also be improved to allow | 111 | communicate with video decoders. This should also be improved to allow |
112 | V4L2 calls to be translated into compatible internal ioctls. | 112 | V4L2 calls to be translated into compatible internal ioctls. |
113 | Compatibility ioctls will be provided, for a while, via | 113 | Compatibility ioctls will be provided, for a while, via |
114 | v4l1-compat module. | 114 | v4l1-compat module. |
115 | Who: Mauro Carvalho Chehab <mchehab@infradead.org> | 115 | Who: Mauro Carvalho Chehab <mchehab@infradead.org> |
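To make "use the newer API instead" concrete, here is a minimal, hedged V4L2 userspace sketch (the /dev/video0 path is an assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
            struct v4l2_capability cap;
            int fd = open("/dev/video0", O_RDWR);   /* assumed device node */

            if (fd < 0)
                    return 1;
            /* VIDIOC_QUERYCAP replaces the old V4L1 VIDIOCGCAP-style calls */
            if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
                    printf("driver: %s, card: %s\n", cap.driver, cap.card);
            return 0;
    }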
116 | 116 | ||
117 | --------------------------- | 117 | --------------------------- |
118 | 118 | ||
119 | What: PCMCIA control ioctl (needed for pcmcia-cs [cardmgr, cardctl]) | 119 | What: PCMCIA control ioctl (needed for pcmcia-cs [cardmgr, cardctl]) |
120 | When: 2.6.35/2.6.36 | 120 | When: 2.6.35/2.6.36 |
121 | Files: drivers/pcmcia/: pcmcia_ioctl.c | 121 | Files: drivers/pcmcia/: pcmcia_ioctl.c |
122 | Why: With the 16-bit PCMCIA subsystem now behaving (almost) like a | 122 | Why: With the 16-bit PCMCIA subsystem now behaving (almost) like a |
123 | normal hotpluggable bus, and with it using the default kernel | 123 | normal hotpluggable bus, and with it using the default kernel |
124 | infrastructure (hotplug, driver core, sysfs), keeping the PCMCIA | 124 | infrastructure (hotplug, driver core, sysfs), keeping the PCMCIA |
125 | control ioctl needed by cardmgr and cardctl from pcmcia-cs is | 125 | control ioctl needed by cardmgr and cardctl from pcmcia-cs is |
126 | unnecessary and potentially harmful (it does not provide for | 126 | unnecessary and potentially harmful (it does not provide for |
127 | proper locking), and makes further cleanups and integration of the | 127 | proper locking), and makes further cleanups and integration of the |
128 | PCMCIA subsystem into the Linux kernel device driver model more | 128 | PCMCIA subsystem into the Linux kernel device driver model more |
129 | difficult. The features provided by cardmgr and cardctl are either | 129 | difficult. The features provided by cardmgr and cardctl are either |
130 | handled by the kernel itself now or are available in the new | 130 | handled by the kernel itself now or are available in the new |
131 | pcmciautils package available at | 131 | pcmciautils package available at |
132 | http://kernel.org/pub/linux/utils/kernel/pcmcia/ | 132 | http://kernel.org/pub/linux/utils/kernel/pcmcia/ |
133 | 133 | ||
134 | For all architectures except ARM, the associated config symbol | 134 | For all architectures except ARM, the associated config symbol |
135 | has been removed from kernel 2.6.34; for ARM, it will likely | 135 | has been removed from kernel 2.6.34; for ARM, it will likely |
136 | be removed from kernel 2.6.35. The actual code will then likely | 136 | be removed from kernel 2.6.35. The actual code will then likely |
137 | be removed from kernel 2.6.36. | 137 | be removed from kernel 2.6.36. |
138 | Who: Dominik Brodowski <linux@dominikbrodowski.net> | 138 | Who: Dominik Brodowski <linux@dominikbrodowski.net> |
139 | 139 | ||
140 | --------------------------- | 140 | --------------------------- |
141 | 141 | ||
142 | What: sys_sysctl | 142 | What: sys_sysctl |
143 | When: September 2010 | 143 | When: September 2010 |
144 | Option: CONFIG_SYSCTL_SYSCALL | 144 | Option: CONFIG_SYSCTL_SYSCALL |
145 | Why: The same information is available in a more convenient form from | 145 | Why: The same information is available in a more convenient form from |
146 | /proc/sys, and none of the sysctl variables appear to be | 146 | /proc/sys, and none of the sysctl variables appear to be |
147 | important performance wise. | 147 | important performance wise. |
148 | 148 | ||
149 | Binary sysctls are a long standing source of subtle kernel | 149 | Binary sysctls are a long standing source of subtle kernel |
150 | bugs and security issues. | 150 | bugs and security issues. |
151 | 151 | ||
152 | When I looked several months ago all I could find after | 152 | When I looked several months ago all I could find after |
153 | searching several distributions were 5 user space programs and | 153 | searching several distributions were 5 user space programs and |
154 | glibc (which falls back to /proc/sys) using this syscall. | 154 | glibc (which falls back to /proc/sys) using this syscall. |
155 | 155 | ||
156 | The man page for sysctl(2) documents it as unusable for user | 156 | The man page for sysctl(2) documents it as unusable for user |
157 | space programs. | 157 | space programs. |
158 | 158 | ||
159 | sysctl(2) is not generally ABI compatible for a 32bit user | 159 | sysctl(2) is not generally ABI compatible for a 32bit user |
160 | space application across 64bit and 32bit kernels. | 160 | space application across 64bit and 32bit kernels. |
161 | 161 | ||
162 | For the last several months the policy has been no new binary | 162 | For the last several months the policy has been no new binary |
163 | sysctls and no one has put forward an argument to use them. | 163 | sysctls and no one has put forward an argument to use them. |
164 | 164 | ||
165 | Binary sysctl issues seem to keep appearing, so | 165 | Binary sysctl issues seem to keep appearing, so |
166 | properly deprecating them (with a warning to user space) and a | 166 | properly deprecating them (with a warning to user space) and a |
167 | 2 year grace period will mean we can eventually kill | 167 | 2 year grace period will mean we can eventually kill |
168 | them and end the pain. | 168 | them and end the pain. |
169 | 169 | ||
170 | In the mean time individual binary sysctls can be dealt with | 170 | In the mean time individual binary sysctls can be dealt with |
171 | in a piecewise fashion. | 171 | in a piecewise fashion. |
172 | 172 | ||
173 | Who: Eric Biederman <ebiederm@xmission.com> | 173 | Who: Eric Biederman <ebiederm@xmission.com> |
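A hedged userspace sketch of the recommended replacement; the particular file read here (kernel/hostname) is only an example:

    #include <stdio.h>

    int main(void)
    {
            char buf[256];
            /* read the value through /proc/sys instead of the sysctl(2) syscall */
            FILE *f = fopen("/proc/sys/kernel/hostname", "r");

            if (f && fgets(buf, sizeof(buf), f))
                    printf("hostname: %s", buf);
            if (f)
                    fclose(f);
            return 0;
    }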
174 | 174 | ||
175 | --------------------------- | 175 | --------------------------- |
176 | 176 | ||
177 | What: remove EXPORT_SYMBOL(kernel_thread) | 177 | What: remove EXPORT_SYMBOL(kernel_thread) |
178 | When: August 2006 | 178 | When: August 2006 |
179 | Files: arch/*/kernel/*_ksyms.c | 179 | Files: arch/*/kernel/*_ksyms.c |
180 | Check: kernel_thread | 180 | Check: kernel_thread |
181 | Why: kernel_thread is a low-level implementation detail. Drivers should | 181 | Why: kernel_thread is a low-level implementation detail. Drivers should |
182 | use the <linux/kthread.h> API instead which shields them from | 182 | use the <linux/kthread.h> API instead which shields them from |
183 | implementation details and provides a higher-level interface that | 183 | implementation details and provides a higher-level interface that |
184 | prevents bugs and code duplication. | 184 | prevents bugs and code duplication. |
185 | Who: Christoph Hellwig <hch@lst.de> | 185 | Who: Christoph Hellwig <hch@lst.de> |
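A minimal, hedged sketch of the <linux/kthread.h> style this entry recommends (the thread name and polling interval are arbitrary):

    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>

    static struct task_struct *example_task;

    static int example_thread_fn(void *data)
    {
            while (!kthread_should_stop())
                    schedule_timeout_interruptible(HZ);     /* periodic work here */
            return 0;
    }

    static int example_start(void)
    {
            example_task = kthread_run(example_thread_fn, NULL, "example/%d", 0);
            return IS_ERR(example_task) ? PTR_ERR(example_task) : 0;
    }

    static void example_stop(void)
    {
            kthread_stop(example_task);
    }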
186 | 186 | ||
187 | --------------------------- | 187 | --------------------------- |
188 | 188 | ||
189 | What: Unused EXPORT_SYMBOL/EXPORT_SYMBOL_GPL exports | 189 | What: Unused EXPORT_SYMBOL/EXPORT_SYMBOL_GPL exports |
190 | (temporary transition config option provided until then) | 190 | (temporary transition config option provided until then) |
191 | The transition config option will also be removed at the same time. | 191 | The transition config option will also be removed at the same time. |
192 | When: before 2.6.19 | 192 | When: before 2.6.19 |
193 | Why: Unused symbols are both increasing the size of the kernel binary | 193 | Why: Unused symbols are both increasing the size of the kernel binary |
194 | and are often a sign of "wrong API" | 194 | and are often a sign of "wrong API" |
195 | Who: Arjan van de Ven <arjan@linux.intel.com> | 195 | Who: Arjan van de Ven <arjan@linux.intel.com> |
196 | 196 | ||
197 | --------------------------- | 197 | --------------------------- |
198 | 198 | ||
199 | What: PHYSDEVPATH, PHYSDEVBUS, PHYSDEVDRIVER in the uevent environment | 199 | What: PHYSDEVPATH, PHYSDEVBUS, PHYSDEVDRIVER in the uevent environment |
200 | When: October 2008 | 200 | When: October 2008 |
201 | Why: The stacking of class devices makes these values misleading and | 201 | Why: The stacking of class devices makes these values misleading and |
202 | inconsistent. | 202 | inconsistent. |
203 | Class devices should not carry any of these properties, and bus | 203 | Class devices should not carry any of these properties, and bus |
204 | devices have SUBSYSTEM and DRIVER as a replacement. | 204 | devices have SUBSYSTEM and DRIVER as a replacement. |
205 | Who: Kay Sievers <kay.sievers@suse.de> | 205 | Who: Kay Sievers <kay.sievers@suse.de> |
206 | 206 | ||
207 | --------------------------- | 207 | --------------------------- |
208 | 208 | ||
209 | What: ACPI procfs interface | 209 | What: ACPI procfs interface |
210 | When: July 2008 | 210 | When: July 2008 |
211 | Why: ACPI sysfs conversion should be finished by January 2008. | 211 | Why: ACPI sysfs conversion should be finished by January 2008. |
212 | ACPI procfs interface will be removed in July 2008 so that | 212 | ACPI procfs interface will be removed in July 2008 so that |
213 | there is enough time for the user space to catch up. | 213 | there is enough time for the user space to catch up. |
214 | Who: Zhang Rui <rui.zhang@intel.com> | 214 | Who: Zhang Rui <rui.zhang@intel.com> |
215 | 215 | ||
216 | --------------------------- | 216 | --------------------------- |
217 | 217 | ||
218 | What: /proc/acpi/button | 218 | What: /proc/acpi/button |
219 | When: August 2007 | 219 | When: August 2007 |
220 | Why: /proc/acpi/button has been replaced by events to the input layer | 220 | Why: /proc/acpi/button has been replaced by events to the input layer |
221 | since 2.6.20. | 221 | since 2.6.20. |
222 | Who: Len Brown <len.brown@intel.com> | 222 | Who: Len Brown <len.brown@intel.com> |
223 | 223 | ||
224 | --------------------------- | 224 | --------------------------- |
225 | 225 | ||
226 | What: /proc/acpi/event | 226 | What: /proc/acpi/event |
227 | When: February 2008 | 227 | When: February 2008 |
228 | Why: /proc/acpi/event has been replaced by events via the input layer | 228 | Why: /proc/acpi/event has been replaced by events via the input layer |
229 | and netlink since 2.6.23. | 229 | and netlink since 2.6.23. |
230 | Who: Len Brown <len.brown@intel.com> | 230 | Who: Len Brown <len.brown@intel.com> |
231 | 231 | ||
232 | --------------------------- | 232 | --------------------------- |
233 | 233 | ||
234 | What: i386/x86_64 bzImage symlinks | 234 | What: i386/x86_64 bzImage symlinks |
235 | When: April 2010 | 235 | When: April 2010 |
236 | 236 | ||
237 | Why: The i386/x86_64 merge provides a symlink to the old bzImage | 237 | Why: The i386/x86_64 merge provides a symlink to the old bzImage |
238 | location so that not-yet-updated user space tools, e.g. package | 238 | location so that not-yet-updated user space tools, e.g. package |
239 | scripts, do not break. | 239 | scripts, do not break. |
240 | Who: Thomas Gleixner <tglx@linutronix.de> | 240 | Who: Thomas Gleixner <tglx@linutronix.de> |
241 | 241 | ||
242 | --------------------------- | 242 | --------------------------- |
243 | 243 | ||
244 | What: GPIO autorequest on gpio_direction_{input,output}() in gpiolib | 244 | What: GPIO autorequest on gpio_direction_{input,output}() in gpiolib |
245 | When: February 2010 | 245 | When: February 2010 |
246 | Why: All callers should use explicit gpio_request()/gpio_free(). | 246 | Why: All callers should use explicit gpio_request()/gpio_free(). |
247 | The autorequest mechanism in gpiolib was provided mostly as a | 247 | The autorequest mechanism in gpiolib was provided mostly as a |
248 | migration aid for legacy GPIO interfaces (for SOC based GPIOs). | 248 | migration aid for legacy GPIO interfaces (for SOC based GPIOs). |
249 | Those users have now largely migrated. Platforms implementing | 249 | Those users have now largely migrated. Platforms implementing |
250 | the GPIO interfaces without using gpiolib will see no changes. | 250 | the GPIO interfaces without using gpiolib will see no changes. |
251 | Who: David Brownell <dbrownell@users.sourceforge.net> | 251 | Who: David Brownell <dbrownell@users.sourceforge.net> |
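A short, hedged sketch of the explicit request/free pattern this entry asks for (the GPIO number and label are invented for illustration):

    #include <linux/gpio.h>

    #define EXAMPLE_GPIO    42      /* hypothetical GPIO number */

    static int example_gpio_init(void)
    {
            int ret;

            /* explicit request instead of relying on the gpiolib autorequest */
            ret = gpio_request(EXAMPLE_GPIO, "example-driver");
            if (ret)
                    return ret;

            ret = gpio_direction_output(EXAMPLE_GPIO, 0);
            if (ret)
                    gpio_free(EXAMPLE_GPIO);
            return ret;
    }

    static void example_gpio_exit(void)
    {
            gpio_free(EXAMPLE_GPIO);
    }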
252 | --------------------------- | 252 | --------------------------- |
253 | 253 | ||
254 | What: b43 support for firmware revision < 410 | 254 | What: b43 support for firmware revision < 410 |
255 | When: The schedule was July 2008, but it was decided that we are going to keep the | 255 | When: The schedule was July 2008, but it was decided that we are going to keep the |
256 | code as long as there are no major maintenance headaches. | 256 | code as long as there are no major maintenance headaches. |
257 | So it _could_ be removed _any_ time now, if it conflicts with something new. | 257 | So it _could_ be removed _any_ time now, if it conflicts with something new. |
258 | Why: The support code for the old firmware hurts code readability/maintainability | 258 | Why: The support code for the old firmware hurts code readability/maintainability |
259 | and slightly hurts runtime performance. Bugfixes for the old firmware | 259 | and slightly hurts runtime performance. Bugfixes for the old firmware |
260 | are not provided by Broadcom anymore. | 260 | are not provided by Broadcom anymore. |
261 | Who: Michael Buesch <mb@bu3sch.de> | 261 | Who: Michael Buesch <mb@bu3sch.de> |
262 | 262 | ||
263 | --------------------------- | 263 | --------------------------- |
264 | 264 | ||
265 | What: /sys/o2cb symlink | 265 | What: /sys/o2cb symlink |
266 | When: January 2010 | 266 | When: January 2010 |
267 | Why: /sys/fs/o2cb is the proper location for this information - /sys/o2cb | 267 | Why: /sys/fs/o2cb is the proper location for this information - /sys/o2cb |
268 | exists as a symlink for backwards compatibility for old versions of | 268 | exists as a symlink for backwards compatibility for old versions of |
269 | ocfs2-tools. 2 years should be sufficient time to phase in new versions | 269 | ocfs2-tools. 2 years should be sufficient time to phase in new versions |
270 | which know to look in /sys/fs/o2cb. | 270 | which know to look in /sys/fs/o2cb. |
271 | Who: ocfs2-devel@oss.oracle.com | 271 | Who: ocfs2-devel@oss.oracle.com |
272 | 272 | ||
273 | --------------------------- | 273 | --------------------------- |
274 | 274 | ||
275 | What: Ability for non-root users to shmget hugetlb pages based on mlock | 275 | What: Ability for non-root users to shmget hugetlb pages based on mlock |
276 | resource limits | 276 | resource limits |
277 | When: 2.6.31 | 277 | When: 2.6.31 |
278 | Why: Non-root users need to be part of /proc/sys/vm/hugetlb_shm_group or | 278 | Why: Non-root users need to be part of /proc/sys/vm/hugetlb_shm_group or |
279 | have CAP_IPC_LOCK to be able to allocate shm segments backed by | 279 | have CAP_IPC_LOCK to be able to allocate shm segments backed by |
280 | huge pages. The mlock based rlimit check to allow shm hugetlb is | 280 | huge pages. The mlock based rlimit check to allow shm hugetlb is |
281 | inconsistent with mmap based allocations. Hence it is being | 281 | inconsistent with mmap based allocations. Hence it is being |
282 | deprecated. | 282 | deprecated. |
283 | Who: Ravikiran Thirumalai <kiran@scalex86.org> | 283 | Who: Ravikiran Thirumalai <kiran@scalex86.org> |
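For context, a hedged userspace sketch of the kind of allocation being discussed; after the change the caller needs CAP_IPC_LOCK or membership in /proc/sys/vm/hugetlb_shm_group rather than only a generous mlock rlimit (segment size and permissions are arbitrary):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #ifndef SHM_HUGETLB
    #define SHM_HUGETLB 04000       /* value from <linux/shm.h>, in case libc headers lack it */
    #endif

    int main(void)
    {
            /* request a hugetlb-backed SysV shm segment (2 MB here) */
            int id = shmget(IPC_PRIVATE, 2 * 1024 * 1024,
                            SHM_HUGETLB | IPC_CREAT | 0600);

            if (id < 0) {
                    perror("shmget");
                    return 1;
            }
            shmctl(id, IPC_RMID, NULL);     /* clean up */
            return 0;
    }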
284 | 284 | ||
285 | --------------------------- | 285 | --------------------------- |
286 | 286 | ||
287 | What: CONFIG_THERMAL_HWMON | 287 | What: CONFIG_THERMAL_HWMON |
288 | When: January 2009 | 288 | When: January 2009 |
289 | Why: This option was introduced just to allow older lm-sensors userspace | 289 | Why: This option was introduced just to allow older lm-sensors userspace |
290 | to keep working over the upgrade to 2.6.26. At the scheduled time of | 290 | to keep working over the upgrade to 2.6.26. At the scheduled time of |
291 | removal fixed lm-sensors (2.x or 3.x) should be readily available. | 291 | removal fixed lm-sensors (2.x or 3.x) should be readily available. |
292 | Who: Rene Herman <rene.herman@gmail.com> | 292 | Who: Rene Herman <rene.herman@gmail.com> |
293 | 293 | ||
294 | --------------------------- | 294 | --------------------------- |
295 | 295 | ||
296 | What: Code that is now under CONFIG_WIRELESS_EXT_SYSFS | 296 | What: Code that is now under CONFIG_WIRELESS_EXT_SYSFS |
297 | (in net/core/net-sysfs.c) | 297 | (in net/core/net-sysfs.c) |
298 | When: After the only user (hal) has seen a release with the patches | 298 | When: After the only user (hal) has seen a release with the patches |
299 | for enough time, probably some time in 2010. | 299 | for enough time, probably some time in 2010. |
300 | Why: Over 1K .text/.data size reduction, data is available in other | 300 | Why: Over 1K .text/.data size reduction, data is available in other |
301 | ways (ioctls) | 301 | ways (ioctls) |
302 | Who: Johannes Berg <johannes@sipsolutions.net> | 302 | Who: Johannes Berg <johannes@sipsolutions.net> |
303 | 303 | ||
304 | --------------------------- | 304 | --------------------------- |
305 | 305 | ||
306 | What: CONFIG_NF_CT_ACCT | 306 | What: CONFIG_NF_CT_ACCT |
307 | When: 2.6.29 | 307 | When: 2.6.29 |
308 | Why: Accounting can now be enabled/disabled without kernel recompilation. | 308 | Why: Accounting can now be enabled/disabled without kernel recompilation. |
309 | Currently used only to set a default value for a feature that is also | 309 | Currently used only to set a default value for a feature that is also |
310 | controlled by a kernel/module/sysfs/sysctl parameter. | 310 | controlled by a kernel/module/sysfs/sysctl parameter. |
311 | Who: Krzysztof Piotr Oledzki <ole@ans.pl> | 311 | Who: Krzysztof Piotr Oledzki <ole@ans.pl> |
312 | 312 | ||
313 | --------------------------- | 313 | --------------------------- |
314 | 314 | ||
315 | What: sysfs ui for changing p4-clockmod parameters | 315 | What: sysfs ui for changing p4-clockmod parameters |
316 | When: September 2009 | 316 | When: September 2009 |
317 | Why: See commits 129f8ae9b1b5be94517da76009ea956e89104ce8 and | 317 | Why: See commits 129f8ae9b1b5be94517da76009ea956e89104ce8 and |
318 | e088e4c9cdb618675874becb91b2fd581ee707e6. | 318 | e088e4c9cdb618675874becb91b2fd581ee707e6. |
319 | Removal is subject to fixing any remaining bugs in ACPI which may | 319 | Removal is subject to fixing any remaining bugs in ACPI which may |
320 | cause the thermal throttling not to happen at the right time. | 320 | cause the thermal throttling not to happen at the right time. |
321 | Who: Dave Jones <davej@redhat.com>, Matthew Garrett <mjg@redhat.com> | 321 | Who: Dave Jones <davej@redhat.com>, Matthew Garrett <mjg@redhat.com> |
322 | 322 | ||
323 | ----------------------------- | 323 | ----------------------------- |
324 | 324 | ||
325 | What: __do_IRQ all in one fits nothing interrupt handler | 325 | What: __do_IRQ all in one fits nothing interrupt handler |
326 | When: 2.6.32 | 326 | When: 2.6.32 |
327 | Why: __do_IRQ was kept for easy migration to the type flow handlers. | 327 | Why: __do_IRQ was kept for easy migration to the type flow handlers. |
328 | More than two years of migration time is enough. | 328 | More than two years of migration time is enough. |
329 | Who: Thomas Gleixner <tglx@linutronix.de> | 329 | Who: Thomas Gleixner <tglx@linutronix.de> |
330 | 330 | ||
331 | ----------------------------- | 331 | ----------------------------- |
332 | 332 | ||
333 | What: fakephp and associated sysfs files in /sys/bus/pci/slots/ | 333 | What: fakephp and associated sysfs files in /sys/bus/pci/slots/ |
334 | When: 2011 | 334 | When: 2011 |
335 | Why: In 2.6.27, the semantics of /sys/bus/pci/slots was redefined to | 335 | Why: In 2.6.27, the semantics of /sys/bus/pci/slots was redefined to |
336 | represent a machine's physical PCI slots. The change in semantics | 336 | represent a machine's physical PCI slots. The change in semantics |
337 | had userspace implications, as the hotplug core no longer allowed | 337 | had userspace implications, as the hotplug core no longer allowed |
338 | drivers to create multiple sysfs files per physical slot (required | 338 | drivers to create multiple sysfs files per physical slot (required |
339 | for multi-function devices, e.g.). fakephp was seen as a developer's | 339 | for multi-function devices, e.g.). fakephp was seen as a developer's |
340 | tool only, and its interface changed. Too late, we learned that | 340 | tool only, and its interface changed. Too late, we learned that |
341 | there were some users of the fakephp interface. | 341 | there were some users of the fakephp interface. |
342 | 342 | ||
343 | In 2.6.30, the original fakephp interface was restored. At the same | 343 | In 2.6.30, the original fakephp interface was restored. At the same |
344 | time, the PCI core gained the ability that fakephp provided, namely | 344 | time, the PCI core gained the ability that fakephp provided, namely |
345 | function-level hot-remove and hot-add. | 345 | function-level hot-remove and hot-add. |
346 | 346 | ||
347 | Since the PCI core now provides the same functionality, exposed in: | 347 | Since the PCI core now provides the same functionality, exposed in: |
348 | 348 | ||
349 | /sys/bus/pci/rescan | 349 | /sys/bus/pci/rescan |
350 | /sys/bus/pci/devices/.../remove | 350 | /sys/bus/pci/devices/.../remove |
351 | /sys/bus/pci/devices/.../rescan | 351 | /sys/bus/pci/devices/.../rescan |
352 | 352 | ||
353 | there is no functional reason to maintain fakephp as well. | 353 | there is no functional reason to maintain fakephp as well. |
354 | 354 | ||
355 | We will keep the existing module so that 'modprobe fakephp' will | 355 | We will keep the existing module so that 'modprobe fakephp' will |
356 | present the old /sys/bus/pci/slots/... interface for compatibility, | 356 | present the old /sys/bus/pci/slots/... interface for compatibility, |
357 | but users are urged to migrate their applications to the API above. | 357 | but users are urged to migrate their applications to the API above. |
358 | 358 | ||
359 | After a reasonable transition period, we will remove the legacy | 359 | After a reasonable transition period, we will remove the legacy |
360 | fakephp interface. | 360 | fakephp interface. |
361 | Who: Alex Chiang <achiang@hp.com> | 361 | Who: Alex Chiang <achiang@hp.com> |
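A hedged userspace sketch of driving the replacement interface listed above (a bus rescan; the per-device remove/rescan attributes are written the same way, and any concrete device path would be system specific):

    #include <stdio.h>

    int main(void)
    {
            /* ask the PCI core to rescan the bus, as fakephp used to allow */
            FILE *f = fopen("/sys/bus/pci/rescan", "w");

            if (!f) {
                    perror("/sys/bus/pci/rescan");
                    return 1;
            }
            fputs("1", f);
            fclose(f);
            return 0;
    }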
362 | 362 | ||
363 | --------------------------- | 363 | --------------------------- |
364 | 364 | ||
365 | What: CONFIG_RFKILL_INPUT | 365 | What: CONFIG_RFKILL_INPUT |
366 | When: 2.6.33 | 366 | When: 2.6.33 |
367 | Why: Should be implemented in userspace, policy daemon. | 367 | Why: Should be implemented in userspace, policy daemon. |
368 | Who: Johannes Berg <johannes@sipsolutions.net> | 368 | Who: Johannes Berg <johannes@sipsolutions.net> |
369 | 369 | ||
370 | --------------------------- | 370 | --------------------------- |
371 | 371 | ||
372 | What: CONFIG_INOTIFY | 372 | What: CONFIG_INOTIFY |
373 | When: 2.6.33 | 373 | When: 2.6.33 |
374 | Why: last user (audit) will be converted to the newer more generic | 374 | Why: last user (audit) will be converted to the newer more generic |
375 | and more easily maintained fsnotify subsystem | 375 | and more easily maintained fsnotify subsystem |
376 | Who: Eric Paris <eparis@redhat.com> | 376 | Who: Eric Paris <eparis@redhat.com> |
377 | 377 | ||
378 | ---------------------------- | 378 | ---------------------------- |
379 | 379 | ||
380 | What: lock_policy_rwsem_* and unlock_policy_rwsem_* will not be | ||
381 | exported interface anymore. | ||
382 | When: 2.6.33 | ||
383 | Why: cpu_policy_rwsem has a new cleaner definition making it local to | ||
384 | cpufreq core and contained inside cpufreq.c. Other dependent | ||
385 | drivers should not use it in order to safely avoid lockdep issues. | ||
386 | Who: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> | ||
387 | |||
388 | ---------------------------- | ||
389 | |||
390 | What: sound-slot/service-* module aliases and related clutters in | 380 | What: sound-slot/service-* module aliases and related clutters in |
391 | sound/sound_core.c | 381 | sound/sound_core.c |
392 | When: August 2010 | 382 | When: August 2010 |
393 | Why: OSS sound_core grabs all legacy minors (0-255) of SOUND_MAJOR | 383 | Why: OSS sound_core grabs all legacy minors (0-255) of SOUND_MAJOR |
394 | (14) and requests modules using custom sound-slot/service-* | 384 | (14) and requests modules using custom sound-slot/service-* |
395 | module aliases. The only benefit of doing this is allowing | 385 | module aliases. The only benefit of doing this is allowing |
396 | use of custom module aliases which might as well be considered | 386 | use of custom module aliases which might as well be considered |
397 | a bug at this point. This preemptive claiming prevents | 387 | a bug at this point. This preemptive claiming prevents |
398 | alternative OSS implementations. | 388 | alternative OSS implementations. |
399 | 389 | ||
400 | Till the feature is removed, the kernel will be requesting | 390 | Till the feature is removed, the kernel will be requesting |
401 | both sound-slot/service-* and the standard char-major-* module | 391 | both sound-slot/service-* and the standard char-major-* module |
402 | aliases and will allow turning off the pre-claiming selectively via | 392 | aliases and will allow turning off the pre-claiming selectively via |
403 | CONFIG_SOUND_OSS_CORE_PRECLAIM and soundcore.preclaim_oss | 393 | CONFIG_SOUND_OSS_CORE_PRECLAIM and soundcore.preclaim_oss |
404 | kernel parameter. | 394 | kernel parameter. |
405 | 395 | ||
406 | After the transition phase is complete, both the custom module | 396 | After the transition phase is complete, both the custom module |
407 | aliases and switches to disable it will go away. This removal | 397 | aliases and switches to disable it will go away. This removal |
408 | will also allow making ALSA OSS emulation independent of | 398 | will also allow making ALSA OSS emulation independent of |
409 | sound_core. The dependency will be broken then too. | 399 | sound_core. The dependency will be broken then too. |
410 | Who: Tejun Heo <tj@kernel.org> | 400 | Who: Tejun Heo <tj@kernel.org> |
411 | 401 | ||
412 | ---------------------------- | 402 | ---------------------------- |
413 | 403 | ||
414 | What: Support for VMware's guest paravirtualization technique [VMI] will be | 404 | What: Support for VMware's guest paravirtualization technique [VMI] will be |
415 | dropped. | 405 | dropped. |
416 | When: 2.6.37 or earlier. | 406 | When: 2.6.37 or earlier. |
417 | Why: With the recent innovations in CPU hardware acceleration technologies | 407 | Why: With the recent innovations in CPU hardware acceleration technologies |
418 | from Intel and AMD, VMware ran a few experiments to compare these | 408 | from Intel and AMD, VMware ran a few experiments to compare these |
419 | techniques to guest paravirtualization technique on VMware's platform. | 409 | techniques to guest paravirtualization technique on VMware's platform. |
420 | These hardware assisted virtualization techniques have outperformed the | 410 | These hardware assisted virtualization techniques have outperformed the |
421 | performance benefits provided by VMI in most of the workloads. VMware | 411 | performance benefits provided by VMI in most of the workloads. VMware |
422 | expects that these hardware features will be ubiquitous in a couple of | 412 | expects that these hardware features will be ubiquitous in a couple of |
423 | years, as a result, VMware has started a phased retirement of this | 413 | years, as a result, VMware has started a phased retirement of this |
424 | feature from the hypervisor. We will be removing this feature from the | 414 | feature from the hypervisor. We will be removing this feature from the |
425 | Kernel too. Right now we are targeting 2.6.37 but can retire earlier if | 415 | Kernel too. Right now we are targeting 2.6.37 but can retire earlier if |
426 | technical reasons (read opportunity to remove major chunk of pvops) | 416 | technical reasons (read opportunity to remove major chunk of pvops) |
427 | arise. | 417 | arise. |
428 | 418 | ||
429 | Please note that VMI has always been an optimization and non-VMI kernels | 419 | Please note that VMI has always been an optimization and non-VMI kernels |
430 | still work fine on VMware's platform. | 420 | still work fine on VMware's platform. |
431 | The latest versions of VMware's products which support VMI are | 421 | The latest versions of VMware's products which support VMI are |
432 | Workstation 7.0 and vSphere 4.0 on the ESX side; future maintenance | 422 | Workstation 7.0 and vSphere 4.0 on the ESX side; future maintenance |
433 | releases for these products will continue supporting VMI. | 423 | releases for these products will continue supporting VMI. |
434 | 424 | ||
435 | For more details about VMI retirement take a look at this, | 425 | For more details about VMI retirement take a look at this, |
436 | http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html | 426 | http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html |
437 | 427 | ||
438 | Who: Alok N Kataria <akataria@vmware.com> | 428 | Who: Alok N Kataria <akataria@vmware.com> |
439 | 429 | ||
440 | ---------------------------- | 430 | ---------------------------- |
441 | 431 | ||
442 | What: Support for lcd_switch and display_get in asus-laptop driver | 432 | What: Support for lcd_switch and display_get in asus-laptop driver |
443 | When: March 2010 | 433 | When: March 2010 |
444 | Why: These two features use non-standard interfaces. They are the | 434 | Why: These two features use non-standard interfaces. They are the |
445 | only features that really need multiple paths to guess | 435 | only features that really need multiple paths to guess |
446 | the right method name on a specific laptop. | 436 | the right method name on a specific laptop. |
447 | 437 | ||
448 | Removing them will allow us to remove a lot of code and significantly | 438 | Removing them will allow us to remove a lot of code and significantly |
449 | clean up the drivers. | 439 | clean up the drivers. |
450 | 440 | ||
451 | This will affect the backlight code which won't be able to know | 441 | This will affect the backlight code which won't be able to know |
452 | if the backlight is on or off. The platform display file will also be | 442 | if the backlight is on or off. The platform display file will also be |
453 | write only (like the one in eeepc-laptop). | 443 | write only (like the one in eeepc-laptop). |
454 | 444 | ||
455 | This shouldn't affect a lot of users because they usually know | 445 | This shouldn't affect a lot of users because they usually know |
456 | when their display is on or off. | 446 | when their display is on or off. |
457 | 447 | ||
458 | Who: Corentin Chary <corentin.chary@gmail.com> | 448 | Who: Corentin Chary <corentin.chary@gmail.com> |
459 | 449 | ||
460 | ---------------------------- | 450 | ---------------------------- |
461 | 451 | ||
462 | What: usbvideo quickcam_messenger driver | 452 | What: usbvideo quickcam_messenger driver |
463 | When: 2.6.35 | 453 | When: 2.6.35 |
464 | Files: drivers/media/video/usbvideo/quickcam_messenger.[ch] | 454 | Files: drivers/media/video/usbvideo/quickcam_messenger.[ch] |
465 | Why: obsolete v4l1 driver replaced by gspca_stv06xx | 455 | Why: obsolete v4l1 driver replaced by gspca_stv06xx |
466 | Who: Hans de Goede <hdegoede@redhat.com> | 456 | Who: Hans de Goede <hdegoede@redhat.com> |
467 | 457 | ||
468 | ---------------------------- | 458 | ---------------------------- |
469 | 459 | ||
470 | What: ov511 v4l1 driver | 460 | What: ov511 v4l1 driver |
471 | When: 2.6.35 | 461 | When: 2.6.35 |
472 | Files: drivers/media/video/ov511.[ch] | 462 | Files: drivers/media/video/ov511.[ch] |
473 | Why: obsolete v4l1 driver replaced by gspca_ov519 | 463 | Why: obsolete v4l1 driver replaced by gspca_ov519 |
474 | Who: Hans de Goede <hdegoede@redhat.com> | 464 | Who: Hans de Goede <hdegoede@redhat.com> |
475 | 465 | ||
476 | ---------------------------- | 466 | ---------------------------- |
477 | 467 | ||
478 | What: w9968cf v4l1 driver | 468 | What: w9968cf v4l1 driver |
479 | When: 2.6.35 | 469 | When: 2.6.35 |
480 | Files: drivers/media/video/w9968cf*.[ch] | 470 | Files: drivers/media/video/w9968cf*.[ch] |
481 | Why: obsolete v4l1 driver replaced by gspca_ov519 | 471 | Why: obsolete v4l1 driver replaced by gspca_ov519 |
482 | Who: Hans de Goede <hdegoede@redhat.com> | 472 | Who: Hans de Goede <hdegoede@redhat.com> |
483 | 473 | ||
484 | ---------------------------- | 474 | ---------------------------- |
485 | 475 | ||
486 | What: ovcamchip sensor framework | 476 | What: ovcamchip sensor framework |
487 | When: 2.6.35 | 477 | When: 2.6.35 |
488 | Files: drivers/media/video/ovcamchip/* | 478 | Files: drivers/media/video/ovcamchip/* |
489 | Why: Only used by obsoleted v4l1 drivers | 479 | Why: Only used by obsoleted v4l1 drivers |
490 | Who: Hans de Goede <hdegoede@redhat.com> | 480 | Who: Hans de Goede <hdegoede@redhat.com> |
491 | 481 | ||
492 | ---------------------------- | 482 | ---------------------------- |
493 | 483 | ||
494 | What: stv680 v4l1 driver | 484 | What: stv680 v4l1 driver |
495 | When: 2.6.35 | 485 | When: 2.6.35 |
496 | Files: drivers/media/video/stv680.[ch] | 486 | Files: drivers/media/video/stv680.[ch] |
497 | Why: obsolete v4l1 driver replaced by gspca_stv0680 | 487 | Why: obsolete v4l1 driver replaced by gspca_stv0680 |
498 | Who: Hans de Goede <hdegoede@redhat.com> | 488 | Who: Hans de Goede <hdegoede@redhat.com> |
499 | 489 | ||
500 | ---------------------------- | 490 | ---------------------------- |
501 | 491 | ||
502 | What: zc0301 v4l driver | 492 | What: zc0301 v4l driver |
503 | When: 2.6.35 | 493 | When: 2.6.35 |
504 | Files: drivers/media/video/zc0301/* | 494 | Files: drivers/media/video/zc0301/* |
505 | Why: Duplicate functionality with the gspca_zc3xx driver, zc0301 only | 495 | Why: Duplicate functionality with the gspca_zc3xx driver, zc0301 only |
506 | supports 2 USB-ID's (because it only supports a limited set of | 496 | supports 2 USB-ID's (because it only supports a limited set of |
507 | sensors) which are also supported by the gspca_zc3xx driver | 497 | sensors) which are also supported by the gspca_zc3xx driver |
508 | (which supports 53 USB-ID's in total) | 498 | (which supports 53 USB-ID's in total) |
509 | Who: Hans de Goede <hdegoede@redhat.com> | 499 | Who: Hans de Goede <hdegoede@redhat.com> |
510 | 500 | ||
511 | ---------------------------- | 501 | ---------------------------- |
512 | 502 | ||
513 | What: sysfs-class-rfkill state file | 503 | What: sysfs-class-rfkill state file |
514 | When: Feb 2014 | 504 | When: Feb 2014 |
515 | Files: net/rfkill/core.c | 505 | Files: net/rfkill/core.c |
516 | Why: Documented as obsolete since Feb 2010. This file is limited to 3 | 506 | Why: Documented as obsolete since Feb 2010. This file is limited to 3 |
517 | states while the rfkill drivers can have 4 states. | 507 | states while the rfkill drivers can have 4 states. |
518 | Who: anybody or Florian Mickler <florian@mickler.org> | 508 | Who: anybody or Florian Mickler <florian@mickler.org> |
519 | 509 | ||
520 | ---------------------------- | 510 | ---------------------------- |
521 | 511 | ||
522 | What: sysfs-class-rfkill claim file | 512 | What: sysfs-class-rfkill claim file |
523 | When: Feb 2012 | 513 | When: Feb 2012 |
524 | Files: net/rfkill/core.c | 514 | Files: net/rfkill/core.c |
525 | Why: It has not been possible to claim an rfkill driver since 2007. This is | 515 | Why: It has not been possible to claim an rfkill driver since 2007. This is |
526 | documented as obsolete since Feb 2010. | 516 | documented as obsolete since Feb 2010. |
527 | Who: anybody or Florian Mickler <florian@mickler.org> | 517 | Who: anybody or Florian Mickler <florian@mickler.org> |
528 | 518 | ||
529 | ---------------------------- | 519 | ---------------------------- |
530 | 520 | ||
531 | What: capifs | 521 | What: capifs |
532 | When: February 2011 | 522 | When: February 2011 |
533 | Files: drivers/isdn/capi/capifs.* | 523 | Files: drivers/isdn/capi/capifs.* |
534 | Why: udev fully replaces this special file system that only contains CAPI | 524 | Why: udev fully replaces this special file system that only contains CAPI |
535 | NCCI TTY device nodes. User space (pppdcapiplugin) works without | 525 | NCCI TTY device nodes. User space (pppdcapiplugin) works without |
536 | noticing the difference. | 526 | noticing the difference. |
537 | Who: Jan Kiszka <jan.kiszka@web.de> | 527 | Who: Jan Kiszka <jan.kiszka@web.de> |
538 | 528 | ||
539 | ---------------------------- | 529 | ---------------------------- |
540 | 530 | ||
541 | What: KVM memory aliases support | 531 | What: KVM memory aliases support |
542 | When: July 2010 | 532 | When: July 2010 |
543 | Why: Memory aliasing support is used for speeding up guest vga access | 533 | Why: Memory aliasing support is used for speeding up guest vga access |
544 | through the vga windows. | 534 | through the vga windows. |
545 | 535 | ||
546 | Modern userspace no longer uses this feature, so it's just bitrotted | 536 | Modern userspace no longer uses this feature, so it's just bitrotted |
547 | code and can be removed with no impact. | 537 | code and can be removed with no impact. |
548 | Who: Avi Kivity <avi@redhat.com> | 538 | Who: Avi Kivity <avi@redhat.com> |
549 | 539 | ||
550 | ---------------------------- | 540 | ---------------------------- |
551 | 541 | ||
552 | What: xtime, wall_to_monotonic | 542 | What: xtime, wall_to_monotonic |
553 | When: 2.6.36+ | 543 | When: 2.6.36+ |
554 | Files: kernel/time/timekeeping.c include/linux/time.h | 544 | Files: kernel/time/timekeeping.c include/linux/time.h |
555 | Why: Cleaning up timekeeping internal values. Please use | 545 | Why: Cleaning up timekeeping internal values. Please use |
556 | existing timekeeping accessor functions to access | 546 | existing timekeeping accessor functions to access |
557 | the equivalent functionality. | 547 | the equivalent functionality. |
558 | Who: John Stultz <johnstul@us.ibm.com> | 548 | Who: John Stultz <johnstul@us.ibm.com> |
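A hedged kernel-side sketch of the accessor style this entry points to, instead of reading xtime or wall_to_monotonic directly (the particular accessors chosen are illustrative):

    #include <linux/kernel.h>
    #include <linux/time.h>

    static void example_timestamps(void)
    {
            struct timespec wall, mono;

            getnstimeofday(&wall);  /* wall-clock time, instead of reading xtime */
            ktime_get_ts(&mono);    /* monotonic time, instead of xtime + wall_to_monotonic */

            pr_info("wall %ld.%09ld mono %ld.%09ld\n",
                    (long)wall.tv_sec, wall.tv_nsec,
                    (long)mono.tv_sec, mono.tv_nsec);
    }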
559 | 549 | ||
560 | ---------------------------- | 550 | ---------------------------- |
561 | 551 | ||
562 | What: KVM kernel-allocated memory slots | 552 | What: KVM kernel-allocated memory slots |
563 | When: July 2010 | 553 | When: July 2010 |
564 | Why: Since 2.6.25, kvm supports user-allocated memory slots, which are | 554 | Why: Since 2.6.25, kvm supports user-allocated memory slots, which are |
565 | much more flexible than kernel-allocated slots. All current userspace | 555 | much more flexible than kernel-allocated slots. All current userspace |
566 | supports the newer interface and this code can be removed with no | 556 | supports the newer interface and this code can be removed with no |
567 | impact. | 557 | impact. |
568 | Who: Avi Kivity <avi@redhat.com> | 558 | Who: Avi Kivity <avi@redhat.com> |
569 | 559 | ||
570 | ---------------------------- | 560 | ---------------------------- |
571 | 561 | ||
572 | What: KVM paravirt mmu host support | 562 | What: KVM paravirt mmu host support |
573 | When: January 2011 | 563 | When: January 2011 |
574 | Why: The paravirt mmu host support is slower than non-paravirt mmu, both | 564 | Why: The paravirt mmu host support is slower than non-paravirt mmu, both |
575 | on newer and older hardware. It is already not exposed to the guest, | 565 | on newer and older hardware. It is already not exposed to the guest, |
576 | and kept only for live migration purposes. | 566 | and kept only for live migration purposes. |
577 | Who: Avi Kivity <avi@redhat.com> | 567 | Who: Avi Kivity <avi@redhat.com> |
578 | 568 | ||
579 | ---------------------------- | 569 | ---------------------------- |
580 | 570 | ||
581 | What: iwlwifi 50XX module parameters | 571 | What: iwlwifi 50XX module parameters |
582 | When: 2.6.40 | 572 | When: 2.6.40 |
583 | Why: The "..50" module parameters were used to configure 5000 series and | 573 | Why: The "..50" module parameters were used to configure 5000 series and |
584 | up devices; a different set of module parameters with the same | 574 | up devices; a different set of module parameters with the same |
585 | functionality is also available for 4965. Consolidate both sets into a | 575 | functionality is also available for 4965. Consolidate both sets into a |
586 | single place in drivers/net/wireless/iwlwifi/iwl-agn.c | 576 | single place in drivers/net/wireless/iwlwifi/iwl-agn.c |
587 | 577 | ||
588 | Who: Wey-Yi Guy <wey-yi.w.guy@intel.com> | 578 | Who: Wey-Yi Guy <wey-yi.w.guy@intel.com> |
589 | 579 | ||
590 | ---------------------------- | 580 | ---------------------------- |
591 | 581 | ||
592 | What: iwl4965 alias support | 582 | What: iwl4965 alias support |
593 | When: 2.6.40 | 583 | When: 2.6.40 |
594 | Why: Internal alias support has been present in module-init-tools for some | 584 | Why: Internal alias support has been present in module-init-tools for some |
595 | time, so the MODULE_ALIAS("iwl4965") boilerplate aliases can be removed | 585 | time, so the MODULE_ALIAS("iwl4965") boilerplate aliases can be removed |
596 | with no impact. | 586 | with no impact. |
597 | 587 | ||
598 | Who: Wey-Yi Guy <wey-yi.w.guy@intel.com> | 588 | Who: Wey-Yi Guy <wey-yi.w.guy@intel.com> |
599 | 589 | ||
600 | --------------------------- | 590 | --------------------------- |
601 | 591 | ||
602 | What: xt_NOTRACK | 592 | What: xt_NOTRACK |
603 | Files: net/netfilter/xt_NOTRACK.c | 593 | Files: net/netfilter/xt_NOTRACK.c |
604 | When: April 2011 | 594 | When: April 2011 |
605 | Why: Superseded by xt_CT | 595 | Why: Superseded by xt_CT |
606 | Who: Netfilter developer team <netfilter-devel@vger.kernel.org> | 596 | Who: Netfilter developer team <netfilter-devel@vger.kernel.org> |
607 | 597 | ||
608 | --------------------------- | 598 | --------------------------- |
609 | 599 | ||
610 | What: video4linux /dev/vtx teletext API support | 600 | What: video4linux /dev/vtx teletext API support |
611 | When: 2.6.35 | 601 | When: 2.6.35 |
612 | Files: drivers/media/video/saa5246a.c drivers/media/video/saa5249.c | 602 | Files: drivers/media/video/saa5246a.c drivers/media/video/saa5249.c |
613 | include/linux/videotext.h | 603 | include/linux/videotext.h |
614 | Why: The vtx device nodes have been superseded by vbi device nodes | 604 | Why: The vtx device nodes have been superseded by vbi device nodes |
615 | for many years. No applications exist that use the vtx support. | 605 | for many years. No applications exist that use the vtx support. |
616 | Of the two i2c drivers that actually support this API, the saa5249 | 606 | Of the two i2c drivers that actually support this API, the saa5249 |
617 | has been impossible to use for a year now, and no known hardware | 607 | has been impossible to use for a year now, and no known hardware |
618 | that supports this device exists. The saa5246a is theoretically | 608 | that supports this device exists. The saa5246a is theoretically |
619 | supported by the old mxb boards, but it never actually worked. | 609 | supported by the old mxb boards, but it never actually worked. |
620 | 610 | ||
621 | In summary: there is no hardware that can use this API and there | 611 | In summary: there is no hardware that can use this API and there |
622 | are no applications actually implementing this API. | 612 | are no applications actually implementing this API. |
623 | 613 | ||
624 | The vtx support still reserves minors 192-223 and we would really | 614 | The vtx support still reserves minors 192-223 and we would really |
625 | like to reuse those for upcoming new functionality. In the unlikely | 615 | like to reuse those for upcoming new functionality. In the unlikely |
626 | event that new hardware appears that wants to use the functionality | 616 | event that new hardware appears that wants to use the functionality |
627 | provided by the vtx API, then that functionality should be built | 617 | provided by the vtx API, then that functionality should be built |
628 | around the sliced VBI API instead. | 618 | around the sliced VBI API instead. |
629 | Who: Hans Verkuil <hverkuil@xs4all.nl> | 619 | Who: Hans Verkuil <hverkuil@xs4all.nl> |
630 | 620 | ||
631 | ---------------------------- | 621 | ---------------------------- |
632 | 622 | ||
633 | What: IRQF_DISABLED | 623 | What: IRQF_DISABLED |
634 | When: 2.6.36 | 624 | When: 2.6.36 |
635 | Why: The flag is a NOOP as we run interrupt handlers with interrupts disabled | 625 | Why: The flag is a NOOP as we run interrupt handlers with interrupts disabled |
636 | Who: Thomas Gleixner <tglx@linutronix.de> | 626 | Who: Thomas Gleixner <tglx@linutronix.de> |
637 | 627 | ||
638 | ---------------------------- | 628 | ---------------------------- |
639 | 629 | ||
640 | What: old ieee1394 subsystem (CONFIG_IEEE1394) | 630 | What: old ieee1394 subsystem (CONFIG_IEEE1394) |
641 | When: 2.6.37 | 631 | When: 2.6.37 |
642 | Files: drivers/ieee1394/ except init_ohci1394_dma.c | 632 | Files: drivers/ieee1394/ except init_ohci1394_dma.c |
643 | Why: superseded by drivers/firewire/ (CONFIG_FIREWIRE) which offers more | 633 | Why: superseded by drivers/firewire/ (CONFIG_FIREWIRE) which offers more |
644 | features, better performance, and better security, all with a smaller | 634 | features, better performance, and better security, all with a smaller |
645 | and more modern code base | 635 | and more modern code base |
646 | Who: Stefan Richter <stefanr@s5r6.in-berlin.de> | 636 | Who: Stefan Richter <stefanr@s5r6.in-berlin.de> |
647 | 637 | ||
648 | ---------------------------- | 638 | ---------------------------- |
649 | 639 | ||
650 | What: The acpi_sleep=s4_nonvs command line option | 640 | What: The acpi_sleep=s4_nonvs command line option |
651 | When: 2.6.37 | 641 | When: 2.6.37 |
652 | Files: arch/x86/kernel/acpi/sleep.c | 642 | Files: arch/x86/kernel/acpi/sleep.c |
653 | Why: superseded by acpi_sleep=nonvs | 643 | Why: superseded by acpi_sleep=nonvs |
654 | Who: Rafael J. Wysocki <rjw@sisk.pl> | 644 | Who: Rafael J. Wysocki <rjw@sisk.pl> |
655 | 645 | ||
656 | ---------------------------- | 646 | ---------------------------- |
657 | 647 |
drivers/cpufreq/cpufreq.c
1 | /* | 1 | /* |
2 | * linux/drivers/cpufreq/cpufreq.c | 2 | * linux/drivers/cpufreq/cpufreq.c |
3 | * | 3 | * |
4 | * Copyright (C) 2001 Russell King | 4 | * Copyright (C) 2001 Russell King |
5 | * (C) 2002 - 2003 Dominik Brodowski <linux@brodo.de> | 5 | * (C) 2002 - 2003 Dominik Brodowski <linux@brodo.de> |
6 | * | 6 | * |
7 | * Oct 2005 - Ashok Raj <ashok.raj@intel.com> | 7 | * Oct 2005 - Ashok Raj <ashok.raj@intel.com> |
8 | * Added handling for CPU hotplug | 8 | * Added handling for CPU hotplug |
9 | * Feb 2006 - Jacob Shin <jacob.shin@amd.com> | 9 | * Feb 2006 - Jacob Shin <jacob.shin@amd.com> |
10 | * Fix handling for CPU hotplug -- affected CPUs | 10 | * Fix handling for CPU hotplug -- affected CPUs |
11 | * | 11 | * |
12 | * This program is free software; you can redistribute it and/or modify | 12 | * This program is free software; you can redistribute it and/or modify |
13 | * it under the terms of the GNU General Public License version 2 as | 13 | * it under the terms of the GNU General Public License version 2 as |
14 | * published by the Free Software Foundation. | 14 | * published by the Free Software Foundation. |
15 | * | 15 | * |
16 | */ | 16 | */ |
17 | 17 | ||
18 | #include <linux/kernel.h> | 18 | #include <linux/kernel.h> |
19 | #include <linux/module.h> | 19 | #include <linux/module.h> |
20 | #include <linux/init.h> | 20 | #include <linux/init.h> |
21 | #include <linux/notifier.h> | 21 | #include <linux/notifier.h> |
22 | #include <linux/cpufreq.h> | 22 | #include <linux/cpufreq.h> |
23 | #include <linux/delay.h> | 23 | #include <linux/delay.h> |
24 | #include <linux/interrupt.h> | 24 | #include <linux/interrupt.h> |
25 | #include <linux/spinlock.h> | 25 | #include <linux/spinlock.h> |
26 | #include <linux/device.h> | 26 | #include <linux/device.h> |
27 | #include <linux/slab.h> | 27 | #include <linux/slab.h> |
28 | #include <linux/cpu.h> | 28 | #include <linux/cpu.h> |
29 | #include <linux/completion.h> | 29 | #include <linux/completion.h> |
30 | #include <linux/mutex.h> | 30 | #include <linux/mutex.h> |
31 | 31 | ||
32 | #define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_CORE, \ | 32 | #define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_CORE, \ |
33 | "cpufreq-core", msg) | 33 | "cpufreq-core", msg) |
34 | 34 | ||
35 | /** | 35 | /** |
36 | * The "cpufreq driver" - the arch- or hardware-dependent low | 36 | * The "cpufreq driver" - the arch- or hardware-dependent low |
37 | * level driver of CPUFreq support, and its spinlock. This lock | 37 | * level driver of CPUFreq support, and its spinlock. This lock |
38 | * also protects the cpufreq_cpu_data array. | 38 | * also protects the cpufreq_cpu_data array. |
39 | */ | 39 | */ |
40 | static struct cpufreq_driver *cpufreq_driver; | 40 | static struct cpufreq_driver *cpufreq_driver; |
41 | static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); | 41 | static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); |
42 | #ifdef CONFIG_HOTPLUG_CPU | 42 | #ifdef CONFIG_HOTPLUG_CPU |
43 | /* This one keeps track of the previously set governor of a removed CPU */ | 43 | /* This one keeps track of the previously set governor of a removed CPU */ |
44 | static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor); | 44 | static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor); |
45 | #endif | 45 | #endif |
46 | static DEFINE_SPINLOCK(cpufreq_driver_lock); | 46 | static DEFINE_SPINLOCK(cpufreq_driver_lock); |
47 | 47 | ||
48 | /* | 48 | /* |
49 | * cpu_policy_rwsem is a per CPU reader-writer semaphore designed to cure | 49 | * cpu_policy_rwsem is a per CPU reader-writer semaphore designed to cure |
50 | * all cpufreq/hotplug/workqueue/etc related lock issues. | 50 | * all cpufreq/hotplug/workqueue/etc related lock issues. |
51 | * | 51 | * |
52 | * The rules for this semaphore: | 52 | * The rules for this semaphore: |
53 | * - Any routine that wants to read from the policy structure will | 53 | * - Any routine that wants to read from the policy structure will |
54 | * do a down_read on this semaphore. | 54 | * do a down_read on this semaphore. |
55 | * - Any routine that will write to the policy structure and/or may take away | 55 | * - Any routine that will write to the policy structure and/or may take away |
56 | * the policy altogether (eg. CPU hotplug), will hold this lock in write | 56 | * the policy altogether (eg. CPU hotplug), will hold this lock in write |
57 | * mode before doing so. | 57 | * mode before doing so. |
58 | * | 58 | * |
59 | * Additional rules: | 59 | * Additional rules: |
60 | * - All holders of the lock should check to make sure that the CPU they | 60 | * - All holders of the lock should check to make sure that the CPU they |
61 | * are concerned with is online after they get the lock. | 61 | * are concerned with is online after they get the lock. |
62 | * - Governor routines that can be called in cpufreq hotplug path should not | 62 | * - Governor routines that can be called in cpufreq hotplug path should not |
63 | * take this sem as top level hotplug notifier handler takes this. | 63 | * take this sem as top level hotplug notifier handler takes this. |
64 | * - Lock should not be held across | 64 | * - Lock should not be held across |
65 | * __cpufreq_governor(data, CPUFREQ_GOV_STOP); | 65 | * __cpufreq_governor(data, CPUFREQ_GOV_STOP); |
66 | */ | 66 | */ |
67 | static DEFINE_PER_CPU(int, cpufreq_policy_cpu); | 67 | static DEFINE_PER_CPU(int, cpufreq_policy_cpu); |
68 | static DEFINE_PER_CPU(struct rw_semaphore, cpu_policy_rwsem); | 68 | static DEFINE_PER_CPU(struct rw_semaphore, cpu_policy_rwsem); |
69 | 69 | ||
70 | #define lock_policy_rwsem(mode, cpu) \ | 70 | #define lock_policy_rwsem(mode, cpu) \ |
71 | int lock_policy_rwsem_##mode \ | 71 | static int lock_policy_rwsem_##mode \ |
72 | (int cpu) \ | 72 | (int cpu) \ |
73 | { \ | 73 | { \ |
74 | int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu); \ | 74 | int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu); \ |
75 | BUG_ON(policy_cpu == -1); \ | 75 | BUG_ON(policy_cpu == -1); \ |
76 | down_##mode(&per_cpu(cpu_policy_rwsem, policy_cpu)); \ | 76 | down_##mode(&per_cpu(cpu_policy_rwsem, policy_cpu)); \ |
77 | if (unlikely(!cpu_online(cpu))) { \ | 77 | if (unlikely(!cpu_online(cpu))) { \ |
78 | up_##mode(&per_cpu(cpu_policy_rwsem, policy_cpu)); \ | 78 | up_##mode(&per_cpu(cpu_policy_rwsem, policy_cpu)); \ |
79 | return -1; \ | 79 | return -1; \ |
80 | } \ | 80 | } \ |
81 | \ | 81 | \ |
82 | return 0; \ | 82 | return 0; \ |
83 | } | 83 | } |
84 | 84 | ||
85 | lock_policy_rwsem(read, cpu); | 85 | lock_policy_rwsem(read, cpu); |
86 | EXPORT_SYMBOL_GPL(lock_policy_rwsem_read); | ||
87 | 86 | ||
88 | lock_policy_rwsem(write, cpu); | 87 | lock_policy_rwsem(write, cpu); |
89 | EXPORT_SYMBOL_GPL(lock_policy_rwsem_write); | ||
90 | 88 | ||
91 | void unlock_policy_rwsem_read(int cpu) | 89 | static void unlock_policy_rwsem_read(int cpu) |
92 | { | 90 | { |
93 | int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu); | 91 | int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu); |
94 | BUG_ON(policy_cpu == -1); | 92 | BUG_ON(policy_cpu == -1); |
95 | up_read(&per_cpu(cpu_policy_rwsem, policy_cpu)); | 93 | up_read(&per_cpu(cpu_policy_rwsem, policy_cpu)); |
96 | } | 94 | } |
97 | EXPORT_SYMBOL_GPL(unlock_policy_rwsem_read); | ||
98 | 95 | ||
99 | void unlock_policy_rwsem_write(int cpu) | 96 | static void unlock_policy_rwsem_write(int cpu) |
100 | { | 97 | { |
101 | int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu); | 98 | int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu); |
102 | BUG_ON(policy_cpu == -1); | 99 | BUG_ON(policy_cpu == -1); |
103 | up_write(&per_cpu(cpu_policy_rwsem, policy_cpu)); | 100 | up_write(&per_cpu(cpu_policy_rwsem, policy_cpu)); |
104 | } | 101 | } |
105 | EXPORT_SYMBOL_GPL(unlock_policy_rwsem_write); | ||
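For reference, after this change the two lock_policy_rwsem() invocations above expand to file-local helpers; the read variant, written out (a sketch of the macro expansion, not extra code carried by the patch), is roughly:

        static int lock_policy_rwsem_read(int cpu)
        {
                int policy_cpu = per_cpu(cpufreq_policy_cpu, cpu);

                BUG_ON(policy_cpu == -1);
                down_read(&per_cpu(cpu_policy_rwsem, policy_cpu));
                if (unlikely(!cpu_online(cpu))) {
                        /* CPU went offline after the caller looked it up */
                        up_read(&per_cpu(cpu_policy_rwsem, policy_cpu));
                        return -1;
                }

                return 0;
        }

The write variant is identical except that it uses down_write()/up_write().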
106 | 102 | ||
107 | 103 | ||
108 | /* internal prototypes */ | 104 | /* internal prototypes */ |
109 | static int __cpufreq_governor(struct cpufreq_policy *policy, | 105 | static int __cpufreq_governor(struct cpufreq_policy *policy, |
110 | unsigned int event); | 106 | unsigned int event); |
111 | static unsigned int __cpufreq_get(unsigned int cpu); | 107 | static unsigned int __cpufreq_get(unsigned int cpu); |
112 | static void handle_update(struct work_struct *work); | 108 | static void handle_update(struct work_struct *work); |
113 | 109 | ||
114 | /** | 110 | /** |
115 | * Two notifier lists: the "policy" list is involved in the | 111 | * Two notifier lists: the "policy" list is involved in the |
116 | * validation process for a new CPU frequency policy; the | 112 | * validation process for a new CPU frequency policy; the |
117 | * "transition" list for kernel code that needs to handle | 113 | * "transition" list for kernel code that needs to handle |
118 | * changes to devices when the CPU clock speed changes. | 114 | * changes to devices when the CPU clock speed changes. |
119 | * The mutex locks both lists. | 115 | * The mutex locks both lists. |
120 | */ | 116 | */ |
121 | static BLOCKING_NOTIFIER_HEAD(cpufreq_policy_notifier_list); | 117 | static BLOCKING_NOTIFIER_HEAD(cpufreq_policy_notifier_list); |
122 | static struct srcu_notifier_head cpufreq_transition_notifier_list; | 118 | static struct srcu_notifier_head cpufreq_transition_notifier_list; |
123 | 119 | ||
124 | static bool init_cpufreq_transition_notifier_list_called; | 120 | static bool init_cpufreq_transition_notifier_list_called; |
125 | static int __init init_cpufreq_transition_notifier_list(void) | 121 | static int __init init_cpufreq_transition_notifier_list(void) |
126 | { | 122 | { |
127 | srcu_init_notifier_head(&cpufreq_transition_notifier_list); | 123 | srcu_init_notifier_head(&cpufreq_transition_notifier_list); |
128 | init_cpufreq_transition_notifier_list_called = true; | 124 | init_cpufreq_transition_notifier_list_called = true; |
129 | return 0; | 125 | return 0; |
130 | } | 126 | } |
131 | pure_initcall(init_cpufreq_transition_notifier_list); | 127 | pure_initcall(init_cpufreq_transition_notifier_list); |
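Other kernel code attaches to these lists through cpufreq_register_notifier() from <linux/cpufreq.h>. A minimal transition-notifier sketch (the callback and notifier_block names here are made up for illustration):

        #include <linux/cpufreq.h>
        #include <linux/notifier.h>

        static int example_transition_cb(struct notifier_block *nb,
                                         unsigned long state, void *data)
        {
                struct cpufreq_freqs *freqs = data;

                if (state == CPUFREQ_POSTCHANGE)
                        pr_debug("cpu%u now runs at %u kHz\n",
                                 freqs->cpu, freqs->new);
                return NOTIFY_OK;
        }

        static struct notifier_block example_transition_nb = {
                .notifier_call = example_transition_cb,
        };

        /* elsewhere, typically at init time:
         *      cpufreq_register_notifier(&example_transition_nb,
         *                                CPUFREQ_TRANSITION_NOTIFIER);
         */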
132 | 128 | ||
133 | static LIST_HEAD(cpufreq_governor_list); | 129 | static LIST_HEAD(cpufreq_governor_list); |
134 | static DEFINE_MUTEX(cpufreq_governor_mutex); | 130 | static DEFINE_MUTEX(cpufreq_governor_mutex); |
135 | 131 | ||
136 | struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu) | 132 | struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu) |
137 | { | 133 | { |
138 | struct cpufreq_policy *data; | 134 | struct cpufreq_policy *data; |
139 | unsigned long flags; | 135 | unsigned long flags; |
140 | 136 | ||
141 | if (cpu >= nr_cpu_ids) | 137 | if (cpu >= nr_cpu_ids) |
142 | goto err_out; | 138 | goto err_out; |
143 | 139 | ||
144 | /* get the cpufreq driver */ | 140 | /* get the cpufreq driver */ |
145 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 141 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
146 | 142 | ||
147 | if (!cpufreq_driver) | 143 | if (!cpufreq_driver) |
148 | goto err_out_unlock; | 144 | goto err_out_unlock; |
149 | 145 | ||
150 | if (!try_module_get(cpufreq_driver->owner)) | 146 | if (!try_module_get(cpufreq_driver->owner)) |
151 | goto err_out_unlock; | 147 | goto err_out_unlock; |
152 | 148 | ||
153 | 149 | ||
154 | /* get the CPU */ | 150 | /* get the CPU */ |
155 | data = per_cpu(cpufreq_cpu_data, cpu); | 151 | data = per_cpu(cpufreq_cpu_data, cpu); |
156 | 152 | ||
157 | if (!data) | 153 | if (!data) |
158 | goto err_out_put_module; | 154 | goto err_out_put_module; |
159 | 155 | ||
160 | if (!kobject_get(&data->kobj)) | 156 | if (!kobject_get(&data->kobj)) |
161 | goto err_out_put_module; | 157 | goto err_out_put_module; |
162 | 158 | ||
163 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 159 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
164 | return data; | 160 | return data; |
165 | 161 | ||
166 | err_out_put_module: | 162 | err_out_put_module: |
167 | module_put(cpufreq_driver->owner); | 163 | module_put(cpufreq_driver->owner); |
168 | err_out_unlock: | 164 | err_out_unlock: |
169 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 165 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
170 | err_out: | 166 | err_out: |
171 | return NULL; | 167 | return NULL; |
172 | } | 168 | } |
173 | EXPORT_SYMBOL_GPL(cpufreq_cpu_get); | 169 | EXPORT_SYMBOL_GPL(cpufreq_cpu_get); |
174 | 170 | ||
175 | 171 | ||
176 | void cpufreq_cpu_put(struct cpufreq_policy *data) | 172 | void cpufreq_cpu_put(struct cpufreq_policy *data) |
177 | { | 173 | { |
178 | kobject_put(&data->kobj); | 174 | kobject_put(&data->kobj); |
179 | module_put(cpufreq_driver->owner); | 175 | module_put(cpufreq_driver->owner); |
180 | } | 176 | } |
181 | EXPORT_SYMBOL_GPL(cpufreq_cpu_put); | 177 | EXPORT_SYMBOL_GPL(cpufreq_cpu_put); |
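As a usage sketch (the same pattern the sysfs show()/store() wrappers below follow), every successful cpufreq_cpu_get() must be balanced by a cpufreq_cpu_put(); the helper name here is hypothetical:

        static void example_dump_policy(unsigned int cpu)
        {
                struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);

                if (!policy)
                        return;

                /* module and kobject references are held while we look */
                pr_debug("cpu%u: %u..%u kHz, currently %u kHz\n",
                         cpu, policy->min, policy->max, policy->cur);

                cpufreq_cpu_put(policy);
        }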
182 | 178 | ||
183 | 179 | ||
184 | /********************************************************************* | 180 | /********************************************************************* |
185 | * UNIFIED DEBUG HELPERS * | 181 | * UNIFIED DEBUG HELPERS * |
186 | *********************************************************************/ | 182 | *********************************************************************/ |
187 | #ifdef CONFIG_CPU_FREQ_DEBUG | 183 | #ifdef CONFIG_CPU_FREQ_DEBUG |
188 | 184 | ||
189 | /* what part(s) of the CPUfreq subsystem are debugged? */ | 185 | /* what part(s) of the CPUfreq subsystem are debugged? */ |
190 | static unsigned int debug; | 186 | static unsigned int debug; |
191 | 187 | ||
192 | /* is the debug output ratelimit'ed using printk_ratelimit? User can | 188 | /* is the debug output ratelimit'ed using printk_ratelimit? User can |
193 | * set or modify this value. | 189 | * set or modify this value. |
194 | */ | 190 | */ |
195 | static unsigned int debug_ratelimit = 1; | 191 | static unsigned int debug_ratelimit = 1; |
196 | 192 | ||
197 | /* is the printk_ratelimit'ing enabled? It's enabled after a successful | 193 | /* is the printk_ratelimit'ing enabled? It's enabled after a successful |
198 | * loading of a cpufreq driver, temporarily disabled when a new policy | 194 | * loading of a cpufreq driver, temporarily disabled when a new policy |
199 | * is set, and disabled upon cpufreq driver removal | 195 | * is set, and disabled upon cpufreq driver removal |
200 | */ | 196 | */ |
201 | static unsigned int disable_ratelimit = 1; | 197 | static unsigned int disable_ratelimit = 1; |
202 | static DEFINE_SPINLOCK(disable_ratelimit_lock); | 198 | static DEFINE_SPINLOCK(disable_ratelimit_lock); |
203 | 199 | ||
204 | static void cpufreq_debug_enable_ratelimit(void) | 200 | static void cpufreq_debug_enable_ratelimit(void) |
205 | { | 201 | { |
206 | unsigned long flags; | 202 | unsigned long flags; |
207 | 203 | ||
208 | spin_lock_irqsave(&disable_ratelimit_lock, flags); | 204 | spin_lock_irqsave(&disable_ratelimit_lock, flags); |
209 | if (disable_ratelimit) | 205 | if (disable_ratelimit) |
210 | disable_ratelimit--; | 206 | disable_ratelimit--; |
211 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); | 207 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); |
212 | } | 208 | } |
213 | 209 | ||
214 | static void cpufreq_debug_disable_ratelimit(void) | 210 | static void cpufreq_debug_disable_ratelimit(void) |
215 | { | 211 | { |
216 | unsigned long flags; | 212 | unsigned long flags; |
217 | 213 | ||
218 | spin_lock_irqsave(&disable_ratelimit_lock, flags); | 214 | spin_lock_irqsave(&disable_ratelimit_lock, flags); |
219 | disable_ratelimit++; | 215 | disable_ratelimit++; |
220 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); | 216 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); |
221 | } | 217 | } |
222 | 218 | ||
223 | void cpufreq_debug_printk(unsigned int type, const char *prefix, | 219 | void cpufreq_debug_printk(unsigned int type, const char *prefix, |
224 | const char *fmt, ...) | 220 | const char *fmt, ...) |
225 | { | 221 | { |
226 | char s[256]; | 222 | char s[256]; |
227 | va_list args; | 223 | va_list args; |
228 | unsigned int len; | 224 | unsigned int len; |
229 | unsigned long flags; | 225 | unsigned long flags; |
230 | 226 | ||
231 | WARN_ON(!prefix); | 227 | WARN_ON(!prefix); |
232 | if (type & debug) { | 228 | if (type & debug) { |
233 | spin_lock_irqsave(&disable_ratelimit_lock, flags); | 229 | spin_lock_irqsave(&disable_ratelimit_lock, flags); |
234 | if (!disable_ratelimit && debug_ratelimit | 230 | if (!disable_ratelimit && debug_ratelimit |
235 | && !printk_ratelimit()) { | 231 | && !printk_ratelimit()) { |
236 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); | 232 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); |
237 | return; | 233 | return; |
238 | } | 234 | } |
239 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); | 235 | spin_unlock_irqrestore(&disable_ratelimit_lock, flags); |
240 | 236 | ||
241 | len = snprintf(s, 256, KERN_DEBUG "%s: ", prefix); | 237 | len = snprintf(s, 256, KERN_DEBUG "%s: ", prefix); |
242 | 238 | ||
243 | va_start(args, fmt); | 239 | va_start(args, fmt); |
244 | len += vsnprintf(&s[len], (256 - len), fmt, args); | 240 | len += vsnprintf(&s[len], (256 - len), fmt, args); |
245 | va_end(args); | 241 | va_end(args); |
246 | 242 | ||
247 | printk(s); | 243 | printk(s); |
248 | 244 | ||
249 | WARN_ON(len < 5); | 245 | WARN_ON(len < 5); |
250 | } | 246 | } |
251 | } | 247 | } |
252 | EXPORT_SYMBOL(cpufreq_debug_printk); | 248 | EXPORT_SYMBOL(cpufreq_debug_printk); |
253 | 249 | ||
254 | 250 | ||
255 | module_param(debug, uint, 0644); | 251 | module_param(debug, uint, 0644); |
256 | MODULE_PARM_DESC(debug, "CPUfreq debugging: add 1 to debug core," | 252 | MODULE_PARM_DESC(debug, "CPUfreq debugging: add 1 to debug core," |
257 | " 2 to debug drivers, and 4 to debug governors."); | 253 | " 2 to debug drivers, and 4 to debug governors."); |
258 | 254 | ||
259 | module_param(debug_ratelimit, uint, 0644); | 255 | module_param(debug_ratelimit, uint, 0644); |
260 | MODULE_PARM_DESC(debug_ratelimit, "CPUfreq debugging:" | 256 | MODULE_PARM_DESC(debug_ratelimit, "CPUfreq debugging:" |
261 | " set to 0 to disable ratelimiting."); | 257 | " set to 0 to disable ratelimiting."); |
262 | 258 | ||
263 | #else /* !CONFIG_CPU_FREQ_DEBUG */ | 259 | #else /* !CONFIG_CPU_FREQ_DEBUG */ |
264 | 260 | ||
265 | static inline void cpufreq_debug_enable_ratelimit(void) { return; } | 261 | static inline void cpufreq_debug_enable_ratelimit(void) { return; } |
266 | static inline void cpufreq_debug_disable_ratelimit(void) { return; } | 262 | static inline void cpufreq_debug_disable_ratelimit(void) { return; } |
267 | 263 | ||
268 | #endif /* CONFIG_CPU_FREQ_DEBUG */ | 264 | #endif /* CONFIG_CPU_FREQ_DEBUG */ |
269 | 265 | ||
270 | 266 | ||
271 | /********************************************************************* | 267 | /********************************************************************* |
272 | * EXTERNALLY AFFECTING FREQUENCY CHANGES * | 268 | * EXTERNALLY AFFECTING FREQUENCY CHANGES * |
273 | *********************************************************************/ | 269 | *********************************************************************/ |
274 | 270 | ||
275 | /** | 271 | /** |
276 | * adjust_jiffies - adjust the system "loops_per_jiffy" | 272 | * adjust_jiffies - adjust the system "loops_per_jiffy" |
277 | * | 273 | * |
278 | * This function alters the system "loops_per_jiffy" for the clock | 274 | * This function alters the system "loops_per_jiffy" for the clock |
279 | * speed change. Note that loops_per_jiffy cannot be updated on SMP | 275 | * speed change. Note that loops_per_jiffy cannot be updated on SMP |
280 | * systems as each CPU might be scaled differently. So, use the arch | 276 | * systems as each CPU might be scaled differently. So, use the arch |
281 | * per-CPU loops_per_jiffy value wherever possible. | 277 | * per-CPU loops_per_jiffy value wherever possible. |
282 | */ | 278 | */ |
283 | #ifndef CONFIG_SMP | 279 | #ifndef CONFIG_SMP |
284 | static unsigned long l_p_j_ref; | 280 | static unsigned long l_p_j_ref; |
285 | static unsigned int l_p_j_ref_freq; | 281 | static unsigned int l_p_j_ref_freq; |
286 | 282 | ||
287 | static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) | 283 | static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) |
288 | { | 284 | { |
289 | if (ci->flags & CPUFREQ_CONST_LOOPS) | 285 | if (ci->flags & CPUFREQ_CONST_LOOPS) |
290 | return; | 286 | return; |
291 | 287 | ||
292 | if (!l_p_j_ref_freq) { | 288 | if (!l_p_j_ref_freq) { |
293 | l_p_j_ref = loops_per_jiffy; | 289 | l_p_j_ref = loops_per_jiffy; |
294 | l_p_j_ref_freq = ci->old; | 290 | l_p_j_ref_freq = ci->old; |
295 | dprintk("saving %lu as reference value for loops_per_jiffy; " | 291 | dprintk("saving %lu as reference value for loops_per_jiffy; " |
296 | "freq is %u kHz\n", l_p_j_ref, l_p_j_ref_freq); | 292 | "freq is %u kHz\n", l_p_j_ref, l_p_j_ref_freq); |
297 | } | 293 | } |
298 | if ((val == CPUFREQ_PRECHANGE && ci->old < ci->new) || | 294 | if ((val == CPUFREQ_PRECHANGE && ci->old < ci->new) || |
299 | (val == CPUFREQ_POSTCHANGE && ci->old > ci->new) || | 295 | (val == CPUFREQ_POSTCHANGE && ci->old > ci->new) || |
300 | (val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) { | 296 | (val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) { |
301 | loops_per_jiffy = cpufreq_scale(l_p_j_ref, l_p_j_ref_freq, | 297 | loops_per_jiffy = cpufreq_scale(l_p_j_ref, l_p_j_ref_freq, |
302 | ci->new); | 298 | ci->new); |
303 | dprintk("scaling loops_per_jiffy to %lu " | 299 | dprintk("scaling loops_per_jiffy to %lu " |
304 | "for frequency %u kHz\n", loops_per_jiffy, ci->new); | 300 | "for frequency %u kHz\n", loops_per_jiffy, ci->new); |
305 | } | 301 | } |
306 | } | 302 | } |
307 | #else | 303 | #else |
308 | static inline void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) | 304 | static inline void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) |
309 | { | 305 | { |
310 | return; | 306 | return; |
311 | } | 307 | } |
312 | #endif | 308 | #endif |
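A worked example of the scaling above, with hypothetical numbers:

        /*
         * If l_p_j_ref == 1000000 was recorded at l_p_j_ref_freq == 800000
         * (kHz), a POSTCHANGE to 1600000 kHz gives
         *
         *      loops_per_jiffy = cpufreq_scale(1000000, 800000, 1600000)
         *                      ~= 1000000 * 1600000 / 800000 = 2000000
         *
         * i.e. loops_per_jiffy is rescaled linearly with the clock, so the
         * UP delay loop keeps roughly the same wall-clock calibration.
         */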
313 | 309 | ||
314 | 310 | ||
315 | /** | 311 | /** |
316 | * cpufreq_notify_transition - call notifier chain and adjust_jiffies | 312 | * cpufreq_notify_transition - call notifier chain and adjust_jiffies |
317 | * on frequency transition. | 313 | * on frequency transition. |
318 | * | 314 | * |
319 | * This function calls the transition notifiers and the "adjust_jiffies" | 315 | * This function calls the transition notifiers and the "adjust_jiffies" |
320 | * function. It is called twice on all CPU frequency changes that have | 316 | * function. It is called twice on all CPU frequency changes that have |
321 | * external effects. | 317 | * external effects. |
322 | */ | 318 | */ |
323 | void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state) | 319 | void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state) |
324 | { | 320 | { |
325 | struct cpufreq_policy *policy; | 321 | struct cpufreq_policy *policy; |
326 | 322 | ||
327 | BUG_ON(irqs_disabled()); | 323 | BUG_ON(irqs_disabled()); |
328 | 324 | ||
329 | freqs->flags = cpufreq_driver->flags; | 325 | freqs->flags = cpufreq_driver->flags; |
330 | dprintk("notification %u of frequency transition to %u kHz\n", | 326 | dprintk("notification %u of frequency transition to %u kHz\n", |
331 | state, freqs->new); | 327 | state, freqs->new); |
332 | 328 | ||
333 | policy = per_cpu(cpufreq_cpu_data, freqs->cpu); | 329 | policy = per_cpu(cpufreq_cpu_data, freqs->cpu); |
334 | switch (state) { | 330 | switch (state) { |
335 | 331 | ||
336 | case CPUFREQ_PRECHANGE: | 332 | case CPUFREQ_PRECHANGE: |
337 | /* detect if the driver reported a value as "old frequency" | 333 | /* detect if the driver reported a value as "old frequency" |
338 | * which is not equal to what the cpufreq core thinks is | 334 | * which is not equal to what the cpufreq core thinks is |
339 | * "old frequency". | 335 | * "old frequency". |
340 | */ | 336 | */ |
341 | if (!(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { | 337 | if (!(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { |
342 | if ((policy) && (policy->cpu == freqs->cpu) && | 338 | if ((policy) && (policy->cpu == freqs->cpu) && |
343 | (policy->cur) && (policy->cur != freqs->old)) { | 339 | (policy->cur) && (policy->cur != freqs->old)) { |
344 | dprintk("Warning: CPU frequency is" | 340 | dprintk("Warning: CPU frequency is" |
345 | " %u, cpufreq assumed %u kHz.\n", | 341 | " %u, cpufreq assumed %u kHz.\n", |
346 | freqs->old, policy->cur); | 342 | freqs->old, policy->cur); |
347 | freqs->old = policy->cur; | 343 | freqs->old = policy->cur; |
348 | } | 344 | } |
349 | } | 345 | } |
350 | srcu_notifier_call_chain(&cpufreq_transition_notifier_list, | 346 | srcu_notifier_call_chain(&cpufreq_transition_notifier_list, |
351 | CPUFREQ_PRECHANGE, freqs); | 347 | CPUFREQ_PRECHANGE, freqs); |
352 | adjust_jiffies(CPUFREQ_PRECHANGE, freqs); | 348 | adjust_jiffies(CPUFREQ_PRECHANGE, freqs); |
353 | break; | 349 | break; |
354 | 350 | ||
355 | case CPUFREQ_POSTCHANGE: | 351 | case CPUFREQ_POSTCHANGE: |
356 | adjust_jiffies(CPUFREQ_POSTCHANGE, freqs); | 352 | adjust_jiffies(CPUFREQ_POSTCHANGE, freqs); |
357 | srcu_notifier_call_chain(&cpufreq_transition_notifier_list, | 353 | srcu_notifier_call_chain(&cpufreq_transition_notifier_list, |
358 | CPUFREQ_POSTCHANGE, freqs); | 354 | CPUFREQ_POSTCHANGE, freqs); |
359 | if (likely(policy) && likely(policy->cpu == freqs->cpu)) | 355 | if (likely(policy) && likely(policy->cpu == freqs->cpu)) |
360 | policy->cur = freqs->new; | 356 | policy->cur = freqs->new; |
361 | break; | 357 | break; |
362 | } | 358 | } |
363 | } | 359 | } |
364 | EXPORT_SYMBOL_GPL(cpufreq_notify_transition); | 360 | EXPORT_SYMBOL_GPL(cpufreq_notify_transition); |
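For context, a scaling driver's ->target() routine is expected to bracket the actual hardware reprogramming with exactly these two notifications; a minimal sketch follows (the driver name and the hardware hook are hypothetical, and a real driver would first resolve target_freq against its frequency table):

        static int exampledrv_target(struct cpufreq_policy *policy,
                                     unsigned int target_freq,
                                     unsigned int relation)
        {
                struct cpufreq_freqs freqs;

                freqs.cpu = policy->cpu;
                freqs.old = policy->cur;
                freqs.new = target_freq;

                cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
                /* ... program the hardware to freqs.new here ... */
                cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);

                return 0;
        }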
365 | 361 | ||
366 | 362 | ||
367 | 363 | ||
368 | /********************************************************************* | 364 | /********************************************************************* |
369 | * SYSFS INTERFACE * | 365 | * SYSFS INTERFACE * |
370 | *********************************************************************/ | 366 | *********************************************************************/ |
371 | 367 | ||
372 | static struct cpufreq_governor *__find_governor(const char *str_governor) | 368 | static struct cpufreq_governor *__find_governor(const char *str_governor) |
373 | { | 369 | { |
374 | struct cpufreq_governor *t; | 370 | struct cpufreq_governor *t; |
375 | 371 | ||
376 | list_for_each_entry(t, &cpufreq_governor_list, governor_list) | 372 | list_for_each_entry(t, &cpufreq_governor_list, governor_list) |
377 | if (!strnicmp(str_governor, t->name, CPUFREQ_NAME_LEN)) | 373 | if (!strnicmp(str_governor, t->name, CPUFREQ_NAME_LEN)) |
378 | return t; | 374 | return t; |
379 | 375 | ||
380 | return NULL; | 376 | return NULL; |
381 | } | 377 | } |
382 | 378 | ||
383 | /** | 379 | /** |
384 | * cpufreq_parse_governor - parse a governor string | 380 | * cpufreq_parse_governor - parse a governor string |
385 | */ | 381 | */ |
386 | static int cpufreq_parse_governor(char *str_governor, unsigned int *policy, | 382 | static int cpufreq_parse_governor(char *str_governor, unsigned int *policy, |
387 | struct cpufreq_governor **governor) | 383 | struct cpufreq_governor **governor) |
388 | { | 384 | { |
389 | int err = -EINVAL; | 385 | int err = -EINVAL; |
390 | 386 | ||
391 | if (!cpufreq_driver) | 387 | if (!cpufreq_driver) |
392 | goto out; | 388 | goto out; |
393 | 389 | ||
394 | if (cpufreq_driver->setpolicy) { | 390 | if (cpufreq_driver->setpolicy) { |
395 | if (!strnicmp(str_governor, "performance", CPUFREQ_NAME_LEN)) { | 391 | if (!strnicmp(str_governor, "performance", CPUFREQ_NAME_LEN)) { |
396 | *policy = CPUFREQ_POLICY_PERFORMANCE; | 392 | *policy = CPUFREQ_POLICY_PERFORMANCE; |
397 | err = 0; | 393 | err = 0; |
398 | } else if (!strnicmp(str_governor, "powersave", | 394 | } else if (!strnicmp(str_governor, "powersave", |
399 | CPUFREQ_NAME_LEN)) { | 395 | CPUFREQ_NAME_LEN)) { |
400 | *policy = CPUFREQ_POLICY_POWERSAVE; | 396 | *policy = CPUFREQ_POLICY_POWERSAVE; |
401 | err = 0; | 397 | err = 0; |
402 | } | 398 | } |
403 | } else if (cpufreq_driver->target) { | 399 | } else if (cpufreq_driver->target) { |
404 | struct cpufreq_governor *t; | 400 | struct cpufreq_governor *t; |
405 | 401 | ||
406 | mutex_lock(&cpufreq_governor_mutex); | 402 | mutex_lock(&cpufreq_governor_mutex); |
407 | 403 | ||
408 | t = __find_governor(str_governor); | 404 | t = __find_governor(str_governor); |
409 | 405 | ||
410 | if (t == NULL) { | 406 | if (t == NULL) { |
411 | char *name = kasprintf(GFP_KERNEL, "cpufreq_%s", | 407 | char *name = kasprintf(GFP_KERNEL, "cpufreq_%s", |
412 | str_governor); | 408 | str_governor); |
413 | 409 | ||
414 | if (name) { | 410 | if (name) { |
415 | int ret; | 411 | int ret; |
416 | 412 | ||
417 | mutex_unlock(&cpufreq_governor_mutex); | 413 | mutex_unlock(&cpufreq_governor_mutex); |
418 | ret = request_module("%s", name); | 414 | ret = request_module("%s", name); |
419 | mutex_lock(&cpufreq_governor_mutex); | 415 | mutex_lock(&cpufreq_governor_mutex); |
420 | 416 | ||
421 | if (ret == 0) | 417 | if (ret == 0) |
422 | t = __find_governor(str_governor); | 418 | t = __find_governor(str_governor); |
423 | } | 419 | } |
424 | 420 | ||
425 | kfree(name); | 421 | kfree(name); |
426 | } | 422 | } |
427 | 423 | ||
428 | if (t != NULL) { | 424 | if (t != NULL) { |
429 | *governor = t; | 425 | *governor = t; |
430 | err = 0; | 426 | err = 0; |
431 | } | 427 | } |
432 | 428 | ||
433 | mutex_unlock(&cpufreq_governor_mutex); | 429 | mutex_unlock(&cpufreq_governor_mutex); |
434 | } | 430 | } |
435 | out: | 431 | out: |
436 | return err; | 432 | return err; |
437 | } | 433 | } |
438 | 434 | ||
439 | 435 | ||
440 | /** | 436 | /** |
441 | * cpufreq_per_cpu_attr_read() / show_##file_name() - | 437 | * cpufreq_per_cpu_attr_read() / show_##file_name() - |
442 | * print out cpufreq information | 438 | * print out cpufreq information |
443 | * | 439 | * |
444 | * Write out information from cpufreq_driver->policy[cpu]; object must be | 440 | * Write out information from cpufreq_driver->policy[cpu]; object must be |
445 | * "unsigned int". | 441 | * "unsigned int". |
446 | */ | 442 | */ |
447 | 443 | ||
448 | #define show_one(file_name, object) \ | 444 | #define show_one(file_name, object) \ |
449 | static ssize_t show_##file_name \ | 445 | static ssize_t show_##file_name \ |
450 | (struct cpufreq_policy *policy, char *buf) \ | 446 | (struct cpufreq_policy *policy, char *buf) \ |
451 | { \ | 447 | { \ |
452 | return sprintf(buf, "%u\n", policy->object); \ | 448 | return sprintf(buf, "%u\n", policy->object); \ |
453 | } | 449 | } |
454 | 450 | ||
455 | show_one(cpuinfo_min_freq, cpuinfo.min_freq); | 451 | show_one(cpuinfo_min_freq, cpuinfo.min_freq); |
456 | show_one(cpuinfo_max_freq, cpuinfo.max_freq); | 452 | show_one(cpuinfo_max_freq, cpuinfo.max_freq); |
457 | show_one(cpuinfo_transition_latency, cpuinfo.transition_latency); | 453 | show_one(cpuinfo_transition_latency, cpuinfo.transition_latency); |
458 | show_one(scaling_min_freq, min); | 454 | show_one(scaling_min_freq, min); |
459 | show_one(scaling_max_freq, max); | 455 | show_one(scaling_max_freq, max); |
460 | show_one(scaling_cur_freq, cur); | 456 | show_one(scaling_cur_freq, cur); |
461 | 457 | ||
462 | static int __cpufreq_set_policy(struct cpufreq_policy *data, | 458 | static int __cpufreq_set_policy(struct cpufreq_policy *data, |
463 | struct cpufreq_policy *policy); | 459 | struct cpufreq_policy *policy); |
464 | 460 | ||
465 | /** | 461 | /** |
466 | * cpufreq_per_cpu_attr_write() / store_##file_name() - sysfs write access | 462 | * cpufreq_per_cpu_attr_write() / store_##file_name() - sysfs write access |
467 | */ | 463 | */ |
468 | #define store_one(file_name, object) \ | 464 | #define store_one(file_name, object) \ |
469 | static ssize_t store_##file_name \ | 465 | static ssize_t store_##file_name \ |
470 | (struct cpufreq_policy *policy, const char *buf, size_t count) \ | 466 | (struct cpufreq_policy *policy, const char *buf, size_t count) \ |
471 | { \ | 467 | { \ |
472 | unsigned int ret = -EINVAL; \ | 468 | unsigned int ret = -EINVAL; \ |
473 | struct cpufreq_policy new_policy; \ | 469 | struct cpufreq_policy new_policy; \ |
474 | \ | 470 | \ |
475 | ret = cpufreq_get_policy(&new_policy, policy->cpu); \ | 471 | ret = cpufreq_get_policy(&new_policy, policy->cpu); \ |
476 | if (ret) \ | 472 | if (ret) \ |
477 | return -EINVAL; \ | 473 | return -EINVAL; \ |
478 | \ | 474 | \ |
479 | ret = sscanf(buf, "%u", &new_policy.object); \ | 475 | ret = sscanf(buf, "%u", &new_policy.object); \ |
480 | if (ret != 1) \ | 476 | if (ret != 1) \ |
481 | return -EINVAL; \ | 477 | return -EINVAL; \ |
482 | \ | 478 | \ |
483 | ret = __cpufreq_set_policy(policy, &new_policy); \ | 479 | ret = __cpufreq_set_policy(policy, &new_policy); \ |
484 | policy->user_policy.object = policy->object; \ | 480 | policy->user_policy.object = policy->object; \ |
485 | \ | 481 | \ |
486 | return ret ? ret : count; \ | 482 | return ret ? ret : count; \ |
487 | } | 483 | } |
488 | 484 | ||
489 | store_one(scaling_min_freq, min); | 485 | store_one(scaling_min_freq, min); |
490 | store_one(scaling_max_freq, max); | 486 | store_one(scaling_max_freq, max); |
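For illustration, these generators expand to plain sysfs accessors; show_one(scaling_cur_freq, cur) above produces roughly:

        static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy,
                                             char *buf)
        {
                return sprintf(buf, "%u\n", policy->cur);
        }

store_one() likewise emits store_scaling_min_freq()/store_scaling_max_freq(), each of which parses the written value into a copy of the policy and applies it through __cpufreq_set_policy().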
491 | 487 | ||
492 | /** | 488 | /** |
493 | * show_cpuinfo_cur_freq - current CPU frequency as detected by hardware | 489 | * show_cpuinfo_cur_freq - current CPU frequency as detected by hardware |
494 | */ | 490 | */ |
495 | static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy, | 491 | static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy, |
496 | char *buf) | 492 | char *buf) |
497 | { | 493 | { |
498 | unsigned int cur_freq = __cpufreq_get(policy->cpu); | 494 | unsigned int cur_freq = __cpufreq_get(policy->cpu); |
499 | if (!cur_freq) | 495 | if (!cur_freq) |
500 | return sprintf(buf, "<unknown>"); | 496 | return sprintf(buf, "<unknown>"); |
501 | return sprintf(buf, "%u\n", cur_freq); | 497 | return sprintf(buf, "%u\n", cur_freq); |
502 | } | 498 | } |
503 | 499 | ||
504 | 500 | ||
505 | /** | 501 | /** |
506 | * show_scaling_governor - show the current policy for the specified CPU | 502 | * show_scaling_governor - show the current policy for the specified CPU |
507 | */ | 503 | */ |
508 | static ssize_t show_scaling_governor(struct cpufreq_policy *policy, char *buf) | 504 | static ssize_t show_scaling_governor(struct cpufreq_policy *policy, char *buf) |
509 | { | 505 | { |
510 | if (policy->policy == CPUFREQ_POLICY_POWERSAVE) | 506 | if (policy->policy == CPUFREQ_POLICY_POWERSAVE) |
511 | return sprintf(buf, "powersave\n"); | 507 | return sprintf(buf, "powersave\n"); |
512 | else if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) | 508 | else if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) |
513 | return sprintf(buf, "performance\n"); | 509 | return sprintf(buf, "performance\n"); |
514 | else if (policy->governor) | 510 | else if (policy->governor) |
515 | return scnprintf(buf, CPUFREQ_NAME_LEN, "%s\n", | 511 | return scnprintf(buf, CPUFREQ_NAME_LEN, "%s\n", |
516 | policy->governor->name); | 512 | policy->governor->name); |
517 | return -EINVAL; | 513 | return -EINVAL; |
518 | } | 514 | } |
519 | 515 | ||
520 | 516 | ||
521 | /** | 517 | /** |
522 | * store_scaling_governor - store policy for the specified CPU | 518 | * store_scaling_governor - store policy for the specified CPU |
523 | */ | 519 | */ |
524 | static ssize_t store_scaling_governor(struct cpufreq_policy *policy, | 520 | static ssize_t store_scaling_governor(struct cpufreq_policy *policy, |
525 | const char *buf, size_t count) | 521 | const char *buf, size_t count) |
526 | { | 522 | { |
527 | unsigned int ret = -EINVAL; | 523 | unsigned int ret = -EINVAL; |
528 | char str_governor[16]; | 524 | char str_governor[16]; |
529 | struct cpufreq_policy new_policy; | 525 | struct cpufreq_policy new_policy; |
530 | 526 | ||
531 | ret = cpufreq_get_policy(&new_policy, policy->cpu); | 527 | ret = cpufreq_get_policy(&new_policy, policy->cpu); |
532 | if (ret) | 528 | if (ret) |
533 | return ret; | 529 | return ret; |
534 | 530 | ||
535 | ret = sscanf(buf, "%15s", str_governor); | 531 | ret = sscanf(buf, "%15s", str_governor); |
536 | if (ret != 1) | 532 | if (ret != 1) |
537 | return -EINVAL; | 533 | return -EINVAL; |
538 | 534 | ||
539 | if (cpufreq_parse_governor(str_governor, &new_policy.policy, | 535 | if (cpufreq_parse_governor(str_governor, &new_policy.policy, |
540 | &new_policy.governor)) | 536 | &new_policy.governor)) |
541 | return -EINVAL; | 537 | return -EINVAL; |
542 | 538 | ||
543 | /* Do not use cpufreq_set_policy here or the user_policy.max | 539 | /* Do not use cpufreq_set_policy here or the user_policy.max |
544 | will be wrongly overridden */ | 540 | will be wrongly overridden */ |
545 | ret = __cpufreq_set_policy(policy, &new_policy); | 541 | ret = __cpufreq_set_policy(policy, &new_policy); |
546 | 542 | ||
547 | policy->user_policy.policy = policy->policy; | 543 | policy->user_policy.policy = policy->policy; |
548 | policy->user_policy.governor = policy->governor; | 544 | policy->user_policy.governor = policy->governor; |
549 | 545 | ||
550 | if (ret) | 546 | if (ret) |
551 | return ret; | 547 | return ret; |
552 | else | 548 | else |
553 | return count; | 549 | return count; |
554 | } | 550 | } |
555 | 551 | ||
556 | /** | 552 | /** |
557 | * show_scaling_driver - show the cpufreq driver currently loaded | 553 | * show_scaling_driver - show the cpufreq driver currently loaded |
558 | */ | 554 | */ |
559 | static ssize_t show_scaling_driver(struct cpufreq_policy *policy, char *buf) | 555 | static ssize_t show_scaling_driver(struct cpufreq_policy *policy, char *buf) |
560 | { | 556 | { |
561 | return scnprintf(buf, CPUFREQ_NAME_LEN, "%s\n", cpufreq_driver->name); | 557 | return scnprintf(buf, CPUFREQ_NAME_LEN, "%s\n", cpufreq_driver->name); |
562 | } | 558 | } |
563 | 559 | ||
564 | /** | 560 | /** |
565 | * show_scaling_available_governors - show the available CPUfreq governors | 561 | * show_scaling_available_governors - show the available CPUfreq governors |
566 | */ | 562 | */ |
567 | static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy, | 563 | static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy, |
568 | char *buf) | 564 | char *buf) |
569 | { | 565 | { |
570 | ssize_t i = 0; | 566 | ssize_t i = 0; |
571 | struct cpufreq_governor *t; | 567 | struct cpufreq_governor *t; |
572 | 568 | ||
573 | if (!cpufreq_driver->target) { | 569 | if (!cpufreq_driver->target) { |
574 | i += sprintf(buf, "performance powersave"); | 570 | i += sprintf(buf, "performance powersave"); |
575 | goto out; | 571 | goto out; |
576 | } | 572 | } |
577 | 573 | ||
578 | list_for_each_entry(t, &cpufreq_governor_list, governor_list) { | 574 | list_for_each_entry(t, &cpufreq_governor_list, governor_list) { |
579 | if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char)) | 575 | if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char)) |
580 | - (CPUFREQ_NAME_LEN + 2))) | 576 | - (CPUFREQ_NAME_LEN + 2))) |
581 | goto out; | 577 | goto out; |
582 | i += scnprintf(&buf[i], CPUFREQ_NAME_LEN, "%s ", t->name); | 578 | i += scnprintf(&buf[i], CPUFREQ_NAME_LEN, "%s ", t->name); |
583 | } | 579 | } |
584 | out: | 580 | out: |
585 | i += sprintf(&buf[i], "\n"); | 581 | i += sprintf(&buf[i], "\n"); |
586 | return i; | 582 | return i; |
587 | } | 583 | } |
588 | 584 | ||
589 | static ssize_t show_cpus(const struct cpumask *mask, char *buf) | 585 | static ssize_t show_cpus(const struct cpumask *mask, char *buf) |
590 | { | 586 | { |
591 | ssize_t i = 0; | 587 | ssize_t i = 0; |
592 | unsigned int cpu; | 588 | unsigned int cpu; |
593 | 589 | ||
594 | for_each_cpu(cpu, mask) { | 590 | for_each_cpu(cpu, mask) { |
595 | if (i) | 591 | if (i) |
596 | i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), " "); | 592 | i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), " "); |
597 | i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), "%u", cpu); | 593 | i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), "%u", cpu); |
598 | if (i >= (PAGE_SIZE - 5)) | 594 | if (i >= (PAGE_SIZE - 5)) |
599 | break; | 595 | break; |
600 | } | 596 | } |
601 | i += sprintf(&buf[i], "\n"); | 597 | i += sprintf(&buf[i], "\n"); |
602 | return i; | 598 | return i; |
603 | } | 599 | } |
604 | 600 | ||
605 | /** | 601 | /** |
606 | * show_related_cpus - show the CPUs affected by each transition even if | 602 | * show_related_cpus - show the CPUs affected by each transition even if |
607 | * hw coordination is in use | 603 | * hw coordination is in use |
608 | */ | 604 | */ |
609 | static ssize_t show_related_cpus(struct cpufreq_policy *policy, char *buf) | 605 | static ssize_t show_related_cpus(struct cpufreq_policy *policy, char *buf) |
610 | { | 606 | { |
611 | if (cpumask_empty(policy->related_cpus)) | 607 | if (cpumask_empty(policy->related_cpus)) |
612 | return show_cpus(policy->cpus, buf); | 608 | return show_cpus(policy->cpus, buf); |
613 | return show_cpus(policy->related_cpus, buf); | 609 | return show_cpus(policy->related_cpus, buf); |
614 | } | 610 | } |
615 | 611 | ||
616 | /** | 612 | /** |
617 | * show_affected_cpus - show the CPUs affected by each transition | 613 | * show_affected_cpus - show the CPUs affected by each transition |
618 | */ | 614 | */ |
619 | static ssize_t show_affected_cpus(struct cpufreq_policy *policy, char *buf) | 615 | static ssize_t show_affected_cpus(struct cpufreq_policy *policy, char *buf) |
620 | { | 616 | { |
621 | return show_cpus(policy->cpus, buf); | 617 | return show_cpus(policy->cpus, buf); |
622 | } | 618 | } |
623 | 619 | ||
624 | static ssize_t store_scaling_setspeed(struct cpufreq_policy *policy, | 620 | static ssize_t store_scaling_setspeed(struct cpufreq_policy *policy, |
625 | const char *buf, size_t count) | 621 | const char *buf, size_t count) |
626 | { | 622 | { |
627 | unsigned int freq = 0; | 623 | unsigned int freq = 0; |
628 | unsigned int ret; | 624 | unsigned int ret; |
629 | 625 | ||
630 | if (!policy->governor || !policy->governor->store_setspeed) | 626 | if (!policy->governor || !policy->governor->store_setspeed) |
631 | return -EINVAL; | 627 | return -EINVAL; |
632 | 628 | ||
633 | ret = sscanf(buf, "%u", &freq); | 629 | ret = sscanf(buf, "%u", &freq); |
634 | if (ret != 1) | 630 | if (ret != 1) |
635 | return -EINVAL; | 631 | return -EINVAL; |
636 | 632 | ||
637 | policy->governor->store_setspeed(policy, freq); | 633 | policy->governor->store_setspeed(policy, freq); |
638 | 634 | ||
639 | return count; | 635 | return count; |
640 | } | 636 | } |
641 | 637 | ||
642 | static ssize_t show_scaling_setspeed(struct cpufreq_policy *policy, char *buf) | 638 | static ssize_t show_scaling_setspeed(struct cpufreq_policy *policy, char *buf) |
643 | { | 639 | { |
644 | if (!policy->governor || !policy->governor->show_setspeed) | 640 | if (!policy->governor || !policy->governor->show_setspeed) |
645 | return sprintf(buf, "<unsupported>\n"); | 641 | return sprintf(buf, "<unsupported>\n"); |
646 | 642 | ||
647 | return policy->governor->show_setspeed(policy, buf); | 643 | return policy->governor->show_setspeed(policy, buf); |
648 | } | 644 | } |
649 | 645 | ||
650 | /** | 646 | /** |
651 | * show_bios_limit - show the current cpufreq HW/BIOS limitation | 647 | * show_bios_limit - show the current cpufreq HW/BIOS limitation |
652 | */ | 648 | */ |
653 | static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf) | 649 | static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf) |
654 | { | 650 | { |
655 | unsigned int limit; | 651 | unsigned int limit; |
656 | int ret; | 652 | int ret; |
657 | if (cpufreq_driver->bios_limit) { | 653 | if (cpufreq_driver->bios_limit) { |
658 | ret = cpufreq_driver->bios_limit(policy->cpu, &limit); | 654 | ret = cpufreq_driver->bios_limit(policy->cpu, &limit); |
659 | if (!ret) | 655 | if (!ret) |
660 | return sprintf(buf, "%u\n", limit); | 656 | return sprintf(buf, "%u\n", limit); |
661 | } | 657 | } |
662 | return sprintf(buf, "%u\n", policy->cpuinfo.max_freq); | 658 | return sprintf(buf, "%u\n", policy->cpuinfo.max_freq); |
663 | } | 659 | } |
664 | 660 | ||
665 | cpufreq_freq_attr_ro_perm(cpuinfo_cur_freq, 0400); | 661 | cpufreq_freq_attr_ro_perm(cpuinfo_cur_freq, 0400); |
666 | cpufreq_freq_attr_ro(cpuinfo_min_freq); | 662 | cpufreq_freq_attr_ro(cpuinfo_min_freq); |
667 | cpufreq_freq_attr_ro(cpuinfo_max_freq); | 663 | cpufreq_freq_attr_ro(cpuinfo_max_freq); |
668 | cpufreq_freq_attr_ro(cpuinfo_transition_latency); | 664 | cpufreq_freq_attr_ro(cpuinfo_transition_latency); |
669 | cpufreq_freq_attr_ro(scaling_available_governors); | 665 | cpufreq_freq_attr_ro(scaling_available_governors); |
670 | cpufreq_freq_attr_ro(scaling_driver); | 666 | cpufreq_freq_attr_ro(scaling_driver); |
671 | cpufreq_freq_attr_ro(scaling_cur_freq); | 667 | cpufreq_freq_attr_ro(scaling_cur_freq); |
672 | cpufreq_freq_attr_ro(bios_limit); | 668 | cpufreq_freq_attr_ro(bios_limit); |
673 | cpufreq_freq_attr_ro(related_cpus); | 669 | cpufreq_freq_attr_ro(related_cpus); |
674 | cpufreq_freq_attr_ro(affected_cpus); | 670 | cpufreq_freq_attr_ro(affected_cpus); |
675 | cpufreq_freq_attr_rw(scaling_min_freq); | 671 | cpufreq_freq_attr_rw(scaling_min_freq); |
676 | cpufreq_freq_attr_rw(scaling_max_freq); | 672 | cpufreq_freq_attr_rw(scaling_max_freq); |
677 | cpufreq_freq_attr_rw(scaling_governor); | 673 | cpufreq_freq_attr_rw(scaling_governor); |
678 | cpufreq_freq_attr_rw(scaling_setspeed); | 674 | cpufreq_freq_attr_rw(scaling_setspeed); |
679 | 675 | ||
680 | static struct attribute *default_attrs[] = { | 676 | static struct attribute *default_attrs[] = { |
681 | &cpuinfo_min_freq.attr, | 677 | &cpuinfo_min_freq.attr, |
682 | &cpuinfo_max_freq.attr, | 678 | &cpuinfo_max_freq.attr, |
683 | &cpuinfo_transition_latency.attr, | 679 | &cpuinfo_transition_latency.attr, |
684 | &scaling_min_freq.attr, | 680 | &scaling_min_freq.attr, |
685 | &scaling_max_freq.attr, | 681 | &scaling_max_freq.attr, |
686 | &affected_cpus.attr, | 682 | &affected_cpus.attr, |
687 | &related_cpus.attr, | 683 | &related_cpus.attr, |
688 | &scaling_governor.attr, | 684 | &scaling_governor.attr, |
689 | &scaling_driver.attr, | 685 | &scaling_driver.attr, |
690 | &scaling_available_governors.attr, | 686 | &scaling_available_governors.attr, |
691 | &scaling_setspeed.attr, | 687 | &scaling_setspeed.attr, |
692 | NULL | 688 | NULL |
693 | }; | 689 | }; |
694 | 690 | ||
695 | struct kobject *cpufreq_global_kobject; | 691 | struct kobject *cpufreq_global_kobject; |
696 | EXPORT_SYMBOL(cpufreq_global_kobject); | 692 | EXPORT_SYMBOL(cpufreq_global_kobject); |
697 | 693 | ||
698 | #define to_policy(k) container_of(k, struct cpufreq_policy, kobj) | 694 | #define to_policy(k) container_of(k, struct cpufreq_policy, kobj) |
699 | #define to_attr(a) container_of(a, struct freq_attr, attr) | 695 | #define to_attr(a) container_of(a, struct freq_attr, attr) |
700 | 696 | ||
701 | static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf) | 697 | static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf) |
702 | { | 698 | { |
703 | struct cpufreq_policy *policy = to_policy(kobj); | 699 | struct cpufreq_policy *policy = to_policy(kobj); |
704 | struct freq_attr *fattr = to_attr(attr); | 700 | struct freq_attr *fattr = to_attr(attr); |
705 | ssize_t ret = -EINVAL; | 701 | ssize_t ret = -EINVAL; |
706 | policy = cpufreq_cpu_get(policy->cpu); | 702 | policy = cpufreq_cpu_get(policy->cpu); |
707 | if (!policy) | 703 | if (!policy) |
708 | goto no_policy; | 704 | goto no_policy; |
709 | 705 | ||
710 | if (lock_policy_rwsem_read(policy->cpu) < 0) | 706 | if (lock_policy_rwsem_read(policy->cpu) < 0) |
711 | goto fail; | 707 | goto fail; |
712 | 708 | ||
713 | if (fattr->show) | 709 | if (fattr->show) |
714 | ret = fattr->show(policy, buf); | 710 | ret = fattr->show(policy, buf); |
715 | else | 711 | else |
716 | ret = -EIO; | 712 | ret = -EIO; |
717 | 713 | ||
718 | unlock_policy_rwsem_read(policy->cpu); | 714 | unlock_policy_rwsem_read(policy->cpu); |
719 | fail: | 715 | fail: |
720 | cpufreq_cpu_put(policy); | 716 | cpufreq_cpu_put(policy); |
721 | no_policy: | 717 | no_policy: |
722 | return ret; | 718 | return ret; |
723 | } | 719 | } |
724 | 720 | ||
725 | static ssize_t store(struct kobject *kobj, struct attribute *attr, | 721 | static ssize_t store(struct kobject *kobj, struct attribute *attr, |
726 | const char *buf, size_t count) | 722 | const char *buf, size_t count) |
727 | { | 723 | { |
728 | struct cpufreq_policy *policy = to_policy(kobj); | 724 | struct cpufreq_policy *policy = to_policy(kobj); |
729 | struct freq_attr *fattr = to_attr(attr); | 725 | struct freq_attr *fattr = to_attr(attr); |
730 | ssize_t ret = -EINVAL; | 726 | ssize_t ret = -EINVAL; |
731 | policy = cpufreq_cpu_get(policy->cpu); | 727 | policy = cpufreq_cpu_get(policy->cpu); |
732 | if (!policy) | 728 | if (!policy) |
733 | goto no_policy; | 729 | goto no_policy; |
734 | 730 | ||
735 | if (lock_policy_rwsem_write(policy->cpu) < 0) | 731 | if (lock_policy_rwsem_write(policy->cpu) < 0) |
736 | goto fail; | 732 | goto fail; |
737 | 733 | ||
738 | if (fattr->store) | 734 | if (fattr->store) |
739 | ret = fattr->store(policy, buf, count); | 735 | ret = fattr->store(policy, buf, count); |
740 | else | 736 | else |
741 | ret = -EIO; | 737 | ret = -EIO; |
742 | 738 | ||
743 | unlock_policy_rwsem_write(policy->cpu); | 739 | unlock_policy_rwsem_write(policy->cpu); |
744 | fail: | 740 | fail: |
745 | cpufreq_cpu_put(policy); | 741 | cpufreq_cpu_put(policy); |
746 | no_policy: | 742 | no_policy: |
747 | return ret; | 743 | return ret; |
748 | } | 744 | } |
749 | 745 | ||
750 | static void cpufreq_sysfs_release(struct kobject *kobj) | 746 | static void cpufreq_sysfs_release(struct kobject *kobj) |
751 | { | 747 | { |
752 | struct cpufreq_policy *policy = to_policy(kobj); | 748 | struct cpufreq_policy *policy = to_policy(kobj); |
753 | dprintk("last reference is dropped\n"); | 749 | dprintk("last reference is dropped\n"); |
754 | complete(&policy->kobj_unregister); | 750 | complete(&policy->kobj_unregister); |
755 | } | 751 | } |
756 | 752 | ||
757 | static const struct sysfs_ops sysfs_ops = { | 753 | static const struct sysfs_ops sysfs_ops = { |
758 | .show = show, | 754 | .show = show, |
759 | .store = store, | 755 | .store = store, |
760 | }; | 756 | }; |
761 | 757 | ||
762 | static struct kobj_type ktype_cpufreq = { | 758 | static struct kobj_type ktype_cpufreq = { |
763 | .sysfs_ops = &sysfs_ops, | 759 | .sysfs_ops = &sysfs_ops, |
764 | .default_attrs = default_attrs, | 760 | .default_attrs = default_attrs, |
765 | .release = cpufreq_sysfs_release, | 761 | .release = cpufreq_sysfs_release, |
766 | }; | 762 | }; |
767 | 763 | ||
768 | /* | 764 | /* |
769 | * Returns: | 765 | * Returns: |
770 | * Negative: Failure | 766 | * Negative: Failure |
771 | * 0: Success | 767 | * 0: Success |
772 | * Positive: When we have a managed CPU and the sysfs got symlinked | 768 | * Positive: When we have a managed CPU and the sysfs got symlinked |
773 | */ | 769 | */ |
774 | static int cpufreq_add_dev_policy(unsigned int cpu, | 770 | static int cpufreq_add_dev_policy(unsigned int cpu, |
775 | struct cpufreq_policy *policy, | 771 | struct cpufreq_policy *policy, |
776 | struct sys_device *sys_dev) | 772 | struct sys_device *sys_dev) |
777 | { | 773 | { |
778 | int ret = 0; | 774 | int ret = 0; |
779 | #ifdef CONFIG_SMP | 775 | #ifdef CONFIG_SMP |
780 | unsigned long flags; | 776 | unsigned long flags; |
781 | unsigned int j; | 777 | unsigned int j; |
782 | #ifdef CONFIG_HOTPLUG_CPU | 778 | #ifdef CONFIG_HOTPLUG_CPU |
783 | struct cpufreq_governor *gov; | 779 | struct cpufreq_governor *gov; |
784 | 780 | ||
785 | gov = __find_governor(per_cpu(cpufreq_cpu_governor, cpu)); | 781 | gov = __find_governor(per_cpu(cpufreq_cpu_governor, cpu)); |
786 | if (gov) { | 782 | if (gov) { |
787 | policy->governor = gov; | 783 | policy->governor = gov; |
788 | dprintk("Restoring governor %s for cpu %d\n", | 784 | dprintk("Restoring governor %s for cpu %d\n", |
789 | policy->governor->name, cpu); | 785 | policy->governor->name, cpu); |
790 | } | 786 | } |
791 | #endif | 787 | #endif |
792 | 788 | ||
793 | for_each_cpu(j, policy->cpus) { | 789 | for_each_cpu(j, policy->cpus) { |
794 | struct cpufreq_policy *managed_policy; | 790 | struct cpufreq_policy *managed_policy; |
795 | 791 | ||
796 | if (cpu == j) | 792 | if (cpu == j) |
797 | continue; | 793 | continue; |
798 | 794 | ||
799 | /* Check for existing affected CPUs. | 795 | /* Check for existing affected CPUs. |
800 | * They may not be aware of it due to CPU Hotplug. | 796 | * They may not be aware of it due to CPU Hotplug. |
801 | * cpufreq_cpu_put is called when the device is removed | 797 | * cpufreq_cpu_put is called when the device is removed |
802 | * in __cpufreq_remove_dev() | 798 | * in __cpufreq_remove_dev() |
803 | */ | 799 | */ |
804 | managed_policy = cpufreq_cpu_get(j); | 800 | managed_policy = cpufreq_cpu_get(j); |
805 | if (unlikely(managed_policy)) { | 801 | if (unlikely(managed_policy)) { |
806 | 802 | ||
807 | /* Set proper policy_cpu */ | 803 | /* Set proper policy_cpu */ |
808 | unlock_policy_rwsem_write(cpu); | 804 | unlock_policy_rwsem_write(cpu); |
809 | per_cpu(cpufreq_policy_cpu, cpu) = managed_policy->cpu; | 805 | per_cpu(cpufreq_policy_cpu, cpu) = managed_policy->cpu; |
810 | 806 | ||
811 | if (lock_policy_rwsem_write(cpu) < 0) { | 807 | if (lock_policy_rwsem_write(cpu) < 0) { |
812 | /* Should not go through policy unlock path */ | 808 | /* Should not go through policy unlock path */ |
813 | if (cpufreq_driver->exit) | 809 | if (cpufreq_driver->exit) |
814 | cpufreq_driver->exit(policy); | 810 | cpufreq_driver->exit(policy); |
815 | cpufreq_cpu_put(managed_policy); | 811 | cpufreq_cpu_put(managed_policy); |
816 | return -EBUSY; | 812 | return -EBUSY; |
817 | } | 813 | } |
818 | 814 | ||
819 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 815 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
820 | cpumask_copy(managed_policy->cpus, policy->cpus); | 816 | cpumask_copy(managed_policy->cpus, policy->cpus); |
821 | per_cpu(cpufreq_cpu_data, cpu) = managed_policy; | 817 | per_cpu(cpufreq_cpu_data, cpu) = managed_policy; |
822 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 818 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
823 | 819 | ||
824 | dprintk("CPU already managed, adding link\n"); | 820 | dprintk("CPU already managed, adding link\n"); |
825 | ret = sysfs_create_link(&sys_dev->kobj, | 821 | ret = sysfs_create_link(&sys_dev->kobj, |
826 | &managed_policy->kobj, | 822 | &managed_policy->kobj, |
827 | "cpufreq"); | 823 | "cpufreq"); |
828 | if (ret) | 824 | if (ret) |
829 | cpufreq_cpu_put(managed_policy); | 825 | cpufreq_cpu_put(managed_policy); |
830 | /* | 826 | /* |
831 | * Success. We only needed to be added to the mask. | 827 | * Success. We only needed to be added to the mask. |
832 | * Call driver->exit() because only the cpu parent of | 828 | * Call driver->exit() because only the cpu parent of |
833 | * the kobj needed to call init(). | 829 | * the kobj needed to call init(). |
834 | */ | 830 | */ |
835 | if (cpufreq_driver->exit) | 831 | if (cpufreq_driver->exit) |
836 | cpufreq_driver->exit(policy); | 832 | cpufreq_driver->exit(policy); |
837 | 833 | ||
838 | if (!ret) | 834 | if (!ret) |
839 | return 1; | 835 | return 1; |
840 | else | 836 | else |
841 | return ret; | 837 | return ret; |
842 | } | 838 | } |
843 | } | 839 | } |
844 | #endif | 840 | #endif |
845 | return ret; | 841 | return ret; |
846 | } | 842 | } |
847 | 843 | ||
848 | 844 | ||
849 | /* symlink affected CPUs */ | 845 | /* symlink affected CPUs */ |
850 | static int cpufreq_add_dev_symlink(unsigned int cpu, | 846 | static int cpufreq_add_dev_symlink(unsigned int cpu, |
851 | struct cpufreq_policy *policy) | 847 | struct cpufreq_policy *policy) |
852 | { | 848 | { |
853 | unsigned int j; | 849 | unsigned int j; |
854 | int ret = 0; | 850 | int ret = 0; |
855 | 851 | ||
856 | for_each_cpu(j, policy->cpus) { | 852 | for_each_cpu(j, policy->cpus) { |
857 | struct cpufreq_policy *managed_policy; | 853 | struct cpufreq_policy *managed_policy; |
858 | struct sys_device *cpu_sys_dev; | 854 | struct sys_device *cpu_sys_dev; |
859 | 855 | ||
860 | if (j == cpu) | 856 | if (j == cpu) |
861 | continue; | 857 | continue; |
862 | if (!cpu_online(j)) | 858 | if (!cpu_online(j)) |
863 | continue; | 859 | continue; |
864 | 860 | ||
865 | dprintk("CPU %u already managed, adding link\n", j); | 861 | dprintk("CPU %u already managed, adding link\n", j); |
866 | managed_policy = cpufreq_cpu_get(cpu); | 862 | managed_policy = cpufreq_cpu_get(cpu); |
867 | cpu_sys_dev = get_cpu_sysdev(j); | 863 | cpu_sys_dev = get_cpu_sysdev(j); |
868 | ret = sysfs_create_link(&cpu_sys_dev->kobj, &policy->kobj, | 864 | ret = sysfs_create_link(&cpu_sys_dev->kobj, &policy->kobj, |
869 | "cpufreq"); | 865 | "cpufreq"); |
870 | if (ret) { | 866 | if (ret) { |
871 | cpufreq_cpu_put(managed_policy); | 867 | cpufreq_cpu_put(managed_policy); |
872 | return ret; | 868 | return ret; |
873 | } | 869 | } |
874 | } | 870 | } |
875 | return ret; | 871 | return ret; |
876 | } | 872 | } |
877 | 873 | ||
878 | static int cpufreq_add_dev_interface(unsigned int cpu, | 874 | static int cpufreq_add_dev_interface(unsigned int cpu, |
879 | struct cpufreq_policy *policy, | 875 | struct cpufreq_policy *policy, |
880 | struct sys_device *sys_dev) | 876 | struct sys_device *sys_dev) |
881 | { | 877 | { |
882 | struct cpufreq_policy new_policy; | 878 | struct cpufreq_policy new_policy; |
883 | struct freq_attr **drv_attr; | 879 | struct freq_attr **drv_attr; |
884 | unsigned long flags; | 880 | unsigned long flags; |
885 | int ret = 0; | 881 | int ret = 0; |
886 | unsigned int j; | 882 | unsigned int j; |
887 | 883 | ||
888 | /* prepare interface data */ | 884 | /* prepare interface data */ |
889 | ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, | 885 | ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, |
890 | &sys_dev->kobj, "cpufreq"); | 886 | &sys_dev->kobj, "cpufreq"); |
891 | if (ret) | 887 | if (ret) |
892 | return ret; | 888 | return ret; |
893 | 889 | ||
894 | /* set up files for this cpu device */ | 890 | /* set up files for this cpu device */ |
895 | drv_attr = cpufreq_driver->attr; | 891 | drv_attr = cpufreq_driver->attr; |
896 | while ((drv_attr) && (*drv_attr)) { | 892 | while ((drv_attr) && (*drv_attr)) { |
897 | ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr)); | 893 | ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr)); |
898 | if (ret) | 894 | if (ret) |
899 | goto err_out_kobj_put; | 895 | goto err_out_kobj_put; |
900 | drv_attr++; | 896 | drv_attr++; |
901 | } | 897 | } |
902 | if (cpufreq_driver->get) { | 898 | if (cpufreq_driver->get) { |
903 | ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr); | 899 | ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr); |
904 | if (ret) | 900 | if (ret) |
905 | goto err_out_kobj_put; | 901 | goto err_out_kobj_put; |
906 | } | 902 | } |
907 | if (cpufreq_driver->target) { | 903 | if (cpufreq_driver->target) { |
908 | ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr); | 904 | ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr); |
909 | if (ret) | 905 | if (ret) |
910 | goto err_out_kobj_put; | 906 | goto err_out_kobj_put; |
911 | } | 907 | } |
912 | if (cpufreq_driver->bios_limit) { | 908 | if (cpufreq_driver->bios_limit) { |
913 | ret = sysfs_create_file(&policy->kobj, &bios_limit.attr); | 909 | ret = sysfs_create_file(&policy->kobj, &bios_limit.attr); |
914 | if (ret) | 910 | if (ret) |
915 | goto err_out_kobj_put; | 911 | goto err_out_kobj_put; |
916 | } | 912 | } |
917 | 913 | ||
918 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 914 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
919 | for_each_cpu(j, policy->cpus) { | 915 | for_each_cpu(j, policy->cpus) { |
920 | if (!cpu_online(j)) | 916 | if (!cpu_online(j)) |
921 | continue; | 917 | continue; |
922 | per_cpu(cpufreq_cpu_data, j) = policy; | 918 | per_cpu(cpufreq_cpu_data, j) = policy; |
923 | per_cpu(cpufreq_policy_cpu, j) = policy->cpu; | 919 | per_cpu(cpufreq_policy_cpu, j) = policy->cpu; |
924 | } | 920 | } |
925 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 921 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
926 | 922 | ||
927 | ret = cpufreq_add_dev_symlink(cpu, policy); | 923 | ret = cpufreq_add_dev_symlink(cpu, policy); |
928 | if (ret) | 924 | if (ret) |
929 | goto err_out_kobj_put; | 925 | goto err_out_kobj_put; |
930 | 926 | ||
931 | memcpy(&new_policy, policy, sizeof(struct cpufreq_policy)); | 927 | memcpy(&new_policy, policy, sizeof(struct cpufreq_policy)); |
932 | /* ensure that the starting sequence is run in __cpufreq_set_policy */ | 928 | /* ensure that the starting sequence is run in __cpufreq_set_policy */ |
933 | policy->governor = NULL; | 929 | policy->governor = NULL; |
934 | 930 | ||
935 | /* set default policy */ | 931 | /* set default policy */ |
936 | ret = __cpufreq_set_policy(policy, &new_policy); | 932 | ret = __cpufreq_set_policy(policy, &new_policy); |
937 | policy->user_policy.policy = policy->policy; | 933 | policy->user_policy.policy = policy->policy; |
938 | policy->user_policy.governor = policy->governor; | 934 | policy->user_policy.governor = policy->governor; |
939 | 935 | ||
940 | if (ret) { | 936 | if (ret) { |
941 | dprintk("setting policy failed\n"); | 937 | dprintk("setting policy failed\n"); |
942 | if (cpufreq_driver->exit) | 938 | if (cpufreq_driver->exit) |
943 | cpufreq_driver->exit(policy); | 939 | cpufreq_driver->exit(policy); |
944 | } | 940 | } |
945 | return ret; | 941 | return ret; |
946 | 942 | ||
947 | err_out_kobj_put: | 943 | err_out_kobj_put: |
948 | kobject_put(&policy->kobj); | 944 | kobject_put(&policy->kobj); |
949 | wait_for_completion(&policy->kobj_unregister); | 945 | wait_for_completion(&policy->kobj_unregister); |
950 | return ret; | 946 | return ret; |
951 | } | 947 | } |
952 | 948 | ||
953 | 949 | ||
954 | /** | 950 | /** |
955 | * cpufreq_add_dev - add a CPU device | 951 | * cpufreq_add_dev - add a CPU device |
956 | * | 952 | * |
957 | * Adds the cpufreq interface for a CPU device. | 953 | * Adds the cpufreq interface for a CPU device. |
958 | * | 954 | * |
959 | * The Oracle says: try running cpufreq registration/unregistration concurrently | 955 | * The Oracle says: try running cpufreq registration/unregistration concurrently |
960 | * with cpu hotplugging and all hell will break loose. Tried to clean this | 956 | * with cpu hotplugging and all hell will break loose. Tried to clean this |
961 | * mess up, but more thorough testing is needed. - Mathieu | 957 | * mess up, but more thorough testing is needed. - Mathieu |
962 | */ | 958 | */ |
963 | static int cpufreq_add_dev(struct sys_device *sys_dev) | 959 | static int cpufreq_add_dev(struct sys_device *sys_dev) |
964 | { | 960 | { |
965 | unsigned int cpu = sys_dev->id; | 961 | unsigned int cpu = sys_dev->id; |
966 | int ret = 0, found = 0; | 962 | int ret = 0, found = 0; |
967 | struct cpufreq_policy *policy; | 963 | struct cpufreq_policy *policy; |
968 | unsigned long flags; | 964 | unsigned long flags; |
969 | unsigned int j; | 965 | unsigned int j; |
970 | #ifdef CONFIG_HOTPLUG_CPU | 966 | #ifdef CONFIG_HOTPLUG_CPU |
971 | int sibling; | 967 | int sibling; |
972 | #endif | 968 | #endif |
973 | 969 | ||
974 | if (cpu_is_offline(cpu)) | 970 | if (cpu_is_offline(cpu)) |
975 | return 0; | 971 | return 0; |
976 | 972 | ||
977 | cpufreq_debug_disable_ratelimit(); | 973 | cpufreq_debug_disable_ratelimit(); |
978 | dprintk("adding CPU %u\n", cpu); | 974 | dprintk("adding CPU %u\n", cpu); |
979 | 975 | ||
980 | #ifdef CONFIG_SMP | 976 | #ifdef CONFIG_SMP |
981 | /* check whether a different CPU already registered this | 977 | /* check whether a different CPU already registered this |
982 | * CPU because it is in the same boat. */ | 978 | * CPU because it is in the same boat. */ |
983 | policy = cpufreq_cpu_get(cpu); | 979 | policy = cpufreq_cpu_get(cpu); |
984 | if (unlikely(policy)) { | 980 | if (unlikely(policy)) { |
985 | cpufreq_cpu_put(policy); | 981 | cpufreq_cpu_put(policy); |
986 | cpufreq_debug_enable_ratelimit(); | 982 | cpufreq_debug_enable_ratelimit(); |
987 | return 0; | 983 | return 0; |
988 | } | 984 | } |
989 | #endif | 985 | #endif |
990 | 986 | ||
991 | if (!try_module_get(cpufreq_driver->owner)) { | 987 | if (!try_module_get(cpufreq_driver->owner)) { |
992 | ret = -EINVAL; | 988 | ret = -EINVAL; |
993 | goto module_out; | 989 | goto module_out; |
994 | } | 990 | } |
995 | 991 | ||
996 | ret = -ENOMEM; | 992 | ret = -ENOMEM; |
997 | policy = kzalloc(sizeof(struct cpufreq_policy), GFP_KERNEL); | 993 | policy = kzalloc(sizeof(struct cpufreq_policy), GFP_KERNEL); |
998 | if (!policy) | 994 | if (!policy) |
999 | goto nomem_out; | 995 | goto nomem_out; |
1000 | 996 | ||
1001 | if (!alloc_cpumask_var(&policy->cpus, GFP_KERNEL)) | 997 | if (!alloc_cpumask_var(&policy->cpus, GFP_KERNEL)) |
1002 | goto err_free_policy; | 998 | goto err_free_policy; |
1003 | 999 | ||
1004 | if (!zalloc_cpumask_var(&policy->related_cpus, GFP_KERNEL)) | 1000 | if (!zalloc_cpumask_var(&policy->related_cpus, GFP_KERNEL)) |
1005 | goto err_free_cpumask; | 1001 | goto err_free_cpumask; |
1006 | 1002 | ||
1007 | policy->cpu = cpu; | 1003 | policy->cpu = cpu; |
1008 | cpumask_copy(policy->cpus, cpumask_of(cpu)); | 1004 | cpumask_copy(policy->cpus, cpumask_of(cpu)); |
1009 | 1005 | ||
1010 | /* Initially set CPU itself as the policy_cpu */ | 1006 | /* Initially set CPU itself as the policy_cpu */ |
1011 | per_cpu(cpufreq_policy_cpu, cpu) = cpu; | 1007 | per_cpu(cpufreq_policy_cpu, cpu) = cpu; |
1012 | ret = (lock_policy_rwsem_write(cpu) < 0); | 1008 | ret = (lock_policy_rwsem_write(cpu) < 0); |
1013 | WARN_ON(ret); | 1009 | WARN_ON(ret); |
1014 | 1010 | ||
1015 | init_completion(&policy->kobj_unregister); | 1011 | init_completion(&policy->kobj_unregister); |
1016 | INIT_WORK(&policy->update, handle_update); | 1012 | INIT_WORK(&policy->update, handle_update); |
1017 | 1013 | ||
1018 | /* Set governor before ->init, so that driver could check it */ | 1014 | /* Set governor before ->init, so that driver could check it */ |
1019 | #ifdef CONFIG_HOTPLUG_CPU | 1015 | #ifdef CONFIG_HOTPLUG_CPU |
1020 | for_each_online_cpu(sibling) { | 1016 | for_each_online_cpu(sibling) { |
1021 | struct cpufreq_policy *cp = per_cpu(cpufreq_cpu_data, sibling); | 1017 | struct cpufreq_policy *cp = per_cpu(cpufreq_cpu_data, sibling); |
1022 | if (cp && cp->governor && | 1018 | if (cp && cp->governor && |
1023 | (cpumask_test_cpu(cpu, cp->related_cpus))) { | 1019 | (cpumask_test_cpu(cpu, cp->related_cpus))) { |
1024 | policy->governor = cp->governor; | 1020 | policy->governor = cp->governor; |
1025 | found = 1; | 1021 | found = 1; |
1026 | break; | 1022 | break; |
1027 | } | 1023 | } |
1028 | } | 1024 | } |
1029 | #endif | 1025 | #endif |
1030 | if (!found) | 1026 | if (!found) |
1031 | policy->governor = CPUFREQ_DEFAULT_GOVERNOR; | 1027 | policy->governor = CPUFREQ_DEFAULT_GOVERNOR; |
1032 | /* call driver. From then on the cpufreq must be able | 1028 | /* call driver. From then on the cpufreq must be able |
1033 | * to accept all calls to ->verify and ->setpolicy for this CPU | 1029 | * to accept all calls to ->verify and ->setpolicy for this CPU |
1034 | */ | 1030 | */ |
1035 | ret = cpufreq_driver->init(policy); | 1031 | ret = cpufreq_driver->init(policy); |
1036 | if (ret) { | 1032 | if (ret) { |
1037 | dprintk("initialization failed\n"); | 1033 | dprintk("initialization failed\n"); |
1038 | goto err_unlock_policy; | 1034 | goto err_unlock_policy; |
1039 | } | 1035 | } |
1040 | policy->user_policy.min = policy->min; | 1036 | policy->user_policy.min = policy->min; |
1041 | policy->user_policy.max = policy->max; | 1037 | policy->user_policy.max = policy->max; |
1042 | 1038 | ||
1043 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, | 1039 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, |
1044 | CPUFREQ_START, policy); | 1040 | CPUFREQ_START, policy); |
1045 | 1041 | ||
1046 | ret = cpufreq_add_dev_policy(cpu, policy, sys_dev); | 1042 | ret = cpufreq_add_dev_policy(cpu, policy, sys_dev); |
1047 | if (ret) { | 1043 | if (ret) { |
1048 | if (ret > 0) | 1044 | if (ret > 0) |
1049 | /* This is a managed cpu, symlink created, | 1045 | /* This is a managed cpu, symlink created, |
1050 | exit with 0 */ | 1046 | exit with 0 */ |
1051 | ret = 0; | 1047 | ret = 0; |
1052 | goto err_unlock_policy; | 1048 | goto err_unlock_policy; |
1053 | } | 1049 | } |
1054 | 1050 | ||
1055 | ret = cpufreq_add_dev_interface(cpu, policy, sys_dev); | 1051 | ret = cpufreq_add_dev_interface(cpu, policy, sys_dev); |
1056 | if (ret) | 1052 | if (ret) |
1057 | goto err_out_unregister; | 1053 | goto err_out_unregister; |
1058 | 1054 | ||
1059 | unlock_policy_rwsem_write(cpu); | 1055 | unlock_policy_rwsem_write(cpu); |
1060 | 1056 | ||
1061 | kobject_uevent(&policy->kobj, KOBJ_ADD); | 1057 | kobject_uevent(&policy->kobj, KOBJ_ADD); |
1062 | module_put(cpufreq_driver->owner); | 1058 | module_put(cpufreq_driver->owner); |
1063 | dprintk("initialization complete\n"); | 1059 | dprintk("initialization complete\n"); |
1064 | cpufreq_debug_enable_ratelimit(); | 1060 | cpufreq_debug_enable_ratelimit(); |
1065 | 1061 | ||
1066 | return 0; | 1062 | return 0; |
1067 | 1063 | ||
1068 | 1064 | ||
1069 | err_out_unregister: | 1065 | err_out_unregister: |
1070 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1066 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
1071 | for_each_cpu(j, policy->cpus) | 1067 | for_each_cpu(j, policy->cpus) |
1072 | per_cpu(cpufreq_cpu_data, j) = NULL; | 1068 | per_cpu(cpufreq_cpu_data, j) = NULL; |
1073 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1069 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1074 | 1070 | ||
1075 | kobject_put(&policy->kobj); | 1071 | kobject_put(&policy->kobj); |
1076 | wait_for_completion(&policy->kobj_unregister); | 1072 | wait_for_completion(&policy->kobj_unregister); |
1077 | 1073 | ||
1078 | err_unlock_policy: | 1074 | err_unlock_policy: |
1079 | unlock_policy_rwsem_write(cpu); | 1075 | unlock_policy_rwsem_write(cpu); |
1080 | free_cpumask_var(policy->related_cpus); | 1076 | free_cpumask_var(policy->related_cpus); |
1081 | err_free_cpumask: | 1077 | err_free_cpumask: |
1082 | free_cpumask_var(policy->cpus); | 1078 | free_cpumask_var(policy->cpus); |
1083 | err_free_policy: | 1079 | err_free_policy: |
1084 | kfree(policy); | 1080 | kfree(policy); |
1085 | nomem_out: | 1081 | nomem_out: |
1086 | module_put(cpufreq_driver->owner); | 1082 | module_put(cpufreq_driver->owner); |
1087 | module_out: | 1083 | module_out: |
1088 | cpufreq_debug_enable_ratelimit(); | 1084 | cpufreq_debug_enable_ratelimit(); |
1089 | return ret; | 1085 | return ret; |
1090 | } | 1086 | } |
1091 | 1087 | ||
1092 | 1088 | ||
1093 | /** | 1089 | /** |
1094 | * __cpufreq_remove_dev - remove a CPU device | 1090 | * __cpufreq_remove_dev - remove a CPU device |
1095 | * | 1091 | * |
1096 | * Removes the cpufreq interface for a CPU device. | 1092 | * Removes the cpufreq interface for a CPU device. |
1097 | * Caller should already have policy_rwsem in write mode for this CPU. | 1093 | * Caller should already have policy_rwsem in write mode for this CPU. |
1098 | * This routine frees the rwsem before returning. | 1094 | * This routine frees the rwsem before returning. |
1099 | */ | 1095 | */ |
1100 | static int __cpufreq_remove_dev(struct sys_device *sys_dev) | 1096 | static int __cpufreq_remove_dev(struct sys_device *sys_dev) |
1101 | { | 1097 | { |
1102 | unsigned int cpu = sys_dev->id; | 1098 | unsigned int cpu = sys_dev->id; |
1103 | unsigned long flags; | 1099 | unsigned long flags; |
1104 | struct cpufreq_policy *data; | 1100 | struct cpufreq_policy *data; |
1105 | struct kobject *kobj; | 1101 | struct kobject *kobj; |
1106 | struct completion *cmp; | 1102 | struct completion *cmp; |
1107 | #ifdef CONFIG_SMP | 1103 | #ifdef CONFIG_SMP |
1108 | struct sys_device *cpu_sys_dev; | 1104 | struct sys_device *cpu_sys_dev; |
1109 | unsigned int j; | 1105 | unsigned int j; |
1110 | #endif | 1106 | #endif |
1111 | 1107 | ||
1112 | cpufreq_debug_disable_ratelimit(); | 1108 | cpufreq_debug_disable_ratelimit(); |
1113 | dprintk("unregistering CPU %u\n", cpu); | 1109 | dprintk("unregistering CPU %u\n", cpu); |
1114 | 1110 | ||
1115 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1111 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
1116 | data = per_cpu(cpufreq_cpu_data, cpu); | 1112 | data = per_cpu(cpufreq_cpu_data, cpu); |
1117 | 1113 | ||
1118 | if (!data) { | 1114 | if (!data) { |
1119 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1115 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1120 | cpufreq_debug_enable_ratelimit(); | 1116 | cpufreq_debug_enable_ratelimit(); |
1121 | unlock_policy_rwsem_write(cpu); | 1117 | unlock_policy_rwsem_write(cpu); |
1122 | return -EINVAL; | 1118 | return -EINVAL; |
1123 | } | 1119 | } |
1124 | per_cpu(cpufreq_cpu_data, cpu) = NULL; | 1120 | per_cpu(cpufreq_cpu_data, cpu) = NULL; |
1125 | 1121 | ||
1126 | 1122 | ||
1127 | #ifdef CONFIG_SMP | 1123 | #ifdef CONFIG_SMP |
1128 | /* if this isn't the CPU which is the parent of the kobj, we | 1124 | /* if this isn't the CPU which is the parent of the kobj, we |
1129 | * only need to unlink, put and exit | 1125 | * only need to unlink, put and exit |
1130 | */ | 1126 | */ |
1131 | if (unlikely(cpu != data->cpu)) { | 1127 | if (unlikely(cpu != data->cpu)) { |
1132 | dprintk("removing link\n"); | 1128 | dprintk("removing link\n"); |
1133 | cpumask_clear_cpu(cpu, data->cpus); | 1129 | cpumask_clear_cpu(cpu, data->cpus); |
1134 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1130 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1135 | kobj = &sys_dev->kobj; | 1131 | kobj = &sys_dev->kobj; |
1136 | cpufreq_cpu_put(data); | 1132 | cpufreq_cpu_put(data); |
1137 | cpufreq_debug_enable_ratelimit(); | 1133 | cpufreq_debug_enable_ratelimit(); |
1138 | unlock_policy_rwsem_write(cpu); | 1134 | unlock_policy_rwsem_write(cpu); |
1139 | sysfs_remove_link(kobj, "cpufreq"); | 1135 | sysfs_remove_link(kobj, "cpufreq"); |
1140 | return 0; | 1136 | return 0; |
1141 | } | 1137 | } |
1142 | #endif | 1138 | #endif |
1143 | 1139 | ||
1144 | #ifdef CONFIG_SMP | 1140 | #ifdef CONFIG_SMP |
1145 | 1141 | ||
1146 | #ifdef CONFIG_HOTPLUG_CPU | 1142 | #ifdef CONFIG_HOTPLUG_CPU |
1147 | strncpy(per_cpu(cpufreq_cpu_governor, cpu), data->governor->name, | 1143 | strncpy(per_cpu(cpufreq_cpu_governor, cpu), data->governor->name, |
1148 | CPUFREQ_NAME_LEN); | 1144 | CPUFREQ_NAME_LEN); |
1149 | #endif | 1145 | #endif |
1150 | 1146 | ||
1151 | /* if we have other CPUs still registered, we need to unlink them, | 1147 | /* if we have other CPUs still registered, we need to unlink them, |
1152 | * or else wait_for_completion below will lock up. Clean the | 1148 | * or else wait_for_completion below will lock up. Clean the |
1153 | * per_cpu(cpufreq_cpu_data) while holding the lock, and remove | 1149 | * per_cpu(cpufreq_cpu_data) while holding the lock, and remove |
1154 | * the sysfs links afterwards. | 1150 | * the sysfs links afterwards. |
1155 | */ | 1151 | */ |
1156 | if (unlikely(cpumask_weight(data->cpus) > 1)) { | 1152 | if (unlikely(cpumask_weight(data->cpus) > 1)) { |
1157 | for_each_cpu(j, data->cpus) { | 1153 | for_each_cpu(j, data->cpus) { |
1158 | if (j == cpu) | 1154 | if (j == cpu) |
1159 | continue; | 1155 | continue; |
1160 | per_cpu(cpufreq_cpu_data, j) = NULL; | 1156 | per_cpu(cpufreq_cpu_data, j) = NULL; |
1161 | } | 1157 | } |
1162 | } | 1158 | } |
1163 | 1159 | ||
1164 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1160 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1165 | 1161 | ||
1166 | if (unlikely(cpumask_weight(data->cpus) > 1)) { | 1162 | if (unlikely(cpumask_weight(data->cpus) > 1)) { |
1167 | for_each_cpu(j, data->cpus) { | 1163 | for_each_cpu(j, data->cpus) { |
1168 | if (j == cpu) | 1164 | if (j == cpu) |
1169 | continue; | 1165 | continue; |
1170 | dprintk("removing link for cpu %u\n", j); | 1166 | dprintk("removing link for cpu %u\n", j); |
1171 | #ifdef CONFIG_HOTPLUG_CPU | 1167 | #ifdef CONFIG_HOTPLUG_CPU |
1172 | strncpy(per_cpu(cpufreq_cpu_governor, j), | 1168 | strncpy(per_cpu(cpufreq_cpu_governor, j), |
1173 | data->governor->name, CPUFREQ_NAME_LEN); | 1169 | data->governor->name, CPUFREQ_NAME_LEN); |
1174 | #endif | 1170 | #endif |
1175 | cpu_sys_dev = get_cpu_sysdev(j); | 1171 | cpu_sys_dev = get_cpu_sysdev(j); |
1176 | kobj = &cpu_sys_dev->kobj; | 1172 | kobj = &cpu_sys_dev->kobj; |
1177 | unlock_policy_rwsem_write(cpu); | 1173 | unlock_policy_rwsem_write(cpu); |
1178 | sysfs_remove_link(kobj, "cpufreq"); | 1174 | sysfs_remove_link(kobj, "cpufreq"); |
1179 | lock_policy_rwsem_write(cpu); | 1175 | lock_policy_rwsem_write(cpu); |
1180 | cpufreq_cpu_put(data); | 1176 | cpufreq_cpu_put(data); |
1181 | } | 1177 | } |
1182 | } | 1178 | } |
1183 | #else | 1179 | #else |
1184 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1180 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1185 | #endif | 1181 | #endif |
1186 | 1182 | ||
1187 | if (cpufreq_driver->target) | 1183 | if (cpufreq_driver->target) |
1188 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); | 1184 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); |
1189 | 1185 | ||
1190 | kobj = &data->kobj; | 1186 | kobj = &data->kobj; |
1191 | cmp = &data->kobj_unregister; | 1187 | cmp = &data->kobj_unregister; |
1192 | unlock_policy_rwsem_write(cpu); | 1188 | unlock_policy_rwsem_write(cpu); |
1193 | kobject_put(kobj); | 1189 | kobject_put(kobj); |
1194 | 1190 | ||
1195 | /* we need to make sure that the underlying kobj is actually | 1191 | /* we need to make sure that the underlying kobj is actually |
1196 | * not referenced anymore by anybody before we proceed with | 1192 | * not referenced anymore by anybody before we proceed with |
1197 | * unloading. | 1193 | * unloading. |
1198 | */ | 1194 | */ |
1199 | dprintk("waiting for dropping of refcount\n"); | 1195 | dprintk("waiting for dropping of refcount\n"); |
1200 | wait_for_completion(cmp); | 1196 | wait_for_completion(cmp); |
1201 | dprintk("wait complete\n"); | 1197 | dprintk("wait complete\n"); |
1202 | 1198 | ||
1203 | lock_policy_rwsem_write(cpu); | 1199 | lock_policy_rwsem_write(cpu); |
1204 | if (cpufreq_driver->exit) | 1200 | if (cpufreq_driver->exit) |
1205 | cpufreq_driver->exit(data); | 1201 | cpufreq_driver->exit(data); |
1206 | unlock_policy_rwsem_write(cpu); | 1202 | unlock_policy_rwsem_write(cpu); |
1207 | 1203 | ||
1208 | free_cpumask_var(data->related_cpus); | 1204 | free_cpumask_var(data->related_cpus); |
1209 | free_cpumask_var(data->cpus); | 1205 | free_cpumask_var(data->cpus); |
1210 | kfree(data); | 1206 | kfree(data); |
1211 | per_cpu(cpufreq_cpu_data, cpu) = NULL; | 1207 | per_cpu(cpufreq_cpu_data, cpu) = NULL; |
1212 | 1208 | ||
1213 | cpufreq_debug_enable_ratelimit(); | 1209 | cpufreq_debug_enable_ratelimit(); |
1214 | return 0; | 1210 | return 0; |
1215 | } | 1211 | } |
1216 | 1212 | ||
1217 | 1213 | ||
1218 | static int cpufreq_remove_dev(struct sys_device *sys_dev) | 1214 | static int cpufreq_remove_dev(struct sys_device *sys_dev) |
1219 | { | 1215 | { |
1220 | unsigned int cpu = sys_dev->id; | 1216 | unsigned int cpu = sys_dev->id; |
1221 | int retval; | 1217 | int retval; |
1222 | 1218 | ||
1223 | if (cpu_is_offline(cpu)) | 1219 | if (cpu_is_offline(cpu)) |
1224 | return 0; | 1220 | return 0; |
1225 | 1221 | ||
1226 | if (unlikely(lock_policy_rwsem_write(cpu))) | 1222 | if (unlikely(lock_policy_rwsem_write(cpu))) |
1227 | BUG(); | 1223 | BUG(); |
1228 | 1224 | ||
1229 | retval = __cpufreq_remove_dev(sys_dev); | 1225 | retval = __cpufreq_remove_dev(sys_dev); |
1230 | return retval; | 1226 | return retval; |
1231 | } | 1227 | } |
1232 | 1228 | ||
1233 | 1229 | ||
1234 | static void handle_update(struct work_struct *work) | 1230 | static void handle_update(struct work_struct *work) |
1235 | { | 1231 | { |
1236 | struct cpufreq_policy *policy = | 1232 | struct cpufreq_policy *policy = |
1237 | container_of(work, struct cpufreq_policy, update); | 1233 | container_of(work, struct cpufreq_policy, update); |
1238 | unsigned int cpu = policy->cpu; | 1234 | unsigned int cpu = policy->cpu; |
1239 | dprintk("handle_update for cpu %u called\n", cpu); | 1235 | dprintk("handle_update for cpu %u called\n", cpu); |
1240 | cpufreq_update_policy(cpu); | 1236 | cpufreq_update_policy(cpu); |
1241 | } | 1237 | } |
1242 | 1238 | ||
1243 | /** | 1239 | /** |
1244 | * cpufreq_out_of_sync - If the actual and saved CPU frequencies differ, we're in deep trouble. | 1240 | * cpufreq_out_of_sync - If the actual and saved CPU frequencies differ, we're in deep trouble. |
1245 | * @cpu: cpu number | 1241 | * @cpu: cpu number |
1246 | * @old_freq: CPU frequency the kernel thinks the CPU runs at | 1242 | * @old_freq: CPU frequency the kernel thinks the CPU runs at |
1247 | * @new_freq: CPU frequency the CPU actually runs at | 1243 | * @new_freq: CPU frequency the CPU actually runs at |
1248 | * | 1244 | * |
1249 | * We adjust to current frequency first, and need to clean up later. | 1245 | * We adjust to current frequency first, and need to clean up later. |
1250 | * So either call cpufreq_update_policy() or schedule handle_update(). | 1246 | * So either call cpufreq_update_policy() or schedule handle_update(). |
1251 | */ | 1247 | */ |
1252 | static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq, | 1248 | static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq, |
1253 | unsigned int new_freq) | 1249 | unsigned int new_freq) |
1254 | { | 1250 | { |
1255 | struct cpufreq_freqs freqs; | 1251 | struct cpufreq_freqs freqs; |
1256 | 1252 | ||
1257 | dprintk("Warning: CPU frequency out of sync: cpufreq and timing " | 1253 | dprintk("Warning: CPU frequency out of sync: cpufreq and timing " |
1258 | "core thinks of %u, is %u kHz.\n", old_freq, new_freq); | 1254 | "core thinks of %u, is %u kHz.\n", old_freq, new_freq); |
1259 | 1255 | ||
1260 | freqs.cpu = cpu; | 1256 | freqs.cpu = cpu; |
1261 | freqs.old = old_freq; | 1257 | freqs.old = old_freq; |
1262 | freqs.new = new_freq; | 1258 | freqs.new = new_freq; |
1263 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 1259 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); |
1264 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 1260 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); |
1265 | } | 1261 | } |
1266 | 1262 | ||
1267 | 1263 | ||
1268 | /** | 1264 | /** |
1269 | * cpufreq_quick_get - get the CPU frequency (in kHz) from policy->cur | 1265 | * cpufreq_quick_get - get the CPU frequency (in kHz) from policy->cur |
1270 | * @cpu: CPU number | 1266 | * @cpu: CPU number |
1271 | * | 1267 | * |
1272 | * This is the last known freq, without actually getting it from the driver. | 1268 | * This is the last known freq, without actually getting it from the driver. |
1273 | * The return value will be the same as what is shown in scaling_cur_freq in sysfs. | 1269 | * The return value will be the same as what is shown in scaling_cur_freq in sysfs. |
1274 | */ | 1270 | */ |
1275 | unsigned int cpufreq_quick_get(unsigned int cpu) | 1271 | unsigned int cpufreq_quick_get(unsigned int cpu) |
1276 | { | 1272 | { |
1277 | struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); | 1273 | struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); |
1278 | unsigned int ret_freq = 0; | 1274 | unsigned int ret_freq = 0; |
1279 | 1275 | ||
1280 | if (policy) { | 1276 | if (policy) { |
1281 | ret_freq = policy->cur; | 1277 | ret_freq = policy->cur; |
1282 | cpufreq_cpu_put(policy); | 1278 | cpufreq_cpu_put(policy); |
1283 | } | 1279 | } |
1284 | 1280 | ||
1285 | return ret_freq; | 1281 | return ret_freq; |
1286 | } | 1282 | } |
1287 | EXPORT_SYMBOL(cpufreq_quick_get); | 1283 | EXPORT_SYMBOL(cpufreq_quick_get); |
1288 | 1284 | ||
1289 | 1285 | ||
1290 | static unsigned int __cpufreq_get(unsigned int cpu) | 1286 | static unsigned int __cpufreq_get(unsigned int cpu) |
1291 | { | 1287 | { |
1292 | struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); | 1288 | struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); |
1293 | unsigned int ret_freq = 0; | 1289 | unsigned int ret_freq = 0; |
1294 | 1290 | ||
1295 | if (!cpufreq_driver->get) | 1291 | if (!cpufreq_driver->get) |
1296 | return ret_freq; | 1292 | return ret_freq; |
1297 | 1293 | ||
1298 | ret_freq = cpufreq_driver->get(cpu); | 1294 | ret_freq = cpufreq_driver->get(cpu); |
1299 | 1295 | ||
1300 | if (ret_freq && policy->cur && | 1296 | if (ret_freq && policy->cur && |
1301 | !(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { | 1297 | !(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { |
1302 | /* verify no discrepancy between actual and | 1298 | /* verify no discrepancy between actual and |
1303 | saved value exists */ | 1299 | saved value exists */ |
1304 | if (unlikely(ret_freq != policy->cur)) { | 1300 | if (unlikely(ret_freq != policy->cur)) { |
1305 | cpufreq_out_of_sync(cpu, policy->cur, ret_freq); | 1301 | cpufreq_out_of_sync(cpu, policy->cur, ret_freq); |
1306 | schedule_work(&policy->update); | 1302 | schedule_work(&policy->update); |
1307 | } | 1303 | } |
1308 | } | 1304 | } |
1309 | 1305 | ||
1310 | return ret_freq; | 1306 | return ret_freq; |
1311 | } | 1307 | } |
1312 | 1308 | ||
1313 | /** | 1309 | /** |
1314 | * cpufreq_get - get the current CPU frequency (in kHz) | 1310 | * cpufreq_get - get the current CPU frequency (in kHz) |
1315 | * @cpu: CPU number | 1311 | * @cpu: CPU number |
1316 | * | 1312 | * |
1317 | * Get the current (static) CPU frequency | 1313 | * Get the current (static) CPU frequency |
1318 | */ | 1314 | */ |
1319 | unsigned int cpufreq_get(unsigned int cpu) | 1315 | unsigned int cpufreq_get(unsigned int cpu) |
1320 | { | 1316 | { |
1321 | unsigned int ret_freq = 0; | 1317 | unsigned int ret_freq = 0; |
1322 | struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); | 1318 | struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); |
1323 | 1319 | ||
1324 | if (!policy) | 1320 | if (!policy) |
1325 | goto out; | 1321 | goto out; |
1326 | 1322 | ||
1327 | if (unlikely(lock_policy_rwsem_read(cpu))) | 1323 | if (unlikely(lock_policy_rwsem_read(cpu))) |
1328 | goto out_policy; | 1324 | goto out_policy; |
1329 | 1325 | ||
1330 | ret_freq = __cpufreq_get(cpu); | 1326 | ret_freq = __cpufreq_get(cpu); |
1331 | 1327 | ||
1332 | unlock_policy_rwsem_read(cpu); | 1328 | unlock_policy_rwsem_read(cpu); |
1333 | 1329 | ||
1334 | out_policy: | 1330 | out_policy: |
1335 | cpufreq_cpu_put(policy); | 1331 | cpufreq_cpu_put(policy); |
1336 | out: | 1332 | out: |
1337 | return ret_freq; | 1333 | return ret_freq; |
1338 | } | 1334 | } |
1339 | EXPORT_SYMBOL(cpufreq_get); | 1335 | EXPORT_SYMBOL(cpufreq_get); |
1340 | 1336 | ||
1341 | 1337 | ||
1342 | /** | 1338 | /** |
1343 | * cpufreq_suspend - let the low level driver prepare for suspend | 1339 | * cpufreq_suspend - let the low level driver prepare for suspend |
1344 | */ | 1340 | */ |
1345 | 1341 | ||
1346 | static int cpufreq_suspend(struct sys_device *sysdev, pm_message_t pmsg) | 1342 | static int cpufreq_suspend(struct sys_device *sysdev, pm_message_t pmsg) |
1347 | { | 1343 | { |
1348 | int ret = 0; | 1344 | int ret = 0; |
1349 | 1345 | ||
1350 | int cpu = sysdev->id; | 1346 | int cpu = sysdev->id; |
1351 | struct cpufreq_policy *cpu_policy; | 1347 | struct cpufreq_policy *cpu_policy; |
1352 | 1348 | ||
1353 | dprintk("suspending cpu %u\n", cpu); | 1349 | dprintk("suspending cpu %u\n", cpu); |
1354 | 1350 | ||
1355 | if (!cpu_online(cpu)) | 1351 | if (!cpu_online(cpu)) |
1356 | return 0; | 1352 | return 0; |
1357 | 1353 | ||
1358 | /* we may be lax here as interrupts are off. Nonetheless | 1354 | /* we may be lax here as interrupts are off. Nonetheless |
1359 | * we need to grab the correct cpu policy, so as to check | 1355 | * we need to grab the correct cpu policy, so as to check |
1360 | * whether we really run on this CPU. | 1356 | * whether we really run on this CPU. |
1361 | */ | 1357 | */ |
1362 | 1358 | ||
1363 | cpu_policy = cpufreq_cpu_get(cpu); | 1359 | cpu_policy = cpufreq_cpu_get(cpu); |
1364 | if (!cpu_policy) | 1360 | if (!cpu_policy) |
1365 | return -EINVAL; | 1361 | return -EINVAL; |
1366 | 1362 | ||
1367 | /* only handle each CPU group once */ | 1363 | /* only handle each CPU group once */ |
1368 | if (unlikely(cpu_policy->cpu != cpu)) | 1364 | if (unlikely(cpu_policy->cpu != cpu)) |
1369 | goto out; | 1365 | goto out; |
1370 | 1366 | ||
1371 | if (cpufreq_driver->suspend) { | 1367 | if (cpufreq_driver->suspend) { |
1372 | ret = cpufreq_driver->suspend(cpu_policy, pmsg); | 1368 | ret = cpufreq_driver->suspend(cpu_policy, pmsg); |
1373 | if (ret) | 1369 | if (ret) |
1374 | printk(KERN_ERR "cpufreq: suspend failed in ->suspend " | 1370 | printk(KERN_ERR "cpufreq: suspend failed in ->suspend " |
1375 | "step on CPU %u\n", cpu_policy->cpu); | 1371 | "step on CPU %u\n", cpu_policy->cpu); |
1376 | } | 1372 | } |
1377 | 1373 | ||
1378 | out: | 1374 | out: |
1379 | cpufreq_cpu_put(cpu_policy); | 1375 | cpufreq_cpu_put(cpu_policy); |
1380 | return ret; | 1376 | return ret; |
1381 | } | 1377 | } |
1382 | 1378 | ||
1383 | /** | 1379 | /** |
1384 | * cpufreq_resume - restore proper CPU frequency handling after resume | 1380 | * cpufreq_resume - restore proper CPU frequency handling after resume |
1385 | * | 1381 | * |
1386 | * 1.) resume CPUfreq hardware support (cpufreq_driver->resume()) | 1382 | * 1.) resume CPUfreq hardware support (cpufreq_driver->resume()) |
1387 | * 2.) schedule a call to cpufreq_update_policy() ASAP as interrupts are | 1383 | * 2.) schedule a call to cpufreq_update_policy() ASAP as interrupts are |
1388 | * restored. It will verify that the current freq is in sync with | 1384 | * restored. It will verify that the current freq is in sync with |
1389 | * what we believe it to be. This is a bit later than when it | 1385 | * what we believe it to be. This is a bit later than when it |
1390 | * should be, but nonetheless it's better than calling | 1386 | * should be, but nonetheless it's better than calling |
1391 | * cpufreq_driver->get() here which might re-enable interrupts... | 1387 | * cpufreq_driver->get() here which might re-enable interrupts... |
1392 | */ | 1388 | */ |
1393 | static int cpufreq_resume(struct sys_device *sysdev) | 1389 | static int cpufreq_resume(struct sys_device *sysdev) |
1394 | { | 1390 | { |
1395 | int ret = 0; | 1391 | int ret = 0; |
1396 | 1392 | ||
1397 | int cpu = sysdev->id; | 1393 | int cpu = sysdev->id; |
1398 | struct cpufreq_policy *cpu_policy; | 1394 | struct cpufreq_policy *cpu_policy; |
1399 | 1395 | ||
1400 | dprintk("resuming cpu %u\n", cpu); | 1396 | dprintk("resuming cpu %u\n", cpu); |
1401 | 1397 | ||
1402 | if (!cpu_online(cpu)) | 1398 | if (!cpu_online(cpu)) |
1403 | return 0; | 1399 | return 0; |
1404 | 1400 | ||
1405 | /* we may be lax here as interrupts are off. Nonetheless | 1401 | /* we may be lax here as interrupts are off. Nonetheless |
1406 | * we need to grab the correct cpu policy, so as to check | 1402 | * we need to grab the correct cpu policy, so as to check |
1407 | * whether we really run on this CPU. | 1403 | * whether we really run on this CPU. |
1408 | */ | 1404 | */ |
1409 | 1405 | ||
1410 | cpu_policy = cpufreq_cpu_get(cpu); | 1406 | cpu_policy = cpufreq_cpu_get(cpu); |
1411 | if (!cpu_policy) | 1407 | if (!cpu_policy) |
1412 | return -EINVAL; | 1408 | return -EINVAL; |
1413 | 1409 | ||
1414 | /* only handle each CPU group once */ | 1410 | /* only handle each CPU group once */ |
1415 | if (unlikely(cpu_policy->cpu != cpu)) | 1411 | if (unlikely(cpu_policy->cpu != cpu)) |
1416 | goto fail; | 1412 | goto fail; |
1417 | 1413 | ||
1418 | if (cpufreq_driver->resume) { | 1414 | if (cpufreq_driver->resume) { |
1419 | ret = cpufreq_driver->resume(cpu_policy); | 1415 | ret = cpufreq_driver->resume(cpu_policy); |
1420 | if (ret) { | 1416 | if (ret) { |
1421 | printk(KERN_ERR "cpufreq: resume failed in ->resume " | 1417 | printk(KERN_ERR "cpufreq: resume failed in ->resume " |
1422 | "step on CPU %u\n", cpu_policy->cpu); | 1418 | "step on CPU %u\n", cpu_policy->cpu); |
1423 | goto fail; | 1419 | goto fail; |
1424 | } | 1420 | } |
1425 | } | 1421 | } |
1426 | 1422 | ||
1427 | schedule_work(&cpu_policy->update); | 1423 | schedule_work(&cpu_policy->update); |
1428 | 1424 | ||
1429 | fail: | 1425 | fail: |
1430 | cpufreq_cpu_put(cpu_policy); | 1426 | cpufreq_cpu_put(cpu_policy); |
1431 | return ret; | 1427 | return ret; |
1432 | } | 1428 | } |
1433 | 1429 | ||
1434 | static struct sysdev_driver cpufreq_sysdev_driver = { | 1430 | static struct sysdev_driver cpufreq_sysdev_driver = { |
1435 | .add = cpufreq_add_dev, | 1431 | .add = cpufreq_add_dev, |
1436 | .remove = cpufreq_remove_dev, | 1432 | .remove = cpufreq_remove_dev, |
1437 | .suspend = cpufreq_suspend, | 1433 | .suspend = cpufreq_suspend, |
1438 | .resume = cpufreq_resume, | 1434 | .resume = cpufreq_resume, |
1439 | }; | 1435 | }; |
1440 | 1436 | ||
1441 | 1437 | ||
1442 | /********************************************************************* | 1438 | /********************************************************************* |
1443 | * NOTIFIER LISTS INTERFACE * | 1439 | * NOTIFIER LISTS INTERFACE * |
1444 | *********************************************************************/ | 1440 | *********************************************************************/ |
1445 | 1441 | ||
1446 | /** | 1442 | /** |
1447 | * cpufreq_register_notifier - register a driver with cpufreq | 1443 | * cpufreq_register_notifier - register a driver with cpufreq |
1448 | * @nb: notifier function to register | 1444 | * @nb: notifier function to register |
1449 | * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER | 1445 | * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER |
1450 | * | 1446 | * |
1451 | * Add a driver to one of two lists: either a list of drivers that | 1447 | * Add a driver to one of two lists: either a list of drivers that |
1452 | * are notified about clock rate changes (once before and once after | 1448 | * are notified about clock rate changes (once before and once after |
1453 | * the transition), or a list of drivers that are notified about | 1449 | * the transition), or a list of drivers that are notified about |
1454 | * changes in cpufreq policy. | 1450 | * changes in cpufreq policy. |
1455 | * | 1451 | * |
1456 | * This function may sleep, and has the same return conditions as | 1452 | * This function may sleep, and has the same return conditions as |
1457 | * blocking_notifier_chain_register. | 1453 | * blocking_notifier_chain_register. |
1458 | */ | 1454 | */ |
1459 | int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list) | 1455 | int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list) |
1460 | { | 1456 | { |
1461 | int ret; | 1457 | int ret; |
1462 | 1458 | ||
1463 | WARN_ON(!init_cpufreq_transition_notifier_list_called); | 1459 | WARN_ON(!init_cpufreq_transition_notifier_list_called); |
1464 | 1460 | ||
1465 | switch (list) { | 1461 | switch (list) { |
1466 | case CPUFREQ_TRANSITION_NOTIFIER: | 1462 | case CPUFREQ_TRANSITION_NOTIFIER: |
1467 | ret = srcu_notifier_chain_register( | 1463 | ret = srcu_notifier_chain_register( |
1468 | &cpufreq_transition_notifier_list, nb); | 1464 | &cpufreq_transition_notifier_list, nb); |
1469 | break; | 1465 | break; |
1470 | case CPUFREQ_POLICY_NOTIFIER: | 1466 | case CPUFREQ_POLICY_NOTIFIER: |
1471 | ret = blocking_notifier_chain_register( | 1467 | ret = blocking_notifier_chain_register( |
1472 | &cpufreq_policy_notifier_list, nb); | 1468 | &cpufreq_policy_notifier_list, nb); |
1473 | break; | 1469 | break; |
1474 | default: | 1470 | default: |
1475 | ret = -EINVAL; | 1471 | ret = -EINVAL; |
1476 | } | 1472 | } |
1477 | 1473 | ||
1478 | return ret; | 1474 | return ret; |
1479 | } | 1475 | } |
1480 | EXPORT_SYMBOL(cpufreq_register_notifier); | 1476 | EXPORT_SYMBOL(cpufreq_register_notifier); |
1481 | 1477 | ||
1482 | 1478 | ||
1483 | /** | 1479 | /** |
1484 | * cpufreq_unregister_notifier - unregister a driver with cpufreq | 1480 | * cpufreq_unregister_notifier - unregister a driver with cpufreq |
1485 | * @nb: notifier block to be unregistered | 1481 | * @nb: notifier block to be unregistered |
1486 | * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER | 1482 | * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER |
1487 | * | 1483 | * |
1488 | * Remove a driver from the CPU frequency notifier list. | 1484 | * Remove a driver from the CPU frequency notifier list. |
1489 | * | 1485 | * |
1490 | * This function may sleep, and has the same return conditions as | 1486 | * This function may sleep, and has the same return conditions as |
1491 | * blocking_notifier_chain_unregister. | 1487 | * blocking_notifier_chain_unregister. |
1492 | */ | 1488 | */ |
1493 | int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list) | 1489 | int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list) |
1494 | { | 1490 | { |
1495 | int ret; | 1491 | int ret; |
1496 | 1492 | ||
1497 | switch (list) { | 1493 | switch (list) { |
1498 | case CPUFREQ_TRANSITION_NOTIFIER: | 1494 | case CPUFREQ_TRANSITION_NOTIFIER: |
1499 | ret = srcu_notifier_chain_unregister( | 1495 | ret = srcu_notifier_chain_unregister( |
1500 | &cpufreq_transition_notifier_list, nb); | 1496 | &cpufreq_transition_notifier_list, nb); |
1501 | break; | 1497 | break; |
1502 | case CPUFREQ_POLICY_NOTIFIER: | 1498 | case CPUFREQ_POLICY_NOTIFIER: |
1503 | ret = blocking_notifier_chain_unregister( | 1499 | ret = blocking_notifier_chain_unregister( |
1504 | &cpufreq_policy_notifier_list, nb); | 1500 | &cpufreq_policy_notifier_list, nb); |
1505 | break; | 1501 | break; |
1506 | default: | 1502 | default: |
1507 | ret = -EINVAL; | 1503 | ret = -EINVAL; |
1508 | } | 1504 | } |
1509 | 1505 | ||
1510 | return ret; | 1506 | return ret; |
1511 | } | 1507 | } |
1512 | EXPORT_SYMBOL(cpufreq_unregister_notifier); | 1508 | EXPORT_SYMBOL(cpufreq_unregister_notifier); |
1513 | 1509 | ||
1514 | 1510 | ||
1515 | /********************************************************************* | 1511 | /********************************************************************* |
1516 | * GOVERNORS * | 1512 | * GOVERNORS * |
1517 | *********************************************************************/ | 1513 | *********************************************************************/ |
1518 | 1514 | ||
1519 | 1515 | ||
1520 | int __cpufreq_driver_target(struct cpufreq_policy *policy, | 1516 | int __cpufreq_driver_target(struct cpufreq_policy *policy, |
1521 | unsigned int target_freq, | 1517 | unsigned int target_freq, |
1522 | unsigned int relation) | 1518 | unsigned int relation) |
1523 | { | 1519 | { |
1524 | int retval = -EINVAL; | 1520 | int retval = -EINVAL; |
1525 | 1521 | ||
1526 | dprintk("target for CPU %u: %u kHz, relation %u\n", policy->cpu, | 1522 | dprintk("target for CPU %u: %u kHz, relation %u\n", policy->cpu, |
1527 | target_freq, relation); | 1523 | target_freq, relation); |
1528 | if (cpu_online(policy->cpu) && cpufreq_driver->target) | 1524 | if (cpu_online(policy->cpu) && cpufreq_driver->target) |
1529 | retval = cpufreq_driver->target(policy, target_freq, relation); | 1525 | retval = cpufreq_driver->target(policy, target_freq, relation); |
1530 | 1526 | ||
1531 | return retval; | 1527 | return retval; |
1532 | } | 1528 | } |
1533 | EXPORT_SYMBOL_GPL(__cpufreq_driver_target); | 1529 | EXPORT_SYMBOL_GPL(__cpufreq_driver_target); |
1534 | 1530 | ||
1535 | int cpufreq_driver_target(struct cpufreq_policy *policy, | 1531 | int cpufreq_driver_target(struct cpufreq_policy *policy, |
1536 | unsigned int target_freq, | 1532 | unsigned int target_freq, |
1537 | unsigned int relation) | 1533 | unsigned int relation) |
1538 | { | 1534 | { |
1539 | int ret = -EINVAL; | 1535 | int ret = -EINVAL; |
1540 | 1536 | ||
1541 | policy = cpufreq_cpu_get(policy->cpu); | 1537 | policy = cpufreq_cpu_get(policy->cpu); |
1542 | if (!policy) | 1538 | if (!policy) |
1543 | goto no_policy; | 1539 | goto no_policy; |
1544 | 1540 | ||
1545 | if (unlikely(lock_policy_rwsem_write(policy->cpu))) | 1541 | if (unlikely(lock_policy_rwsem_write(policy->cpu))) |
1546 | goto fail; | 1542 | goto fail; |
1547 | 1543 | ||
1548 | ret = __cpufreq_driver_target(policy, target_freq, relation); | 1544 | ret = __cpufreq_driver_target(policy, target_freq, relation); |
1549 | 1545 | ||
1550 | unlock_policy_rwsem_write(policy->cpu); | 1546 | unlock_policy_rwsem_write(policy->cpu); |
1551 | 1547 | ||
1552 | fail: | 1548 | fail: |
1553 | cpufreq_cpu_put(policy); | 1549 | cpufreq_cpu_put(policy); |
1554 | no_policy: | 1550 | no_policy: |
1555 | return ret; | 1551 | return ret; |
1556 | } | 1552 | } |
1557 | EXPORT_SYMBOL_GPL(cpufreq_driver_target); | 1553 | EXPORT_SYMBOL_GPL(cpufreq_driver_target); |
1558 | 1554 | ||
1559 | int __cpufreq_driver_getavg(struct cpufreq_policy *policy, unsigned int cpu) | 1555 | int __cpufreq_driver_getavg(struct cpufreq_policy *policy, unsigned int cpu) |
1560 | { | 1556 | { |
1561 | int ret = 0; | 1557 | int ret = 0; |
1562 | 1558 | ||
1563 | policy = cpufreq_cpu_get(policy->cpu); | 1559 | policy = cpufreq_cpu_get(policy->cpu); |
1564 | if (!policy) | 1560 | if (!policy) |
1565 | return -EINVAL; | 1561 | return -EINVAL; |
1566 | 1562 | ||
1567 | if (cpu_online(cpu) && cpufreq_driver->getavg) | 1563 | if (cpu_online(cpu) && cpufreq_driver->getavg) |
1568 | ret = cpufreq_driver->getavg(policy, cpu); | 1564 | ret = cpufreq_driver->getavg(policy, cpu); |
1569 | 1565 | ||
1570 | cpufreq_cpu_put(policy); | 1566 | cpufreq_cpu_put(policy); |
1571 | return ret; | 1567 | return ret; |
1572 | } | 1568 | } |
1573 | EXPORT_SYMBOL_GPL(__cpufreq_driver_getavg); | 1569 | EXPORT_SYMBOL_GPL(__cpufreq_driver_getavg); |
1574 | 1570 | ||
1575 | /* | 1571 | /* |
1576 | * when "event" is CPUFREQ_GOV_LIMITS | 1572 | * when "event" is CPUFREQ_GOV_LIMITS |
1577 | */ | 1573 | */ |
1578 | 1574 | ||
1579 | static int __cpufreq_governor(struct cpufreq_policy *policy, | 1575 | static int __cpufreq_governor(struct cpufreq_policy *policy, |
1580 | unsigned int event) | 1576 | unsigned int event) |
1581 | { | 1577 | { |
1582 | int ret; | 1578 | int ret; |
1583 | 1579 | ||
1584 | /* Must only be defined when the default governor is known to have latency | 1580 | /* Must only be defined when the default governor is known to have latency |
1585 | restrictions, e.g. conservative or ondemand. | 1581 | restrictions, e.g. conservative or ondemand. |
1586 | That this is the case is already ensured in Kconfig | 1582 | That this is the case is already ensured in Kconfig |
1587 | */ | 1583 | */ |
1588 | #ifdef CONFIG_CPU_FREQ_GOV_PERFORMANCE | 1584 | #ifdef CONFIG_CPU_FREQ_GOV_PERFORMANCE |
1589 | struct cpufreq_governor *gov = &cpufreq_gov_performance; | 1585 | struct cpufreq_governor *gov = &cpufreq_gov_performance; |
1590 | #else | 1586 | #else |
1591 | struct cpufreq_governor *gov = NULL; | 1587 | struct cpufreq_governor *gov = NULL; |
1592 | #endif | 1588 | #endif |
1593 | 1589 | ||
1594 | if (policy->governor->max_transition_latency && | 1590 | if (policy->governor->max_transition_latency && |
1595 | policy->cpuinfo.transition_latency > | 1591 | policy->cpuinfo.transition_latency > |
1596 | policy->governor->max_transition_latency) { | 1592 | policy->governor->max_transition_latency) { |
1597 | if (!gov) | 1593 | if (!gov) |
1598 | return -EINVAL; | 1594 | return -EINVAL; |
1599 | else { | 1595 | else { |
1600 | printk(KERN_WARNING "%s governor failed, too long" | 1596 | printk(KERN_WARNING "%s governor failed, too long" |
1601 | " transition latency of HW, fallback" | 1597 | " transition latency of HW, fallback" |
1602 | " to %s governor\n", | 1598 | " to %s governor\n", |
1603 | policy->governor->name, | 1599 | policy->governor->name, |
1604 | gov->name); | 1600 | gov->name); |
1605 | policy->governor = gov; | 1601 | policy->governor = gov; |
1606 | } | 1602 | } |
1607 | } | 1603 | } |
1608 | 1604 | ||
1609 | if (!try_module_get(policy->governor->owner)) | 1605 | if (!try_module_get(policy->governor->owner)) |
1610 | return -EINVAL; | 1606 | return -EINVAL; |
1611 | 1607 | ||
1612 | dprintk("__cpufreq_governor for CPU %u, event %u\n", | 1608 | dprintk("__cpufreq_governor for CPU %u, event %u\n", |
1613 | policy->cpu, event); | 1609 | policy->cpu, event); |
1614 | ret = policy->governor->governor(policy, event); | 1610 | ret = policy->governor->governor(policy, event); |
1615 | 1611 | ||
1616 | /* we keep one module reference alive for | 1612 | /* we keep one module reference alive for |
1617 | each CPU governed by this CPU */ | 1613 | each CPU governed by this CPU */ |
1618 | if ((event != CPUFREQ_GOV_START) || ret) | 1614 | if ((event != CPUFREQ_GOV_START) || ret) |
1619 | module_put(policy->governor->owner); | 1615 | module_put(policy->governor->owner); |
1620 | if ((event == CPUFREQ_GOV_STOP) && !ret) | 1616 | if ((event == CPUFREQ_GOV_STOP) && !ret) |
1621 | module_put(policy->governor->owner); | 1617 | module_put(policy->governor->owner); |
1622 | 1618 | ||
1623 | return ret; | 1619 | return ret; |
1624 | } | 1620 | } |
1625 | 1621 | ||
1626 | 1622 | ||
1627 | int cpufreq_register_governor(struct cpufreq_governor *governor) | 1623 | int cpufreq_register_governor(struct cpufreq_governor *governor) |
1628 | { | 1624 | { |
1629 | int err; | 1625 | int err; |
1630 | 1626 | ||
1631 | if (!governor) | 1627 | if (!governor) |
1632 | return -EINVAL; | 1628 | return -EINVAL; |
1633 | 1629 | ||
1634 | mutex_lock(&cpufreq_governor_mutex); | 1630 | mutex_lock(&cpufreq_governor_mutex); |
1635 | 1631 | ||
1636 | err = -EBUSY; | 1632 | err = -EBUSY; |
1637 | if (__find_governor(governor->name) == NULL) { | 1633 | if (__find_governor(governor->name) == NULL) { |
1638 | err = 0; | 1634 | err = 0; |
1639 | list_add(&governor->governor_list, &cpufreq_governor_list); | 1635 | list_add(&governor->governor_list, &cpufreq_governor_list); |
1640 | } | 1636 | } |
1641 | 1637 | ||
1642 | mutex_unlock(&cpufreq_governor_mutex); | 1638 | mutex_unlock(&cpufreq_governor_mutex); |
1643 | return err; | 1639 | return err; |
1644 | } | 1640 | } |
1645 | EXPORT_SYMBOL_GPL(cpufreq_register_governor); | 1641 | EXPORT_SYMBOL_GPL(cpufreq_register_governor); |
1646 | 1642 | ||
1647 | 1643 | ||
1648 | void cpufreq_unregister_governor(struct cpufreq_governor *governor) | 1644 | void cpufreq_unregister_governor(struct cpufreq_governor *governor) |
1649 | { | 1645 | { |
1650 | #ifdef CONFIG_HOTPLUG_CPU | 1646 | #ifdef CONFIG_HOTPLUG_CPU |
1651 | int cpu; | 1647 | int cpu; |
1652 | #endif | 1648 | #endif |
1653 | 1649 | ||
1654 | if (!governor) | 1650 | if (!governor) |
1655 | return; | 1651 | return; |
1656 | 1652 | ||
1657 | #ifdef CONFIG_HOTPLUG_CPU | 1653 | #ifdef CONFIG_HOTPLUG_CPU |
1658 | for_each_present_cpu(cpu) { | 1654 | for_each_present_cpu(cpu) { |
1659 | if (cpu_online(cpu)) | 1655 | if (cpu_online(cpu)) |
1660 | continue; | 1656 | continue; |
1661 | if (!strcmp(per_cpu(cpufreq_cpu_governor, cpu), governor->name)) | 1657 | if (!strcmp(per_cpu(cpufreq_cpu_governor, cpu), governor->name)) |
1662 | strcpy(per_cpu(cpufreq_cpu_governor, cpu), "\0"); | 1658 | strcpy(per_cpu(cpufreq_cpu_governor, cpu), "\0"); |
1663 | } | 1659 | } |
1664 | #endif | 1660 | #endif |
1665 | 1661 | ||
1666 | mutex_lock(&cpufreq_governor_mutex); | 1662 | mutex_lock(&cpufreq_governor_mutex); |
1667 | list_del(&governor->governor_list); | 1663 | list_del(&governor->governor_list); |
1668 | mutex_unlock(&cpufreq_governor_mutex); | 1664 | mutex_unlock(&cpufreq_governor_mutex); |
1669 | return; | 1665 | return; |
1670 | } | 1666 | } |
1671 | EXPORT_SYMBOL_GPL(cpufreq_unregister_governor); | 1667 | EXPORT_SYMBOL_GPL(cpufreq_unregister_governor); |
1672 | 1668 | ||
1673 | 1669 | ||
1674 | 1670 | ||
1675 | /********************************************************************* | 1671 | /********************************************************************* |
1676 | * POLICY INTERFACE * | 1672 | * POLICY INTERFACE * |
1677 | *********************************************************************/ | 1673 | *********************************************************************/ |
1678 | 1674 | ||
1679 | /** | 1675 | /** |
1680 | * cpufreq_get_policy - get the current cpufreq_policy | 1676 | * cpufreq_get_policy - get the current cpufreq_policy |
1681 | * @policy: struct cpufreq_policy into which the current cpufreq_policy | 1677 | * @policy: struct cpufreq_policy into which the current cpufreq_policy |
1682 | * is written | 1678 | * is written |
1683 | * | 1679 | * |
1684 | * Reads the current cpufreq policy. | 1680 | * Reads the current cpufreq policy. |
1685 | */ | 1681 | */ |
1686 | int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu) | 1682 | int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu) |
1687 | { | 1683 | { |
1688 | struct cpufreq_policy *cpu_policy; | 1684 | struct cpufreq_policy *cpu_policy; |
1689 | if (!policy) | 1685 | if (!policy) |
1690 | return -EINVAL; | 1686 | return -EINVAL; |
1691 | 1687 | ||
1692 | cpu_policy = cpufreq_cpu_get(cpu); | 1688 | cpu_policy = cpufreq_cpu_get(cpu); |
1693 | if (!cpu_policy) | 1689 | if (!cpu_policy) |
1694 | return -EINVAL; | 1690 | return -EINVAL; |
1695 | 1691 | ||
1696 | memcpy(policy, cpu_policy, sizeof(struct cpufreq_policy)); | 1692 | memcpy(policy, cpu_policy, sizeof(struct cpufreq_policy)); |
1697 | 1693 | ||
1698 | cpufreq_cpu_put(cpu_policy); | 1694 | cpufreq_cpu_put(cpu_policy); |
1699 | return 0; | 1695 | return 0; |
1700 | } | 1696 | } |
1701 | EXPORT_SYMBOL(cpufreq_get_policy); | 1697 | EXPORT_SYMBOL(cpufreq_get_policy); |
1702 | 1698 | ||
1703 | 1699 | ||
1704 | /* | 1700 | /* |
1705 | * data : current policy. | 1701 | * data : current policy. |
1706 | * policy : policy to be set. | 1702 | * policy : policy to be set. |
1707 | */ | 1703 | */ |
1708 | static int __cpufreq_set_policy(struct cpufreq_policy *data, | 1704 | static int __cpufreq_set_policy(struct cpufreq_policy *data, |
1709 | struct cpufreq_policy *policy) | 1705 | struct cpufreq_policy *policy) |
1710 | { | 1706 | { |
1711 | int ret = 0; | 1707 | int ret = 0; |
1712 | 1708 | ||
1713 | cpufreq_debug_disable_ratelimit(); | 1709 | cpufreq_debug_disable_ratelimit(); |
1714 | dprintk("setting new policy for CPU %u: %u - %u kHz\n", policy->cpu, | 1710 | dprintk("setting new policy for CPU %u: %u - %u kHz\n", policy->cpu, |
1715 | policy->min, policy->max); | 1711 | policy->min, policy->max); |
1716 | 1712 | ||
1717 | memcpy(&policy->cpuinfo, &data->cpuinfo, | 1713 | memcpy(&policy->cpuinfo, &data->cpuinfo, |
1718 | sizeof(struct cpufreq_cpuinfo)); | 1714 | sizeof(struct cpufreq_cpuinfo)); |
1719 | 1715 | ||
1720 | if (policy->min > data->max || policy->max < data->min) { | 1716 | if (policy->min > data->max || policy->max < data->min) { |
1721 | ret = -EINVAL; | 1717 | ret = -EINVAL; |
1722 | goto error_out; | 1718 | goto error_out; |
1723 | } | 1719 | } |
1724 | 1720 | ||
1725 | /* verify the cpu speed can be set within this limit */ | 1721 | /* verify the cpu speed can be set within this limit */ |
1726 | ret = cpufreq_driver->verify(policy); | 1722 | ret = cpufreq_driver->verify(policy); |
1727 | if (ret) | 1723 | if (ret) |
1728 | goto error_out; | 1724 | goto error_out; |
1729 | 1725 | ||
1730 | /* adjust if necessary - all reasons */ | 1726 | /* adjust if necessary - all reasons */ |
1731 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, | 1727 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, |
1732 | CPUFREQ_ADJUST, policy); | 1728 | CPUFREQ_ADJUST, policy); |
1733 | 1729 | ||
1734 | /* adjust if necessary - hardware incompatibility*/ | 1730 | /* adjust if necessary - hardware incompatibility*/ |
1735 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, | 1731 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, |
1736 | CPUFREQ_INCOMPATIBLE, policy); | 1732 | CPUFREQ_INCOMPATIBLE, policy); |
1737 | 1733 | ||
1738 | /* verify the cpu speed can be set within this limit, | 1734 | /* verify the cpu speed can be set within this limit, |
1739 | which might be different to the first one */ | 1735 | which might be different to the first one */ |
1740 | ret = cpufreq_driver->verify(policy); | 1736 | ret = cpufreq_driver->verify(policy); |
1741 | if (ret) | 1737 | if (ret) |
1742 | goto error_out; | 1738 | goto error_out; |
1743 | 1739 | ||
1744 | /* notification of the new policy */ | 1740 | /* notification of the new policy */ |
1745 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, | 1741 | blocking_notifier_call_chain(&cpufreq_policy_notifier_list, |
1746 | CPUFREQ_NOTIFY, policy); | 1742 | CPUFREQ_NOTIFY, policy); |
1747 | 1743 | ||
1748 | data->min = policy->min; | 1744 | data->min = policy->min; |
1749 | data->max = policy->max; | 1745 | data->max = policy->max; |
1750 | 1746 | ||
1751 | dprintk("new min and max freqs are %u - %u kHz\n", | 1747 | dprintk("new min and max freqs are %u - %u kHz\n", |
1752 | data->min, data->max); | 1748 | data->min, data->max); |
1753 | 1749 | ||
1754 | if (cpufreq_driver->setpolicy) { | 1750 | if (cpufreq_driver->setpolicy) { |
1755 | data->policy = policy->policy; | 1751 | data->policy = policy->policy; |
1756 | dprintk("setting range\n"); | 1752 | dprintk("setting range\n"); |
1757 | ret = cpufreq_driver->setpolicy(policy); | 1753 | ret = cpufreq_driver->setpolicy(policy); |
1758 | } else { | 1754 | } else { |
1759 | if (policy->governor != data->governor) { | 1755 | if (policy->governor != data->governor) { |
1760 | /* save old, working values */ | 1756 | /* save old, working values */ |
1761 | struct cpufreq_governor *old_gov = data->governor; | 1757 | struct cpufreq_governor *old_gov = data->governor; |
1762 | 1758 | ||
1763 | dprintk("governor switch\n"); | 1759 | dprintk("governor switch\n"); |
1764 | 1760 | ||
1765 | /* end old governor */ | 1761 | /* end old governor */ |
1766 | if (data->governor) | 1762 | if (data->governor) |
1767 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); | 1763 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); |
1768 | 1764 | ||
1769 | /* start new governor */ | 1765 | /* start new governor */ |
1770 | data->governor = policy->governor; | 1766 | data->governor = policy->governor; |
1771 | if (__cpufreq_governor(data, CPUFREQ_GOV_START)) { | 1767 | if (__cpufreq_governor(data, CPUFREQ_GOV_START)) { |
1772 | /* new governor failed, so re-start old one */ | 1768 | /* new governor failed, so re-start old one */ |
1773 | dprintk("starting governor %s failed\n", | 1769 | dprintk("starting governor %s failed\n", |
1774 | data->governor->name); | 1770 | data->governor->name); |
1775 | if (old_gov) { | 1771 | if (old_gov) { |
1776 | data->governor = old_gov; | 1772 | data->governor = old_gov; |
1777 | __cpufreq_governor(data, | 1773 | __cpufreq_governor(data, |
1778 | CPUFREQ_GOV_START); | 1774 | CPUFREQ_GOV_START); |
1779 | } | 1775 | } |
1780 | ret = -EINVAL; | 1776 | ret = -EINVAL; |
1781 | goto error_out; | 1777 | goto error_out; |
1782 | } | 1778 | } |
1783 | /* might be a policy change, too, so fall through */ | 1779 | /* might be a policy change, too, so fall through */ |
1784 | } | 1780 | } |
1785 | dprintk("governor: change or update limits\n"); | 1781 | dprintk("governor: change or update limits\n"); |
1786 | __cpufreq_governor(data, CPUFREQ_GOV_LIMITS); | 1782 | __cpufreq_governor(data, CPUFREQ_GOV_LIMITS); |
1787 | } | 1783 | } |
1788 | 1784 | ||
1789 | error_out: | 1785 | error_out: |
1790 | cpufreq_debug_enable_ratelimit(); | 1786 | cpufreq_debug_enable_ratelimit(); |
1791 | return ret; | 1787 | return ret; |
1792 | } | 1788 | } |
1793 | 1789 | ||
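As an illustration of the CPUFREQ_ADJUST step above (not part of this diff): a policy notifier may tighten the limits before the CPUFREQ_NOTIFY broadcast goes out. The 1.6 GHz cap and all example_* names are invented.

#include <linux/cpufreq.h>
#include <linux/notifier.h>

static int example_policy_notify(struct notifier_block *nb,
				 unsigned long event, void *data)
{
	struct cpufreq_policy *policy = data;

	if (event != CPUFREQ_ADJUST)
		return NOTIFY_DONE;

	/* e.g. a thermal cap: never allow more than 1.6 GHz */
	cpufreq_verify_within_limits(policy, 0, 1600000);
	return NOTIFY_OK;
}

static struct notifier_block example_policy_nb = {
	.notifier_call = example_policy_notify,
};

/* registered once with:
 *	cpufreq_register_notifier(&example_policy_nb, CPUFREQ_POLICY_NOTIFIER);
 */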
1794 | /** | 1790 | /** |
1795 | * cpufreq_update_policy - re-evaluate an existing cpufreq policy | 1791 | * cpufreq_update_policy - re-evaluate an existing cpufreq policy |
1796 | * @cpu: CPU which shall be re-evaluated | 1792 | * @cpu: CPU which shall be re-evaluated |
1797 | * | 1793 | * |
1798 | * Useful for policy notifiers which have different necessities | 1794 | * Useful for policy notifiers which have different necessities |
1799 | * at different times. | 1795 | * at different times. |
1800 | */ | 1796 | */ |
1801 | int cpufreq_update_policy(unsigned int cpu) | 1797 | int cpufreq_update_policy(unsigned int cpu) |
1802 | { | 1798 | { |
1803 | struct cpufreq_policy *data = cpufreq_cpu_get(cpu); | 1799 | struct cpufreq_policy *data = cpufreq_cpu_get(cpu); |
1804 | struct cpufreq_policy policy; | 1800 | struct cpufreq_policy policy; |
1805 | int ret; | 1801 | int ret; |
1806 | 1802 | ||
1807 | if (!data) { | 1803 | if (!data) { |
1808 | ret = -ENODEV; | 1804 | ret = -ENODEV; |
1809 | goto no_policy; | 1805 | goto no_policy; |
1810 | } | 1806 | } |
1811 | 1807 | ||
1812 | if (unlikely(lock_policy_rwsem_write(cpu))) { | 1808 | if (unlikely(lock_policy_rwsem_write(cpu))) { |
1813 | ret = -EINVAL; | 1809 | ret = -EINVAL; |
1814 | goto fail; | 1810 | goto fail; |
1815 | } | 1811 | } |
1816 | 1812 | ||
1817 | dprintk("updating policy for CPU %u\n", cpu); | 1813 | dprintk("updating policy for CPU %u\n", cpu); |
1818 | memcpy(&policy, data, sizeof(struct cpufreq_policy)); | 1814 | memcpy(&policy, data, sizeof(struct cpufreq_policy)); |
1819 | policy.min = data->user_policy.min; | 1815 | policy.min = data->user_policy.min; |
1820 | policy.max = data->user_policy.max; | 1816 | policy.max = data->user_policy.max; |
1821 | policy.policy = data->user_policy.policy; | 1817 | policy.policy = data->user_policy.policy; |
1822 | policy.governor = data->user_policy.governor; | 1818 | policy.governor = data->user_policy.governor; |
1823 | 1819 | ||
1824 | /* BIOS might change freq behind our back | 1820 | /* BIOS might change freq behind our back |
1825 | -> ask driver for current freq and notify governors about a change */ | 1821 | -> ask driver for current freq and notify governors about a change */ |
1826 | if (cpufreq_driver->get) { | 1822 | if (cpufreq_driver->get) { |
1827 | policy.cur = cpufreq_driver->get(cpu); | 1823 | policy.cur = cpufreq_driver->get(cpu); |
1828 | if (!data->cur) { | 1824 | if (!data->cur) { |
1829 | dprintk("Driver did not initialize current freq"); | 1825 | dprintk("Driver did not initialize current freq"); |
1830 | data->cur = policy.cur; | 1826 | data->cur = policy.cur; |
1831 | } else { | 1827 | } else { |
1832 | if (data->cur != policy.cur) | 1828 | if (data->cur != policy.cur) |
1833 | cpufreq_out_of_sync(cpu, data->cur, | 1829 | cpufreq_out_of_sync(cpu, data->cur, |
1834 | policy.cur); | 1830 | policy.cur); |
1835 | } | 1831 | } |
1836 | } | 1832 | } |
1837 | 1833 | ||
1838 | ret = __cpufreq_set_policy(data, &policy); | 1834 | ret = __cpufreq_set_policy(data, &policy); |
1839 | 1835 | ||
1840 | unlock_policy_rwsem_write(cpu); | 1836 | unlock_policy_rwsem_write(cpu); |
1841 | 1837 | ||
1842 | fail: | 1838 | fail: |
1843 | cpufreq_cpu_put(data); | 1839 | cpufreq_cpu_put(data); |
1844 | no_policy: | 1840 | no_policy: |
1845 | return ret; | 1841 | return ret; |
1846 | } | 1842 | } |
1847 | EXPORT_SYMBOL(cpufreq_update_policy); | 1843 | EXPORT_SYMBOL(cpufreq_update_policy); |
1848 | 1844 | ||
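A sketch of typical usage (not part of this diff): cpufreq_update_policy() takes the policy rwsem and may sleep, so atomic-context callers usually defer it to process context, mirroring the update work_struct kept in struct cpufreq_policy. All example_* names are invented.

#include <linux/cpufreq.h>
#include <linux/workqueue.h>

static unsigned int example_cpu;

static void example_update_fn(struct work_struct *work)
{
	cpufreq_update_policy(example_cpu);	/* re-applies user_policy limits */
}

static DECLARE_WORK(example_update_work, example_update_fn);

/* called from an interrupt handler or other atomic context */
static void example_limits_changed(unsigned int cpu)
{
	example_cpu = cpu;
	schedule_work(&example_update_work);
}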
1849 | static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb, | 1845 | static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb, |
1850 | unsigned long action, void *hcpu) | 1846 | unsigned long action, void *hcpu) |
1851 | { | 1847 | { |
1852 | unsigned int cpu = (unsigned long)hcpu; | 1848 | unsigned int cpu = (unsigned long)hcpu; |
1853 | struct sys_device *sys_dev; | 1849 | struct sys_device *sys_dev; |
1854 | 1850 | ||
1855 | sys_dev = get_cpu_sysdev(cpu); | 1851 | sys_dev = get_cpu_sysdev(cpu); |
1856 | if (sys_dev) { | 1852 | if (sys_dev) { |
1857 | switch (action) { | 1853 | switch (action) { |
1858 | case CPU_ONLINE: | 1854 | case CPU_ONLINE: |
1859 | case CPU_ONLINE_FROZEN: | 1855 | case CPU_ONLINE_FROZEN: |
1860 | cpufreq_add_dev(sys_dev); | 1856 | cpufreq_add_dev(sys_dev); |
1861 | break; | 1857 | break; |
1862 | case CPU_DOWN_PREPARE: | 1858 | case CPU_DOWN_PREPARE: |
1863 | case CPU_DOWN_PREPARE_FROZEN: | 1859 | case CPU_DOWN_PREPARE_FROZEN: |
1864 | if (unlikely(lock_policy_rwsem_write(cpu))) | 1860 | if (unlikely(lock_policy_rwsem_write(cpu))) |
1865 | BUG(); | 1861 | BUG(); |
1866 | 1862 | ||
1867 | __cpufreq_remove_dev(sys_dev); | 1863 | __cpufreq_remove_dev(sys_dev); |
1868 | break; | 1864 | break; |
1869 | case CPU_DOWN_FAILED: | 1865 | case CPU_DOWN_FAILED: |
1870 | case CPU_DOWN_FAILED_FROZEN: | 1866 | case CPU_DOWN_FAILED_FROZEN: |
1871 | cpufreq_add_dev(sys_dev); | 1867 | cpufreq_add_dev(sys_dev); |
1872 | break; | 1868 | break; |
1873 | } | 1869 | } |
1874 | } | 1870 | } |
1875 | return NOTIFY_OK; | 1871 | return NOTIFY_OK; |
1876 | } | 1872 | } |
1877 | 1873 | ||
1878 | static struct notifier_block __refdata cpufreq_cpu_notifier = | 1874 | static struct notifier_block __refdata cpufreq_cpu_notifier = |
1879 | { | 1875 | { |
1880 | .notifier_call = cpufreq_cpu_callback, | 1876 | .notifier_call = cpufreq_cpu_callback, |
1881 | }; | 1877 | }; |
1882 | 1878 | ||
1883 | /********************************************************************* | 1879 | /********************************************************************* |
1884 | * REGISTER / UNREGISTER CPUFREQ DRIVER * | 1880 | * REGISTER / UNREGISTER CPUFREQ DRIVER * |
1885 | *********************************************************************/ | 1881 | *********************************************************************/ |
1886 | 1882 | ||
1887 | /** | 1883 | /** |
1888 | * cpufreq_register_driver - register a CPU Frequency driver | 1884 | * cpufreq_register_driver - register a CPU Frequency driver |
1889 | * @driver_data: A struct cpufreq_driver containing the values | 1885 | * @driver_data: A struct cpufreq_driver containing the values |
1890 | * submitted by the CPU Frequency driver. | 1886 | * submitted by the CPU Frequency driver. |
1891 | * | 1887 | * |
1892 | * Registers a CPU Frequency driver to this core code. This code | 1888 | * Registers a CPU Frequency driver to this core code. This code |
1893 | * returns zero on success, -EBUSY when another driver got here first | 1889 | * returns zero on success, -EBUSY when another driver got here first |
1894 | * (and isn't unregistered in the meantime). | 1890 | * (and isn't unregistered in the meantime). |
1895 | * | 1891 | * |
1896 | */ | 1892 | */ |
1897 | int cpufreq_register_driver(struct cpufreq_driver *driver_data) | 1893 | int cpufreq_register_driver(struct cpufreq_driver *driver_data) |
1898 | { | 1894 | { |
1899 | unsigned long flags; | 1895 | unsigned long flags; |
1900 | int ret; | 1896 | int ret; |
1901 | 1897 | ||
1902 | if (!driver_data || !driver_data->verify || !driver_data->init || | 1898 | if (!driver_data || !driver_data->verify || !driver_data->init || |
1903 | ((!driver_data->setpolicy) && (!driver_data->target))) | 1899 | ((!driver_data->setpolicy) && (!driver_data->target))) |
1904 | return -EINVAL; | 1900 | return -EINVAL; |
1905 | 1901 | ||
1906 | dprintk("trying to register driver %s\n", driver_data->name); | 1902 | dprintk("trying to register driver %s\n", driver_data->name); |
1907 | 1903 | ||
1908 | if (driver_data->setpolicy) | 1904 | if (driver_data->setpolicy) |
1909 | driver_data->flags |= CPUFREQ_CONST_LOOPS; | 1905 | driver_data->flags |= CPUFREQ_CONST_LOOPS; |
1910 | 1906 | ||
1911 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1907 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
1912 | if (cpufreq_driver) { | 1908 | if (cpufreq_driver) { |
1913 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1909 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1914 | return -EBUSY; | 1910 | return -EBUSY; |
1915 | } | 1911 | } |
1916 | cpufreq_driver = driver_data; | 1912 | cpufreq_driver = driver_data; |
1917 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1913 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1918 | 1914 | ||
1919 | ret = sysdev_driver_register(&cpu_sysdev_class, | 1915 | ret = sysdev_driver_register(&cpu_sysdev_class, |
1920 | &cpufreq_sysdev_driver); | 1916 | &cpufreq_sysdev_driver); |
1921 | 1917 | ||
1922 | if ((!ret) && !(cpufreq_driver->flags & CPUFREQ_STICKY)) { | 1918 | if ((!ret) && !(cpufreq_driver->flags & CPUFREQ_STICKY)) { |
1923 | int i; | 1919 | int i; |
1924 | ret = -ENODEV; | 1920 | ret = -ENODEV; |
1925 | 1921 | ||
1926 | /* check for at least one working CPU */ | 1922 | /* check for at least one working CPU */ |
1927 | for (i = 0; i < nr_cpu_ids; i++) | 1923 | for (i = 0; i < nr_cpu_ids; i++) |
1928 | if (cpu_possible(i) && per_cpu(cpufreq_cpu_data, i)) { | 1924 | if (cpu_possible(i) && per_cpu(cpufreq_cpu_data, i)) { |
1929 | ret = 0; | 1925 | ret = 0; |
1930 | break; | 1926 | break; |
1931 | } | 1927 | } |
1932 | 1928 | ||
1933 | /* if all ->init() calls failed, unregister */ | 1929 | /* if all ->init() calls failed, unregister */ |
1934 | if (ret) { | 1930 | if (ret) { |
1935 | dprintk("no CPU initialized for driver %s\n", | 1931 | dprintk("no CPU initialized for driver %s\n", |
1936 | driver_data->name); | 1932 | driver_data->name); |
1937 | sysdev_driver_unregister(&cpu_sysdev_class, | 1933 | sysdev_driver_unregister(&cpu_sysdev_class, |
1938 | &cpufreq_sysdev_driver); | 1934 | &cpufreq_sysdev_driver); |
1939 | 1935 | ||
1940 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1936 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
1941 | cpufreq_driver = NULL; | 1937 | cpufreq_driver = NULL; |
1942 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1938 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1943 | } | 1939 | } |
1944 | } | 1940 | } |
1945 | 1941 | ||
1946 | if (!ret) { | 1942 | if (!ret) { |
1947 | register_hotcpu_notifier(&cpufreq_cpu_notifier); | 1943 | register_hotcpu_notifier(&cpufreq_cpu_notifier); |
1948 | dprintk("driver %s up and running\n", driver_data->name); | 1944 | dprintk("driver %s up and running\n", driver_data->name); |
1949 | cpufreq_debug_enable_ratelimit(); | 1945 | cpufreq_debug_enable_ratelimit(); |
1950 | } | 1946 | } |
1951 | 1947 | ||
1952 | return ret; | 1948 | return ret; |
1953 | } | 1949 | } |
1954 | EXPORT_SYMBOL_GPL(cpufreq_register_driver); | 1950 | EXPORT_SYMBOL_GPL(cpufreq_register_driver); |
1955 | 1951 | ||
1956 | 1952 | ||
1957 | /** | 1953 | /** |
1958 | * cpufreq_unregister_driver - unregister the current CPUFreq driver | 1954 | * cpufreq_unregister_driver - unregister the current CPUFreq driver |
1959 | * | 1955 | * |
1960 | * Unregister the current CPUFreq driver. Only call this if you have | 1956 | * Unregister the current CPUFreq driver. Only call this if you have |
1961 | * the right to do so, i.e. if you have succeeded in initialising before! | 1957 | * the right to do so, i.e. if you have succeeded in initialising before! |
1962 | * Returns zero if successful, and -EINVAL if the cpufreq_driver is | 1958 | * Returns zero if successful, and -EINVAL if the cpufreq_driver is |
1963 | * currently not initialised. | 1959 | * currently not initialised. |
1964 | */ | 1960 | */ |
1965 | int cpufreq_unregister_driver(struct cpufreq_driver *driver) | 1961 | int cpufreq_unregister_driver(struct cpufreq_driver *driver) |
1966 | { | 1962 | { |
1967 | unsigned long flags; | 1963 | unsigned long flags; |
1968 | 1964 | ||
1969 | cpufreq_debug_disable_ratelimit(); | 1965 | cpufreq_debug_disable_ratelimit(); |
1970 | 1966 | ||
1971 | if (!cpufreq_driver || (driver != cpufreq_driver)) { | 1967 | if (!cpufreq_driver || (driver != cpufreq_driver)) { |
1972 | cpufreq_debug_enable_ratelimit(); | 1968 | cpufreq_debug_enable_ratelimit(); |
1973 | return -EINVAL; | 1969 | return -EINVAL; |
1974 | } | 1970 | } |
1975 | 1971 | ||
1976 | dprintk("unregistering driver %s\n", driver->name); | 1972 | dprintk("unregistering driver %s\n", driver->name); |
1977 | 1973 | ||
1978 | sysdev_driver_unregister(&cpu_sysdev_class, &cpufreq_sysdev_driver); | 1974 | sysdev_driver_unregister(&cpu_sysdev_class, &cpufreq_sysdev_driver); |
1979 | unregister_hotcpu_notifier(&cpufreq_cpu_notifier); | 1975 | unregister_hotcpu_notifier(&cpufreq_cpu_notifier); |
1980 | 1976 | ||
1981 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1977 | spin_lock_irqsave(&cpufreq_driver_lock, flags); |
1982 | cpufreq_driver = NULL; | 1978 | cpufreq_driver = NULL; |
1983 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1979 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1984 | 1980 | ||
1985 | return 0; | 1981 | return 0; |
1986 | } | 1982 | } |
1987 | EXPORT_SYMBOL_GPL(cpufreq_unregister_driver); | 1983 | EXPORT_SYMBOL_GPL(cpufreq_unregister_driver); |
1988 | 1984 | ||
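For illustration only (not part of this diff): how a hypothetical scaling driver module would wire cpufreq_register_driver()/cpufreq_unregister_driver(). The stub callbacks and all example_* names are invented; a real driver must supply either ->setpolicy or ->target, as enforced by the checks in cpufreq_register_driver() above.

#include <linux/cpufreq.h>
#include <linux/module.h>

/* trivial stand-in callbacks, only to make the wiring visible */
static int example_cpu_init(struct cpufreq_policy *policy) { return -ENODEV; }
static int example_verify(struct cpufreq_policy *policy) { return 0; }
static int example_target(struct cpufreq_policy *policy,
			  unsigned int target_freq, unsigned int relation)
{
	return 0;
}

static struct cpufreq_driver example_cpufreq_driver = {
	.name	= "example",
	.owner	= THIS_MODULE,
	.init	= example_cpu_init,
	.verify	= example_verify,
	.target	= example_target,
};

static int __init example_cpufreq_module_init(void)
{
	/* -EBUSY if another scaling driver already registered;
	 * -ENODEV if no CPU could be initialized and the driver
	 * is not CPUFREQ_STICKY (see cpufreq_register_driver above). */
	return cpufreq_register_driver(&example_cpufreq_driver);
}
module_init(example_cpufreq_module_init);

static void __exit example_cpufreq_module_exit(void)
{
	cpufreq_unregister_driver(&example_cpufreq_driver);
}
module_exit(example_cpufreq_module_exit);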
1989 | static int __init cpufreq_core_init(void) | 1985 | static int __init cpufreq_core_init(void) |
1990 | { | 1986 | { |
1991 | int cpu; | 1987 | int cpu; |
1992 | 1988 | ||
1993 | for_each_possible_cpu(cpu) { | 1989 | for_each_possible_cpu(cpu) { |
1994 | per_cpu(cpufreq_policy_cpu, cpu) = -1; | 1990 | per_cpu(cpufreq_policy_cpu, cpu) = -1; |
1995 | init_rwsem(&per_cpu(cpu_policy_rwsem, cpu)); | 1991 | init_rwsem(&per_cpu(cpu_policy_rwsem, cpu)); |
1996 | } | 1992 | } |
1997 | 1993 | ||
1998 | cpufreq_global_kobject = kobject_create_and_add("cpufreq", | 1994 | cpufreq_global_kobject = kobject_create_and_add("cpufreq", |
1999 | &cpu_sysdev_class.kset.kobj); | 1995 | &cpu_sysdev_class.kset.kobj); |
2000 | BUG_ON(!cpufreq_global_kobject); | 1996 | BUG_ON(!cpufreq_global_kobject); |
2001 | 1997 | ||
2002 | return 0; | 1998 | return 0; |
2003 | } | 1999 | } |
2004 | core_initcall(cpufreq_core_init); | 2000 | core_initcall(cpufreq_core_init); |
2005 | 2001 |
include/linux/cpufreq.h
1 | /* | 1 | /* |
2 | * linux/include/linux/cpufreq.h | 2 | * linux/include/linux/cpufreq.h |
3 | * | 3 | * |
4 | * Copyright (C) 2001 Russell King | 4 | * Copyright (C) 2001 Russell King |
5 | * (C) 2002 - 2003 Dominik Brodowski <linux@brodo.de> | 5 | * (C) 2002 - 2003 Dominik Brodowski <linux@brodo.de> |
6 | * | 6 | * |
7 | * This program is free software; you can redistribute it and/or modify | 7 | * This program is free software; you can redistribute it and/or modify |
8 | * it under the terms of the GNU General Public License version 2 as | 8 | * it under the terms of the GNU General Public License version 2 as |
9 | * published by the Free Software Foundation. | 9 | * published by the Free Software Foundation. |
10 | */ | 10 | */ |
11 | #ifndef _LINUX_CPUFREQ_H | 11 | #ifndef _LINUX_CPUFREQ_H |
12 | #define _LINUX_CPUFREQ_H | 12 | #define _LINUX_CPUFREQ_H |
13 | 13 | ||
14 | #include <linux/mutex.h> | 14 | #include <linux/mutex.h> |
15 | #include <linux/notifier.h> | 15 | #include <linux/notifier.h> |
16 | #include <linux/threads.h> | 16 | #include <linux/threads.h> |
17 | #include <linux/device.h> | 17 | #include <linux/device.h> |
18 | #include <linux/kobject.h> | 18 | #include <linux/kobject.h> |
19 | #include <linux/sysfs.h> | 19 | #include <linux/sysfs.h> |
20 | #include <linux/completion.h> | 20 | #include <linux/completion.h> |
21 | #include <linux/workqueue.h> | 21 | #include <linux/workqueue.h> |
22 | #include <linux/cpumask.h> | 22 | #include <linux/cpumask.h> |
23 | #include <asm/div64.h> | 23 | #include <asm/div64.h> |
24 | 24 | ||
25 | #define CPUFREQ_NAME_LEN 16 | 25 | #define CPUFREQ_NAME_LEN 16 |
26 | 26 | ||
27 | 27 | ||
28 | /********************************************************************* | 28 | /********************************************************************* |
29 | * CPUFREQ NOTIFIER INTERFACE * | 29 | * CPUFREQ NOTIFIER INTERFACE * |
30 | *********************************************************************/ | 30 | *********************************************************************/ |
31 | 31 | ||
32 | #define CPUFREQ_TRANSITION_NOTIFIER (0) | 32 | #define CPUFREQ_TRANSITION_NOTIFIER (0) |
33 | #define CPUFREQ_POLICY_NOTIFIER (1) | 33 | #define CPUFREQ_POLICY_NOTIFIER (1) |
34 | 34 | ||
35 | #ifdef CONFIG_CPU_FREQ | 35 | #ifdef CONFIG_CPU_FREQ |
36 | int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list); | 36 | int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list); |
37 | int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list); | 37 | int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list); |
38 | #else /* CONFIG_CPU_FREQ */ | 38 | #else /* CONFIG_CPU_FREQ */ |
39 | static inline int cpufreq_register_notifier(struct notifier_block *nb, | 39 | static inline int cpufreq_register_notifier(struct notifier_block *nb, |
40 | unsigned int list) | 40 | unsigned int list) |
41 | { | 41 | { |
42 | return 0; | 42 | return 0; |
43 | } | 43 | } |
44 | static inline int cpufreq_unregister_notifier(struct notifier_block *nb, | 44 | static inline int cpufreq_unregister_notifier(struct notifier_block *nb, |
45 | unsigned int list) | 45 | unsigned int list) |
46 | { | 46 | { |
47 | return 0; | 47 | return 0; |
48 | } | 48 | } |
49 | #endif /* CONFIG_CPU_FREQ */ | 49 | #endif /* CONFIG_CPU_FREQ */ |
50 | 50 | ||
51 | /* if (cpufreq_driver->target) exists, the ->governor decides what frequency | 51 | /* if (cpufreq_driver->target) exists, the ->governor decides what frequency |
52 | * within the limits is used. If (cpufreq_driver->setpolicy) exists, these | 52 | * within the limits is used. If (cpufreq_driver->setpolicy) exists, these |
53 | * two generic policies are available: | 53 | * two generic policies are available: |
54 | */ | 54 | */ |
55 | 55 | ||
56 | #define CPUFREQ_POLICY_POWERSAVE (1) | 56 | #define CPUFREQ_POLICY_POWERSAVE (1) |
57 | #define CPUFREQ_POLICY_PERFORMANCE (2) | 57 | #define CPUFREQ_POLICY_PERFORMANCE (2) |
58 | 58 | ||
59 | /* Frequency values here are CPU kHz so that hardware which doesn't run | 59 | /* Frequency values here are CPU kHz so that hardware which doesn't run |
60 | * with some frequencies can complain without having to guess what per | 60 | * with some frequencies can complain without having to guess what per |
61 | * cent / per mille means. | 61 | * cent / per mille means. |
62 | * Maximum transition latency is in nanoseconds - if it's unknown, | 62 | * Maximum transition latency is in nanoseconds - if it's unknown, |
63 | * CPUFREQ_ETERNAL shall be used. | 63 | * CPUFREQ_ETERNAL shall be used. |
64 | */ | 64 | */ |
65 | 65 | ||
66 | struct cpufreq_governor; | 66 | struct cpufreq_governor; |
67 | 67 | ||
68 | /* /sys/devices/system/cpu/cpufreq: entry point for global variables */ | 68 | /* /sys/devices/system/cpu/cpufreq: entry point for global variables */ |
69 | extern struct kobject *cpufreq_global_kobject; | 69 | extern struct kobject *cpufreq_global_kobject; |
70 | 70 | ||
71 | #define CPUFREQ_ETERNAL (-1) | 71 | #define CPUFREQ_ETERNAL (-1) |
72 | struct cpufreq_cpuinfo { | 72 | struct cpufreq_cpuinfo { |
73 | unsigned int max_freq; | 73 | unsigned int max_freq; |
74 | unsigned int min_freq; | 74 | unsigned int min_freq; |
75 | unsigned int transition_latency; /* in 10^(-9) s = nanoseconds */ | 75 | unsigned int transition_latency; /* in 10^(-9) s = nanoseconds */ |
76 | }; | 76 | }; |
77 | 77 | ||
78 | struct cpufreq_real_policy { | 78 | struct cpufreq_real_policy { |
79 | unsigned int min; /* in kHz */ | 79 | unsigned int min; /* in kHz */ |
80 | unsigned int max; /* in kHz */ | 80 | unsigned int max; /* in kHz */ |
81 | unsigned int policy; /* see above */ | 81 | unsigned int policy; /* see above */ |
82 | struct cpufreq_governor *governor; /* see below */ | 82 | struct cpufreq_governor *governor; /* see below */ |
83 | }; | 83 | }; |
84 | 84 | ||
85 | struct cpufreq_policy { | 85 | struct cpufreq_policy { |
86 | cpumask_var_t cpus; /* CPUs requiring sw coordination */ | 86 | cpumask_var_t cpus; /* CPUs requiring sw coordination */ |
87 | cpumask_var_t related_cpus; /* CPUs with any coordination */ | 87 | cpumask_var_t related_cpus; /* CPUs with any coordination */ |
88 | unsigned int shared_type; /* ANY or ALL affected CPUs | 88 | unsigned int shared_type; /* ANY or ALL affected CPUs |
89 | should set cpufreq */ | 89 | should set cpufreq */ |
90 | unsigned int cpu; /* cpu nr of registered CPU */ | 90 | unsigned int cpu; /* cpu nr of registered CPU */ |
91 | struct cpufreq_cpuinfo cpuinfo;/* see above */ | 91 | struct cpufreq_cpuinfo cpuinfo;/* see above */ |
92 | 92 | ||
93 | unsigned int min; /* in kHz */ | 93 | unsigned int min; /* in kHz */ |
94 | unsigned int max; /* in kHz */ | 94 | unsigned int max; /* in kHz */ |
95 | unsigned int cur; /* in kHz, only needed if cpufreq | 95 | unsigned int cur; /* in kHz, only needed if cpufreq |
96 | * governors are used */ | 96 | * governors are used */ |
97 | unsigned int policy; /* see above */ | 97 | unsigned int policy; /* see above */ |
98 | struct cpufreq_governor *governor; /* see below */ | 98 | struct cpufreq_governor *governor; /* see below */ |
99 | 99 | ||
100 | struct work_struct update; /* if update_policy() needs to be | 100 | struct work_struct update; /* if update_policy() needs to be |
101 | * called, but you're in IRQ context */ | 101 | * called, but you're in IRQ context */ |
102 | 102 | ||
103 | struct cpufreq_real_policy user_policy; | 103 | struct cpufreq_real_policy user_policy; |
104 | 104 | ||
105 | struct kobject kobj; | 105 | struct kobject kobj; |
106 | struct completion kobj_unregister; | 106 | struct completion kobj_unregister; |
107 | }; | 107 | }; |
108 | 108 | ||
109 | #define CPUFREQ_ADJUST (0) | 109 | #define CPUFREQ_ADJUST (0) |
110 | #define CPUFREQ_INCOMPATIBLE (1) | 110 | #define CPUFREQ_INCOMPATIBLE (1) |
111 | #define CPUFREQ_NOTIFY (2) | 111 | #define CPUFREQ_NOTIFY (2) |
112 | #define CPUFREQ_START (3) | 112 | #define CPUFREQ_START (3) |
113 | 113 | ||
114 | #define CPUFREQ_SHARED_TYPE_NONE (0) /* None */ | 114 | #define CPUFREQ_SHARED_TYPE_NONE (0) /* None */ |
115 | #define CPUFREQ_SHARED_TYPE_HW (1) /* HW does needed coordination */ | 115 | #define CPUFREQ_SHARED_TYPE_HW (1) /* HW does needed coordination */ |
116 | #define CPUFREQ_SHARED_TYPE_ALL (2) /* All dependent CPUs should set freq */ | 116 | #define CPUFREQ_SHARED_TYPE_ALL (2) /* All dependent CPUs should set freq */ |
117 | #define CPUFREQ_SHARED_TYPE_ANY (3) /* Freq can be set from any dependent CPU*/ | 117 | #define CPUFREQ_SHARED_TYPE_ANY (3) /* Freq can be set from any dependent CPU*/ |
118 | 118 | ||
119 | /******************** cpufreq transition notifiers *******************/ | 119 | /******************** cpufreq transition notifiers *******************/ |
120 | 120 | ||
121 | #define CPUFREQ_PRECHANGE (0) | 121 | #define CPUFREQ_PRECHANGE (0) |
122 | #define CPUFREQ_POSTCHANGE (1) | 122 | #define CPUFREQ_POSTCHANGE (1) |
123 | #define CPUFREQ_RESUMECHANGE (8) | 123 | #define CPUFREQ_RESUMECHANGE (8) |
124 | #define CPUFREQ_SUSPENDCHANGE (9) | 124 | #define CPUFREQ_SUSPENDCHANGE (9) |
125 | 125 | ||
126 | struct cpufreq_freqs { | 126 | struct cpufreq_freqs { |
127 | unsigned int cpu; /* cpu nr */ | 127 | unsigned int cpu; /* cpu nr */ |
128 | unsigned int old; | 128 | unsigned int old; |
129 | unsigned int new; | 129 | unsigned int new; |
130 | u8 flags; /* flags of cpufreq_driver, see below. */ | 130 | u8 flags; /* flags of cpufreq_driver, see below. */ |
131 | }; | 131 | }; |
132 | 132 | ||
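A sketch of a transition notifier consuming struct cpufreq_freqs (not part of this diff; example_* names are invented). Drivers deliver CPUFREQ_PRECHANGE/CPUFREQ_POSTCHANGE through cpufreq_notify_transition(), declared further down in this header.

#include <linux/cpufreq.h>
#include <linux/kernel.h>
#include <linux/notifier.h>

static int example_transition(struct notifier_block *nb,
			      unsigned long state, void *data)
{
	struct cpufreq_freqs *freqs = data;

	if (state == CPUFREQ_POSTCHANGE)
		pr_debug("cpu%u: %u kHz -> %u kHz\n",
			 freqs->cpu, freqs->old, freqs->new);
	return NOTIFY_OK;
}

static struct notifier_block example_transition_nb = {
	.notifier_call = example_transition,
};

/* cpufreq_register_notifier(&example_transition_nb,
 *			     CPUFREQ_TRANSITION_NOTIFIER); */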
133 | 133 | ||
134 | /** | 134 | /** |
135 | * cpufreq_scale - "old * mult / div" calculation for large values (32-bit-arch safe) | 135 | * cpufreq_scale - "old * mult / div" calculation for large values (32-bit-arch safe) |
136 | * @old: old value | 136 | * @old: old value |
137 | * @div: divisor | 137 | * @div: divisor |
138 | * @mult: multiplier | 138 | * @mult: multiplier |
139 | * | 139 | * |
140 | * | 140 | * |
141 | * new = old * mult / div | 141 | * new = old * mult / div |
142 | */ | 142 | */ |
143 | static inline unsigned long cpufreq_scale(unsigned long old, u_int div, u_int mult) | 143 | static inline unsigned long cpufreq_scale(unsigned long old, u_int div, u_int mult) |
144 | { | 144 | { |
145 | #if BITS_PER_LONG == 32 | 145 | #if BITS_PER_LONG == 32 |
146 | 146 | ||
147 | u64 result = ((u64) old) * ((u64) mult); | 147 | u64 result = ((u64) old) * ((u64) mult); |
148 | do_div(result, div); | 148 | do_div(result, div); |
149 | return (unsigned long) result; | 149 | return (unsigned long) result; |
150 | 150 | ||
151 | #elif BITS_PER_LONG == 64 | 151 | #elif BITS_PER_LONG == 64 |
152 | 152 | ||
153 | unsigned long result = old * ((u64) mult); | 153 | unsigned long result = old * ((u64) mult); |
154 | result /= div; | 154 | result /= div; |
155 | return result; | 155 | return result; |
156 | 156 | ||
157 | #endif | 157 | #endif |
158 | }; | 158 | }; |
159 | 159 | ||
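An example use of the helper above (not part of this diff): rescaling a per-CPU loops_per_jiffy value across a frequency change. The function name and calling context are invented.

#include <linux/cpufreq.h>

/* new_lpj = old_lpj * new_khz / old_khz, 32-bit safe via cpufreq_scale() */
static unsigned long example_rescale_lpj(unsigned long old_lpj,
					 unsigned int old_khz,
					 unsigned int new_khz)
{
	return cpufreq_scale(old_lpj, old_khz, new_khz);
}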
160 | /********************************************************************* | 160 | /********************************************************************* |
161 | * CPUFREQ GOVERNORS * | 161 | * CPUFREQ GOVERNORS * |
162 | *********************************************************************/ | 162 | *********************************************************************/ |
163 | 163 | ||
164 | #define CPUFREQ_GOV_START 1 | 164 | #define CPUFREQ_GOV_START 1 |
165 | #define CPUFREQ_GOV_STOP 2 | 165 | #define CPUFREQ_GOV_STOP 2 |
166 | #define CPUFREQ_GOV_LIMITS 3 | 166 | #define CPUFREQ_GOV_LIMITS 3 |
167 | 167 | ||
168 | struct cpufreq_governor { | 168 | struct cpufreq_governor { |
169 | char name[CPUFREQ_NAME_LEN]; | 169 | char name[CPUFREQ_NAME_LEN]; |
170 | int (*governor) (struct cpufreq_policy *policy, | 170 | int (*governor) (struct cpufreq_policy *policy, |
171 | unsigned int event); | 171 | unsigned int event); |
172 | ssize_t (*show_setspeed) (struct cpufreq_policy *policy, | 172 | ssize_t (*show_setspeed) (struct cpufreq_policy *policy, |
173 | char *buf); | 173 | char *buf); |
174 | int (*store_setspeed) (struct cpufreq_policy *policy, | 174 | int (*store_setspeed) (struct cpufreq_policy *policy, |
175 | unsigned int freq); | 175 | unsigned int freq); |
176 | unsigned int max_transition_latency; /* HW must be able to switch to | 176 | unsigned int max_transition_latency; /* HW must be able to switch to |
177 | next freq faster than this value in nano secs or we | 177 | next freq faster than this value in nano secs or we |
178 | will fall back to the performance governor */ | 178 | will fall back to the performance governor */ |
179 | struct list_head governor_list; | 179 | struct list_head governor_list; |
180 | struct module *owner; | 180 | struct module *owner; |
181 | }; | 181 | }; |
182 | 182 | ||
183 | /* pass a target to the cpufreq driver | 183 | /* pass a target to the cpufreq driver |
184 | */ | 184 | */ |
185 | extern int cpufreq_driver_target(struct cpufreq_policy *policy, | 185 | extern int cpufreq_driver_target(struct cpufreq_policy *policy, |
186 | unsigned int target_freq, | 186 | unsigned int target_freq, |
187 | unsigned int relation); | 187 | unsigned int relation); |
188 | extern int __cpufreq_driver_target(struct cpufreq_policy *policy, | 188 | extern int __cpufreq_driver_target(struct cpufreq_policy *policy, |
189 | unsigned int target_freq, | 189 | unsigned int target_freq, |
190 | unsigned int relation); | 190 | unsigned int relation); |
191 | 191 | ||
192 | 192 | ||
193 | extern int __cpufreq_driver_getavg(struct cpufreq_policy *policy, | 193 | extern int __cpufreq_driver_getavg(struct cpufreq_policy *policy, |
194 | unsigned int cpu); | 194 | unsigned int cpu); |
195 | 195 | ||
196 | int cpufreq_register_governor(struct cpufreq_governor *governor); | 196 | int cpufreq_register_governor(struct cpufreq_governor *governor); |
197 | void cpufreq_unregister_governor(struct cpufreq_governor *governor); | 197 | void cpufreq_unregister_governor(struct cpufreq_governor *governor); |
198 | 198 | ||
199 | int lock_policy_rwsem_read(int cpu); | ||
200 | int lock_policy_rwsem_write(int cpu); | ||
201 | void unlock_policy_rwsem_read(int cpu); | ||
202 | void unlock_policy_rwsem_write(int cpu); | ||
203 | |||
204 | 199 | ||
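A minimal governor sketched against the interface above (not part of this diff): it simply pins the policy at its maximum, roughly what the performance governor does. cpufreq_gov_example and example_governor are invented names.

#include <linux/cpufreq.h>
#include <linux/module.h>

static int example_governor(struct cpufreq_policy *policy, unsigned int event)
{
	switch (event) {
	case CPUFREQ_GOV_START:
	case CPUFREQ_GOV_LIMITS:
		/* run flat out within the current limits */
		__cpufreq_driver_target(policy, policy->max,
					CPUFREQ_RELATION_H);
		break;
	case CPUFREQ_GOV_STOP:
		break;
	}
	return 0;
}

static struct cpufreq_governor cpufreq_gov_example = {
	.name		= "example",
	.governor	= example_governor,
	.owner		= THIS_MODULE,
};

/* cpufreq_register_governor(&cpufreq_gov_example); */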
205 | /********************************************************************* | 200 | /********************************************************************* |
206 | * CPUFREQ DRIVER INTERFACE * | 201 | * CPUFREQ DRIVER INTERFACE * |
207 | *********************************************************************/ | 202 | *********************************************************************/ |
208 | 203 | ||
209 | #define CPUFREQ_RELATION_L 0 /* lowest frequency at or above target */ | 204 | #define CPUFREQ_RELATION_L 0 /* lowest frequency at or above target */ |
210 | #define CPUFREQ_RELATION_H 1 /* highest frequency below or at target */ | 205 | #define CPUFREQ_RELATION_H 1 /* highest frequency below or at target */ |
211 | 206 | ||
212 | struct freq_attr; | 207 | struct freq_attr; |
213 | 208 | ||
214 | struct cpufreq_driver { | 209 | struct cpufreq_driver { |
215 | struct module *owner; | 210 | struct module *owner; |
216 | char name[CPUFREQ_NAME_LEN]; | 211 | char name[CPUFREQ_NAME_LEN]; |
217 | u8 flags; | 212 | u8 flags; |
218 | 213 | ||
219 | /* needed by all drivers */ | 214 | /* needed by all drivers */ |
220 | int (*init) (struct cpufreq_policy *policy); | 215 | int (*init) (struct cpufreq_policy *policy); |
221 | int (*verify) (struct cpufreq_policy *policy); | 216 | int (*verify) (struct cpufreq_policy *policy); |
222 | 217 | ||
223 | /* define one out of two */ | 218 | /* define one out of two */ |
224 | int (*setpolicy) (struct cpufreq_policy *policy); | 219 | int (*setpolicy) (struct cpufreq_policy *policy); |
225 | int (*target) (struct cpufreq_policy *policy, | 220 | int (*target) (struct cpufreq_policy *policy, |
226 | unsigned int target_freq, | 221 | unsigned int target_freq, |
227 | unsigned int relation); | 222 | unsigned int relation); |
228 | 223 | ||
229 | /* should be defined, if possible */ | 224 | /* should be defined, if possible */ |
230 | unsigned int (*get) (unsigned int cpu); | 225 | unsigned int (*get) (unsigned int cpu); |
231 | 226 | ||
232 | /* optional */ | 227 | /* optional */ |
233 | unsigned int (*getavg) (struct cpufreq_policy *policy, | 228 | unsigned int (*getavg) (struct cpufreq_policy *policy, |
234 | unsigned int cpu); | 229 | unsigned int cpu); |
235 | int (*bios_limit) (int cpu, unsigned int *limit); | 230 | int (*bios_limit) (int cpu, unsigned int *limit); |
236 | 231 | ||
237 | int (*exit) (struct cpufreq_policy *policy); | 232 | int (*exit) (struct cpufreq_policy *policy); |
238 | int (*suspend) (struct cpufreq_policy *policy, pm_message_t pmsg); | 233 | int (*suspend) (struct cpufreq_policy *policy, pm_message_t pmsg); |
239 | int (*resume) (struct cpufreq_policy *policy); | 234 | int (*resume) (struct cpufreq_policy *policy); |
240 | struct freq_attr **attr; | 235 | struct freq_attr **attr; |
241 | }; | 236 | }; |
242 | 237 | ||
243 | /* flags */ | 238 | /* flags */ |
244 | 239 | ||
245 | #define CPUFREQ_STICKY 0x01 /* the driver isn't removed even if | 240 | #define CPUFREQ_STICKY 0x01 /* the driver isn't removed even if |
246 | * all ->init() calls failed */ | 241 | * all ->init() calls failed */ |
247 | #define CPUFREQ_CONST_LOOPS 0x02 /* loops_per_jiffy or other kernel | 242 | #define CPUFREQ_CONST_LOOPS 0x02 /* loops_per_jiffy or other kernel |
248 | * "constants" aren't affected by | 243 | * "constants" aren't affected by |
249 | * frequency transitions */ | 244 | * frequency transitions */ |
250 | #define CPUFREQ_PM_NO_WARN 0x04 /* don't warn on suspend/resume speed | 245 | #define CPUFREQ_PM_NO_WARN 0x04 /* don't warn on suspend/resume speed |
251 | * mismatches */ | 246 | * mismatches */ |
252 | 247 | ||
253 | int cpufreq_register_driver(struct cpufreq_driver *driver_data); | 248 | int cpufreq_register_driver(struct cpufreq_driver *driver_data); |
254 | int cpufreq_unregister_driver(struct cpufreq_driver *driver_data); | 249 | int cpufreq_unregister_driver(struct cpufreq_driver *driver_data); |
255 | 250 | ||
256 | 251 | ||
257 | void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state); | 252 | void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state); |
258 | 253 | ||
259 | 254 | ||
260 | static inline void cpufreq_verify_within_limits(struct cpufreq_policy *policy, unsigned int min, unsigned int max) | 255 | static inline void cpufreq_verify_within_limits(struct cpufreq_policy *policy, unsigned int min, unsigned int max) |
261 | { | 256 | { |
262 | if (policy->min < min) | 257 | if (policy->min < min) |
263 | policy->min = min; | 258 | policy->min = min; |
264 | if (policy->max < min) | 259 | if (policy->max < min) |
265 | policy->max = min; | 260 | policy->max = min; |
266 | if (policy->min > max) | 261 | if (policy->min > max) |
267 | policy->min = max; | 262 | policy->min = max; |
268 | if (policy->max > max) | 263 | if (policy->max > max) |
269 | policy->max = max; | 264 | policy->max = max; |
270 | if (policy->min > policy->max) | 265 | if (policy->min > policy->max) |
271 | policy->min = policy->max; | 266 | policy->min = policy->max; |
272 | return; | 267 | return; |
273 | } | 268 | } |
274 | 269 | ||
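A sketch of the usual ->verify() pattern built on this helper (not part of this diff): clamp the requested limits to the hardware range recorded in cpuinfo. example_verify_policy is an invented name.

#include <linux/cpufreq.h>

static int example_verify_policy(struct cpufreq_policy *policy)
{
	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
				     policy->cpuinfo.max_freq);
	return 0;
}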
275 | struct freq_attr { | 270 | struct freq_attr { |
276 | struct attribute attr; | 271 | struct attribute attr; |
277 | ssize_t (*show)(struct cpufreq_policy *, char *); | 272 | ssize_t (*show)(struct cpufreq_policy *, char *); |
278 | ssize_t (*store)(struct cpufreq_policy *, const char *, size_t count); | 273 | ssize_t (*store)(struct cpufreq_policy *, const char *, size_t count); |
279 | }; | 274 | }; |
280 | 275 | ||
281 | #define cpufreq_freq_attr_ro(_name) \ | 276 | #define cpufreq_freq_attr_ro(_name) \ |
282 | static struct freq_attr _name = \ | 277 | static struct freq_attr _name = \ |
283 | __ATTR(_name, 0444, show_##_name, NULL) | 278 | __ATTR(_name, 0444, show_##_name, NULL) |
284 | 279 | ||
285 | #define cpufreq_freq_attr_ro_perm(_name, _perm) \ | 280 | #define cpufreq_freq_attr_ro_perm(_name, _perm) \ |
286 | static struct freq_attr _name = \ | 281 | static struct freq_attr _name = \ |
287 | __ATTR(_name, _perm, show_##_name, NULL) | 282 | __ATTR(_name, _perm, show_##_name, NULL) |
288 | 283 | ||
289 | #define cpufreq_freq_attr_ro_old(_name) \ | 284 | #define cpufreq_freq_attr_ro_old(_name) \ |
290 | static struct freq_attr _name##_old = \ | 285 | static struct freq_attr _name##_old = \ |
291 | __ATTR(_name, 0444, show_##_name##_old, NULL) | 286 | __ATTR(_name, 0444, show_##_name##_old, NULL) |
292 | 287 | ||
293 | #define cpufreq_freq_attr_rw(_name) \ | 288 | #define cpufreq_freq_attr_rw(_name) \ |
294 | static struct freq_attr _name = \ | 289 | static struct freq_attr _name = \ |
295 | __ATTR(_name, 0644, show_##_name, store_##_name) | 290 | __ATTR(_name, 0644, show_##_name, store_##_name) |
296 | 291 | ||
297 | #define cpufreq_freq_attr_rw_old(_name) \ | 292 | #define cpufreq_freq_attr_rw_old(_name) \ |
298 | static struct freq_attr _name##_old = \ | 293 | static struct freq_attr _name##_old = \ |
299 | __ATTR(_name, 0644, show_##_name##_old, store_##_name##_old) | 294 | __ATTR(_name, 0644, show_##_name##_old, store_##_name##_old) |
300 | 295 | ||
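An illustration of the per-policy attribute macros above (not part of this diff): a read-only sysfs file backed by a show_##_name routine, usually exported through a driver's ->attr list. example_boost and the placeholder value are invented.

#include <linux/cpufreq.h>
#include <linux/kernel.h>

static ssize_t show_example_boost(struct cpufreq_policy *policy, char *buf)
{
	return sprintf(buf, "%u\n", 0U);	/* placeholder value */
}
cpufreq_freq_attr_ro(example_boost);		/* 0444 file named "example_boost" */

/* typically exposed by the driver:
 *	static struct freq_attr *example_attrs[] = {
 *		&example_boost,
 *		NULL,
 *	};
 *	...	.attr = example_attrs,	in struct cpufreq_driver
 */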
301 | 296 | ||
302 | struct global_attr { | 297 | struct global_attr { |
303 | struct attribute attr; | 298 | struct attribute attr; |
304 | ssize_t (*show)(struct kobject *kobj, | 299 | ssize_t (*show)(struct kobject *kobj, |
305 | struct attribute *attr, char *buf); | 300 | struct attribute *attr, char *buf); |
306 | ssize_t (*store)(struct kobject *a, struct attribute *b, | 301 | ssize_t (*store)(struct kobject *a, struct attribute *b, |
307 | const char *c, size_t count); | 302 | const char *c, size_t count); |
308 | }; | 303 | }; |
309 | 304 | ||
310 | #define define_one_global_ro(_name) \ | 305 | #define define_one_global_ro(_name) \ |
311 | static struct global_attr _name = \ | 306 | static struct global_attr _name = \ |
312 | __ATTR(_name, 0444, show_##_name, NULL) | 307 | __ATTR(_name, 0444, show_##_name, NULL) |
313 | 308 | ||
314 | #define define_one_global_rw(_name) \ | 309 | #define define_one_global_rw(_name) \ |
315 | static struct global_attr _name = \ | 310 | static struct global_attr _name = \ |
316 | __ATTR(_name, 0644, show_##_name, store_##_name) | 311 | __ATTR(_name, 0644, show_##_name, store_##_name) |
317 | 312 | ||
318 | 313 | ||
319 | /********************************************************************* | 314 | /********************************************************************* |
320 | * CPUFREQ 2.6. INTERFACE * | 315 | * CPUFREQ 2.6. INTERFACE * |
321 | *********************************************************************/ | 316 | *********************************************************************/ |
322 | int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu); | 317 | int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu); |
323 | int cpufreq_update_policy(unsigned int cpu); | 318 | int cpufreq_update_policy(unsigned int cpu); |
324 | 319 | ||
325 | #ifdef CONFIG_CPU_FREQ | 320 | #ifdef CONFIG_CPU_FREQ |
326 | /* query the current CPU frequency (in kHz). If zero, cpufreq couldn't detect it */ | 321 | /* query the current CPU frequency (in kHz). If zero, cpufreq couldn't detect it */ |
327 | unsigned int cpufreq_get(unsigned int cpu); | 322 | unsigned int cpufreq_get(unsigned int cpu); |
328 | #else | 323 | #else |
329 | static inline unsigned int cpufreq_get(unsigned int cpu) | 324 | static inline unsigned int cpufreq_get(unsigned int cpu) |
330 | { | 325 | { |
331 | return 0; | 326 | return 0; |
332 | } | 327 | } |
333 | #endif | 328 | #endif |
334 | 329 | ||
335 | /* query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it */ | 330 | /* query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it */ |
336 | #ifdef CONFIG_CPU_FREQ | 331 | #ifdef CONFIG_CPU_FREQ |
337 | unsigned int cpufreq_quick_get(unsigned int cpu); | 332 | unsigned int cpufreq_quick_get(unsigned int cpu); |
338 | #else | 333 | #else |
339 | static inline unsigned int cpufreq_quick_get(unsigned int cpu) | 334 | static inline unsigned int cpufreq_quick_get(unsigned int cpu) |
340 | { | 335 | { |
341 | return 0; | 336 | return 0; |
342 | } | 337 | } |
343 | #endif | 338 | #endif |
344 | 339 | ||
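A one-line sketch (not part of this diff): both helpers return kHz and 0 when the frequency is unknown, so callers should test the result. example_show_speed is an invented name.

#include <linux/cpufreq.h>
#include <linux/kernel.h>

static void example_show_speed(unsigned int cpu)
{
	unsigned int khz = cpufreq_get(cpu);	/* 0 if cpufreq can't tell */

	if (khz)
		pr_info("cpu%u is running at %u MHz\n", cpu, khz / 1000);
}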
345 | 340 | ||
346 | /********************************************************************* | 341 | /********************************************************************* |
347 | * CPUFREQ DEFAULT GOVERNOR * | 342 | * CPUFREQ DEFAULT GOVERNOR * |
348 | *********************************************************************/ | 343 | *********************************************************************/ |
349 | 344 | ||
350 | 345 | ||
351 | /* | 346 | /* |
352 | Performance governor is the fallback governor if any other gov failed to | 347 | Performance governor is the fallback governor if any other gov failed to |
353 | auto load due to latency restrictions | 348 | auto load due to latency restrictions |
354 | */ | 349 | */ |
355 | #ifdef CONFIG_CPU_FREQ_GOV_PERFORMANCE | 350 | #ifdef CONFIG_CPU_FREQ_GOV_PERFORMANCE |
356 | extern struct cpufreq_governor cpufreq_gov_performance; | 351 | extern struct cpufreq_governor cpufreq_gov_performance; |
357 | #endif | 352 | #endif |
358 | #ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE | 353 | #ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE |
359 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_performance) | 354 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_performance) |
360 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE) | 355 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE) |
361 | extern struct cpufreq_governor cpufreq_gov_powersave; | 356 | extern struct cpufreq_governor cpufreq_gov_powersave; |
362 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_powersave) | 357 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_powersave) |
363 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE) | 358 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE) |
364 | extern struct cpufreq_governor cpufreq_gov_userspace; | 359 | extern struct cpufreq_governor cpufreq_gov_userspace; |
365 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_userspace) | 360 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_userspace) |
366 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND) | 361 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND) |
367 | extern struct cpufreq_governor cpufreq_gov_ondemand; | 362 | extern struct cpufreq_governor cpufreq_gov_ondemand; |
368 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_ondemand) | 363 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_ondemand) |
369 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE) | 364 | #elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE) |
370 | extern struct cpufreq_governor cpufreq_gov_conservative; | 365 | extern struct cpufreq_governor cpufreq_gov_conservative; |
371 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_conservative) | 366 | #define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_conservative) |
372 | #endif | 367 | #endif |
373 | 368 | ||
374 | 369 | ||
375 | /********************************************************************* | 370 | /********************************************************************* |
376 | * FREQUENCY TABLE HELPERS * | 371 | * FREQUENCY TABLE HELPERS * |
377 | *********************************************************************/ | 372 | *********************************************************************/ |
378 | 373 | ||
379 | #define CPUFREQ_ENTRY_INVALID ~0 | 374 | #define CPUFREQ_ENTRY_INVALID ~0 |
380 | #define CPUFREQ_TABLE_END ~1 | 375 | #define CPUFREQ_TABLE_END ~1 |
381 | 376 | ||
382 | struct cpufreq_frequency_table { | 377 | struct cpufreq_frequency_table { |
383 | unsigned int index; /* any */ | 378 | unsigned int index; /* any */ |
384 | unsigned int frequency; /* kHz - doesn't need to be in ascending | 379 | unsigned int frequency; /* kHz - doesn't need to be in ascending |
385 | * order */ | 380 | * order */ |
386 | }; | 381 | }; |
387 | 382 | ||
388 | int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, | 383 | int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, |
389 | struct cpufreq_frequency_table *table); | 384 | struct cpufreq_frequency_table *table); |
390 | 385 | ||
391 | int cpufreq_frequency_table_verify(struct cpufreq_policy *policy, | 386 | int cpufreq_frequency_table_verify(struct cpufreq_policy *policy, |
392 | struct cpufreq_frequency_table *table); | 387 | struct cpufreq_frequency_table *table); |
393 | 388 | ||
394 | int cpufreq_frequency_table_target(struct cpufreq_policy *policy, | 389 | int cpufreq_frequency_table_target(struct cpufreq_policy *policy, |
395 | struct cpufreq_frequency_table *table, | 390 | struct cpufreq_frequency_table *table, |
396 | unsigned int target_freq, | 391 | unsigned int target_freq, |
397 | unsigned int relation, | 392 | unsigned int relation, |
398 | unsigned int *index); | 393 | unsigned int *index); |
399 | 394 | ||
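A sketch of the usual table-driven pattern (not part of this diff): ->init() derives the cpuinfo limits from the table, ->target() picks an index honouring the relation. The table contents and all example_* names are invented.

#include <linux/cpufreq.h>

static struct cpufreq_frequency_table example_table[] = {
	{ .index = 0, .frequency = 800000 },	/* 800 MHz */
	{ .index = 1, .frequency = 1600000 },	/* 1.6 GHz */
	{ .frequency = CPUFREQ_TABLE_END },
};

static int example_table_init(struct cpufreq_policy *policy)
{
	/* fills policy->cpuinfo.min_freq/max_freq from the table */
	return cpufreq_frequency_table_cpuinfo(policy, example_table);
}

static int example_table_target(struct cpufreq_policy *policy,
				unsigned int target_freq,
				unsigned int relation)
{
	unsigned int idx;

	if (cpufreq_frequency_table_target(policy, example_table, target_freq,
					   relation, &idx))
		return -EINVAL;

	/* program the hardware for example_table[idx].frequency here */
	return 0;
}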
400 | /* the following 3 functions are for cpufreq core use only */ | 395 | /* the following 3 functions are for cpufreq core use only */ |
401 | struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu); | 396 | struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu); |
402 | struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu); | 397 | struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu); |
403 | void cpufreq_cpu_put (struct cpufreq_policy *data); | 398 | void cpufreq_cpu_put (struct cpufreq_policy *data); |
404 | 399 | ||
405 | /* the following are really really optional */ | 400 | /* the following are really really optional */ |
406 | extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs; | 401 | extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs; |
407 | 402 | ||
408 | void cpufreq_frequency_table_get_attr(struct cpufreq_frequency_table *table, | 403 | void cpufreq_frequency_table_get_attr(struct cpufreq_frequency_table *table, |
409 | unsigned int cpu); | 404 | unsigned int cpu); |
410 | 405 | ||
411 | void cpufreq_frequency_table_put_attr(unsigned int cpu); | 406 | void cpufreq_frequency_table_put_attr(unsigned int cpu); |
412 | 407 | ||
413 | 408 | ||
414 | /********************************************************************* | 409 | /********************************************************************* |
415 | * UNIFIED DEBUG HELPERS * | 410 | * UNIFIED DEBUG HELPERS * |
416 | *********************************************************************/ | 411 | *********************************************************************/ |
417 | 412 | ||
418 | #define CPUFREQ_DEBUG_CORE 1 | 413 | #define CPUFREQ_DEBUG_CORE 1 |
419 | #define CPUFREQ_DEBUG_DRIVER 2 | 414 | #define CPUFREQ_DEBUG_DRIVER 2 |
420 | #define CPUFREQ_DEBUG_GOVERNOR 4 | 415 | #define CPUFREQ_DEBUG_GOVERNOR 4 |
421 | 416 | ||
422 | #ifdef CONFIG_CPU_FREQ_DEBUG | 417 | #ifdef CONFIG_CPU_FREQ_DEBUG |
423 | 418 | ||
424 | extern void cpufreq_debug_printk(unsigned int type, const char *prefix, | 419 | extern void cpufreq_debug_printk(unsigned int type, const char *prefix, |
425 | const char *fmt, ...); | 420 | const char *fmt, ...); |
426 | 421 | ||
427 | #else | 422 | #else |
428 | 423 | ||
429 | #define cpufreq_debug_printk(msg...) do { } while(0) | 424 | #define cpufreq_debug_printk(msg...) do { } while(0) |
430 | 425 | ||
431 | #endif /* CONFIG_CPU_FREQ_DEBUG */ | 426 | #endif /* CONFIG_CPU_FREQ_DEBUG */ |
432 | 427 | ||
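cpufreq.c wraps this interface in its own dprintk() macro; a driver could follow the same pattern with the CPUFREQ_DEBUG_DRIVER type. The macro name and prefix string below are invented, shown only as a sketch.

#include <linux/cpufreq.h>

#define example_dprintk(msg...) \
	cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, "example-driver", msg)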
433 | #endif /* _LINUX_CPUFREQ_H */ | 428 | #endif /* _LINUX_CPUFREQ_H */ |
434 | 429 |