Commit 35b2a113cb0298d4f9a1263338b456094a414057

Authored by Johannes Berg
Committed by John W. Linville
1 parent d38069d1e3

wireless: remove wext sysfs

The only user of this was hal prior to its 0.5.12
release, which happened over two years ago, so I'm
sure this can be removed without issues.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
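As a quick way to see what this commit removes from a running system: kernels built with CONFIG_WIRELESS_EXT_SYSFS exposed per-interface wireless statistics as a sysfs directory. The sketch below probes for that directory from userspace; the exact path layout (/sys/class/net/<iface>/wireless) is an assumption based on the net-sysfs code being deleted here, and on kernels with this commit the list is simply empty.

```python
import glob
import os

def wext_sysfs_ifaces(sysfs_root="/sys/class/net"):
    """List interfaces still exposing the old wext 'wireless' sysfs dir.

    The <sysfs_root>/<iface>/wireless layout is an assumption drawn from
    the CONFIG_WIRELESS_EXT_SYSFS code this commit removes; without that
    option (or after this commit) the result is an empty list.
    """
    return sorted(
        os.path.basename(os.path.dirname(p))
        for p in glob.glob(os.path.join(sysfs_root, "*", "wireless"))
        if os.path.isdir(p)
    )

print(wext_sysfs_ifaces())
```

The ioctl-based wireless extensions remain the supported way to read these values, as the removal-schedule entry below notes.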

Showing 3 changed files with 0 additions and 96 deletions

Documentation/feature-removal-schedule.txt
The following is a list of files and features that are going to be
removed in the kernel source tree. Every entry should contain what
exactly is going away, why it is happening, and who is going to be doing
the work. When the feature is removed from the kernel, it should also
be removed from this file. The suggested deprecation period is 3 releases.

---------------------------

What: ddebug_query="query" boot cmdline param
When: v3.8
Why: obsoleted by dyndbg="query" and module.dyndbg="query"
Who: Jim Cromie <jim.cromie@gmail.com>, Jason Baron <jbaron@redhat.com>

---------------------------

What: CONFIG_APM_CPU_IDLE, and its ability to call APM BIOS in idle
When: 2012
Why: This optional sub-feature of APM is of dubious reliability,
and ancient APM laptops are likely better served by calling HLT.
Deleting CONFIG_APM_CPU_IDLE allows x86 to stop exporting
the pm_idle function pointer to modules.
Who: Len Brown <len.brown@intel.com>

----------------------------

What: x86_32 "no-hlt" cmdline param
When: 2012
Why: remove a branch from idle path, simplify code used by everybody.
This option disabled the use of HLT in idle and machine_halt()
for hardware that was flakey 15-years ago. Today we have
"idle=poll" that removed HLT from idle, and so if such a machine
is still running the upstream kernel, "idle=poll" is likely sufficient.
Who: Len Brown <len.brown@intel.com>

----------------------------

What: x86 "idle=mwait" cmdline param
When: 2012
Why: simplify x86 idle code
Who: Len Brown <len.brown@intel.com>

----------------------------

What: PRISM54
When: 2.6.34

Why: prism54 FullMAC PCI / Cardbus devices used to be supported only by the
prism54 wireless driver. After Intersil stopped selling these
devices in preference for the newer more flexible SoftMAC devices
a SoftMAC device driver was required and prism54 did not support
them. The p54pci driver now exists and has been present in the kernel for
a while. This driver supports both SoftMAC devices and FullMAC devices.
The main difference between these devices was the amount of memory which
could be used for the firmware. The SoftMAC devices support a smaller
amount of memory. Because of this the SoftMAC firmware fits into FullMAC
devices's memory. p54pci supports not only PCI / Cardbus but also USB
and SPI. Since p54pci supports all devices prism54 supports
you will have a conflict. I'm not quite sure how distributions are
handling this conflict right now. prism54 was kept around due to
claims users may experience issues when using the SoftMAC driver.
Time has passed users have not reported issues. If you use prism54
and for whatever reason you cannot use p54pci please let us know!
E-mail us at: linux-wireless@vger.kernel.org

For more information see the p54 wiki page:

http://wireless.kernel.org/en/users/Drivers/p54

Who: Luis R. Rodriguez <lrodriguez@atheros.com>

---------------------------

What: IRQF_SAMPLE_RANDOM
Check: IRQF_SAMPLE_RANDOM
When: July 2009

Why: Many of IRQF_SAMPLE_RANDOM users are technically bogus as entropy
sources in the kernel's current entropy model. To resolve this, every
input point to the kernel's entropy pool needs to better document the
type of entropy source it actually is. This will be replaced with
additional add_*_randomness functions in drivers/char/random.c

Who: Robin Getz <rgetz@blackfin.uclinux.org> & Matt Mackall <mpm@selenic.com>

---------------------------

What: The ieee80211_regdom module parameter
When: March 2010 / desktop catchup

Why: This was inherited by the CONFIG_WIRELESS_OLD_REGULATORY code,
and currently serves as an option for users to define an
ISO / IEC 3166 alpha2 code for the country they are currently
present in. Although there are userspace API replacements for this
through nl80211 distributions haven't yet caught up with implementing
decent alternatives through standard GUIs. Although available as an
option through iw or wpa_supplicant its just a matter of time before
distributions pick up good GUI options for this. The ideal solution
would actually consist of intelligent designs which would do this for
the user automatically even when travelling through different countries.
Until then we leave this module parameter as a compromise.

When userspace improves with reasonable widely-available alternatives for
this we will no longer need this module parameter. This entry hopes that
by the super-futuristically looking date of "March 2010" we will have
such replacements widely available.

Who: Luis R. Rodriguez <lrodriguez@atheros.com>

---------------------------

What: dev->power.power_state
When: July 2007
Why: Broken design for runtime control over driver power states, confusing
driver-internal runtime power management with: mechanisms to support
system-wide sleep state transitions; event codes that distinguish
different phases of swsusp "sleep" transitions; and userspace policy
inputs. This framework was never widely used, and most attempts to
use it were broken. Drivers should instead be exposing domain-specific
interfaces either to kernel or to userspace.
Who: Pavel Machek <pavel@ucw.cz>

---------------------------

What: /proc/<pid>/oom_adj
When: August 2012
Why: /proc/<pid>/oom_adj allows userspace to influence the oom killer's
badness heuristic used to determine which task to kill when the kernel
is out of memory.

The badness heuristic has since been rewritten since the introduction of
this tunable such that its meaning is deprecated. The value was
implemented as a bitshift on a score generated by the badness()
function that did not have any precise units of measure. With the
rewrite, the score is given as a proportion of available memory to the
task allocating pages, so using a bitshift which grows the score
exponentially is, thus, impossible to tune with fine granularity.

A much more powerful interface, /proc/<pid>/oom_score_adj, was
introduced with the oom killer rewrite that allows users to increase or
decrease the badness score linearly. This interface will replace
/proc/<pid>/oom_adj.

A warning will be emitted to the kernel log if an application uses this
deprecated interface. After it is printed once, future warnings will be
suppressed until the kernel is rebooted.

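To make the bitshift-vs-linear distinction above concrete: when the legacy oom_adj file is written, the kernel internally rescales the value onto the linear oom_score_adj range. The sketch below models that linear mapping in Python; the constants match the documented ranges (-17..15 for oom_adj, -1000..1000 for oom_score_adj), but the exact rounding of intermediate values is an assumption, since C integer division truncates toward zero while Python's // floors.

```python
OOM_ADJUST_MAX = 15        # legacy /proc/<pid>/oom_adj maximum (range -17..15)
OOM_SCORE_ADJ_MAX = 1000   # /proc/<pid>/oom_score_adj maximum (range -1000..1000)
OOM_DISABLE = -17          # legacy sentinel: exempt the task from the oom killer

def oom_adj_to_score_adj(oom_adj):
    """Model the linear rescaling of a legacy oom_adj value.

    Rounding for negative intermediate values is an assumption here;
    the idea is simply oom_adj * OOM_SCORE_ADJ_MAX / -OOM_DISABLE.
    """
    if not OOM_DISABLE <= oom_adj <= OOM_ADJUST_MAX:
        raise ValueError("oom_adj out of range")
    if oom_adj == OOM_DISABLE:
        return -OOM_SCORE_ADJ_MAX
    return oom_adj * OOM_SCORE_ADJ_MAX // -OOM_DISABLE

print(oom_adj_to_score_adj(15))   # -> 882
print(oom_adj_to_score_adj(-17))  # -> -1000
```

The old interface, by contrast, shifted the badness score by oom_adj bits, so each step doubled or halved the score; the linear scale is what makes fine-grained tuning possible.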
---------------------------

What: remove EXPORT_SYMBOL(kernel_thread)
When: August 2006
Files: arch/*/kernel/*_ksyms.c
Check: kernel_thread
Why: kernel_thread is a low-level implementation detail. Drivers should
use the <linux/kthread.h> API instead which shields them from
implementation details and provides a higherlevel interface that
prevents bugs and code duplication
Who: Christoph Hellwig <hch@lst.de>

---------------------------

What: Unused EXPORT_SYMBOL/EXPORT_SYMBOL_GPL exports
(temporary transition config option provided until then)
The transition config option will also be removed at the same time.
When: before 2.6.19
Why: Unused symbols are both increasing the size of the kernel binary
and are often a sign of "wrong API"
Who: Arjan van de Ven <arjan@linux.intel.com>

---------------------------

What: PHYSDEVPATH, PHYSDEVBUS, PHYSDEVDRIVER in the uevent environment
When: October 2008
Why: The stacking of class devices makes these values misleading and
inconsistent.
Class devices should not carry any of these properties, and bus
devices have SUBSYTEM and DRIVER as a replacement.
Who: Kay Sievers <kay.sievers@suse.de>

---------------------------

What: ACPI procfs interface
When: July 2008
Why: ACPI sysfs conversion should be finished by January 2008.
ACPI procfs interface will be removed in July 2008 so that
there is enough time for the user space to catch up.
Who: Zhang Rui <rui.zhang@intel.com>

---------------------------

What: CONFIG_ACPI_PROCFS_POWER
When: 2.6.39
Why: sysfs I/F for ACPI power devices, including AC and Battery,
has been working in upstream kernel since 2.6.24, Sep 2007.
In 2.6.37, we make the sysfs I/F always built in and this option
disabled by default.
Remove this option and the ACPI power procfs interface in 2.6.39.
Who: Zhang Rui <rui.zhang@intel.com>

---------------------------

What: /proc/acpi/event
When: February 2008
Why: /proc/acpi/event has been replaced by events via the input layer
and netlink since 2.6.23.
Who: Len Brown <len.brown@intel.com>

---------------------------

What: i386/x86_64 bzImage symlinks
When: April 2010

Why: The i386/x86_64 merge provides a symlink to the old bzImage
location so not yet updated user space tools, e.g. package
scripts, do not break.
Who: Thomas Gleixner <tglx@linutronix.de>

---------------------------

What: GPIO autorequest on gpio_direction_{input,output}() in gpiolib
When: February 2010
Why: All callers should use explicit gpio_request()/gpio_free().
The autorequest mechanism in gpiolib was provided mostly as a
migration aid for legacy GPIO interfaces (for SOC based GPIOs).
Those users have now largely migrated. Platforms implementing
the GPIO interfaces without using gpiolib will see no changes.
Who: David Brownell <dbrownell@users.sourceforge.net>
---------------------------

What: b43 support for firmware revision < 410
When: The schedule was July 2008, but it was decided that we are going to keep the
code as long as there are no major maintanance headaches.
So it _could_ be removed _any_ time now, if it conflicts with something new.
Why: The support code for the old firmware hurts code readability/maintainability
and slightly hurts runtime performance. Bugfixes for the old firmware
are not provided by Broadcom anymore.
Who: Michael Buesch <m@bues.ch>

---------------------------

What: Ability for non root users to shm_get hugetlb pages based on mlock
resource limits
When: 2.6.31
Why: Non root users need to be part of /proc/sys/vm/hugetlb_shm_group or
have CAP_IPC_LOCK to be able to allocate shm segments backed by
huge pages. The mlock based rlimit check to allow shm hugetlb is
inconsistent with mmap based allocations. Hence it is being
deprecated.
Who: Ravikiran Thirumalai <kiran@scalex86.org>

---------------------------

-What: Code that is now under CONFIG_WIRELESS_EXT_SYSFS
-(in net/core/net-sysfs.c)
-When: 3.5
-Why: Over 1K .text/.data size reduction, data is available in other
-ways (ioctls)
-Who: Johannes Berg <johannes@sipsolutions.net>
-
----------------------------
-
What: sysfs ui for changing p4-clockmod parameters
When: September 2009
Why: See commits 129f8ae9b1b5be94517da76009ea956e89104ce8 and
e088e4c9cdb618675874becb91b2fd581ee707e6.
Removal is subject to fixing any remaining bugs in ACPI which may
cause the thermal throttling not to happen at the right time.
Who: Dave Jones <davej@redhat.com>, Matthew Garrett <mjg@redhat.com>

-----------------------------

What: fakephp and associated sysfs files in /sys/bus/pci/slots/
When: 2011
Why: In 2.6.27, the semantics of /sys/bus/pci/slots was redefined to
represent a machine's physical PCI slots. The change in semantics
had userspace implications, as the hotplug core no longer allowed
drivers to create multiple sysfs files per physical slot (required
for multi-function devices, e.g.). fakephp was seen as a developer's
tool only, and its interface changed. Too late, we learned that
there were some users of the fakephp interface.

In 2.6.30, the original fakephp interface was restored. At the same
time, the PCI core gained the ability that fakephp provided, namely
function-level hot-remove and hot-add.

Since the PCI core now provides the same functionality, exposed in:

/sys/bus/pci/rescan
/sys/bus/pci/devices/.../remove
/sys/bus/pci/devices/.../rescan

there is no functional reason to maintain fakephp as well.

We will keep the existing module so that 'modprobe fakephp' will
present the old /sys/bus/pci/slots/... interface for compatibility,
but users are urged to migrate their applications to the API above.

After a reasonable transition period, we will remove the legacy
fakephp interface.
Who: Alex Chiang <achiang@hp.com>

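A userspace application migrating off fakephp only needs to check for, and write "1" to, the PCI-core sysfs files the entry above lists. The sketch below probes which of those replacement paths exist on the running system; the device address 0000:00:00.0 is a hypothetical placeholder standing in for the elided ... in the entry, and writes are omitted since they require root.

```python
import os

# The PCI-core sysfs entry points named above as the fakephp replacement.
# 0000:00:00.0 is a placeholder device address for illustration only;
# real code would substitute the domain:bus:slot.function it manages.
REPLACEMENT_PATHS = [
    "/sys/bus/pci/rescan",
    "/sys/bus/pci/devices/0000:00:00.0/remove",
    "/sys/bus/pci/devices/0000:00:00.0/rescan",
]

def available(paths=REPLACEMENT_PATHS):
    """Map each replacement sysfs path to whether it exists here."""
    return {p: os.path.exists(p) for p in paths}

for path, present in available().items():
    print(f"{path}: {'present' if present else 'absent'}")
```

Triggering an action is then a one-byte write, e.g. writing "1" to the remove file hot-removes that function, mirroring what fakephp's slot files used to do.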
301 --------------------------- 292 ---------------------------
302 293
303 What: CONFIG_RFKILL_INPUT 294 What: CONFIG_RFKILL_INPUT
304 When: 2.6.33 295 When: 2.6.33
305 Why: Should be implemented in userspace, policy daemon. 296 Why: Should be implemented in userspace, policy daemon.
306 Who: Johannes Berg <johannes@sipsolutions.net> 297 Who: Johannes Berg <johannes@sipsolutions.net>
307 298
308 ---------------------------- 299 ----------------------------
309 300
310 What: sound-slot/service-* module aliases and related clutters in 301 What: sound-slot/service-* module aliases and related clutters in
311 sound/sound_core.c 302 sound/sound_core.c
312 When: August 2010 303 When: August 2010
313 Why: OSS sound_core grabs all legacy minors (0-255) of SOUND_MAJOR 304 Why: OSS sound_core grabs all legacy minors (0-255) of SOUND_MAJOR
314 (14) and requests modules using custom sound-slot/service-* 305 (14) and requests modules using custom sound-slot/service-*
315 module aliases. The only benefit of doing this is allowing 306 module aliases. The only benefit of doing this is allowing
316 use of custom module aliases which might as well be considered 307 use of custom module aliases which might as well be considered
317 a bug at this point. This preemptive claiming prevents 308 a bug at this point. This preemptive claiming prevents
318 alternative OSS implementations. 309 alternative OSS implementations.
319 310
320 Till the feature is removed, the kernel will be requesting 311 Till the feature is removed, the kernel will be requesting
321 both sound-slot/service-* and the standard char-major-* module 312 both sound-slot/service-* and the standard char-major-* module
322 aliases and allow turning off the pre-claiming selectively via 313 aliases and allow turning off the pre-claiming selectively via
323 CONFIG_SOUND_OSS_CORE_PRECLAIM and soundcore.preclaim_oss 314 CONFIG_SOUND_OSS_CORE_PRECLAIM and soundcore.preclaim_oss
324 kernel parameter. 315 kernel parameter.
325 316
326 After the transition phase is complete, both the custom module 317 After the transition phase is complete, both the custom module
327 aliases and switches to disable it will go away. This removal 318 aliases and switches to disable it will go away. This removal
328 will also allow making ALSA OSS emulation independent of 319 will also allow making ALSA OSS emulation independent of
329 sound_core. The dependency will be broken then too. 320 sound_core. The dependency will be broken then too.
330 Who: Tejun Heo <tj@kernel.org> 321 Who: Tejun Heo <tj@kernel.org>
331 322
332 ---------------------------- 323 ----------------------------
333 324
334 What: sysfs-class-rfkill state file 325 What: sysfs-class-rfkill state file
335 When: Feb 2014 326 When: Feb 2014
336 Files: net/rfkill/core.c 327 Files: net/rfkill/core.c
337 Why: Documented as obsolete since Feb 2010. This file is limited to 3 328 Why: Documented as obsolete since Feb 2010. This file is limited to 3
338 states while the rfkill drivers can have 4 states. 329 states while the rfkill drivers can have 4 states.
339 Who: anybody or Florian Mickler <florian@mickler.org> 330 Who: anybody or Florian Mickler <florian@mickler.org>
340 331
341 ---------------------------- 332 ----------------------------
342 333
343 What: sysfs-class-rfkill claim file 334 What: sysfs-class-rfkill claim file
344 When: Feb 2012 335 When: Feb 2012
345 Files: net/rfkill/core.c 336 Files: net/rfkill/core.c
346 Why: It is not possible to claim an rfkill driver since 2007. This is 337 Why: It is not possible to claim an rfkill driver since 2007. This is
347 Documented as obsolete since Feb 2010. 338 Documented as obsolete since Feb 2010.
348 Who: anybody or Florian Mickler <florian@mickler.org> 339 Who: anybody or Florian Mickler <florian@mickler.org>
349 340
350 ---------------------------- 341 ----------------------------
351 342
352 What: iwlwifi 50XX module parameters 343 What: iwlwifi 50XX module parameters
353 When: 3.0 344 When: 3.0
354 Why: The "..50" modules parameters were used to configure 5000 series and 345 Why: The "..50" modules parameters were used to configure 5000 series and
355 up devices; different set of module parameters also available for 4965 346 up devices; different set of module parameters also available for 4965
356 with same functionalities. Consolidate both set into single place 347 with same functionalities. Consolidate both set into single place
357 in drivers/net/wireless/iwlwifi/iwl-agn.c 348 in drivers/net/wireless/iwlwifi/iwl-agn.c
358 349
359 Who: Wey-Yi Guy <wey-yi.w.guy@intel.com> 350 Who: Wey-Yi Guy <wey-yi.w.guy@intel.com>
360 351
361 ---------------------------- 352 ----------------------------
362 353
363 What: iwl4965 alias support 354 What: iwl4965 alias support
364 When: 3.0 355 When: 3.0
365 Why: Internal alias support has been present in module-init-tools for some 356 Why: Internal alias support has been present in module-init-tools for some
366 time, the MODULE_ALIAS("iwl4965") boilerplate aliases can be removed 357 time, the MODULE_ALIAS("iwl4965") boilerplate aliases can be removed
367 with no impact. 358 with no impact.
368 359
369 Who: Wey-Yi Guy <wey-yi.w.guy@intel.com> 360 Who: Wey-Yi Guy <wey-yi.w.guy@intel.com>
370 361
371 --------------------------- 362 ---------------------------
372 363
373 What: xt_NOTRACK
374 Files: net/netfilter/xt_NOTRACK.c
375 When: April 2011
376 Why: Superseded by xt_CT
377 Who: Netfilter developer team <netfilter-devel@vger.kernel.org>
378
379 ----------------------------
380
381 What: IRQF_DISABLED
382 When: 2.6.36
383 Why: The flag is a NOOP as we run interrupt handlers with interrupts disabled
384 Who: Thomas Gleixner <tglx@linutronix.de>
385
386 ----------------------------
387
388 What: PCI DMA unmap state API
389 When: August 2012
390 Why: PCI DMA unmap state API (include/linux/pci-dma.h) was replaced
391 with DMA unmap state API (DMA unmap state API can be used for
392 any bus).
393 Who: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
394
395 ----------------------------
396
397 What: iwlwifi disable_hw_scan module parameters
398 When: 3.0
399 Why: Hardware scan is the preferred scanning method for iwlwifi
400 devices. Remove software scan support for all the
401 iwlwifi devices.
402
403 Who: Wey-Yi Guy <wey-yi.w.guy@intel.com>
404
405 ----------------------------
406
407 What: Legacy, non-standard chassis intrusion detection interface.
408 When: June 2011
409 Why: The adm9240, w83792d and w83793 hardware monitoring drivers have
410 legacy interfaces for chassis intrusion detection. A standard
411 interface has been added to each driver, so the legacy interface
412 can be removed.
413 Who: Jean Delvare <khali@linux-fr.org>
416
417 What: xt_connlimit rev 0
418 When: 2012
419 Who: Jan Engelhardt <jengelh@medozas.de>
420 Files: net/netfilter/xt_connlimit.c
421
422 ----------------------------
423
424 What: ipt_addrtype match include file
425 When: 2012
426 Why: superseded by xt_addrtype
427 Who: Florian Westphal <fw@strlen.de>
428 Files: include/linux/netfilter_ipv4/ipt_addrtype.h
429
430 ----------------------------
431
432 What: i2c_driver.attach_adapter
433 i2c_driver.detach_adapter
434 When: September 2011
435 Why: These legacy callbacks should no longer be used as i2c-core offers
436 a variety of preferable alternative ways to instantiate I2C devices.
437 Who: Jean Delvare <khali@linux-fr.org>
438
439 ----------------------------
440
441 What: Opening a radio device node will no longer automatically switch the
442 tuner mode from tv to radio.
443 When: 3.3
444 Why: Just opening a V4L device should not change the state of the hardware
445 like that. It's very unexpected and against the V4L spec. Instead, you
446 switch to radio mode by calling VIDIOC_S_FREQUENCY. This is the second
447 and last step of the move to consistent handling of tv and radio tuners.
448 Who: Hans Verkuil <hans.verkuil@cisco.com>
449
450 ----------------------------
451
452 What: g_file_storage driver
453 When: 3.8
454 Why: This driver has been superseded by g_mass_storage.
455 Who: Alan Stern <stern@rowland.harvard.edu>
456
457 ----------------------------
458
459 What: threeg and interface sysfs files in /sys/devices/platform/acer-wmi
460 When: 2012
461 Why: In 3.0, we can now autodetect the internal 3G device and already have
462 the threeg rfkill device. So, we plan to remove threeg sysfs support
463 as it is no longer necessary.
464
465 We also plan to remove the interface sysfs file that exposed which
466 ACPI-WMI interface was used by the acer-wmi driver. It will be replaced
467 by an information log when acer-wmi initializes.
468 Who: Lee, Chun-Yi <jlee@novell.com>
469
470 ---------------------------
471
472 What: /sys/devices/platform/_UDC_/udc/_UDC_/is_dualspeed file and
473 is_dualspeed line in /sys/devices/platform/ci13xxx_*/udc/device file.
474 When: 3.8
475 Why: The is_dualspeed file is superseded by maximum_speed in the same
476 directory, and the is_dualspeed line in the device file is superseded by
477 the max_speed line in the same file.
478
479 The maximum_speed/max_speed file specifies the maximum speed supported
480 by the UDC. To check if dual speed is supported, check if the value
481 is >= 3. The various possible speeds are defined in <linux/usb/ch9.h>.
482 Who: Michal Nazarewicz <mina86@mina86.com>
483
484 ----------------------------
485
486 What: The XFS nodelaylog mount option
487 When: 3.3
488 Why: The delaylog mode that has been the default since 2.6.39 has proven
489 stable, and the old code is in the way of additional improvements in
490 the log code.
491 Who: Christoph Hellwig <hch@lst.de>
492
493 ----------------------------
494
495 What: iwlagn alias support
496 When: 3.5
497 Why: The iwlagn module has been renamed iwlwifi. The alias will be around
498 for backward compatibility for several cycles and then dropped.
499 Who: Don Fry <donald.h.fry@intel.com>
500
501 ----------------------------
502
503 What: pci_scan_bus_parented()
504 When: 3.5
505 Why: The pci_scan_bus_parented() interface creates a new root bus. The
506 bus is created with default resources (ioport_resource and
507 iomem_resource) that are always wrong, so we rely on arch code to
508 correct them later. Callers of pci_scan_bus_parented() should
509 convert to using pci_scan_root_bus() so they can supply a list of
510 bus resources when the bus is created.
511 Who: Bjorn Helgaas <bhelgaas@google.com>
512
513 ----------------------------
514
515 What: Low Performance USB Block driver ("CONFIG_BLK_DEV_UB")
516 When: 3.6
517 Why: This driver provides support for USB storage devices like "USB
518 sticks". As of now, it is deactivated in Debian, Fedora and
519 Ubuntu. All current users can switch over to usb-storage
520 (CONFIG_USB_STORAGE), whose only drawback is the additional SCSI
521 stack.
522 Who: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
523
524 ----------------------------
525
526 What: kmap_atomic(page, km_type)
527 When: 3.5
528 Why: The old kmap_atomic() with two arguments is deprecated; we only
529 keep it for backward compatibility for a few cycles and then drop it.
530 Who: Cong Wang <amwang@redhat.com>
531
532 ----------------------------
533
534 What: get_robust_list syscall
535 When: 2013
536 Why: There appear to be no production users of the get_robust_list syscall,
537 and it runs the risk of leaking address locations, allowing the bypass
538 of ASLR. It was only ever intended for debugging, so it should be
539 removed.
540 Who: Kees Cook <keescook@chromium.org>
541
542 ----------------------------
543
544 What: Removing the pn544 raw driver.
545 When: 3.6
546 Why: With the introduction of the NFC HCI and SHDL kernel layers, pn544.c
547 is being replaced by pn544_hci.c, which is accessible through the netlink
548 and socket NFC APIs. Moreover, pn544.c is outdated and does not seem to
549 work properly with the latest Android stacks.
550 Having two drivers for the same hardware is confusing and as such we
551 should only keep the one following the kernel NFC APIs.
552 Who: Samuel Ortiz <sameo@linux.intel.com>
553
554 ----------------------------
555
556 What: setitimer accepts user NULL pointer (value)
557 When: 3.6
558 Why: setitimer does not return -EFAULT if the user pointer is NULL. This
559 violates the spec.
560 Who: Sasikantha Babu <sasikanth.v19@gmail.com>
561
562 ----------------------------
563
564 What: remove bogus DV presets V4L2_DV_1080I29_97, V4L2_DV_1080I30 and
565 V4L2_DV_1080I25
566 When: 3.6
567 Why: These HDTV formats do not exist and were added by a confused mind
568 (that was me, to be precise...)
569 Who: Hans Verkuil <hans.verkuil@cisco.com>
570
571 ----------------------------
572
573 What: V4L2_CID_HCENTER, V4L2_CID_VCENTER V4L2 controls
574 When: 3.7
575 Why: The V4L2_CID_VCENTER, V4L2_CID_HCENTER controls have been deprecated
576 for about 4 years and they are not used by any mainline driver.
577 There are newer controls (V4L2_CID_PAN*, V4L2_CID_TILT*) that provide
578 similar functionality.
579 Who: Sylwester Nawrocki <sylvester.nawrocki@gmail.com>
580
581 ----------------------------
582
583 What: cgroup option updates via remount
584 When: March 2013
585 Why: Remount currently allows changing bound subsystems and
586 release_agent. Rebinding is hardly useful as it only works
587 when the hierarchy is empty and release_agent itself should be
588 replaced with conventional fsnotify.
589
590 ----------------------------
591
592 What: KVM debugfs statistics
593 When: 2013
594 Why: KVM tracepoints provide mostly equivalent information in a much more
595 flexible fashion.
596
597 ----------------------------
598
599 What: at91-mci driver ("CONFIG_MMC_AT91")
600 When: 3.7
601 Why: There are two mci drivers: at91-mci and atmel-mci. The PDC support
602 was added to atmel-mci as a first step to support more chips.
603 Then at91-mci was kept only for old IP versions (on at91rm9200 and
604 at91sam9261). The support of these IP versions has just been added
605 to atmel-mci, so atmel-mci can be used for all chips.
606 Who: Ludovic Desroches <ludovic.desroches@atmel.com>
607
608 ----------------------------
609
610 What: net/wanrouter/
611 When: June 2013
612 Why: Unsupported/unmaintained/unused since 2.6
613
614 ----------------------------
615
net/core/net-sysfs.c
1 /*
2  * net-sysfs.c - network device class and attributes
3  *
4  * Copyright (c) 2003 Stephen Hemminger <shemminger@osdl.org>
5  *
6  * This program is free software; you can redistribute it and/or
7  * modify it under the terms of the GNU General Public License
8  * as published by the Free Software Foundation; either version
9  * 2 of the License, or (at your option) any later version.
10  */
11
12 #include <linux/capability.h>
13 #include <linux/kernel.h>
14 #include <linux/netdevice.h>
15 #include <linux/if_arp.h>
16 #include <linux/slab.h>
17 #include <linux/nsproxy.h>
18 #include <net/sock.h>
19 #include <net/net_namespace.h>
20 #include <linux/rtnetlink.h>
21 #include <linux/wireless.h>
22 #include <linux/vmalloc.h>
23 #include <linux/export.h>
24 #include <linux/jiffies.h>
25 #include <net/wext.h>
26
27 #include "net-sysfs.h"
28
29 #ifdef CONFIG_SYSFS
30 static const char fmt_hex[] = "%#x\n";
31 static const char fmt_long_hex[] = "%#lx\n";
32 static const char fmt_dec[] = "%d\n";
33 static const char fmt_udec[] = "%u\n";
34 static const char fmt_ulong[] = "%lu\n";
35 static const char fmt_u64[] = "%llu\n";
36
37 static inline int dev_isalive(const struct net_device *dev)
38 {
39         return dev->reg_state <= NETREG_REGISTERED;
40 }
41
42 /* use same locking rules as GIF* ioctl's */
43 static ssize_t netdev_show(const struct device *dev,
44                            struct device_attribute *attr, char *buf,
45                            ssize_t (*format)(const struct net_device *, char *))
46 {
47         struct net_device *net = to_net_dev(dev);
48         ssize_t ret = -EINVAL;
49
50         read_lock(&dev_base_lock);
51         if (dev_isalive(net))
52                 ret = (*format)(net, buf);
53         read_unlock(&dev_base_lock);
54
55         return ret;
56 }
57
58 /* generate a show function for simple field */
59 #define NETDEVICE_SHOW(field, format_string) \
60 static ssize_t format_##field(const struct net_device *net, char *buf) \
61 { \
62         return sprintf(buf, format_string, net->field); \
63 } \
64 static ssize_t show_##field(struct device *dev, \
65                             struct device_attribute *attr, char *buf) \
66 { \
67         return netdev_show(dev, attr, buf, format_##field); \
68 }
69
70
71 /* use same locking and permission rules as SIF* ioctl's */
72 static ssize_t netdev_store(struct device *dev, struct device_attribute *attr,
73                             const char *buf, size_t len,
74                             int (*set)(struct net_device *, unsigned long))
75 {
76         struct net_device *net = to_net_dev(dev);
77         unsigned long new;
78         int ret = -EINVAL;
79
80         if (!capable(CAP_NET_ADMIN))
81                 return -EPERM;
82
83         ret = kstrtoul(buf, 0, &new);
84         if (ret)
85                 goto err;
86
87         if (!rtnl_trylock())
88                 return restart_syscall();
89
90         if (dev_isalive(net)) {
91                 if ((ret = (*set)(net, new)) == 0)
92                         ret = len;
93         }
94         rtnl_unlock();
95 err:
96         return ret;
97 }
98
99 NETDEVICE_SHOW(dev_id, fmt_hex);
100 NETDEVICE_SHOW(addr_assign_type, fmt_dec);
101 NETDEVICE_SHOW(addr_len, fmt_dec);
102 NETDEVICE_SHOW(iflink, fmt_dec);
103 NETDEVICE_SHOW(ifindex, fmt_dec);
104 NETDEVICE_SHOW(type, fmt_dec);
105 NETDEVICE_SHOW(link_mode, fmt_dec);
106
107 /* use same locking rules as GIFHWADDR ioctl's */
108 static ssize_t show_address(struct device *dev, struct device_attribute *attr,
109                             char *buf)
110 {
111         struct net_device *net = to_net_dev(dev);
112         ssize_t ret = -EINVAL;
113
114         read_lock(&dev_base_lock);
115         if (dev_isalive(net))
116                 ret = sysfs_format_mac(buf, net->dev_addr, net->addr_len);
117         read_unlock(&dev_base_lock);
118         return ret;
119 }
120
121 static ssize_t show_broadcast(struct device *dev,
122                               struct device_attribute *attr, char *buf)
123 {
124         struct net_device *net = to_net_dev(dev);
125         if (dev_isalive(net))
126                 return sysfs_format_mac(buf, net->broadcast, net->addr_len);
127         return -EINVAL;
128 }
129
130 static ssize_t show_carrier(struct device *dev,
131                             struct device_attribute *attr, char *buf)
132 {
133         struct net_device *netdev = to_net_dev(dev);
134         if (netif_running(netdev)) {
135                 return sprintf(buf, fmt_dec, !!netif_carrier_ok(netdev));
136         }
137         return -EINVAL;
138 }
139
140 static ssize_t show_speed(struct device *dev,
141                           struct device_attribute *attr, char *buf)
142 {
143         struct net_device *netdev = to_net_dev(dev);
144         int ret = -EINVAL;
145
146         if (!rtnl_trylock())
147                 return restart_syscall();
148
149         if (netif_running(netdev)) {
150                 struct ethtool_cmd cmd;
151                 if (!__ethtool_get_settings(netdev, &cmd))
152                         ret = sprintf(buf, fmt_udec, ethtool_cmd_speed(&cmd));
153         }
154         rtnl_unlock();
155         return ret;
156 }
157
158 static ssize_t show_duplex(struct device *dev,
159                            struct device_attribute *attr, char *buf)
160 {
161         struct net_device *netdev = to_net_dev(dev);
162         int ret = -EINVAL;
163
164         if (!rtnl_trylock())
165                 return restart_syscall();
166
167         if (netif_running(netdev)) {
168                 struct ethtool_cmd cmd;
169                 if (!__ethtool_get_settings(netdev, &cmd))
170                         ret = sprintf(buf, "%s\n",
171                                       cmd.duplex ? "full" : "half");
172         }
173         rtnl_unlock();
174         return ret;
175 }
176
177 static ssize_t show_dormant(struct device *dev,
178                             struct device_attribute *attr, char *buf)
179 {
180         struct net_device *netdev = to_net_dev(dev);
181
182         if (netif_running(netdev))
183                 return sprintf(buf, fmt_dec, !!netif_dormant(netdev));
184
185         return -EINVAL;
186 }
187
188 static const char *const operstates[] = {
189         "unknown",
190         "notpresent", /* currently unused */
191         "down",
192         "lowerlayerdown",
193         "testing", /* currently unused */
194         "dormant",
195         "up"
196 };
197
198 static ssize_t show_operstate(struct device *dev,
199                               struct device_attribute *attr, char *buf)
200 {
201         const struct net_device *netdev = to_net_dev(dev);
202         unsigned char operstate;
203
204         read_lock(&dev_base_lock);
205         operstate = netdev->operstate;
206         if (!netif_running(netdev))
207                 operstate = IF_OPER_DOWN;
208         read_unlock(&dev_base_lock);
209
210         if (operstate >= ARRAY_SIZE(operstates))
211                 return -EINVAL; /* should not happen */
212
213         return sprintf(buf, "%s\n", operstates[operstate]);
214 }
215
216 /* read-write attributes */
217 NETDEVICE_SHOW(mtu, fmt_dec);
218
219 static int change_mtu(struct net_device *net, unsigned long new_mtu)
220 {
221         return dev_set_mtu(net, (int) new_mtu);
222 }
223
224 static ssize_t store_mtu(struct device *dev, struct device_attribute *attr,
225                          const char *buf, size_t len)
226 {
227         return netdev_store(dev, attr, buf, len, change_mtu);
228 }
229
230 NETDEVICE_SHOW(flags, fmt_hex);
231
232 static int change_flags(struct net_device *net, unsigned long new_flags)
233 {
234         return dev_change_flags(net, (unsigned int) new_flags);
235 }
236
237 static ssize_t store_flags(struct device *dev, struct device_attribute *attr,
238                            const char *buf, size_t len)
239 {
240         return netdev_store(dev, attr, buf, len, change_flags);
241 }
242
243 NETDEVICE_SHOW(tx_queue_len, fmt_ulong);
244
245 static int change_tx_queue_len(struct net_device *net, unsigned long new_len)
246 {
247         net->tx_queue_len = new_len;
248         return 0;
249 }
250
251 static ssize_t store_tx_queue_len(struct device *dev,
252                                   struct device_attribute *attr,
253                                   const char *buf, size_t len)
254 {
255         return netdev_store(dev, attr, buf, len, change_tx_queue_len);
256 }
257
258 static ssize_t store_ifalias(struct device *dev, struct device_attribute *attr,
259                              const char *buf, size_t len)
260 {
261         struct net_device *netdev = to_net_dev(dev);
262         size_t count = len;
263         ssize_t ret;
264
265         if (!capable(CAP_NET_ADMIN))
266                 return -EPERM;
267
268         /* ignore trailing newline */
269         if (len > 0 && buf[len - 1] == '\n')
270                 --count;
271
272         if (!rtnl_trylock())
273                 return restart_syscall();
274         ret = dev_set_alias(netdev, buf, count);
275         rtnl_unlock();
276
277         return ret < 0 ? ret : len;
278 }
279
280 static ssize_t show_ifalias(struct device *dev,
281                             struct device_attribute *attr, char *buf)
282 {
283         const struct net_device *netdev = to_net_dev(dev);
284         ssize_t ret = 0;
285
286         if (!rtnl_trylock())
287                 return restart_syscall();
288         if (netdev->ifalias)
289                 ret = sprintf(buf, "%s\n", netdev->ifalias);
290         rtnl_unlock();
291         return ret;
292 }
293
294 NETDEVICE_SHOW(group, fmt_dec);
295
296 static int change_group(struct net_device *net, unsigned long new_group)
297 {
298         dev_set_group(net, (int) new_group);
299         return 0;
300 }
301
302 static ssize_t store_group(struct device *dev, struct device_attribute *attr,
303                            const char *buf, size_t len)
304 {
305         return netdev_store(dev, attr, buf, len, change_group);
306 }
307
308 static struct device_attribute net_class_attributes[] = {
309 __ATTR(addr_assign_type, S_IRUGO, show_addr_assign_type, NULL), 309 __ATTR(addr_assign_type, S_IRUGO, show_addr_assign_type, NULL),
310 __ATTR(addr_len, S_IRUGO, show_addr_len, NULL), 310 __ATTR(addr_len, S_IRUGO, show_addr_len, NULL),
311 __ATTR(dev_id, S_IRUGO, show_dev_id, NULL), 311 __ATTR(dev_id, S_IRUGO, show_dev_id, NULL),
312 __ATTR(ifalias, S_IRUGO | S_IWUSR, show_ifalias, store_ifalias), 312 __ATTR(ifalias, S_IRUGO | S_IWUSR, show_ifalias, store_ifalias),
313 __ATTR(iflink, S_IRUGO, show_iflink, NULL), 313 __ATTR(iflink, S_IRUGO, show_iflink, NULL),
314 __ATTR(ifindex, S_IRUGO, show_ifindex, NULL), 314 __ATTR(ifindex, S_IRUGO, show_ifindex, NULL),
315 __ATTR(type, S_IRUGO, show_type, NULL), 315 __ATTR(type, S_IRUGO, show_type, NULL),
316 __ATTR(link_mode, S_IRUGO, show_link_mode, NULL), 316 __ATTR(link_mode, S_IRUGO, show_link_mode, NULL),
317 __ATTR(address, S_IRUGO, show_address, NULL), 317 __ATTR(address, S_IRUGO, show_address, NULL),
318 __ATTR(broadcast, S_IRUGO, show_broadcast, NULL), 318 __ATTR(broadcast, S_IRUGO, show_broadcast, NULL),
319 __ATTR(carrier, S_IRUGO, show_carrier, NULL), 319 __ATTR(carrier, S_IRUGO, show_carrier, NULL),
320 __ATTR(speed, S_IRUGO, show_speed, NULL), 320 __ATTR(speed, S_IRUGO, show_speed, NULL),
321 __ATTR(duplex, S_IRUGO, show_duplex, NULL), 321 __ATTR(duplex, S_IRUGO, show_duplex, NULL),
322 __ATTR(dormant, S_IRUGO, show_dormant, NULL), 322 __ATTR(dormant, S_IRUGO, show_dormant, NULL),
323 __ATTR(operstate, S_IRUGO, show_operstate, NULL), 323 __ATTR(operstate, S_IRUGO, show_operstate, NULL),
324 __ATTR(mtu, S_IRUGO | S_IWUSR, show_mtu, store_mtu), 324 __ATTR(mtu, S_IRUGO | S_IWUSR, show_mtu, store_mtu),
325 __ATTR(flags, S_IRUGO | S_IWUSR, show_flags, store_flags), 325 __ATTR(flags, S_IRUGO | S_IWUSR, show_flags, store_flags),
326 __ATTR(tx_queue_len, S_IRUGO | S_IWUSR, show_tx_queue_len, 326 __ATTR(tx_queue_len, S_IRUGO | S_IWUSR, show_tx_queue_len,
327 store_tx_queue_len), 327 store_tx_queue_len),
328 __ATTR(netdev_group, S_IRUGO | S_IWUSR, show_group, store_group), 328 __ATTR(netdev_group, S_IRUGO | S_IWUSR, show_group, store_group),
329 {} 329 {}
330 }; 330 };
331 331
332 /* Show a given attribute in the statistics group */ 332 /* Show a given attribute in the statistics group */
333 static ssize_t netstat_show(const struct device *d, 333 static ssize_t netstat_show(const struct device *d,
334 struct device_attribute *attr, char *buf, 334 struct device_attribute *attr, char *buf,
335 unsigned long offset) 335 unsigned long offset)
336 { 336 {
337 struct net_device *dev = to_net_dev(d); 337 struct net_device *dev = to_net_dev(d);
338 ssize_t ret = -EINVAL; 338 ssize_t ret = -EINVAL;
339 339
340 WARN_ON(offset > sizeof(struct rtnl_link_stats64) || 340 WARN_ON(offset > sizeof(struct rtnl_link_stats64) ||
341 offset % sizeof(u64) != 0); 341 offset % sizeof(u64) != 0);
342 342
343 read_lock(&dev_base_lock); 343 read_lock(&dev_base_lock);
344 if (dev_isalive(dev)) { 344 if (dev_isalive(dev)) {
345 struct rtnl_link_stats64 temp; 345 struct rtnl_link_stats64 temp;
346 const struct rtnl_link_stats64 *stats = dev_get_stats(dev, &temp); 346 const struct rtnl_link_stats64 *stats = dev_get_stats(dev, &temp);
347 347
348 ret = sprintf(buf, fmt_u64, *(u64 *)(((u8 *) stats) + offset)); 348 ret = sprintf(buf, fmt_u64, *(u64 *)(((u8 *) stats) + offset));
349 } 349 }
350 read_unlock(&dev_base_lock); 350 read_unlock(&dev_base_lock);
351 return ret; 351 return ret;
352 } 352 }
353 353
354 /* generate a read-only statistics attribute */ 354 /* generate a read-only statistics attribute */
355 #define NETSTAT_ENTRY(name) \ 355 #define NETSTAT_ENTRY(name) \
356 static ssize_t show_##name(struct device *d, \ 356 static ssize_t show_##name(struct device *d, \
357 struct device_attribute *attr, char *buf) \ 357 struct device_attribute *attr, char *buf) \
358 { \ 358 { \
359 return netstat_show(d, attr, buf, \ 359 return netstat_show(d, attr, buf, \
360 offsetof(struct rtnl_link_stats64, name)); \ 360 offsetof(struct rtnl_link_stats64, name)); \
361 } \ 361 } \
362 static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL) 362 static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL)
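`NETSTAT_ENTRY()` avoids one hand-written show function per counter: a single generic printer reads a `u64` at a compile-time offset, and the macro stamps out a thin wrapper per field. A userspace analog (struct and names invented for the demo):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct demo_stats { uint64_t rx_packets, tx_packets; };

/* Generic printer: fetch the u64 that lives `off` bytes into the struct. */
static int stat_show(const struct demo_stats *s, char *buf, size_t off)
{
        uint64_t v = *(const uint64_t *)((const uint8_t *)s + off);
        return sprintf(buf, "%llu\n", (unsigned long long)v);
}

/* Per-field wrapper generator, like NETSTAT_ENTRY(). */
#define DEMO_STAT_ENTRY(name)                                         \
        static int show_##name(const struct demo_stats *s, char *buf) \
        { return stat_show(s, buf, offsetof(struct demo_stats, name)); }

DEMO_STAT_ENTRY(rx_packets)
DEMO_STAT_ENTRY(tx_packets)
```

This is why `netstat_show()` can `WARN_ON` offsets that are out of range or not `u64`-aligned: every legitimate caller passes `offsetof(struct rtnl_link_stats64, name)`.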
363 363
364 NETSTAT_ENTRY(rx_packets); 364 NETSTAT_ENTRY(rx_packets);
365 NETSTAT_ENTRY(tx_packets); 365 NETSTAT_ENTRY(tx_packets);
366 NETSTAT_ENTRY(rx_bytes); 366 NETSTAT_ENTRY(rx_bytes);
367 NETSTAT_ENTRY(tx_bytes); 367 NETSTAT_ENTRY(tx_bytes);
368 NETSTAT_ENTRY(rx_errors); 368 NETSTAT_ENTRY(rx_errors);
369 NETSTAT_ENTRY(tx_errors); 369 NETSTAT_ENTRY(tx_errors);
370 NETSTAT_ENTRY(rx_dropped); 370 NETSTAT_ENTRY(rx_dropped);
371 NETSTAT_ENTRY(tx_dropped); 371 NETSTAT_ENTRY(tx_dropped);
372 NETSTAT_ENTRY(multicast); 372 NETSTAT_ENTRY(multicast);
373 NETSTAT_ENTRY(collisions); 373 NETSTAT_ENTRY(collisions);
374 NETSTAT_ENTRY(rx_length_errors); 374 NETSTAT_ENTRY(rx_length_errors);
375 NETSTAT_ENTRY(rx_over_errors); 375 NETSTAT_ENTRY(rx_over_errors);
376 NETSTAT_ENTRY(rx_crc_errors); 376 NETSTAT_ENTRY(rx_crc_errors);
377 NETSTAT_ENTRY(rx_frame_errors); 377 NETSTAT_ENTRY(rx_frame_errors);
378 NETSTAT_ENTRY(rx_fifo_errors); 378 NETSTAT_ENTRY(rx_fifo_errors);
379 NETSTAT_ENTRY(rx_missed_errors); 379 NETSTAT_ENTRY(rx_missed_errors);
380 NETSTAT_ENTRY(tx_aborted_errors); 380 NETSTAT_ENTRY(tx_aborted_errors);
381 NETSTAT_ENTRY(tx_carrier_errors); 381 NETSTAT_ENTRY(tx_carrier_errors);
382 NETSTAT_ENTRY(tx_fifo_errors); 382 NETSTAT_ENTRY(tx_fifo_errors);
383 NETSTAT_ENTRY(tx_heartbeat_errors); 383 NETSTAT_ENTRY(tx_heartbeat_errors);
384 NETSTAT_ENTRY(tx_window_errors); 384 NETSTAT_ENTRY(tx_window_errors);
385 NETSTAT_ENTRY(rx_compressed); 385 NETSTAT_ENTRY(rx_compressed);
386 NETSTAT_ENTRY(tx_compressed); 386 NETSTAT_ENTRY(tx_compressed);
387 387
388 static struct attribute *netstat_attrs[] = { 388 static struct attribute *netstat_attrs[] = {
389 &dev_attr_rx_packets.attr, 389 &dev_attr_rx_packets.attr,
390 &dev_attr_tx_packets.attr, 390 &dev_attr_tx_packets.attr,
391 &dev_attr_rx_bytes.attr, 391 &dev_attr_rx_bytes.attr,
392 &dev_attr_tx_bytes.attr, 392 &dev_attr_tx_bytes.attr,
393 &dev_attr_rx_errors.attr, 393 &dev_attr_rx_errors.attr,
394 &dev_attr_tx_errors.attr, 394 &dev_attr_tx_errors.attr,
395 &dev_attr_rx_dropped.attr, 395 &dev_attr_rx_dropped.attr,
396 &dev_attr_tx_dropped.attr, 396 &dev_attr_tx_dropped.attr,
397 &dev_attr_multicast.attr, 397 &dev_attr_multicast.attr,
398 &dev_attr_collisions.attr, 398 &dev_attr_collisions.attr,
399 &dev_attr_rx_length_errors.attr, 399 &dev_attr_rx_length_errors.attr,
400 &dev_attr_rx_over_errors.attr, 400 &dev_attr_rx_over_errors.attr,
401 &dev_attr_rx_crc_errors.attr, 401 &dev_attr_rx_crc_errors.attr,
402 &dev_attr_rx_frame_errors.attr, 402 &dev_attr_rx_frame_errors.attr,
403 &dev_attr_rx_fifo_errors.attr, 403 &dev_attr_rx_fifo_errors.attr,
404 &dev_attr_rx_missed_errors.attr, 404 &dev_attr_rx_missed_errors.attr,
405 &dev_attr_tx_aborted_errors.attr, 405 &dev_attr_tx_aborted_errors.attr,
406 &dev_attr_tx_carrier_errors.attr, 406 &dev_attr_tx_carrier_errors.attr,
407 &dev_attr_tx_fifo_errors.attr, 407 &dev_attr_tx_fifo_errors.attr,
408 &dev_attr_tx_heartbeat_errors.attr, 408 &dev_attr_tx_heartbeat_errors.attr,
409 &dev_attr_tx_window_errors.attr, 409 &dev_attr_tx_window_errors.attr,
410 &dev_attr_rx_compressed.attr, 410 &dev_attr_rx_compressed.attr,
411 &dev_attr_tx_compressed.attr, 411 &dev_attr_tx_compressed.attr,
412 NULL 412 NULL
413 }; 413 };
414 414
415 415
416 static struct attribute_group netstat_group = { 416 static struct attribute_group netstat_group = {
417 .name = "statistics", 417 .name = "statistics",
418 .attrs = netstat_attrs, 418 .attrs = netstat_attrs,
419 }; 419 };
420
421 #ifdef CONFIG_WIRELESS_EXT_SYSFS
422 /* helper function that does all the locking etc for wireless stats */
423 static ssize_t wireless_show(struct device *d, char *buf,
424 ssize_t (*format)(const struct iw_statistics *,
425 char *))
426 {
427 struct net_device *dev = to_net_dev(d);
428 const struct iw_statistics *iw;
429 ssize_t ret = -EINVAL;
430
431 if (!rtnl_trylock())
432 return restart_syscall();
433 if (dev_isalive(dev)) {
434 iw = get_wireless_stats(dev);
435 if (iw)
436 ret = (*format)(iw, buf);
437 }
438 rtnl_unlock();
439
440 return ret;
441 }
442
443 /* show function template for wireless fields */
444 #define WIRELESS_SHOW(name, field, format_string) \
445 static ssize_t format_iw_##name(const struct iw_statistics *iw, char *buf) \
446 { \
447 return sprintf(buf, format_string, iw->field); \
448 } \
449 static ssize_t show_iw_##name(struct device *d, \
450 struct device_attribute *attr, char *buf) \
451 { \
452 return wireless_show(d, buf, format_iw_##name); \
453 } \
454 static DEVICE_ATTR(name, S_IRUGO, show_iw_##name, NULL)
455
456 WIRELESS_SHOW(status, status, fmt_hex);
457 WIRELESS_SHOW(link, qual.qual, fmt_dec);
458 WIRELESS_SHOW(level, qual.level, fmt_dec);
459 WIRELESS_SHOW(noise, qual.noise, fmt_dec);
460 WIRELESS_SHOW(nwid, discard.nwid, fmt_dec);
461 WIRELESS_SHOW(crypt, discard.code, fmt_dec);
462 WIRELESS_SHOW(fragment, discard.fragment, fmt_dec);
463 WIRELESS_SHOW(misc, discard.misc, fmt_dec);
464 WIRELESS_SHOW(retries, discard.retries, fmt_dec);
465 WIRELESS_SHOW(beacon, miss.beacon, fmt_dec);
466
467 static struct attribute *wireless_attrs[] = {
468 &dev_attr_status.attr,
469 &dev_attr_link.attr,
470 &dev_attr_level.attr,
471 &dev_attr_noise.attr,
472 &dev_attr_nwid.attr,
473 &dev_attr_crypt.attr,
474 &dev_attr_fragment.attr,
475 &dev_attr_retries.attr,
476 &dev_attr_misc.attr,
477 &dev_attr_beacon.attr,
478 NULL
479 };
480
481 static struct attribute_group wireless_group = {
482 .name = "wireless",
483 .attrs = wireless_attrs,
484 };
485 #endif
486 #endif /* CONFIG_SYSFS */ 420 #endif /* CONFIG_SYSFS */
487 421
488 #ifdef CONFIG_RPS 422 #ifdef CONFIG_RPS
489 /* 423 /*
490 * RX queue sysfs structures and functions. 424 * RX queue sysfs structures and functions.
491 */ 425 */
492 struct rx_queue_attribute { 426 struct rx_queue_attribute {
493 struct attribute attr; 427 struct attribute attr;
494 ssize_t (*show)(struct netdev_rx_queue *queue, 428 ssize_t (*show)(struct netdev_rx_queue *queue,
495 struct rx_queue_attribute *attr, char *buf); 429 struct rx_queue_attribute *attr, char *buf);
496 ssize_t (*store)(struct netdev_rx_queue *queue, 430 ssize_t (*store)(struct netdev_rx_queue *queue,
497 struct rx_queue_attribute *attr, const char *buf, size_t len); 431 struct rx_queue_attribute *attr, const char *buf, size_t len);
498 }; 432 };
499 #define to_rx_queue_attr(_attr) container_of(_attr, \ 433 #define to_rx_queue_attr(_attr) container_of(_attr, \
500 struct rx_queue_attribute, attr) 434 struct rx_queue_attribute, attr)
501 435
502 #define to_rx_queue(obj) container_of(obj, struct netdev_rx_queue, kobj) 436 #define to_rx_queue(obj) container_of(obj, struct netdev_rx_queue, kobj)
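Both `to_rx_queue_attr()` and `to_rx_queue()` are instances of the kernel's `container_of()` idiom: given a pointer to a member, subtract the member's offset to recover the enclosing structure. A self-contained re-derivation (`my_container_of` and the demo structs are invented names):

```c
#include <assert.h>
#include <stddef.h>

/* Recover the enclosing struct from a pointer to one of its members. */
#define my_container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct attr { int mode; };
struct queue_attr {
        long cookie;
        struct attr attr;   /* embedded member, as in rx_queue_attribute */
};
```

This is what lets the generic `sysfs_ops` callbacks, which only see a `struct attribute *`, find the typed attribute and queue they belong to.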
503 437
504 static ssize_t rx_queue_attr_show(struct kobject *kobj, struct attribute *attr, 438 static ssize_t rx_queue_attr_show(struct kobject *kobj, struct attribute *attr,
505 char *buf) 439 char *buf)
506 { 440 {
507 struct rx_queue_attribute *attribute = to_rx_queue_attr(attr); 441 struct rx_queue_attribute *attribute = to_rx_queue_attr(attr);
508 struct netdev_rx_queue *queue = to_rx_queue(kobj); 442 struct netdev_rx_queue *queue = to_rx_queue(kobj);
509 443
510 if (!attribute->show) 444 if (!attribute->show)
511 return -EIO; 445 return -EIO;
512 446
513 return attribute->show(queue, attribute, buf); 447 return attribute->show(queue, attribute, buf);
514 } 448 }
515 449
516 static ssize_t rx_queue_attr_store(struct kobject *kobj, struct attribute *attr, 450 static ssize_t rx_queue_attr_store(struct kobject *kobj, struct attribute *attr,
517 const char *buf, size_t count) 451 const char *buf, size_t count)
518 { 452 {
519 struct rx_queue_attribute *attribute = to_rx_queue_attr(attr); 453 struct rx_queue_attribute *attribute = to_rx_queue_attr(attr);
520 struct netdev_rx_queue *queue = to_rx_queue(kobj); 454 struct netdev_rx_queue *queue = to_rx_queue(kobj);
521 455
522 if (!attribute->store) 456 if (!attribute->store)
523 return -EIO; 457 return -EIO;
524 458
525 return attribute->store(queue, attribute, buf, count); 459 return attribute->store(queue, attribute, buf, count);
526 } 460 }
527 461
528 static const struct sysfs_ops rx_queue_sysfs_ops = { 462 static const struct sysfs_ops rx_queue_sysfs_ops = {
529 .show = rx_queue_attr_show, 463 .show = rx_queue_attr_show,
530 .store = rx_queue_attr_store, 464 .store = rx_queue_attr_store,
531 }; 465 };
532 466
533 static ssize_t show_rps_map(struct netdev_rx_queue *queue, 467 static ssize_t show_rps_map(struct netdev_rx_queue *queue,
534 struct rx_queue_attribute *attribute, char *buf) 468 struct rx_queue_attribute *attribute, char *buf)
535 { 469 {
536 struct rps_map *map; 470 struct rps_map *map;
537 cpumask_var_t mask; 471 cpumask_var_t mask;
538 size_t len = 0; 472 size_t len = 0;
539 int i; 473 int i;
540 474
541 if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) 475 if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
542 return -ENOMEM; 476 return -ENOMEM;
543 477
544 rcu_read_lock(); 478 rcu_read_lock();
545 map = rcu_dereference(queue->rps_map); 479 map = rcu_dereference(queue->rps_map);
546 if (map) 480 if (map)
547 for (i = 0; i < map->len; i++) 481 for (i = 0; i < map->len; i++)
548 cpumask_set_cpu(map->cpus[i], mask); 482 cpumask_set_cpu(map->cpus[i], mask);
549 483
550 len += cpumask_scnprintf(buf + len, PAGE_SIZE, mask); 484 len += cpumask_scnprintf(buf + len, PAGE_SIZE, mask);
551 if (PAGE_SIZE - len < 3) { 485 if (PAGE_SIZE - len < 3) {
552 rcu_read_unlock(); 486 rcu_read_unlock();
553 free_cpumask_var(mask); 487 free_cpumask_var(mask);
554 return -EINVAL; 488 return -EINVAL;
555 } 489 }
556 rcu_read_unlock(); 490 rcu_read_unlock();
557 491
558 free_cpumask_var(mask); 492 free_cpumask_var(mask);
559 len += sprintf(buf + len, "\n"); 493 len += sprintf(buf + len, "\n");
560 return len; 494 return len;
561 } 495 }
562 496
563 static ssize_t store_rps_map(struct netdev_rx_queue *queue, 497 static ssize_t store_rps_map(struct netdev_rx_queue *queue,
564 struct rx_queue_attribute *attribute, 498 struct rx_queue_attribute *attribute,
565 const char *buf, size_t len) 499 const char *buf, size_t len)
566 { 500 {
567 struct rps_map *old_map, *map; 501 struct rps_map *old_map, *map;
568 cpumask_var_t mask; 502 cpumask_var_t mask;
569 int err, cpu, i; 503 int err, cpu, i;
570 static DEFINE_SPINLOCK(rps_map_lock); 504 static DEFINE_SPINLOCK(rps_map_lock);
571 505
572 if (!capable(CAP_NET_ADMIN)) 506 if (!capable(CAP_NET_ADMIN))
573 return -EPERM; 507 return -EPERM;
574 508
575 if (!alloc_cpumask_var(&mask, GFP_KERNEL)) 509 if (!alloc_cpumask_var(&mask, GFP_KERNEL))
576 return -ENOMEM; 510 return -ENOMEM;
577 511
578 err = bitmap_parse(buf, len, cpumask_bits(mask), nr_cpumask_bits); 512 err = bitmap_parse(buf, len, cpumask_bits(mask), nr_cpumask_bits);
579 if (err) { 513 if (err) {
580 free_cpumask_var(mask); 514 free_cpumask_var(mask);
581 return err; 515 return err;
582 } 516 }
583 517
584 map = kzalloc(max_t(unsigned int, 518 map = kzalloc(max_t(unsigned int,
585 RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES), 519 RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES),
586 GFP_KERNEL); 520 GFP_KERNEL);
587 if (!map) { 521 if (!map) {
588 free_cpumask_var(mask); 522 free_cpumask_var(mask);
589 return -ENOMEM; 523 return -ENOMEM;
590 } 524 }
591 525
592 i = 0; 526 i = 0;
593 for_each_cpu_and(cpu, mask, cpu_online_mask) 527 for_each_cpu_and(cpu, mask, cpu_online_mask)
594 map->cpus[i++] = cpu; 528 map->cpus[i++] = cpu;
595 529
596 if (i) 530 if (i)
597 map->len = i; 531 map->len = i;
598 else { 532 else {
599 kfree(map); 533 kfree(map);
600 map = NULL; 534 map = NULL;
601 } 535 }
602 536
603 spin_lock(&rps_map_lock); 537 spin_lock(&rps_map_lock);
604 old_map = rcu_dereference_protected(queue->rps_map, 538 old_map = rcu_dereference_protected(queue->rps_map,
605 lockdep_is_held(&rps_map_lock)); 539 lockdep_is_held(&rps_map_lock));
606 rcu_assign_pointer(queue->rps_map, map); 540 rcu_assign_pointer(queue->rps_map, map);
607 spin_unlock(&rps_map_lock); 541 spin_unlock(&rps_map_lock);
608 542
609 if (map) 543 if (map)
610 static_key_slow_inc(&rps_needed); 544 static_key_slow_inc(&rps_needed);
611 if (old_map) { 545 if (old_map) {
612 kfree_rcu(old_map, rcu); 546 kfree_rcu(old_map, rcu);
613 static_key_slow_dec(&rps_needed); 547 static_key_slow_dec(&rps_needed);
614 } 548 }
615 free_cpumask_var(mask); 549 free_cpumask_var(mask);
616 return len; 550 return len;
617 } 551 }
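The `for_each_cpu_and()` loop in `store_rps_map()` packs the CPUs that are both requested and online into a dense `map->cpus[]` array. A sketch of that step over plain bitmasks (function name invented; the kernel uses `cpumask_t`, not a single `unsigned long`):

```c
#include <assert.h>

/* Pack the set bit positions of (mask & online) into cpus[], returning
 * the count -- the analog of filling map->cpus[] and map->len. */
static int build_cpu_list(unsigned long mask, unsigned long online,
                          int *cpus, int max)
{
        int cpu, n = 0;
        unsigned long both = mask & online;

        for (cpu = 0; cpu < max; cpu++)
                if (both & (1UL << cpu))
                        cpus[n++] = cpu;
        return n;       /* 0 means "no map": store_rps_map() frees it */
}
```

A zero count is why the store path can end up publishing a NULL map and decrementing `rps_needed`.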
618 552
619 static ssize_t show_rps_dev_flow_table_cnt(struct netdev_rx_queue *queue, 553 static ssize_t show_rps_dev_flow_table_cnt(struct netdev_rx_queue *queue,
620 struct rx_queue_attribute *attr, 554 struct rx_queue_attribute *attr,
621 char *buf) 555 char *buf)
622 { 556 {
623 struct rps_dev_flow_table *flow_table; 557 struct rps_dev_flow_table *flow_table;
624 unsigned long val = 0; 558 unsigned long val = 0;
625 559
626 rcu_read_lock(); 560 rcu_read_lock();
627 flow_table = rcu_dereference(queue->rps_flow_table); 561 flow_table = rcu_dereference(queue->rps_flow_table);
628 if (flow_table) 562 if (flow_table)
629 val = (unsigned long)flow_table->mask + 1; 563 val = (unsigned long)flow_table->mask + 1;
630 rcu_read_unlock(); 564 rcu_read_unlock();
631 565
632 return sprintf(buf, "%lu\n", val); 566 return sprintf(buf, "%lu\n", val);
633 } 567 }
634 568
635 static void rps_dev_flow_table_release_work(struct work_struct *work) 569 static void rps_dev_flow_table_release_work(struct work_struct *work)
636 { 570 {
637 struct rps_dev_flow_table *table = container_of(work, 571 struct rps_dev_flow_table *table = container_of(work,
638 struct rps_dev_flow_table, free_work); 572 struct rps_dev_flow_table, free_work);
639 573
640 vfree(table); 574 vfree(table);
641 } 575 }
642 576
643 static void rps_dev_flow_table_release(struct rcu_head *rcu) 577 static void rps_dev_flow_table_release(struct rcu_head *rcu)
644 { 578 {
645 struct rps_dev_flow_table *table = container_of(rcu, 579 struct rps_dev_flow_table *table = container_of(rcu,
646 struct rps_dev_flow_table, rcu); 580 struct rps_dev_flow_table, rcu);
647 581
648 INIT_WORK(&table->free_work, rps_dev_flow_table_release_work); 582 INIT_WORK(&table->free_work, rps_dev_flow_table_release_work);
649 schedule_work(&table->free_work); 583 schedule_work(&table->free_work);
650 } 584 }
651 585
652 static ssize_t store_rps_dev_flow_table_cnt(struct netdev_rx_queue *queue, 586 static ssize_t store_rps_dev_flow_table_cnt(struct netdev_rx_queue *queue,
653 struct rx_queue_attribute *attr, 587 struct rx_queue_attribute *attr,
654 const char *buf, size_t len) 588 const char *buf, size_t len)
655 { 589 {
656 unsigned long mask, count; 590 unsigned long mask, count;
657 struct rps_dev_flow_table *table, *old_table; 591 struct rps_dev_flow_table *table, *old_table;
658 static DEFINE_SPINLOCK(rps_dev_flow_lock); 592 static DEFINE_SPINLOCK(rps_dev_flow_lock);
659 int rc; 593 int rc;
660 594
661 if (!capable(CAP_NET_ADMIN)) 595 if (!capable(CAP_NET_ADMIN))
662 return -EPERM; 596 return -EPERM;
663 597
664 rc = kstrtoul(buf, 0, &count); 598 rc = kstrtoul(buf, 0, &count);
665 if (rc < 0) 599 if (rc < 0)
666 return rc; 600 return rc;
667 601
668 if (count) { 602 if (count) {
669 mask = count - 1; 603 mask = count - 1;
670 /* mask = roundup_pow_of_two(count) - 1; 604 /* mask = roundup_pow_of_two(count) - 1;
671 * without overflows... 605 * without overflows...
672 */ 606 */
673 while ((mask | (mask >> 1)) != mask) 607 while ((mask | (mask >> 1)) != mask)
674 mask |= (mask >> 1); 608 mask |= (mask >> 1);
675 /* On 64 bit arches, must check mask fits in table->mask (u32), 609 /* On 64 bit arches, must check mask fits in table->mask (u32),
676 * and on 32bit arches, must check RPS_DEV_FLOW_TABLE_SIZE(mask + 1) 610 * and on 32bit arches, must check RPS_DEV_FLOW_TABLE_SIZE(mask + 1)
677 * doesn't overflow. 611 * doesn't overflow.
678 */ 612 */
679 #if BITS_PER_LONG > 32 613 #if BITS_PER_LONG > 32
680 if (mask > (unsigned long)(u32)mask) 614 if (mask > (unsigned long)(u32)mask)
681 return -EINVAL; 615 return -EINVAL;
682 #else 616 #else
683 if (mask > (ULONG_MAX - RPS_DEV_FLOW_TABLE_SIZE(1)) 617 if (mask > (ULONG_MAX - RPS_DEV_FLOW_TABLE_SIZE(1))
684 / sizeof(struct rps_dev_flow)) { 618 / sizeof(struct rps_dev_flow)) {
685 /* Enforce a limit to prevent overflow */ 619 /* Enforce a limit to prevent overflow */
686 return -EINVAL; 620 return -EINVAL;
687 } 621 }
688 #endif 622 #endif
689 table = vmalloc(RPS_DEV_FLOW_TABLE_SIZE(mask + 1)); 623 table = vmalloc(RPS_DEV_FLOW_TABLE_SIZE(mask + 1));
690 if (!table) 624 if (!table)
691 return -ENOMEM; 625 return -ENOMEM;
692 626
693 table->mask = mask; 627 table->mask = mask;
694 for (count = 0; count <= mask; count++) 628 for (count = 0; count <= mask; count++)
695 table->flows[count].cpu = RPS_NO_CPU; 629 table->flows[count].cpu = RPS_NO_CPU;
696 } else 630 } else
697 table = NULL; 631 table = NULL;
698 632
699 spin_lock(&rps_dev_flow_lock); 633 spin_lock(&rps_dev_flow_lock);
700 old_table = rcu_dereference_protected(queue->rps_flow_table, 634 old_table = rcu_dereference_protected(queue->rps_flow_table,
701 lockdep_is_held(&rps_dev_flow_lock)); 635 lockdep_is_held(&rps_dev_flow_lock));
702 rcu_assign_pointer(queue->rps_flow_table, table); 636 rcu_assign_pointer(queue->rps_flow_table, table);
703 spin_unlock(&rps_dev_flow_lock); 637 spin_unlock(&rps_dev_flow_lock);
704 638
705 if (old_table) 639 if (old_table)
706 call_rcu(&old_table->rcu, rps_dev_flow_table_release); 640 call_rcu(&old_table->rcu, rps_dev_flow_table_release);
707 641
708 return len; 642 return len;
709 } 643 }
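The mask computation above is an overflow-safe `roundup_pow_of_two(count) - 1`: starting from `count - 1`, the loop smears the highest set bit downward until the value is of the form 2^k - 1. Extracted as a standalone function (name invented):

```c
#include <assert.h>

/* Smallest mask of the form 2^k - 1 with mask + 1 >= count,
 * computed without ever forming a value larger than the result. */
static unsigned long mask_roundup(unsigned long count)
{
        unsigned long mask = count - 1;

        /* smear: stop once OR-ing in mask>>1 changes nothing */
        while ((mask | (mask >> 1)) != mask)
                mask |= (mask >> 1);
        return mask;
}
```

`mask + 1` is then the power-of-two table size, which is why the subsequent checks only need to bound `mask` itself.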
710 644
711 static struct rx_queue_attribute rps_cpus_attribute = 645 static struct rx_queue_attribute rps_cpus_attribute =
712 __ATTR(rps_cpus, S_IRUGO | S_IWUSR, show_rps_map, store_rps_map); 646 __ATTR(rps_cpus, S_IRUGO | S_IWUSR, show_rps_map, store_rps_map);
713 647
714 648
715 static struct rx_queue_attribute rps_dev_flow_table_cnt_attribute = 649 static struct rx_queue_attribute rps_dev_flow_table_cnt_attribute =
716 __ATTR(rps_flow_cnt, S_IRUGO | S_IWUSR, 650 __ATTR(rps_flow_cnt, S_IRUGO | S_IWUSR,
717 show_rps_dev_flow_table_cnt, store_rps_dev_flow_table_cnt); 651 show_rps_dev_flow_table_cnt, store_rps_dev_flow_table_cnt);
718 652
719 static struct attribute *rx_queue_default_attrs[] = { 653 static struct attribute *rx_queue_default_attrs[] = {
720 &rps_cpus_attribute.attr, 654 &rps_cpus_attribute.attr,
721 &rps_dev_flow_table_cnt_attribute.attr, 655 &rps_dev_flow_table_cnt_attribute.attr,
722 NULL 656 NULL
723 }; 657 };
724 658
725 static void rx_queue_release(struct kobject *kobj) 659 static void rx_queue_release(struct kobject *kobj)
726 { 660 {
727 struct netdev_rx_queue *queue = to_rx_queue(kobj); 661 struct netdev_rx_queue *queue = to_rx_queue(kobj);
728 struct rps_map *map; 662 struct rps_map *map;
729 struct rps_dev_flow_table *flow_table; 663 struct rps_dev_flow_table *flow_table;
730 664
731 665
732 map = rcu_dereference_protected(queue->rps_map, 1); 666 map = rcu_dereference_protected(queue->rps_map, 1);
733 if (map) { 667 if (map) {
734 RCU_INIT_POINTER(queue->rps_map, NULL); 668 RCU_INIT_POINTER(queue->rps_map, NULL);
735 kfree_rcu(map, rcu); 669 kfree_rcu(map, rcu);
736 } 670 }
737 671
738 flow_table = rcu_dereference_protected(queue->rps_flow_table, 1); 672 flow_table = rcu_dereference_protected(queue->rps_flow_table, 1);
739 if (flow_table) { 673 if (flow_table) {
740 RCU_INIT_POINTER(queue->rps_flow_table, NULL); 674 RCU_INIT_POINTER(queue->rps_flow_table, NULL);
741 call_rcu(&flow_table->rcu, rps_dev_flow_table_release); 675 call_rcu(&flow_table->rcu, rps_dev_flow_table_release);
742 } 676 }
743 677
744 memset(kobj, 0, sizeof(*kobj)); 678 memset(kobj, 0, sizeof(*kobj));
745 dev_put(queue->dev); 679 dev_put(queue->dev);
746 } 680 }
747 681
748 static struct kobj_type rx_queue_ktype = { 682 static struct kobj_type rx_queue_ktype = {
749 .sysfs_ops = &rx_queue_sysfs_ops, 683 .sysfs_ops = &rx_queue_sysfs_ops,
750 .release = rx_queue_release, 684 .release = rx_queue_release,
751 .default_attrs = rx_queue_default_attrs, 685 .default_attrs = rx_queue_default_attrs,
752 }; 686 };
753 687
754 static int rx_queue_add_kobject(struct net_device *net, int index) 688 static int rx_queue_add_kobject(struct net_device *net, int index)
755 { 689 {
756 struct netdev_rx_queue *queue = net->_rx + index; 690 struct netdev_rx_queue *queue = net->_rx + index;
757 struct kobject *kobj = &queue->kobj; 691 struct kobject *kobj = &queue->kobj;
758 int error = 0; 692 int error = 0;
759 693
760 kobj->kset = net->queues_kset; 694 kobj->kset = net->queues_kset;
761 error = kobject_init_and_add(kobj, &rx_queue_ktype, NULL, 695 error = kobject_init_and_add(kobj, &rx_queue_ktype, NULL,
762 "rx-%u", index); 696 "rx-%u", index);
763 if (error) { 697 if (error) {
764 kobject_put(kobj); 698 kobject_put(kobj);
765 return error; 699 return error;
766 } 700 }
767 701
768 kobject_uevent(kobj, KOBJ_ADD); 702 kobject_uevent(kobj, KOBJ_ADD);
769 dev_hold(queue->dev); 703 dev_hold(queue->dev);
770 704
771 return error; 705 return error;
772 } 706 }
773 #endif /* CONFIG_RPS */ 707 #endif /* CONFIG_RPS */
774 708
775 int 709 int
776 net_rx_queue_update_kobjects(struct net_device *net, int old_num, int new_num) 710 net_rx_queue_update_kobjects(struct net_device *net, int old_num, int new_num)
777 { 711 {
778 #ifdef CONFIG_RPS 712 #ifdef CONFIG_RPS
779 int i; 713 int i;
780 int error = 0; 714 int error = 0;
781 715
782 for (i = old_num; i < new_num; i++) { 716 for (i = old_num; i < new_num; i++) {
783 error = rx_queue_add_kobject(net, i); 717 error = rx_queue_add_kobject(net, i);
784 if (error) { 718 if (error) {
785 new_num = old_num; 719 new_num = old_num;
786 break; 720 break;
787 } 721 }
788 } 722 }
789 723
790 while (--i >= new_num) 724 while (--i >= new_num)
791 kobject_put(&net->_rx[i].kobj); 725 kobject_put(&net->_rx[i].kobj);
792 726
793 return error; 727 return error;
794 #else 728 #else
795 return 0; 729 return 0;
796 #endif 730 #endif
797 } 731 }
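The loop structure in `net_rx_queue_update_kobjects()` handles growth, shrink, and mid-growth failure with one unwind: on failure, clamping `new_num` back to `old_num` makes the trailing `while (--i >= new_num)` release everything just added, and on shrink the same `while` releases the excess queues. A userspace model of that control flow (all names invented; `add_kobj` simulates `kobject_init_and_add()` failing at a chosen index):

```c
#include <assert.h>

#define MAXQ 8
static int live[MAXQ];          /* 1 = kobject registered */

static int add_kobj(int i, int fail_at)
{
        if (i == fail_at)
                return -1;      /* simulated registration failure */
        live[i] = 1;
        return 0;
}

static int update_kobjects(int old_num, int new_num, int fail_at)
{
        int i, error = 0;

        for (i = old_num; i < new_num; i++) {
                error = add_kobj(i, fail_at);
                if (error) {
                        new_num = old_num;  /* unwind everything we added */
                        break;
                }
        }
        while (--i >= new_num)
                live[i] = 0;    /* kobject_put() analog */
        return error;
}
```

After a failure no partially-registered queue survives, so the caller sees either the old state or the new one.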
798 732
799 #ifdef CONFIG_SYSFS 733 #ifdef CONFIG_SYSFS
800 /* 734 /*
801 * netdev_queue sysfs structures and functions. 735 * netdev_queue sysfs structures and functions.
802 */ 736 */
803 struct netdev_queue_attribute { 737 struct netdev_queue_attribute {
804 struct attribute attr; 738 struct attribute attr;
805 ssize_t (*show)(struct netdev_queue *queue, 739 ssize_t (*show)(struct netdev_queue *queue,
806 struct netdev_queue_attribute *attr, char *buf); 740 struct netdev_queue_attribute *attr, char *buf);
807 ssize_t (*store)(struct netdev_queue *queue, 741 ssize_t (*store)(struct netdev_queue *queue,
808 struct netdev_queue_attribute *attr, const char *buf, size_t len); 742 struct netdev_queue_attribute *attr, const char *buf, size_t len);
809 }; 743 };
810 #define to_netdev_queue_attr(_attr) container_of(_attr, \ 744 #define to_netdev_queue_attr(_attr) container_of(_attr, \
811 struct netdev_queue_attribute, attr) 745 struct netdev_queue_attribute, attr)
812 746
813 #define to_netdev_queue(obj) container_of(obj, struct netdev_queue, kobj) 747 #define to_netdev_queue(obj) container_of(obj, struct netdev_queue, kobj)
814 748
815 static ssize_t netdev_queue_attr_show(struct kobject *kobj, 749 static ssize_t netdev_queue_attr_show(struct kobject *kobj,
816 struct attribute *attr, char *buf) 750 struct attribute *attr, char *buf)
817 { 751 {
818 struct netdev_queue_attribute *attribute = to_netdev_queue_attr(attr); 752 struct netdev_queue_attribute *attribute = to_netdev_queue_attr(attr);
819 struct netdev_queue *queue = to_netdev_queue(kobj); 753 struct netdev_queue *queue = to_netdev_queue(kobj);
820 754
821 if (!attribute->show) 755 if (!attribute->show)
822 return -EIO; 756 return -EIO;
823 757
824 return attribute->show(queue, attribute, buf); 758 return attribute->show(queue, attribute, buf);
825 } 759 }
826 760
static ssize_t netdev_queue_attr_store(struct kobject *kobj,
				       struct attribute *attr,
				       const char *buf, size_t count)
{
	struct netdev_queue_attribute *attribute = to_netdev_queue_attr(attr);
	struct netdev_queue *queue = to_netdev_queue(kobj);

	if (!attribute->store)
		return -EIO;

	return attribute->store(queue, attribute, buf, count);
}

static const struct sysfs_ops netdev_queue_sysfs_ops = {
	.show = netdev_queue_attr_show,
	.store = netdev_queue_attr_store,
};

static ssize_t show_trans_timeout(struct netdev_queue *queue,
				  struct netdev_queue_attribute *attribute,
				  char *buf)
{
	unsigned long trans_timeout;

	spin_lock_irq(&queue->_xmit_lock);
	trans_timeout = queue->trans_timeout;
	spin_unlock_irq(&queue->_xmit_lock);

	return sprintf(buf, "%lu", trans_timeout);
}

static struct netdev_queue_attribute queue_trans_timeout =
	__ATTR(tx_timeout, S_IRUGO, show_trans_timeout, NULL);

#ifdef CONFIG_BQL
/*
 * Byte queue limits sysfs structures and functions.
 */
static ssize_t bql_show(char *buf, unsigned int value)
{
	return sprintf(buf, "%u\n", value);
}

static ssize_t bql_set(const char *buf, const size_t count,
		       unsigned int *pvalue)
{
	unsigned int value;
	int err;

	if (!strcmp(buf, "max") || !strcmp(buf, "max\n"))
		value = DQL_MAX_LIMIT;
	else {
		err = kstrtouint(buf, 10, &value);
		if (err < 0)
			return err;
		if (value > DQL_MAX_LIMIT)
			return -EINVAL;
	}

	*pvalue = value;

	return count;
}

static ssize_t bql_show_hold_time(struct netdev_queue *queue,
				  struct netdev_queue_attribute *attr,
				  char *buf)
{
	struct dql *dql = &queue->dql;

	return sprintf(buf, "%u\n", jiffies_to_msecs(dql->slack_hold_time));
}

static ssize_t bql_set_hold_time(struct netdev_queue *queue,
				 struct netdev_queue_attribute *attribute,
				 const char *buf, size_t len)
{
	struct dql *dql = &queue->dql;
	unsigned int value;
	int err;

	err = kstrtouint(buf, 10, &value);
	if (err < 0)
		return err;

	dql->slack_hold_time = msecs_to_jiffies(value);

	return len;
}

static struct netdev_queue_attribute bql_hold_time_attribute =
	__ATTR(hold_time, S_IRUGO | S_IWUSR, bql_show_hold_time,
	       bql_set_hold_time);

static ssize_t bql_show_inflight(struct netdev_queue *queue,
				 struct netdev_queue_attribute *attr,
				 char *buf)
{
	struct dql *dql = &queue->dql;

	return sprintf(buf, "%u\n", dql->num_queued - dql->num_completed);
}

static struct netdev_queue_attribute bql_inflight_attribute =
	__ATTR(inflight, S_IRUGO, bql_show_inflight, NULL);

#define BQL_ATTR(NAME, FIELD)						\
static ssize_t bql_show_ ## NAME(struct netdev_queue *queue,		\
				 struct netdev_queue_attribute *attr,	\
				 char *buf)				\
{									\
	return bql_show(buf, queue->dql.FIELD);				\
}									\
									\
static ssize_t bql_set_ ## NAME(struct netdev_queue *queue,		\
				struct netdev_queue_attribute *attr,	\
				const char *buf, size_t len)		\
{									\
	return bql_set(buf, len, &queue->dql.FIELD);			\
}									\
									\
static struct netdev_queue_attribute bql_ ## NAME ## _attribute =	\
	__ATTR(NAME, S_IRUGO | S_IWUSR, bql_show_ ## NAME,		\
	       bql_set_ ## NAME);

BQL_ATTR(limit, limit)
BQL_ATTR(limit_max, max_limit)
BQL_ATTR(limit_min, min_limit)

static struct attribute *dql_attrs[] = {
	&bql_limit_attribute.attr,
	&bql_limit_max_attribute.attr,
	&bql_limit_min_attribute.attr,
	&bql_hold_time_attribute.attr,
	&bql_inflight_attribute.attr,
	NULL
};

static struct attribute_group dql_group = {
	.name  = "byte_queue_limits",
	.attrs  = dql_attrs,
};
#endif /* CONFIG_BQL */

#ifdef CONFIG_XPS
static inline unsigned int get_netdev_queue_index(struct netdev_queue *queue)
{
	struct net_device *dev = queue->dev;
	int i;

	for (i = 0; i < dev->num_tx_queues; i++)
		if (queue == &dev->_tx[i])
			break;

	BUG_ON(i >= dev->num_tx_queues);

	return i;
}

static ssize_t show_xps_map(struct netdev_queue *queue,
			    struct netdev_queue_attribute *attribute, char *buf)
{
	struct net_device *dev = queue->dev;
	struct xps_dev_maps *dev_maps;
	cpumask_var_t mask;
	unsigned long index;
	size_t len = 0;
	int i;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	index = get_netdev_queue_index(queue);

	rcu_read_lock();
	dev_maps = rcu_dereference(dev->xps_maps);
	if (dev_maps) {
		for_each_possible_cpu(i) {
			struct xps_map *map =
			    rcu_dereference(dev_maps->cpu_map[i]);
			if (map) {
				int j;
				for (j = 0; j < map->len; j++) {
					if (map->queues[j] == index) {
						cpumask_set_cpu(i, mask);
						break;
					}
				}
			}
		}
	}
	rcu_read_unlock();

	len += cpumask_scnprintf(buf + len, PAGE_SIZE, mask);
	if (PAGE_SIZE - len < 3) {
		free_cpumask_var(mask);
		return -EINVAL;
	}

	free_cpumask_var(mask);
	len += sprintf(buf + len, "\n");
	return len;
}

static DEFINE_MUTEX(xps_map_mutex);
#define xmap_dereference(P)		\
	rcu_dereference_protected((P), lockdep_is_held(&xps_map_mutex))

static void xps_queue_release(struct netdev_queue *queue)
{
	struct net_device *dev = queue->dev;
	struct xps_dev_maps *dev_maps;
	struct xps_map *map;
	unsigned long index;
	int i, pos, nonempty = 0;

	index = get_netdev_queue_index(queue);

	mutex_lock(&xps_map_mutex);
	dev_maps = xmap_dereference(dev->xps_maps);

	if (dev_maps) {
		for_each_possible_cpu(i) {
			map = xmap_dereference(dev_maps->cpu_map[i]);
			if (!map)
				continue;

			for (pos = 0; pos < map->len; pos++)
				if (map->queues[pos] == index)
					break;

			if (pos < map->len) {
				if (map->len > 1)
					map->queues[pos] =
					    map->queues[--map->len];
				else {
					RCU_INIT_POINTER(dev_maps->cpu_map[i],
							 NULL);
					kfree_rcu(map, rcu);
					map = NULL;
				}
			}
			if (map)
				nonempty = 1;
		}

		if (!nonempty) {
			RCU_INIT_POINTER(dev->xps_maps, NULL);
			kfree_rcu(dev_maps, rcu);
		}
	}
	mutex_unlock(&xps_map_mutex);
}

static ssize_t store_xps_map(struct netdev_queue *queue,
			     struct netdev_queue_attribute *attribute,
			     const char *buf, size_t len)
{
	struct net_device *dev = queue->dev;
	cpumask_var_t mask;
	int err, i, cpu, pos, map_len, alloc_len, need_set;
	unsigned long index;
	struct xps_map *map, *new_map;
	struct xps_dev_maps *dev_maps, *new_dev_maps;
	int nonempty = 0;
	int numa_node_id = -2;

	if (!capable(CAP_NET_ADMIN))
		return -EPERM;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	index = get_netdev_queue_index(queue);

	err = bitmap_parse(buf, len, cpumask_bits(mask), nr_cpumask_bits);
	if (err) {
		free_cpumask_var(mask);
		return err;
	}

	new_dev_maps = kzalloc(max_t(unsigned int,
	    XPS_DEV_MAPS_SIZE, L1_CACHE_BYTES), GFP_KERNEL);
	if (!new_dev_maps) {
		free_cpumask_var(mask);
		return -ENOMEM;
	}

	mutex_lock(&xps_map_mutex);

	dev_maps = xmap_dereference(dev->xps_maps);

	for_each_possible_cpu(cpu) {
		map = dev_maps ?
			xmap_dereference(dev_maps->cpu_map[cpu]) : NULL;
		new_map = map;
		if (map) {
			for (pos = 0; pos < map->len; pos++)
				if (map->queues[pos] == index)
					break;
			map_len = map->len;
			alloc_len = map->alloc_len;
		} else
			pos = map_len = alloc_len = 0;

		need_set = cpumask_test_cpu(cpu, mask) && cpu_online(cpu);
#ifdef CONFIG_NUMA
		if (need_set) {
			if (numa_node_id == -2)
				numa_node_id = cpu_to_node(cpu);
			else if (numa_node_id != cpu_to_node(cpu))
				numa_node_id = -1;
		}
#endif
		if (need_set && pos >= map_len) {
			/* Need to add queue to this CPU's map */
			if (map_len >= alloc_len) {
				alloc_len = alloc_len ?
				    2 * alloc_len : XPS_MIN_MAP_ALLOC;
				new_map = kzalloc_node(XPS_MAP_SIZE(alloc_len),
						       GFP_KERNEL,
						       cpu_to_node(cpu));
				if (!new_map)
					goto error;
				new_map->alloc_len = alloc_len;
				for (i = 0; i < map_len; i++)
					new_map->queues[i] = map->queues[i];
				new_map->len = map_len;
			}
			new_map->queues[new_map->len++] = index;
		} else if (!need_set && pos < map_len) {
			/* Need to remove queue from this CPU's map */
			if (map_len > 1)
				new_map->queues[pos] =
				    new_map->queues[--new_map->len];
			else
				new_map = NULL;
		}
		RCU_INIT_POINTER(new_dev_maps->cpu_map[cpu], new_map);
	}

	/* Cleanup old maps */
	for_each_possible_cpu(cpu) {
		map = dev_maps ?
			xmap_dereference(dev_maps->cpu_map[cpu]) : NULL;
		if (map && xmap_dereference(new_dev_maps->cpu_map[cpu]) != map)
			kfree_rcu(map, rcu);
		if (new_dev_maps->cpu_map[cpu])
			nonempty = 1;
	}

	if (nonempty) {
		rcu_assign_pointer(dev->xps_maps, new_dev_maps);
	} else {
		kfree(new_dev_maps);
		RCU_INIT_POINTER(dev->xps_maps, NULL);
	}

	if (dev_maps)
		kfree_rcu(dev_maps, rcu);

	netdev_queue_numa_node_write(queue, (numa_node_id >= 0) ? numa_node_id :
					    NUMA_NO_NODE);

	mutex_unlock(&xps_map_mutex);

	free_cpumask_var(mask);
	return len;

error:
	mutex_unlock(&xps_map_mutex);

	if (new_dev_maps)
		for_each_possible_cpu(i)
			kfree(rcu_dereference_protected(
				new_dev_maps->cpu_map[i],
				1));
	kfree(new_dev_maps);
	free_cpumask_var(mask);
	return -ENOMEM;
}

static struct netdev_queue_attribute xps_cpus_attribute =
    __ATTR(xps_cpus, S_IRUGO | S_IWUSR, show_xps_map, store_xps_map);
#endif /* CONFIG_XPS */

static struct attribute *netdev_queue_default_attrs[] = {
	&queue_trans_timeout.attr,
#ifdef CONFIG_XPS
	&xps_cpus_attribute.attr,
#endif
	NULL
};

static void netdev_queue_release(struct kobject *kobj)
{
	struct netdev_queue *queue = to_netdev_queue(kobj);

#ifdef CONFIG_XPS
	xps_queue_release(queue);
#endif

	memset(kobj, 0, sizeof(*kobj));
	dev_put(queue->dev);
}

static struct kobj_type netdev_queue_ktype = {
	.sysfs_ops = &netdev_queue_sysfs_ops,
	.release = netdev_queue_release,
	.default_attrs = netdev_queue_default_attrs,
};

static int netdev_queue_add_kobject(struct net_device *net, int index)
{
	struct netdev_queue *queue = net->_tx + index;
	struct kobject *kobj = &queue->kobj;
	int error = 0;

	kobj->kset = net->queues_kset;
	error = kobject_init_and_add(kobj, &netdev_queue_ktype, NULL,
				     "tx-%u", index);
	if (error)
		goto exit;

#ifdef CONFIG_BQL
	error = sysfs_create_group(kobj, &dql_group);
	if (error)
		goto exit;
#endif

	kobject_uevent(kobj, KOBJ_ADD);
	dev_hold(queue->dev);

	return 0;
exit:
	kobject_put(kobj);
	return error;
}
#endif /* CONFIG_SYSFS */

int
netdev_queue_update_kobjects(struct net_device *net, int old_num, int new_num)
{
#ifdef CONFIG_SYSFS
	int i;
	int error = 0;

	for (i = old_num; i < new_num; i++) {
		error = netdev_queue_add_kobject(net, i);
		if (error) {
			new_num = old_num;
			break;
		}
	}

	while (--i >= new_num) {
		struct netdev_queue *queue = net->_tx + i;

#ifdef CONFIG_BQL
		sysfs_remove_group(&queue->kobj, &dql_group);
#endif
		kobject_put(&queue->kobj);
	}

	return error;
#else
	return 0;
#endif /* CONFIG_SYSFS */
}

static int register_queue_kobjects(struct net_device *net)
{
	int error = 0, txq = 0, rxq = 0, real_rx = 0, real_tx = 0;

#ifdef CONFIG_SYSFS
	net->queues_kset = kset_create_and_add("queues",
					       NULL, &net->dev.kobj);
	if (!net->queues_kset)
		return -ENOMEM;
#endif

#ifdef CONFIG_RPS
	real_rx = net->real_num_rx_queues;
#endif
	real_tx = net->real_num_tx_queues;

	error = net_rx_queue_update_kobjects(net, 0, real_rx);
	if (error)
		goto error;
	rxq = real_rx;

	error = netdev_queue_update_kobjects(net, 0, real_tx);
	if (error)
		goto error;
	txq = real_tx;

	return 0;

error:
	netdev_queue_update_kobjects(net, txq, 0);
	net_rx_queue_update_kobjects(net, rxq, 0);
	return error;
}

static void remove_queue_kobjects(struct net_device *net)
{
	int real_rx = 0, real_tx = 0;

#ifdef CONFIG_RPS
	real_rx = net->real_num_rx_queues;
#endif
	real_tx = net->real_num_tx_queues;

	net_rx_queue_update_kobjects(net, real_rx, 0);
	netdev_queue_update_kobjects(net, real_tx, 0);
#ifdef CONFIG_SYSFS
	kset_unregister(net->queues_kset);
#endif
}

static void *net_grab_current_ns(void)
{
	struct net *ns = current->nsproxy->net_ns;
#ifdef CONFIG_NET_NS
	if (ns)
		atomic_inc(&ns->passive);
#endif
	return ns;
}

static const void *net_initial_ns(void)
{
	return &init_net;
}

static const void *net_netlink_ns(struct sock *sk)
{
	return sock_net(sk);
}

struct kobj_ns_type_operations net_ns_type_operations = {
	.type = KOBJ_NS_TYPE_NET,
	.grab_current_ns = net_grab_current_ns,
	.netlink_ns = net_netlink_ns,
	.initial_ns = net_initial_ns,
	.drop_ns = net_drop_ns,
};
EXPORT_SYMBOL_GPL(net_ns_type_operations);

#ifdef CONFIG_HOTPLUG
static int netdev_uevent(struct device *d, struct kobj_uevent_env *env)
{
	struct net_device *dev = to_net_dev(d);
	int retval;

	/* pass interface to uevent. */
	retval = add_uevent_var(env, "INTERFACE=%s", dev->name);
	if (retval)
		goto exit;

	/* pass ifindex to uevent.
	 * ifindex is useful as it won't change (interface name may change)
	 * and is what RtNetlink uses natively. */
	retval = add_uevent_var(env, "IFINDEX=%d", dev->ifindex);

exit:
	return retval;
}
#endif

/*
 * netdev_release -- destroy and free a dead device.
 * Called when last reference to device kobject is gone.
 */
static void netdev_release(struct device *d)
{
	struct net_device *dev = to_net_dev(d);

	BUG_ON(dev->reg_state != NETREG_RELEASED);

	kfree(dev->ifalias);
	kfree((char *)dev - dev->padded);
}

static const void *net_namespace(struct device *d)
{
	struct net_device *dev;
	dev = container_of(d, struct net_device, dev);
	return dev_net(dev);
}

static struct class net_class = {
	.name = "net",
	.dev_release = netdev_release,
#ifdef CONFIG_SYSFS
	.dev_attrs = net_class_attributes,
#endif /* CONFIG_SYSFS */
#ifdef CONFIG_HOTPLUG
	.dev_uevent = netdev_uevent,
#endif
	.ns_type = &net_ns_type_operations,
	.namespace = net_namespace,
};

/* Delete sysfs entries but hold kobject reference until after all
 * netdev references are gone.
 */
void netdev_unregister_kobject(struct net_device *net)
{
	struct device *dev = &(net->dev);

	kobject_get(&dev->kobj);

	remove_queue_kobjects(net);

	device_del(dev);
}

/* Create sysfs entries for network device. */
int netdev_register_kobject(struct net_device *net)
{
	struct device *dev = &(net->dev);
	const struct attribute_group **groups = net->sysfs_groups;
	int error = 0;

	device_initialize(dev);
	dev->class = &net_class;
	dev->platform_data = net;
	dev->groups = groups;

	dev_set_name(dev, "%s", net->name);

#ifdef CONFIG_SYSFS
	/* Allow for a device specific group */
	if (*groups)
		groups++;

	*groups++ = &netstat_group;
-#ifdef CONFIG_WIRELESS_EXT_SYSFS
-	if (net->ieee80211_ptr)
-		*groups++ = &wireless_group;
-#ifdef CONFIG_WIRELESS_EXT
-	else if (net->wireless_handlers)
-		*groups++ = &wireless_group;
-#endif
-#endif
#endif /* CONFIG_SYSFS */

	error = device_add(dev);
	if (error)
		return error;

	error = register_queue_kobjects(net);
	if (error) {
		device_del(dev);
		return error;
	}

	return error;
}

int netdev_class_create_file(struct class_attribute *class_attr)
{
	return class_create_file(&net_class, class_attr);
}
EXPORT_SYMBOL(netdev_class_create_file);

void netdev_class_remove_file(struct class_attribute *class_attr)
{
	class_remove_file(&net_class, class_attr);
}
EXPORT_SYMBOL(netdev_class_remove_file);

int netdev_kobject_init(void)
{
	kobj_ns_type_register(&net_ns_type_operations);
	return class_register(&net_class);
}

net/wireless/Kconfig
@@ -1,163 +1,150 @@
 config WIRELESS_EXT
 	bool

 config WEXT_CORE
 	def_bool y
 	depends on CFG80211_WEXT || WIRELESS_EXT

 config WEXT_PROC
 	def_bool y
 	depends on PROC_FS
 	depends on WEXT_CORE

 config WEXT_SPY
 	bool

 config WEXT_PRIV
 	bool

 config CFG80211
 	tristate "cfg80211 - wireless configuration API"
 	depends on RFKILL || !RFKILL
 	---help---
 	  cfg80211 is the Linux wireless LAN (802.11) configuration API.
 	  Enable this if you have a wireless device.

 	  For more information refer to documentation on the wireless wiki:

 	  http://wireless.kernel.org/en/developers/Documentation/cfg80211

 	  When built as a module it will be called cfg80211.

 config NL80211_TESTMODE
 	bool "nl80211 testmode command"
 	depends on CFG80211
 	help
 	  The nl80211 testmode command helps implementing things like
 	  factory calibration or validation tools for wireless chips.

 	  Select this option ONLY for kernels that are specifically
 	  built for such purposes.

 	  Debugging tools that are supposed to end up in the hands of
 	  users should better be implemented with debugfs.

 	  Say N.

 config CFG80211_DEVELOPER_WARNINGS
 	bool "enable developer warnings"
 	depends on CFG80211
 	default n
 	help
 	  This option enables some additional warnings that help
 	  cfg80211 developers and driver developers, but that can
 	  trigger due to races with userspace.

 	  For example, when a driver reports that it was disconnected
 	  from the AP, but the user disconnects manually at the same
 	  time, the warning might trigger spuriously due to races.

 	  Say Y only if you are developing cfg80211 or a driver based
 	  on it (or mac80211).


 config CFG80211_REG_DEBUG
 	bool "cfg80211 regulatory debugging"
 	depends on CFG80211
 	default n
 	---help---
 	  You can enable this if you want to debug regulatory changes.
 	  For more information on cfg80211 regulatory refer to the wireless
 	  wiki:

 	  http://wireless.kernel.org/en/developers/Regulatory

 	  If unsure, say N.

 config CFG80211_DEFAULT_PS
 	bool "enable powersave by default"
 	depends on CFG80211
 	default y
 	help
 	  This option enables powersave mode by default.

 	  If this causes your applications to misbehave you should fix your
 	  applications instead -- they need to register their network
 	  latency requirement, see Documentation/power/pm_qos_interface.txt.

 config CFG80211_DEBUGFS
 	bool "cfg80211 DebugFS entries"
 	depends on CFG80211
 	depends on DEBUG_FS
 	---help---
 	  You can enable this if you want to debugfs entries for cfg80211.

 	  If unsure, say N.

 config CFG80211_INTERNAL_REGDB
 	bool "use statically compiled regulatory rules database" if EXPERT
 	default n
 	depends on CFG80211
 	---help---
 	  This option generates an internal data structure representing
 	  the wireless regulatory rules described in net/wireless/db.txt
 	  and includes code to query that database. This is an alternative
 	  to using CRDA for defining regulatory rules for the kernel.

 	  For details see:

 	  http://wireless.kernel.org/en/developers/Regulatory

 	  Most distributions have a CRDA package. So if unsure, say N.

 config CFG80211_WEXT
 	bool "cfg80211 wireless extensions compatibility"
 	depends on CFG80211
 	select WEXT_CORE
 	default y
 	help
 	  Enable this option if you need old userspace for wireless
 	  extensions with cfg80211-based drivers.

-config WIRELESS_EXT_SYSFS
-	bool "Wireless extensions sysfs files"
-	depends on WEXT_CORE && SYSFS
-	help
-	  This option enables the deprecated wireless statistics
-	  files in /sys/class/net/*/wireless/. The same information
-	  is available via the ioctls as well.
-
-	  Say N. If you know you have ancient tools requiring it,
-	  like very old versions of hal (prior to 0.5.12 release),
-	  say Y and update the tools as soon as possible as this
-	  option will be removed soon.
-
 config LIB80211
 	tristate "Common routines for IEEE802.11 drivers"
 	default n
 	help
 	  This options enables a library of common routines used
 	  by IEEE802.11 wireless LAN drivers.

 	  Drivers should select this themselves if needed. Say Y if
 	  you want this built into your kernel.

 config LIB80211_CRYPT_WEP
 	tristate

 config LIB80211_CRYPT_CCMP
 	tristate

 config LIB80211_CRYPT_TKIP
 	tristate

 config LIB80211_DEBUG
 	bool "lib80211 debugging messages"
 	depends on LIB80211
 	default n
 	---help---
 	  You can enable this if you want verbose debugging messages
 	  from lib80211.

 	  If unsure, say N.