09 Dec, 2010
1 commit
-
Rather than printing the message to the log, use a MIB counter to keep
track of the number of occurrences of time wait bucket overflow. This reduces
spam in the logs.
Signed-off-by: Tom Herbert
Signed-off-by: David S. Miller
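A minimal sketch of the resulting pattern, assuming the counter is the time-wait overflow MIB entry (LINUX_MIB_TCPTIMEWAITOVERFLOW) and that the drop is detected where a time-wait bucket cannot be allocated; this is a reconstruction, not the upstream diff:

    /* time-wait bucket could not be allocated (table overflow) */
    if (tw == NULL) {
            /* previously: a rate-limited printk of the overflow message */
            NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPTIMEWAITOVERFLOW);
    }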
11 Nov, 2010
1 commit
-
Robin Holt tried to boot a 16TB machine and found some limits were
reached: sysctl_tcp_mem[2], sysctl_udp_mem[2].
We can switch the infrastructure to use "long" instead of "int", now that
atomic_long_t primitives are available for free.
Signed-off-by: Eric Dumazet
Reported-by: Robin Holt
Reviewed-by: Robin Holt
Signed-off-by: Andrew Morton
Signed-off-by: David S. Miller
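A rough sketch of the type switch, with field names taken from struct proto as I recall them (not the exact patch):

    struct proto {
            /* ... */
            atomic_long_t   *memory_allocated;      /* was: atomic_t * */
            long            *sysctl_mem;            /* was: int *      */
            /* ... */
    };

    /* call sites move to the atomic_long_* primitives, e.g.: */
    long allocated = atomic_long_add_return(amt, prot->memory_allocated);

    if (allocated > prot->sysctl_mem[2]) {
            /* over the hard limit - which can now exceed INT_MAX pages */
    }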
01 Jul, 2010
1 commit
-
/proc/net/snmp and /proc/net/netstat expose SNMP counters.
The width of these counters is either 32 or 64 bits, depending on the size
of "unsigned long" in the kernel. This means a user program parsing these
files must already be prepared to deal with 64-bit values, regardless of
whether the program itself is 32-bit or 64-bit.
This patch introduces 64-bit SNMP values for the IPSTAT MIB, where some
counters can wrap pretty fast if they are 32 bits wide.
# netstat -s | egrep "InOctets|OutOctets"
InOctets: 244068329096
OutOctets: 244069348848
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
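A hedged sketch of how a 64-bit per-cpu MIB value can be folded into one total on any architecture; the struct layout, the illustrative helper name fold_field64, and the u64_stats_* read loop follow my understanding of the approach (the real MIBs of that era were arrays of two per-cpu pointers, simplified away here):

    struct ipstats_mib {
            u64                     mibs[IPSTATS_MIB_MAX];
            struct u64_stats_sync   syncp;  /* lets 32-bit readers see a consistent u64 */
    };

    static u64 fold_field64(struct ipstats_mib __percpu *mib, int item)
    {
            u64 sum = 0;
            int cpu;

            for_each_possible_cpu(cpu) {
                    const struct ipstats_mib *m = per_cpu_ptr(mib, cpu);
                    unsigned int start;
                    u64 v;

                    do {
                            start = u64_stats_fetch_begin(&m->syncp);
                            v = m->mibs[item];
                    } while (u64_stats_fetch_retry(&m->syncp, start));
                    sum += v;
            }
            return sum;
    }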
03 Jun, 2010
1 commit
-
Christoph Lameter mentioned that packets could be dropped in the input path
because of rp_filter settings, without any SNMP counter being
incremented. A system administrator can have a hard time tracking down the
problem.
This patch introduces a new counter, LINUX_MIB_IPRPFILTER, incremented
each time we drop a packet because the Reverse Path Filter triggers.
(We receive an IPv4 datagram on a given interface, and find that the route to
send an answer would use another interface.)
netstat -s | grep IPReversePathFilter
IPReversePathFilter: 21714
Reported-by: Christoph Lameter
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
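A minimal sketch of where such a counter is bumped in the input path; 'reverse_path_ok' stands in for the real fib lookup result, and the exact call site is an assumption:

    if (!reverse_path_ok) {        /* reply to this source would leave via another interface */
            NET_INC_STATS_BH(net, LINUX_MIB_IPRPFILTER);
            goto drop;             /* still dropped, but now visibly counted */
    }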
22 Mar, 2010
1 commit
-
It is currently hard to diagnose when ACK frames are dropped because an
application set TCP_DEFER_ACCEPT on its listening socket.
See http://bugzilla.kernel.org/show_bug.cgi?id=15507
This patch adds an SNMP value, named TCPDeferAcceptDrop.
netstat -s | grep TCPDeferAcceptDrop
TCPDeferAcceptDrop: 0
This counter is incremented every time we drop a pure ACK frame received
by a socket in SYN_RECV state because its SYNACK retransmit count is lower
than the defer_accept value.
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
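A sketch of the counted drop, based on my reading of the request-socket handling of that era (field names may not match the patch exactly):

    /* pure ACK for a SYN_RECV request socket while TCP_DEFER_ACCEPT is armed */
    if (req->retrans < inet_csk(sk)->icsk_accept_queue.rskq_defer_accept &&
        TCP_SKB_CB(skb)->end_seq == tcp_rsk(req)->rcv_isn + 1) {
            inet_rsk(req)->acked = 1;
            NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPDEFERACCEPTDROP);
            return NULL;    /* drop the ACK and keep waiting for data */
    }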
09 Mar, 2010
1 commit
-
Commit 6b03a53a (tcp: use limited socket backlog) added the possibility
of dropping frames when the backlog queue is full.
Commit d218d111 (tcp: Generalized TTL Security Mechanism) added the
possibility of dropping frames when the TTL is under a given limit.
This patch adds new SNMP MIB entries, named TCPBacklogDrop and
TCPMinTTLDrop, published in /proc/net/netstat on the TcpExt: line.
netstat -s | egrep "TCPBacklogDrop|TCPMinTTLDrop"
TCPBacklogDrop: 0
TCPMinTTLDrop: 0
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
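Roughly, the two new drop points look like this (a sketch based on the description above, not the exact diff):

    /* TTL security check on an established socket */
    if (iph->ttl < inet_sk(sk)->min_ttl) {
            NET_INC_STATS_BH(net, LINUX_MIB_TCPMINTTLDROP);
            goto discard_and_relse;
    }

    /* socket owned by user: queue to the (now limited) backlog */
    if (sk_add_backlog(sk, skb)) {
            NET_INC_STATS_BH(net, LINUX_MIB_TCPBACKLOGDROP);
            goto discard_and_relse;
    }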
17 Feb, 2010
1 commit
-
Add __percpu sparse annotations to net.
These annotations are to make sparse consider percpu variables to be
in a different address space and warn if accessed without going
through percpu accessors. This patch doesn't affect normal builds.
The macro and type tricks around the snmp stats make things a bit
interesting. The DEFINE/DECLARE_SNMP_STAT() macros mark the target field
as __percpu and the SNMP_UPD_PO_STATS() macro is updated accordingly. All
snmp_mib_*() users which used to cast the argument to (void **) are
updated to cast it to (void __percpu **).
Signed-off-by: Tejun Heo
Acked-by: David S. Miller
Cc: Patrick McHardy
Cc: Arnaldo Carvalho de Melo
Cc: Vlad Yasevich
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller
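A sketch of the annotation pattern described above, assuming the two-pointer SNMP layout of that era:

    /* the per-cpu MIB pointers are tagged __percpu ... */
    #define DEFINE_SNMP_STAT(type, name)    \
            __typeof__(type) __percpu *name[2]

    /* ... so callers cast to the matching address space: */
    seq_printf(seq, " %lu",
               snmp_fold_field((void __percpu **)net->mib.ip_statistics,
                               IPSTATS_MIB_INPKTS));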
23 Jan, 2010
1 commit
-
Signed-off-by: Alexey Dobriyan
Signed-off-by: David S. Miller
27 Apr, 2009
1 commit
-
The IP MIB (RFC 4293) defines stats for InOctets, OutOctets, InMcastOctets and
OutMcastOctets:
http://tools.ietf.org/html/rfc4293
But it seems we don't track those in any way that is easy to separate from other
protocols. This patch adds those missing counters to the stats file. Tested
successfully by me, with help from Eric Dumazet.
Signed-off-by: Neil Horman
Signed-off-by: David S. Miller
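A hedged sketch of the accounting pattern: bump the octet counters by skb->len where the packet counters are already bumped (the macro names follow the existing SNMP conventions, but the exact call sites are assumptions):

    /* receive path: count the packet and its length together */
    IP_UPD_PO_STATS_BH(net, IPSTATS_MIB_IN, skb->len);

    /* multicast receive: octets only shown here */
    IP_ADD_STATS_BH(net, IPSTATS_MIB_INMCASTOCTETS, skb->len);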
16 Feb, 2009
1 commit
-
Impact: syntax fix
Interestingly enough this compiles w/o any complaints:
orphans = percpu_counter_sum_positive(&tcp_orphan_count),
sockets = percpu_counter_sum_positive(&tcp_sockets_allocated),
Signed-off-by: Thomas Gleixner
Signed-off-by: David S. Miller
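A standalone illustration of why that line compiled without complaint: the trailing comma is the C comma operator, so the two assignments form one legal expression statement even though a semicolon was intended:

    #include <stdio.h>

    int main(void)
    {
            int orphans, sockets;

            orphans = 1,    /* comma operator - not the end of a statement */
            sockets = 2;    /* still part of the same expression           */

            printf("%d %d\n", orphans, sockets);
            return 0;
    }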
30 Dec, 2008
1 commit
-
When we converted the protocol atomic counters such as the orphan
count and the total socket count, deadlocks were introduced due to
the mismatch in BH status of the spots that used the percpu counter
operations.
Based on the diagnosis and patch by Peter Zijlstra, this patch
fixes these issues by disabling BH where we may be in process
context.
Reported-by: Jeff Kirsher
Tested-by: Ingo Molnar
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
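A sketch of the fix pattern: when the same percpu_counter is touched from both softirq and process context, the process-context update runs with BHs disabled so both paths take the counter's internal lock under the same conditions (the call site is chosen for illustration):

    local_bh_disable();
    percpu_counter_inc(sk->sk_prot->orphan_count);
    local_bh_enable();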
26 Nov, 2008
2 commits
-
Instead of using one atomic_t per protocol, use a percpu_counter
for "orphan_count", to reduce cache line contention on
heavy duty network servers.
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller -
Instead of using one atomic_t per protocol, use a percpu_counter
for "sockets_allocated", to reduce cache line contention on
heavy duty network servers.
Note: We revert commit 248969ae31e1b3276fc4399d67ce29a5d81e6fd9
("net: af_unix can make unix_nr_socks visible in /proc"),
since it is no longer used after the sock_prot_inuse_add() addition.
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
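The general shape of the conversion, as a sketch using that era's percpu_counter API (placement of the init call is an assumption):

    struct percpu_counter tcp_sockets_allocated;

    /* once, at protocol registration time */
    percpu_counter_init(&tcp_sockets_allocated, 0);

    /* fast path: per-CPU update, no shared cache line bouncing */
    percpu_counter_inc(&tcp_sockets_allocated);
    percpu_counter_dec(&tcp_sockets_allocated);

    /* slow path (e.g. /proc reporting): fold all CPUs into one value */
    s64 sockets = percpu_counter_sum_positive(&tcp_sockets_allocated);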
25 Nov, 2008
1 commit
-
Signed-off-by: Ilpo Järvinen
Signed-off-by: David S. Miller
11 Nov, 2008
1 commit
-
icmpmsg_put() can happily corrupt kernel memory, using a static
table and forgetting to reset an array index in a loop.
Remove the static array since it's not safe without proper locking.
Signed-off-by: Alexey Dobriyan
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
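A generic illustration of the bug class, deliberately not the icmpmsg_put() code: a static table plus a static index that is never reset keeps writing where the previous call stopped and eventually runs past the end of the array:

    #define PERLINE 4

    static void fill_line(unsigned long value)
    {
            static unsigned long vals[PERLINE];
            static int count;       /* not reset between calls */

            vals[count] = value;    /* out of bounds once count reaches PERLINE */
            count++;
    }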
30 Jul, 2008
1 commit
-
From a report by Matti Aarnio, and preliminary patch by Adam Langley.
Signed-off-by: David S. Miller
18 Jul, 2008
13 commits
-
They are symmetrical to single_open ones :)
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
There are already 7 of them - time to kill some duplicate code.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
After all this stuff is moved outside, this function can look better.
Besides, I tuned the error path in ip_proc_init_net to make it have
only 2 exit points, not 3.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
This one became per-net long ago, but the appropriate file
is per-net only now.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
All the statistics shown in this file have been made per-net already.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Now that all the statistics shown in it are per-net, it is time to
show the file in the appropriate net.
The appropriate net init/exit ops already exist - they make
the sockstat file per net - so just extend them.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Similar to... ouch, I repeat myself.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Similar to ip and tcp ones :)
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Similar to tcp one.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Proc temporarily uses stats from init_net.
BTW, TCP_XXX_STATS are beautiful (w/o do { } while (0) facing) again :)
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller
12 Jun, 2008
1 commit
-
This patch removes CVS keywords that weren't updated for a long time
from comments.
Signed-off-by: Adrian Bunk
Signed-off-by: David S. Miller
01 Apr, 2008
3 commits
-
Besides, now we can see per-net fragment statistics in the
same file, since these stats are already per-net.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
Currently they live in init_net only, but now almost all the info
they can provide is available per-net.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
This counter is about to become per-proto-and-per-net, so we'll need
two arguments to determine which cell in this "table" to work with.
All the places but proc already pass the proper net to it - proc will be
tuned a bit later.
Some indentation with spaces in the proc files is done to keep the file
coding style consistent.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller
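A sketch of the two-argument form being introduced, with prototypes as I understand they ended up (the call site is illustrative):

    /* which cell: (network namespace, protocol) */
    void sock_prot_inuse_add(struct net *net, struct proto *prot, int val);
    int  sock_prot_inuse_get(struct net *net, struct proto *prot);

    /* callers pass the socket's namespace explicitly: */
    sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);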
07 Mar, 2008
1 commit
-
This reverts commit db1ed684f6c430c4cdad67d058688b8a1b5e607c ("[IPV6]
UDP: Rename IPv6 UDP files."), commit
8be8af8fa4405652e6c0797db5465a4be8afb998 ("[IPV4] UDP: Move
IPv4-specific bits to other file.") and commit
e898d4db2749c6052072e9bc4448e396cbdeb06a ("[UDP]: Allow users to
configure UDP-Lite.").
First, udplite is of such small cost, and it is a core protocol just
like TCP and normal UDP are.
We spent enormous amounts of effort to make udplite share as much code
with core UDP as possible. All of that work is less valuable if we're
just going to slap a config option on udplite support.
It is also causing build failures, as reported on linux-next, showing
that the changeset was not tested very well. In fact, this is the
second build failure resulting from the udplite change.
Finally, the config option provided was a bool, instead of a modular
option. This means the udplite code does not even get build tested
by allmodconfig builds, and furthermore the user is not presented
with a reasonable modular build option, which is particularly needed
by distribution vendors.
Signed-off-by: David S. Miller
04 Mar, 2008
1 commit
-
Let's give users an option for disabling UDP-Lite (~4K).
old:
| text data bss dec hex filename
| 286498 12432 6072 305002 4a76a net/ipv4/built-in.o
| 193830 8192 3204 205226 321aa net/ipv6/ipv6.o
new (without UDP-Lite):
| text data bss dec hex filename
| 284086 12136 5432 301654 49a56 net/ipv4/built-in.o
| 191835 7832 3076 202743 317f7 net/ipv6/ipv6.o
Signed-off-by: YOSHIFUJI Hideaki
29 Jan, 2008
5 commits
-
This is also simple, but introduces more changes, since
the mem counter is altered in more places.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
This is simple - just move the variable from struct inet_frags
to struct netns_frags and adjust the usage appropriately.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller -
1) Cleanups (all functions are prefixed by sock_prot_inuse)
sock_prot_inc_use(prot) -> sock_prot_inuse_add(prot,1)
sock_prot_dec_use(prot) -> sock_prot_inuse_add(prot,-1)
sock_prot_inuse() -> sock_prot_inuse_get()
New functions:
sock_prot_inuse_init() and sock_prot_inuse_free() to abstract pcounter use.
2) if CONFIG_PROC_FS=n, we can zap the 'inuse' member from "struct proto",
since nobody wants to read the inuse value.
This saves 1372 bytes on i386/SMP and some cpu cycles.
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller -
Signed-off-by: Takahiro Yasui
Signed-off-by: Hideo Aoki
Signed-off-by: David S. Miller -
These are scattered over the code, but almost all the
"critical" places already have the proper struct net
at hand, except for the snmp proc showing function and the routing
rtnl handler.
Signed-off-by: Pavel Emelyanov
Signed-off-by: David S. Miller