06 Dec, 2011

1 commit


10 Aug, 2011

1 commit


18 Jul, 2011

1 commit


14 Jul, 2011

1 commit

  • Now that there is a one-to-one correspondence between neighbour
    and hh_cache entries, we no longer need:

    1) dynamic allocation
    2) attachment to dst->hh
    3) refcounting

    Initialization of the hh_cache entry is indicated by hh_len
    being non-zero, and such initialization is always done with
    the neighbour's lock held as a writer.

    Signed-off-by: David S. Miller

    David S. Miller
     

02 Jul, 2011

1 commit

  • IPV6, unlike IPV4, doesn't have a routing cache.

    Routing table entries, as well as clones made in response
    to route lookup requests, all live in the same table; all
    of them are collected together in the destination cache
    table for ipv6.

    This means that routing table entries count against the garbage
    collection limits, even though such entries cannot ever be reclaimed
    and are added explicitly by the administrator (rather than being
    created in response to lookups).

    Therefore it makes no sense to count ipv6 routing table entries
    against the GC limits.

    Add a DST_NOCOUNT destination cache entry flag, and skip the counting
    if it is set. Use this flag bit in ipv6 when adding routing table
    entries.
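    A minimal sketch of the allocation-side logic (the flag value and
    the struct layouts here are simplified stand-ins, not the kernel's
    real definitions):

```c
#include <stdlib.h>

/* Entries that carry DST_NOCOUNT are not counted against the GC limit. */
#define DST_NOCOUNT 0x0020

struct dst_ops {
    long entries;        /* what the GC limit is checked against */
};

struct dst_entry {
    struct dst_ops *ops;
    int flags;
};

static struct dst_entry *dst_alloc(struct dst_ops *ops, int flags)
{
    struct dst_entry *dst = calloc(1, sizeof(*dst));
    if (!dst)
        return NULL;
    dst->ops = ops;
    dst->flags = flags;
    if (!(flags & DST_NOCOUNT))   /* admin-added routes skip the count */
        ops->entries++;
    return dst;
}

static void dst_destroy(struct dst_entry *dst)
{
    if (!(dst->flags & DST_NOCOUNT))  /* symmetric on teardown */
        dst->ops->entries--;
    free(dst);
}
```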

    Signed-off-by: David S. Miller

    David S. Miller
     

25 May, 2011

1 commit


21 May, 2011

2 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1446 commits)
    macvlan: fix panic if lowerdev in a bond
    tg3: Add braces around 5906 workaround.
    tg3: Fix NETIF_F_LOOPBACK error
    macvlan: remove one synchronize_rcu() call
    networking: NET_CLS_ROUTE4 depends on INET
    irda: Fix error propagation in ircomm_lmp_connect_response()
    irda: Kill set but unused variable 'bytes' in irlan_check_command_param()
    irda: Kill set but unused variable 'clen' in ircomm_connect_indication()
    rxrpc: Fix set but unused variable 'usage' in rxrpc_get_transport()
    be2net: Kill set but unused variable 'req' in lancer_fw_download()
    irda: Kill set but unused vars 'saddr' and 'daddr' in irlan_provider_connect_indication()
    atl1c: atl1c_resume() is only used when CONFIG_PM_SLEEP is defined.
    rxrpc: Fix set but unused variable 'usage' in rxrpc_get_peer().
    rxrpc: Kill set but unused variable 'local' in rxrpc_UDP_error_handler()
    rxrpc: Kill set but unused variable 'sp' in rxrpc_process_connection()
    rxrpc: Kill set but unused variable 'sp' in rxrpc_rotate_tx_window()
    pkt_sched: Kill set but unused variable 'protocol' in tc_classify()
    isdn: capi: Use pr_debug() instead of ifdefs.
    tg3: Update version to 3.119
    tg3: Apply rx_discards fix to 5719/5720
    ...

    Fix up trivial conflicts in arch/x86/Kconfig and net/mac80211/agg-tx.c
    as per Davem.

    Linus Torvalds
     
  • Commit e66eed651fd1 ("list: remove prefetching from regular list
    iterators") removed the include of prefetch.h from list.h, which
    uncovered several cases that had apparently relied on that rather
    obscure header file dependency.

    So this fixes things up a bit, using

    grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
    grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')

    to guide us in finding files that either need the inclusion,
    or have it despite not needing it.

    There are more of them around (mostly network drivers), but this gets
    many core ones.

    Reported-by: Stephen Rothwell
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

19 May, 2011

1 commit

  • It's way past its usefulness. And this gets rid of a bunch
    of stray ->rt_{dst,src} references.

    Even the comment documenting the macro was inaccurate (it stated
    the default was 1 when it's 0).

    If reintroduced, it should be done properly, with dynamic debug
    facilities.

    Signed-off-by: David S. Miller

    David S. Miller
     

29 Apr, 2011

2 commits


18 Feb, 2011

1 commit


29 Jan, 2011

1 commit


27 Jan, 2011

1 commit

  • Routing metrics are now copy-on-write.

    Initially a route entry points its metrics at a read-only location.
    If a routing table entry exists, it will point there. Otherwise it
    will point at the all-zero metric placeholder called
    'dst_default_metrics'.

    The writability state of the metrics is stored in the low bits of
    the metrics pointer; we have two bits left to spare if we want to
    store more states.

    For the initial implementation, COW is implemented simply via kmalloc.
    However future enhancements will change this to place the writable
    metrics somewhere else, in order to increase sharing. Very likely
    this "somewhere else" will be the inetpeer cache.

    Note also that this means that metrics updates may transiently fail
    if we cannot COW the metrics successfully.
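    The low-bit tagging and malloc-based COW described above can be
    sketched in userspace C (names mirror the patch, but the layout,
    mask values, and RTAX_MAX here are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Metric arrays are at least 4-byte aligned, so the bottom two bits of
 * the pointer are free to carry state; bit 0 here means "read-only". */
#define DST_METRICS_READ_ONLY 0x1
#define RTAX_MAX 16

static const uint32_t dst_default_metrics[RTAX_MAX]; /* all-zero placeholder */

struct dst_entry {
    uintptr_t _metrics;   /* tagged pointer */
};

static void dst_init_metrics(struct dst_entry *dst, const uint32_t *m)
{
    dst->_metrics = (uintptr_t)m | DST_METRICS_READ_ONLY;
}

static uint32_t *dst_metrics_ptr(struct dst_entry *dst)
{
    return (uint32_t *)(dst->_metrics & ~(uintptr_t)3); /* strip state bits */
}

static int dst_metrics_read_only(struct dst_entry *dst)
{
    return dst->_metrics & DST_METRICS_READ_ONLY;
}

/* COW on write: the initial implementation simply kmalloc()s a copy;
 * here malloc() stands in. It may fail, in which case the update is
 * skipped and the read-only metrics stay in place. */
static uint32_t *dst_metrics_write_ptr(struct dst_entry *dst)
{
    uint32_t *p;
    if (!dst_metrics_read_only(dst))
        return dst_metrics_ptr(dst);        /* already writable */
    p = malloc(RTAX_MAX * sizeof(uint32_t));
    if (!p)
        return NULL;                        /* transient COW failure */
    memcpy(p, dst_metrics_ptr(dst), RTAX_MAX * sizeof(uint32_t));
    dst->_metrics = (uintptr_t)p;           /* writable: no READ_ONLY bit */
    return p;
}
```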

    But even by itself, this patch should decrease memory usage and
    increase cache locality especially for routing workloads. In those
    cases the read-only metric copies stay in place and never get written
    to.

    TCP workloads where metrics get updated, and those rare cases where
    PMTU triggers occur, will take a very slight performance hit. But
    that hit will be alleviated when the long-term writable metrics
    move to a more sharable location.

    Since the metrics storage went from a u32 array of RTAX_MAX entries to
    what is essentially a pointer, some retooling of the dst_entry layout
    was necessary.

    Most importantly, we need to preserve the alignment of the reference
    count so that it doesn't share cache lines with the read-mostly state,
    as per Eric Dumazet's alignment assertion checks.

    The only non-trivial bit here is the move of the 'flags' member into
    the writeable cacheline. This is OK since we are always accessing the
    flags around the same moment when we made a modification to the
    reference count.

    Signed-off-by: David S. Miller

    David S. Miller
     

10 Nov, 2010

1 commit

  • Followup of commit ef885afbf8a37689 (net: use rcu_barrier() in
    rollback_registered_many)

    dst_dev_event() scans a garbage dst list that might be fed by various
    network notifiers at device dismantle time.

    It's important to call dst_dev_event() after the other notifiers, or
    we might enter the infamous msleep(250) in netdev_wait_allrefs(), and
    wait one second before calling call_netdevice_notifiers(NETDEV_UNREGISTER,
    dev) again to properly remove the last device references.

    Use priority -10 to let dst_dev_notifier be called after the other
    network notifiers (they have the default priority of 0).

    Reported-by: Ben Greear
    Reported-by: Nicolas Dichtel
    Reported-by: Octavian Purdila
    Reported-by: Benjamin LaHaise
    Tested-by: Ben Greear
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

20 Oct, 2010

1 commit

  • There is no point using RCU for dsts we allocate for a very short
    time (used once).

    Change dst_release() to take DST_NOCACHE into account, but also change
    skb_dst_set_noref() to force a refcount increment for such dst.

    This is a _huge_ gain, because we don't waste memory storing xx
    thousand dsts. Instead of queueing them to RCU, we can free them
    instantly.

    CPU caches can stay hot, reusing the same memory blocks to hold
    temporary dsts.

    Note : remove unneeded smp_mb__before_atomic_dec(); in dst_release(),
    since atomic_dec_return() implies a full memory barrier.
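    A toy model of this release path (the flag value is made up, C11
    atomics stand in for atomic_t, and a plain field stands in for
    call_rcu()):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* One-shot dsts carry DST_NOCACHE and are freed immediately on the
 * last put, instead of being queued behind an RCU grace period. */
#define DST_NOCACHE 0x0010

struct dst_entry {
    atomic_int refcnt;
    int flags;
    int queued_for_rcu;       /* stands in for call_rcu() in this sketch */
};

static void dst_release(struct dst_entry *dst)
{
    /* atomic_fetch_sub() is a full barrier, matching the note above
     * that atomic_dec_return() makes the extra barrier unnecessary. */
    if (atomic_fetch_sub(&dst->refcnt, 1) == 1) {
        if (dst->flags & DST_NOCACHE)
            free(dst);                /* one-shot dst: free instantly */
        else
            dst->queued_for_rcu = 1;  /* cached dst: defer to RCU */
    }
}
```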

    Stress test, 160.000.000 udp frames sent, IP route cache disabled
    (DDOS).

    Before:

    real 0m38.091s
    user 0m13.189s
    sys 7m53.018s

    After:

    real 0m29.946s
    user 0m12.157s
    sys 7m40.605s

    For reference, if IP route cache was enabled :

    real 0m32.030s
    user 0m10.521s
    sys 8m15.243s

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

12 Oct, 2010

2 commits

  • struct dst_ops tracks the number of allocated dsts in an atomic_t
    field, which is subject to high cache line contention under stress
    workloads.

    Switch to a percpu_counter to reduce the number of times we need to
    dirty a central location. Place it on a separate cache line to avoid
    dirtying read-only fields.
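    The idea behind a percpu_counter can be sketched in a few lines of
    userspace C (NCPU, BATCH, and the per-slot layout here are invented
    for illustration; the kernel's implementation differs):

```c
#include <stdatomic.h>

/* Each CPU updates a local slot and only folds the delta into the
 * shared count when it exceeds a batch, so the hot, shared cache line
 * is dirtied far less often than with a single atomic_t. */
#define NCPU  4
#define BATCH 32

struct percpu_counter {
    atomic_long count;        /* central, approximately-current value */
    long local[NCPU];         /* per-cpu deltas (one writer each) */
};

static void pcc_add(struct percpu_counter *c, int cpu, long amount)
{
    long v = c->local[cpu] + amount;
    if (v >= BATCH || v <= -BATCH) {
        atomic_fetch_add(&c->count, v);   /* rare: touch the shared line */
        v = 0;
    }
    c->local[cpu] = v;
}

static long pcc_sum(struct percpu_counter *c)
{
    long v = atomic_load(&c->count);
    for (int i = 0; i < NCPU; i++)
        v += c->local[i];                 /* exact read: fold in deltas */
    return v;
}
```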

    Stress test :

    (Sending 160.000.000 UDP frames,
    IP route cache disabled, dual E5540 @2.53GHz,
    32bit kernel, FIB_TRIE, SLUB/NUMA)

    Before:

    real 0m51.179s
    user 0m15.329s
    sys 10m15.942s

    After:

    real 0m45.570s
    user 0m15.525s
    sys 9m56.669s

    With a small reordering of struct neighbour fields (the subject of
    a following patch, to separate refcnt from other read-mostly
    fields):

    real 0m41.841s
    user 0m15.261s
    sys 8m45.949s

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • When a new dst is used to send a frame, neigh_resolve_output() tries
    to associate a struct hh_cache with this dst, calling neigh_hh_init()
    with the neigh rwlock write-locked.

    Most of the time, hh_cache is already known and linked into neighbour,
    so we find it and increment its refcount.

    This patch changes the logic so that we call neigh_hh_init() with
    neighbour lock read locked only, so that fast path can be run in
    parallel by concurrent cpus.

    This brings part of the speedup we got with commit c7d4426a98a5f
    (introduce DST_NOCACHE flag) for non-cached dsts to cached ones as
    well, removing one of the contention points that routers hit on
    multiqueue enabled machines.

    Further improvements would need to use a seqlock instead of an rwlock to
    protect neigh->ha[], to not dirty neigh too often and remove two atomic
    ops.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

21 Jul, 2010

1 commit


13 Apr, 2010

1 commit


12 Apr, 2010

1 commit


31 Mar, 2010

1 commit


30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare
    for this change by updating users of gfp and slab facilities to
    include those headers directly instead of assuming availability. As
    this conversion needs to touch a large number of source files, the
    following script is used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h, and if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered:
    alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints
    out an error message indicating which .h file needs to be added to
    the file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion;
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This
    step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were
    fixed; e.g. lib/decompress_*.c used malloc/free() wrappers around
    slab APIs, requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them, as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on
    step 6, I'm fairly confident about the coverage of this conversion
    patch. If there is a breakage, it's likely to be something in one
    of the arch headers which should be easily discoverable on most
    builds of the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

09 Feb, 2010

1 commit

  • Kernel bugzilla #15239

    On some workloads, it is quite possible to get a huge dst list to
    process in dst_gc_task(), and trigger soft lockup detection.

    Fix is to call cond_resched(), as we run in process context.

    Reported-by: Pawel Staszewski
    Tested-by: Pawel Staszewski
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

14 Nov, 2008

1 commit

  • During tbench/oprofile sessions, I found that dst_release() was in third position.

    CPU: Core 2, speed 2999.68 MHz (estimated)
    Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
    samples % symbol name
    483726 9.0185 __copy_user_zeroing_intel
    191466 3.5697 __copy_user_intel
    185475 3.4580 dst_release
    175114 3.2648 ip_queue_xmit
    153447 2.8608 tcp_sendmsg
    108775 2.0280 tcp_recvmsg
    102659 1.9140 sysenter_past_esp
    101450 1.8914 tcp_current_mss
    95067 1.7724 __copy_from_user_ll
    86531 1.6133 tcp_transmit_skb

    Of course, all CPUS fight on the dst_entry associated with 127.0.0.1

    Instead of first checking the refcount value and then decrementing
    it, we use atomic_dec_return() to help the CPU make the right memory
    transaction (i.e. getting the cache line in exclusive mode).
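    Schematically, with C11 atomics standing in for the kernel's
    atomic_t (the "old" pattern below is a simplified illustration of
    read-then-decrement, not the exact previous code):

```c
#include <stdatomic.h>

/* Old pattern: read first (the cache line may be fetched in shared
 * mode), then decrement (the line must be upgraded to exclusive):
 * two coherency transactions under contention. */
static int release_read_then_dec(atomic_int *cnt)
{
    if (atomic_load(cnt) == 1) {
        atomic_store(cnt, 0);
        return 1;                   /* last reference gone */
    }
    atomic_fetch_sub(cnt, 1);
    return 0;
}

/* New pattern, equivalent to atomic_dec_return(): one read-modify-write,
 * so the CPU grabs the line in exclusive mode straight away. */
static int release_dec_return(atomic_int *cnt)
{
    return atomic_fetch_sub(cnt, 1) == 1;
}
```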

    dst_release() is now at the fifth position, and tbench is a little
    bit faster ;)

    CPU: Core 2, speed 3000.1 MHz (estimated)
    Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
    samples % symbol name
    647107 8.8072 __copy_user_zeroing_intel
    258840 3.5229 ip_queue_xmit
    258302 3.5155 __copy_user_intel
    209629 2.8531 tcp_sendmsg
    165632 2.2543 dst_release
    149232 2.0311 tcp_current_mss
    147821 2.0119 tcp_recvmsg
    137893 1.8767 sysenter_past_esp
    127473 1.7349 __copy_from_user_ll
    121308 1.6510 ip_finish_output
    118510 1.6129 tcp_transmit_skb
    109295 1.4875 tcp_v4_rcv

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     

13 Sep, 2008

1 commit

  • The dst garbage collector dst_gc_task() may not be scheduled as we
    expect it to be in __dst_free().

    Indeed, when the dst_gc_timer was replaced by the delayed_work
    dst_gc_work, the mod_timer() call used to schedule the garbage
    collector at an earlier date was replaced by a schedule_delayed_work()
    (see commit 86bba269d08f0c545ae76c90b56727f65d62d57f).

    But, the behaviour of mod_timer() and schedule_delayed_work() is
    different in the way they handle the delay.

    mod_timer() stops the timer and re-arms it with the new given delay,
    whereas schedule_delayed_work() only checks if the work is already
    queued in the workqueue (and queues it, with the delay, if it is not)
    BUT it does NOT take the new delay into account (even if the new
    delay is earlier in time).
    schedule_delayed_work() returns 0 if it didn't queue the work,
    but we don't check the return code in __dst_free().

    If I understand the code in __dst_free() correctly, we want dst_gc_task
    to be queued after DST_GC_INC jiffies if we pass the test (and not in
    some undetermined time in the future), so I think we should add a call
    to cancel_delayed_work() before schedule_delayed_work(). Patch below.
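    The behavioural difference, and the cancel-first fix, can be
    modelled in a few lines (signatures are simplified and an explicit
    'now' argument replaces jiffies; this is a sketch of the semantics,
    not the kernel implementation):

```c
/* schedule_delayed_work() is a no-op returning 0 if the work is
 * already queued; mod_timer() always re-arms with the new expiry. */
struct delayed_work {
    int queued;
    long expires;     /* absolute "jiffies" for this sketch */
};

static int schedule_delayed_work(struct delayed_work *w, long now, long delay)
{
    if (w->queued)
        return 0;                 /* already queued: new delay IGNORED */
    w->queued = 1;
    w->expires = now + delay;
    return 1;
}

static void cancel_delayed_work(struct delayed_work *w)
{
    w->queued = 0;
}

/* The proposed fix: cancel first, so the new (possibly earlier) delay
 * always takes effect, mimicking mod_timer(). */
static void mod_delayed_work_like(struct delayed_work *w, long now, long delay)
{
    cancel_delayed_work(w);
    schedule_delayed_work(w, now, delay);
}
```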

    Or we should at least test the return code of schedule_delayed_work(),
    and reset the values of dst_garbage.timer_inc and dst_garbage.timer_expires
    back to their former values if schedule_delayed_work() failed.
    Otherwise the subsequent calls to __dst_free() will test the wrong
    values and assume the wrong thing about when the garbage collector
    is supposed to be scheduled.

    dst_gc_task() also calls schedule_delayed_work() without checking
    its return code (or calling cancel_delayed_work() first), but it
    should be fine there: dst_gc_task is the routine of the delayed_work,
    so no dst_gc_work should be pending in the queue while it's running.

    Signed-off-by: Benjamin Thery
    Acked-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Benjamin Thery
     

28 Mar, 2008

1 commit

  • Codiff stats (allyesconfig, v2.6.24-mm1):
    -16420 187 funcs, 103 +, 16523 -, diff: -16420 --- dst_release

    Without number of debug related CONFIGs (v2.6.25-rc2-mm1):
    -7257 186 funcs, 70 +, 7327 -, diff: -7257 --- dst_release
    dst_release | +40

    Signed-off-by: Ilpo Järvinen
    Signed-off-by: David S. Miller

    Ilpo Järvinen
     

26 Mar, 2008

1 commit


29 Feb, 2008

1 commit


29 Jan, 2008

4 commits

  • The garbage collection function receives the dst_ops structure as a
    parameter. This is useful for the upcoming patchset because it will
    need the dst_ops (there will be several instances) and the network
    namespace pointer (contained in the dst_ops).

    The protocols which do not take care of namespaces will not be
    impacted by this change (except for the function signature); they
    just ignore the parameter.

    Signed-off-by: Daniel Lezcano
    Signed-off-by: David S. Miller

    Daniel Lezcano
     
  • This cleanup shrinks the size of net/core/dst.o on i386 from 1299
    to 1289 bytes. (This is because dev_hold()/dev_put() do
    atomic_inc()/atomic_dec() and force the compiler to re-evaluate
    memory contents.)

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Move dst entries to a namespace loopback to catch refcounting leaks.

    Signed-off-by: Denis V. Lunev
    Signed-off-by: David S. Miller

    Denis V. Lunev
     
  • We have a number of copies of dst_discard scattered around the place
    which all do the same thing, namely free a packet on the input or
    output paths.

    This patch deletes all of them except dst_discard and points all the
    users to it.

    The only non-trivial bit is decnet where it returns an error.
    However, conceptually this is identical to the blackhole functions
    used in IPv4 and IPv6 which do not return errors. So they should
    either all return errors or all return zero. For now I've stuck with
    the majority and picked zero as the return value.

    It doesn't really matter in practice since few if any drivers would
    react differently depending on a zero return value or NET_RX_DROP.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

07 Nov, 2007

1 commit


11 Oct, 2007

4 commits

  • This patch makes loopback_dev per network namespace, adding code to
    create a different loopback device for each network namespace and to
    free a loopback device when a network namespace exits.

    This patch modifies all users of loopback_dev so they access it as
    init_net.loopback_dev, keeping all of the code compiling and
    working. A later pass will be needed to update the users to use
    something other than the initial network namespace.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • This patch replaces all occurrences of the static variable
    loopback_dev with a pointer, loopback_dev. That provides the
    mindless, trivial, uninteresting part of the change for the dynamic
    allocation of the loopback device.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Daniel Lezcano
    Acked-By: Kirill Korotaev
    Acked-by: Benjamin Thery
    Signed-off-by: David S. Miller

    Daniel Lezcano
     
  • When the periodic IP route cache flush is done (every 600 seconds on
    default configuration), some hosts suffer a lot and eventually trigger
    the "soft lockup" message.

    dst_run_gc() is doing a scan of a possibly huge list of dst_entries,
    eventually freeing some (less than 1%) of them, while holding the
    dst_lock spinlock for the whole scan.

    Then it re-arms a timer to redo the full thing 1/10 s later...
    The slowdown can last one minute or so, depending on how active the
    tcp sessions are.

    This second version of the patch converts the processing from a softirq
    based one to a workqueue.

    Even if the list of entries in garbage_list is huge, the host is
    still responsive to softirqs and can make progress.

    Instead of resetting gc timer to 0.1 second if one entry was freed in a
    gc run, we do this if more than 10% of entries were freed.

    Before patch :

    Aug 16 06:21:37 SRV1 kernel: BUG: soft lockup detected on CPU#0!
    Aug 16 06:21:37 SRV1 kernel:
    Aug 16 06:21:37 SRV1 kernel: Call Trace:
    Aug 16 06:21:37 SRV1 kernel: [] wake_up_process+0x10/0x20
    Aug 16 06:21:37 SRV1 kernel: [] softlockup_tick+0xe9/0x110
    Aug 16 06:21:37 SRV1 kernel: [] dst_run_gc+0x0/0x140
    Aug 16 06:21:37 SRV1 kernel: [] run_local_timers+0x13/0x20
    Aug 16 06:21:37 SRV1 kernel: [] update_process_times+0x57/0x90
    Aug 16 06:21:37 SRV1 kernel: [] smp_local_timer_interrupt+0x34/0x60
    Aug 16 06:21:37 SRV1 kernel: [] smp_apic_timer_interrupt+0x5c/0x80
    Aug 16 06:21:37 SRV1 kernel: [] apic_timer_interrupt+0x66/0x70
    Aug 16 06:21:37 SRV1 kernel: [] dst_run_gc+0x53/0x140
    Aug 16 06:21:37 SRV1 kernel: [] dst_run_gc+0x46/0x140
    Aug 16 06:21:37 SRV1 kernel: [] run_timer_softirq+0x148/0x1c0
    Aug 16 06:21:37 SRV1 kernel: [] __do_softirq+0x6c/0xe0
    Aug 16 06:21:37 SRV1 kernel: [] call_softirq+0x1c/0x30
    Aug 16 06:21:37 SRV1 kernel: [] do_softirq+0x34/0x90
    Aug 16 06:21:37 SRV1 kernel: [] local_bh_enable_ip+0x3f/0x60
    Aug 16 06:21:37 SRV1 kernel: [] _spin_unlock_bh+0x13/0x20
    Aug 16 06:21:37 SRV1 kernel: [] rt_garbage_collect+0x1d8/0x320
    Aug 16 06:21:37 SRV1 kernel: [] dst_alloc+0x1d/0xa0
    Aug 16 06:21:37 SRV1 kernel: [] __ip_route_output_key+0x573/0x800
    Aug 16 06:21:37 SRV1 kernel: [] sock_common_recvmsg+0x32/0x50
    Aug 16 06:21:37 SRV1 kernel: [] ip_route_output_flow+0x1c/0x60
    Aug 16 06:21:37 SRV1 kernel: [] tcp_v4_connect+0x150/0x610
    Aug 16 06:21:37 SRV1 kernel: [] inet_bind_bucket_create+0x17/0x60
    Aug 16 06:21:37 SRV1 kernel: [] inet_stream_connect+0xa6/0x2c0
    Aug 16 06:21:37 SRV1 kernel: [] _spin_lock_bh+0x11/0x30
    Aug 16 06:21:37 SRV1 kernel: [] lock_sock_nested+0xcf/0xe0
    Aug 16 06:21:37 SRV1 kernel: [] _spin_lock_bh+0x11/0x30
    Aug 16 06:21:37 SRV1 kernel: [] sys_connect+0x71/0xa0
    Aug 16 06:21:37 SRV1 kernel: [] tcp_setsockopt+0x1f/0x30
    Aug 16 06:21:37 SRV1 kernel: [] sock_common_setsockopt+0xf/0x20
    Aug 16 06:21:37 SRV1 kernel: [] sys_setsockopt+0x9d/0xc0
    Aug 16 06:21:37 SRV1 kernel: [] sys_ioctl+0x5e/0x80
    Aug 16 06:21:37 SRV1 kernel: [] system_call+0x7e/0x83

    After patch : (RT_CACHE_DEBUG set to 2 to get following traces)

    dst_total: 75469 delayed: 74109 work_perf: 141 expires: 150 elapsed: 8092 us
    dst_total: 78725 delayed: 73366 work_perf: 743 expires: 400 elapsed: 8542 us
    dst_total: 86126 delayed: 71844 work_perf: 1522 expires: 775 elapsed: 8849 us
    dst_total: 100173 delayed: 68791 work_perf: 3053 expires: 1256 elapsed: 9748 us
    dst_total: 121798 delayed: 64711 work_perf: 4080 expires: 1997 elapsed: 10146 us
    dst_total: 154522 delayed: 58316 work_perf: 6395 expires: 25 elapsed: 11402 us
    dst_total: 154957 delayed: 58252 work_perf: 64 expires: 150 elapsed: 6148 us
    dst_total: 157377 delayed: 57843 work_perf: 409 expires: 400 elapsed: 6350 us
    dst_total: 163745 delayed: 56679 work_perf: 1164 expires: 775 elapsed: 7051 us
    dst_total: 176577 delayed: 53965 work_perf: 2714 expires: 1389 elapsed: 8120 us
    dst_total: 198993 delayed: 49627 work_perf: 4338 expires: 1997 elapsed: 8909 us
    dst_total: 226638 delayed: 46865 work_perf: 2762 expires: 2748 elapsed: 7351 us

    I successfully reduced the IP route cache of many hosts by a factor
    of four thanks to this patch. Previously, I had to disable
    "ip route flush cache" to avoid crashes.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Every user of the network device notifiers is either a protocol
    stack or a pseudo device. If a protocol stack that does not have
    support for multiple network namespaces receives an event for a
    device that is not in the initial network namespace, it can quite
    possibly get confused and do the wrong thing.

    To avoid problems until all of the protocol stacks are converted
    this patch modifies all netdev event handlers to ignore events on
    devices that are not in the initial network namespace.

    As the rest of the code is made network namespace aware these
    checks can be removed.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: David S. Miller

    Eric W. Biederman
     

08 Jun, 2007

1 commit


15 Feb, 2007

1 commit

  • After Al Viro (finally) succeeded in removing the sched.h #include
    from module.h recently, it makes sense again to remove other
    superfluous sched.h includes.
    There are quite a lot of files which include it but don't actually need
    anything defined in there. Presumably these includes were once needed for
    macros that used to live in sched.h, but moved to other header files in the
    course of cleaning it up.

    To ease the pain, this time I did not fiddle with any header files and only
    removed #includes from .c-files, which tend to cause less trouble.

    Compile tested against 2.6.20-rc2 and 2.6.20-rc2-mm2 (with offsets) on alpha,
    arm, i386, ia64, mips, powerpc, and x86_64 with allnoconfig, defconfig,
    allmodconfig, and allyesconfig as well as a few randconfigs on x86_64 and all
    configs in arch/arm/configs on arm. I also checked that no new warnings were
    introduced by the patch (actually, some warnings are removed that were emitted
    by unnecessarily included header files).

    Signed-off-by: Tim Schmielau
    Acked-by: Russell King
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Schmielau