02 Apr, 2013

1 commit


30 Mar, 2013

2 commits


21 Mar, 2013

1 commit

  • Previously, if you did an "ifconfig down" or similar on one core, and
    the kernel had CONFIG_XFRM enabled, every core would be interrupted to
    check its percpu flow list for items that could be garbage collected.

    With this change, we generate a mask of cores that actually have any
    percpu items, and only interrupt those cores. When we are trying to
    isolate a set of cpus from interrupts, this is important to do.

    Signed-off-by: Chris Metcalf
    Signed-off-by: David S. Miller

    Chris Metcalf
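
    A minimal sketch of the idea, assuming a hypothetical per-cpu counter
    (flow_cache_cnt) and flush handler (flow_cache_flush_per_cpu); the real
    patch differs, but the point is to IPI only the cpus that hold entries:

    #include <linux/cpumask.h>
    #include <linux/percpu.h>
    #include <linux/smp.h>
    #include <linux/gfp.h>

    static DEFINE_PER_CPU(unsigned int, flow_cache_cnt);   /* entries on this cpu */

    static void flow_cache_flush_per_cpu(void *info)
    {
            /* runs on each targeted cpu: free garbage-collectable entries */
    }

    static void flow_cache_flush(void)
    {
            cpumask_var_t mask;
            int cpu;

            if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                    return;

            /* Build a mask of the cpus that actually hold per-cpu items. */
            cpumask_clear(mask);
            for_each_online_cpu(cpu)
                    if (per_cpu(flow_cache_cnt, cpu))
                            cpumask_set_cpu(cpu, mask);

            /* Interrupt only those cpus instead of every online cpu. */
            on_each_cpu_mask(mask, flow_cache_flush_per_cpu, NULL, 1);

            free_cpumask_var(mask);
    }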
     

28 Feb, 2013

1 commit

  • I'm not sure why, but the hlist for each entry iterators were conceived
    differently from the list ones. While the list ones are:

    list_for_each_entry(pos, head, member)

    The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

    Why did they need an extra pos parameter? I'm not quite sure. Not only
    do they not really need it, it also prevents the iterator from looking
    exactly like the list iterator, which is unfortunate.

    Besides the semantic patch, there was some manual work required:

    - Fix up the actual hlist iterators in linux/list.h
    - Fix up the declaration of other iterators based on the hlist ones.
    - A very small number of places were using the 'node' parameter; these
      were modified to use 'obj->member' instead.
    - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
      properly, so those had to be fixed up manually.

    The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue,
    hlist_for_each_entry_from, hlist_for_each_entry_rcu,
    hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh,
    for_each_busy_worker, ax25_uid_for_each, ax25_for_each,
    inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each,
    sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound,
    hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu,
    nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each,
    nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp,
    for_each_host;

    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@

    -T b;

    [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
    [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
    [akpm@linux-foundation.org: checkpatch fixes]
    [akpm@linux-foundation.org: fix warnings]
    [akpm@linux-foundation.org: redo intrusive kvm changes]
    Tested-by: Peter Senna Tschudin
    Acked-by: Paul E. McKenney
    Signed-off-by: Sasha Levin
    Cc: Wu Fengguang
    Cc: Marcelo Tosatti
    Cc: Gleb Natapov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
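
    A before/after illustration of the signature change, on a made-up struct
    and hash bucket (nothing here is from the patch itself); afterwards the
    hlist iterator takes the same three arguments as list_for_each_entry():

    #include <linux/list.h>

    struct foo {
            int key;
            struct hlist_node node;
    };

    static struct hlist_head bucket;        /* some hash bucket */

    static struct foo *find_foo(int key)
    {
            struct foo *f;

            /* Old style needed a scratch struct hlist_node *pos:
             *
             *      struct hlist_node *pos;
             *      hlist_for_each_entry(f, pos, &bucket, node)
             *              ...
             */

            /* New style matches list_for_each_entry(): */
            hlist_for_each_entry(f, &bucket, node)
                    if (f->key == key)
                            return f;
            return NULL;
    }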
     

23 Jan, 2013

1 commit


20 Nov, 2012

1 commit


22 Dec, 2011

1 commit

  • flow_cache_flush() might sleep but can be called from
    atomic context via the xfrm garbage collector. So add
    a flow_cache_flush_deferred() function and use it when
    the xfrm garbage collector is invoked from within the
    packet path (a sketch of the deferral follows this entry).

    Signed-off-by: Steffen Klassert
    Acked-by: Timo Teräs
    Signed-off-by: David S. Miller

    Steffen Klassert
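
    A sketch of how such a deferral is typically wired up: the atomic-context
    caller only queues work, and the sleeping flush runs later in process
    context (the function names match the commit, the body is illustrative):

    #include <linux/workqueue.h>

    void flow_cache_flush(void);            /* existing flush, may sleep */

    static void flow_cache_flush_task(struct work_struct *work)
    {
            flow_cache_flush();
    }

    static DECLARE_WORK(flow_cache_flush_work, flow_cache_flush_task);

    /* Safe from atomic context, e.g. the xfrm GC invoked in the packet path. */
    void flow_cache_flush_deferred(void)
    {
            schedule_work(&flow_cache_flush_work);
    }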
     

18 Oct, 2011

1 commit


17 Sep, 2011

1 commit


16 Sep, 2011

1 commit

  • flow_cache_lookup will return a cached object (or null pointer) that the
    resolver (i.e. xfrm_policy_lookup) previously found for another namespace
    using the same key/family/dir. Instead, make the namespace part of what
    identifies entries in the cache.

    As before, flow_entry_valid will return 0 for entries where the namespace
    has been deleted, and they will be removed from the cache the next time
    flow_cache_gc_task is run.

    Reported-by: Andrew Dickinson
    Signed-off-by: David Ward
    Signed-off-by: David S. Miller

    dpward
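
    A sketch of what "namespace as part of the key" means: each entry records
    its struct net and the lookup match includes it (field and function names
    are illustrative, and the key comparison is simplified to a memcmp):

    #include <linux/types.h>
    #include <linux/string.h>
    #include <net/flow.h>
    #include <net/net_namespace.h>

    struct flow_cache_entry {               /* simplified */
            struct net      *net;           /* new: the owning namespace */
            struct flowi    key;
            u16             family;
            u8              dir;
    };

    /* An entry resolved in one netns can no longer satisfy a lookup in
     * another, even if key/family/dir are identical. */
    static bool flow_entry_matches(const struct flow_cache_entry *fle,
                                   const struct net *net,
                                   const struct flowi *key,
                                   u16 family, u8 dir)
    {
            return fle->net == net &&
                   fle->family == family &&
                   fle->dir == dir &&
                   memcmp(key, &fle->key, sizeof(*key)) == 0;
    }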
     

27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

23 Feb, 2011

1 commit


24 Sep, 2010

1 commit


14 Sep, 2010

1 commit

  • Allocate hash tables for every online cpu, not every possible one.

    NUMA aware allocations.

    Don't use a full page on arches where PAGE_SIZE > 1024*sizeof(void *).

    misc:
    __percpu, __read_mostly, __cpuinit annotations
    flow_compare_t is just an "unsigned long"

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
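
    A sketch of the allocation strategy: one hash table per online cpu,
    allocated on that cpu's NUMA node and sized exactly rather than rounded
    up to a page (structure and names simplified, error unwinding omitted):

    #include <linux/percpu.h>
    #include <linux/slab.h>
    #include <linux/cpumask.h>
    #include <linux/topology.h>
    #include <linux/types.h>
    #include <linux/errno.h>
    #include <linux/init.h>

    #define FLOW_HASH_SIZE  1024            /* illustrative table size */

    struct flow_cache_percpu {
            struct hlist_head       *hash_table;
            /* per-cpu count, hash seed, ... */
    };

    static struct flow_cache_percpu __percpu *flow_table;

    static int flow_cache_cpu_prepare(int cpu)
    {
            struct flow_cache_percpu *fcp = per_cpu_ptr(flow_table, cpu);

            /* NUMA aware: allocate on the cpu's own node, exact size only. */
            fcp->hash_table = kzalloc_node(FLOW_HASH_SIZE * sizeof(struct hlist_head),
                                           GFP_KERNEL, cpu_to_node(cpu));
            return fcp->hash_table ? 0 : -ENOMEM;
    }

    static int __init flow_cache_init(void)
    {
            int cpu;

            flow_table = alloc_percpu(struct flow_cache_percpu);
            if (!flow_table)
                    return -ENOMEM;

            for_each_online_cpu(cpu)        /* online cpus, not every possible one */
                    if (flow_cache_cpu_prepare(cpu))
                            return -ENOMEM;
            return 0;
    }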
     

13 Jul, 2010

1 commit


29 Jun, 2010

1 commit


07 Apr, 2010

2 commits

  • Speed up lookups by freeing flow cache entries later. After
    virtualizing flow cache entry operations, the flow cache may now
    end up calling the policy or bundle destructor, which can be slowish.

    As the gc_list is more effective as a doubly linked list, the flow cache
    is converted to use the common hlist and list macros where appropriate.

    Signed-off-by: Timo Teras
    Signed-off-by: David S. Miller

    Timo Teräs
     
  • This allows the cached object to be validated before it is returned.
    It also allows the object to be destructed properly if the last
    reference was held in the flow cache. This is also a preparation for
    caching bundles in the flow cache.

    In return for virtualizing the methods, we save on:
    - not having to regenerate the whole flow cache on policy removal:
      each flow matching a killed policy gets refreshed as the getter
      function notices it smartly.
    - not having to call flow_cache_flush from policy gc, since the
      flow cache now properly deletes the object if it had any references.

    Signed-off-by: Timo Teras
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller

    Timo Teräs
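
    A sketch of what the virtualized methods look like: cached objects carry
    an ops vector so the cache can validate and drop them without knowing
    whether they are policies or bundles (names follow the flow cache code,
    semantics are simplified):

    #include <linux/stddef.h>

    struct flow_cache_object;

    struct flow_cache_ops {
            /* take a fresh reference, or NULL if the object is stale */
            struct flow_cache_object *(*get)(struct flow_cache_object *obj);
            /* cheap validity check used on lookup */
            int (*check)(struct flow_cache_object *obj);
            /* drop the cache's reference; may destroy the object */
            void (*delete)(struct flow_cache_object *obj);
    };

    struct flow_cache_object {
            const struct flow_cache_ops *ops;
    };

    /* The lookup path no longer assumes the cached object is a policy: */
    static struct flow_cache_object *flow_cache_use(struct flow_cache_object *obj)
    {
            if (obj && obj->ops->check(obj))
                    return obj->ops->get(obj);
            return NULL;            /* caller falls back to the resolver */
    }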
     

02 Apr, 2010

1 commit

  • Group all per-cpu data into one structure instead of having many
    globals. Also prepare the internals so that we can have multiple
    instances of the flow cache if needed.

    Only the kmem_cache is left as a global as all flow caches share
    the same element size, and benefit from using a common cache.

    Signed-off-by: Timo Teras
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller

    Timo Teräs
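
    A sketch of the regrouping: the formerly separate per-cpu globals hang
    off one per-cpu structure inside a flow_cache instance, so more than one
    cache could exist; only the element kmem_cache stays global (layout
    simplified, not the exact fields):

    #include <linux/percpu.h>
    #include <linux/timer.h>
    #include <linux/interrupt.h>
    #include <linux/slab.h>
    #include <linux/cache.h>
    #include <linux/types.h>

    struct flow_cache_percpu {
            struct hlist_head       *hash_table;
            int                     hash_count;
            u32                     hash_rnd;
            struct tasklet_struct   flush_tasklet;
    };

    struct flow_cache {
            u32                             hash_shift;
            struct flow_cache_percpu __percpu *percpu;
            struct timer_list               rnd_timer;
    };

    /* Shared by all instances: every flow cache uses the same entry size. */
    static struct kmem_cache *flow_cachep __read_mostly;

    static struct flow_cache flow_cache_global; /* the single instance today */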
     

26 Nov, 2008

1 commit

  • Pass netns to xfrm_lookup()/__xfrm_lookup(). For that, pass netns
    to flow_cache_lookup() and to the resolver callback.

    Take it from socket or netdevice. Stub DECnet to init_net.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: David S. Miller

    Alexey Dobriyan
     

07 Nov, 2008

1 commit


26 Jun, 2008

1 commit


19 Apr, 2008

1 commit


08 Feb, 2008

2 commits

  • 1) We can shrink sizeof(struct flow_cache_entry) by 8 bytes on 64bit arches.
    2) No need to align these structures to hardware cache lines; this only
    wastes RAM for very little gain.

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • We use a percpu variable named flow_hash_info, which holds 12 bytes.

    It is currently marked as ____cacheline_aligned, which makes the linker
    skip space to properly align this variable.

    Before:
    c065cc90 D per_cpu__softnet_data
    c065cd00 d per_cpu__flow_tables
    c065cd80 d per_cpu__flow_hash_info
    c065ce00 d per_cpu__flow_flush_tasklets
    c065ce14 d per_cpu__rt_cache_stat

    This alignment is quite unproductive; removing it reduces the size of
    percpu data (by 240 bytes on my x86 machine) and improves performance
    (flow_tables & flow_hash_info can share a single cache line).

    After patch:
    c065cc04 D per_cpu__softnet_data
    c065cc4c d per_cpu__flow_tables
    c065cc50 d per_cpu__flow_hash_info
    c065cc5c d per_cpu__flow_flush_tasklets
    c065cc70 d per_cpu__rt_cache_stat

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
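
    The change boils down to dropping the alignment annotation from the
    per-cpu definition so small per-cpu variables can pack together (an
    illustrative 12-byte structure, not the literal diff):

    #include <linux/percpu.h>
    #include <linux/types.h>

    struct flow_percpu_info {               /* three 32-bit fields: 12 bytes */
            int     hash_rnd_recalc;
            u32     hash_rnd;
            int     count;
    };

    /* Before: every per-cpu copy was padded out to a full cache line.
     *
     *      static DEFINE_PER_CPU(struct flow_percpu_info,
     *                            flow_hash_info) ____cacheline_aligned;
     */

    /* After: natural alignment, so flow_tables and flow_hash_info can
     * share a cache line. */
    static DEFINE_PER_CPU(struct flow_percpu_info, flow_hash_info);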
     

29 Jan, 2008

1 commit

  • A lot of code in the kernel initializes timer->function and timer->data
    together with calling init_timer(timer). There is already a helper for
    this, setup_timer(); use it for the networking code.

    The patch is HUGE, but it makes the code 130 lines shorter
    (98 insertions(+), 228 deletions(-)).

    Signed-off-by: Pavel Emelyanov
    Acked-by: Arnaldo Carvalho de Melo
    Signed-off-by: David S. Miller

    Pavel Emelyanov
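
    The shape of the conversion, shown on an illustrative timer using the
    timer API of that era, not a hunk from the patch:

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    static struct timer_list flow_hash_rnd_timer;

    static void flow_cache_new_hashrnd(unsigned long data)
    {
            /* ... pick a new hash seed, re-arm the timer ... */
    }

    static void flow_cache_timer_init(void)
    {
            /* Before: three statements around init_timer().
             *
             *      init_timer(&flow_hash_rnd_timer);
             *      flow_hash_rnd_timer.function = flow_cache_new_hashrnd;
             *      flow_hash_rnd_timer.data     = 0;
             */

            /* After: one call sets function and data together. */
            setup_timer(&flow_hash_rnd_timer, flow_cache_new_hashrnd, 0);

            flow_hash_rnd_timer.expires = jiffies + 10 * HZ;
            add_timer(&flow_hash_rnd_timer);
    }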
     

26 Jan, 2008

1 commit

  • Replace all uses of lock_cpu_hotplug/unlock_cpu_hotplug in the kernel
    with get_online_cpus and put_online_cpus, as the new names highlight
    the refcount semantics of these operations.

    The new API guarantees protection against the cpu-hotplug operation, but
    it doesn't guarantee serialized access to any of the local data
    structures. Hence the changes need to be reviewed.

    In case of pseries_add_processor/pseries_remove_processor, use
    cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
    cpu_present_map there.

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Ingo Molnar

    Gautham R Shenoy
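
    Typical shape of the conversion; the new calls only pin the cpu map and
    do not serialize the caller's own data (illustrative, not from the patch):

    #include <linux/cpu.h>
    #include <linux/cpumask.h>

    static void walk_online_cpus(void)
    {
            int cpu;

            /* Before: lock_cpu_hotplug(); ... unlock_cpu_hotplug(); */

            get_online_cpus();              /* keep cpus from going up/down */
            for_each_online_cpu(cpu) {
                    /* per-cpu work; shared state still needs its own locking */
            }
            put_online_cpus();
    }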
     

24 Oct, 2007

1 commit


20 Jul, 2007

1 commit

  • Slab destructors were no longer supported after Christoph's
    c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They've been
    BUGs for both slab and slub, and slob never supported them
    either.

    This rips out support for the dtor pointer from kmem_cache_create()
    completely and fixes up every single callsite in the kernel (there were
    about 224, not including the slab allocator definitions themselves,
    or the documentation references).

    Signed-off-by: Paul Mundt

    Paul Mundt
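
    What a callsite fixup looks like: the trailing destructor argument simply
    disappears from kmem_cache_create() (slab name, flags and element type
    below are illustrative, not a hunk from the patch):

    #include <linux/slab.h>

    struct flow_cache_entry {
            void *object;                   /* stand-in element type */
    };

    static struct kmem_cache *flow_cachep;

    static void flow_cache_slab_init(void)
    {
            /* Before: a dtor pointer (by then always NULL or a BUG) came last.
             *
             *      flow_cachep = kmem_cache_create("flow_cache",
             *                              sizeof(struct flow_cache_entry),
             *                              0, SLAB_HWCACHE_ALIGN | SLAB_PANIC,
             *                              NULL, NULL);
             */

            /* After: no destructor argument at all. */
            flow_cachep = kmem_cache_create("flow_cache",
                                            sizeof(struct flow_cache_entry),
                                            0, SLAB_HWCACHE_ALIGN | SLAB_PANIC,
                                            NULL);
    }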
     

10 May, 2007

1 commit

  • Since nonboot CPUs are now disabled after tasks and devices have been
    frozen and the CPU hotplug infrastructure is used for this purpose, we need
    special CPU hotplug notifications that will help the CPU-hotplug-aware
    subsystems distinguish normal CPU hotplug events from CPU hotplug events
    related to a system-wide suspend or resume operation in progress. This
    patch introduces such notifications and causes them to be used during
    suspend and resume transitions. It also changes all of the
    CPU-hotplug-aware subsystems to take these notifications into consideration
    (for now they are handled in the same way as the corresponding "normal"
    ones).

    [oleg@tv-sign.ru: cleanups]
    Signed-off-by: Rafael J. Wysocki
    Cc: Gautham R Shenoy
    Cc: Pavel Machek
    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
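
    What "taking the new notifications into consideration" meant for a
    CPU-hotplug-aware user at the time: handle the _FROZEN variants exactly
    like the normal events (an illustrative notifier using the hotplug
    notifier API of that era):

    #include <linux/cpu.h>
    #include <linux/notifier.h>
    #include <linux/init.h>

    static int flow_cache_cpu(struct notifier_block *nfb,
                              unsigned long action, void *hcpu)
    {
            switch (action) {
            case CPU_DEAD:
            case CPU_DEAD_FROZEN:   /* new: same handling during suspend/resume */
                    /* free this cpu's cached entries */
                    break;
            default:
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block flow_cache_cpu_nb = {
            .notifier_call = flow_cache_cpu,
    };

    static int __init flow_cache_hotcpu_init(void)
    {
            register_hotcpu_notifier(&flow_cache_cpu_nb);
            return 0;
    }
    core_initcall(flow_cache_hotcpu_init);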
     

24 Jan, 2007

1 commit

  • When old flow cache entries that are not at the head of their chain
    trigger a transient security error, they get unlinked along with all
    the entries preceding them in the chain. The preceding entries are
    not freed correctly.

    This patch fixes this by simply leaving the entry around. It's based
    on a suggestion by Venkat Yekkirala.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

08 Dec, 2006

3 commits

  • There was lots of #ifdef noise in the kernel due to hotcpu_notifier(fn,
    prio) not correctly marking 'fn' as used in the !HOTPLUG_CPU case, and thus
    generating compiler warnings of unused symbols, hence forcing people to add
    #ifdefs.

    The compiler can skip truly unused functions just fine:

    text data bss dec hex filename
    1624412 728710 3674856 6027978 5bfaca vmlinux.before
    1624412 728710 3674856 6027978 5bfaca vmlinux.after

    [akpm@osdl.org: topology.c fix]
    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Replace all uses of kmem_cache_t with struct kmem_cache.

    The patch was generated using the following script:

    #!/bin/sh
    #
    # Replace one string by another in all the kernel sources.
    #

    set -e

    for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
            quilt add $file
            sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
            mv /tmp/$$ $file
            quilt refresh
    done

    The script was run like this

    sh replace kmem_cache_t "struct kmem_cache"

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • SLAB_ATOMIC is an alias of GFP_ATOMIC

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

12 Oct, 2006

1 commit

  • When a security module is loaded (in this case, SELinux), the
    security_xfrm_policy_lookup() hook can return an access denied permission
    (or other error). We were not handling that correctly, and in fact
    inverting the return logic and propagating a false "ok" back up to
    xfrm_lookup(), which then allowed packets to pass as if they were not
    associated with an xfrm policy.

    The way I was seeing the problem was when connecting via IPsec to a
    confined service on an SELinux box (vsftpd), which did not have the
    appropriate SELinux policy permissions to send packets via IPsec.

    The first SYNACK would be blocked, because of an uncached lookup via
    flow_cache_lookup(), which would fail to resolve an xfrm policy because
    the SELinux policy is checked at that point via the resolver.

    However, retransmitted SYNACKs would then find a cached flow entry with
    a null xfrm policy when calling into flow_cache_lookup(). xfrm_lookup()
    interprets this as the packet not having any associated policy and,
    as in the first case, allows it to pass without transformation.

    The solution presented here is to first ensure that errno values are
    correctly propagated all the way back up through the various call chains
    from security_xfrm_policy_lookup(), and handled correctly.

    Then, flow_cache_lookup() is modified, so that if the policy resolver
    fails (typically a permission denied via the security module), the flow
    cache entry is killed rather than having a null policy assigned (which
    indicates that the packet can pass freely). This also forces any future
    lookups for the same flow to consult the security module (e.g. SELinux)
    for current security policy (rather than, say, caching the error on the
    flow cache entry).

    Signed-off-by: James Morris

    James Morris
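
    A sketch of the second half of the fix: when the resolver fails, the
    entry is removed instead of being cached with a null policy, and the
    errno travels back toward xfrm_lookup() (all names here are illustrative
    stand-ins, not the actual 2006 code):

    #include <linux/err.h>
    #include <linux/list.h>
    #include <linux/slab.h>

    struct flow_entry {
            struct hlist_node       node;
            void                    *policy;  /* NULL used to mean "no policy" */
    };

    static void flow_entry_kill(struct flow_entry *fle)
    {
            hlist_del(&fle->node);          /* unlink from its hash chain */
            kfree(fle);
    }

    static void *flow_resolve(struct flow_entry *fle,
                              int (*resolver)(void **objp))
    {
            void *obj;
            int err;

            err = resolver(&obj);           /* may be -EACCES from SELinux */
            if (err) {
                    /* Do not cache the failure as "no policy": kill the entry
                     * so the next lookup consults the security module again. */
                    flow_entry_kill(fle);
                    return ERR_PTR(err);    /* propagated up to xfrm_lookup() */
            }

            fle->policy = obj;
            return obj;
    }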
     

23 Sep, 2006

2 commits


11 Apr, 2006

1 commit

  • for_each_cpu() actually iterates across all possible CPUs. We've had mistakes
    in the past where people were using for_each_cpu() where they should have been
    iterating across only online or present CPUs. This is inefficient and
    possibly buggy.

    We're renaming for_each_cpu() to for_each_possible_cpu() to avoid this in the
    future.

    This patch replaces for_each_cpu with for_each_possible_cpu under /net.

    Signed-off-by: KAMEZAWA Hiroyuki
    Acked-by: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
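
    The mechanical shape of the conversion, shown on a made-up per-cpu sum;
    the macro still visits every possible cpu, only the name now says so:

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    static DEFINE_PER_CPU(unsigned long, pkt_count);    /* illustrative counter */

    static unsigned long total_pkt_count(void)
    {
            unsigned long sum = 0;
            int cpu;

            /* Before: for_each_cpu(cpu) -- name didn't say which cpus. */
            for_each_possible_cpu(cpu)      /* after: explicitly "possible" */
                    sum += per_cpu(pkt_count, cpu);

            return sum;
    }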
     

10 Apr, 2006

1 commit