20 Sep, 2018

1 commit

  • Rehashing and destroying a large hash table takes a lot of time and
    happens in process context, so it is safe to add cond_resched() to
    rhashtable_rehash_table() and rhashtable_free_and_destroy() (see the
    sketch after this entry).

    Signed-off-by: Eric Dumazet
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller
    (cherry picked from commit ae6da1f503abb5a5081f9f6c4a6881de97830f3e)
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
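
    A minimal sketch of the idea, with illustrative names rather than the
    upstream code: cond_resched() voluntarily yields the CPU between
    buckets, which is only legal because this work runs in process
    context.

    #include <linux/sched.h>

    struct example_table {
            unsigned int size;
            /* ... bucket array ... */
    };

    static void example_teardown(struct example_table *tbl)
    {
            unsigned int i;

            for (i = 0; i < tbl->size; i++) {
                    /* free or rehash every entry chained to bucket i */
                    cond_resched();         /* let other tasks run */
            }
    }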
     

25 Jul, 2018

1 commit

  • [ Upstream commit 107d01f5ba10f4162c38109496607eb197059064 ]

    rhashtable_init() currently does not take into account the user-passed
    min_size parameter unless param->nelem_hint is set as well. As such,
    the default size (number of buckets) will always be HASH_DEFAULT_SIZE
    even if the smallest allowed size is larger than that. Remediate this
    by unconditionally calling into rounded_hashtable_size() and handling
    things accordingly.

    Signed-off-by: Davidlohr Bueso
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Davidlohr Bueso
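
    A sketch of the sizing rule the fix establishes (the helper name and
    the 4/3 load factor are illustrative; HASH_DEFAULT_SIZE is the
    64-bucket default in lib/rhashtable.c): min_size is honoured even
    when no nelem_hint is given.

    #include <linux/kernel.h>
    #include <linux/log2.h>

    #define EXAMPLE_DEFAULT_SIZE    64      /* mirrors HASH_DEFAULT_SIZE */

    static size_t example_initial_buckets(unsigned int nelem_hint,
                                          unsigned int min_size)
    {
            size_t size = EXAMPLE_DEFAULT_SIZE;

            if (nelem_hint)
                    size = roundup_pow_of_two(nelem_hint * 4 / 3);

            /* The point of the fix: clamp to min_size unconditionally. */
            return max_t(size_t, size, min_size);
    }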
     

01 Apr, 2018

1 commit

  • [ Upstream commit d3dcf8eb615537526bd42ff27a081d46d337816e ]

    When inserting duplicate objects (those with the same key), the
    current rhlist implementation messes up the chain pointers by
    updating the bucket pointer instead of the previous node's next
    pointer to point at the newly inserted node. This causes missing
    elements on removal and traversal.

    Fix that by properly updating the pprev pointer to point to the
    correct rhash_head next pointer.

    Issue: 1241076
    Change-Id: I86b2c140bcb4aeb10b70a72a267ff590bb2b17e7
    Fixes: ca26893f05e8 ('rhashtable: Add rhlist interface')
    Signed-off-by: Paul Blakey
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Paul Blakey
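
    The essence of the fix is the classic pprev idiom: keep a pointer to
    the previous link so a mid-chain insert updates that link rather than
    the bucket head. A simplified, non-RCU sketch (not the rhlist code
    itself):

    struct example_node {
            struct example_node *next;
            int key;
    };

    static void example_insert(struct example_node **bucket,
                               struct example_node *new)
    {
            struct example_node **pprev = bucket;
            struct example_node *pos;

            /* Walk to the insertion point, remembering the previous link. */
            for (pos = *bucket; pos && pos->key != new->key; pos = pos->next)
                    pprev = &pos->next;

            new->next = pos;
            *pprev = new;   /* update the previous link, not the bucket head */
    }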
     

20 Sep, 2017

1 commit

  • Clarify that rhashtable_walk_{stop,start} will not reset the iterator
    to the beginning of the hash table. Confusion between
    rhashtable_walk_enter and rhashtable_walk_start has already led to a
    bug (a usage sketch follows this entry).

    Signed-off-by: Andreas Gruenbacher
    Signed-off-by: David S. Miller

    Andreas Gruenbacher
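
    For reference, a typical walk looks like the sketch below (depending
    on kernel version rhashtable_walk_start() may return an error that a
    real caller should check). A stop/start pair merely drops and
    re-takes the RCU read lock and resumes where the iterator left off;
    only enter/exit bracket the whole walk.

    #include <linux/err.h>
    #include <linux/rhashtable.h>

    static void example_dump(struct rhashtable *ht)
    {
            struct rhashtable_iter iter;
            void *obj;

            rhashtable_walk_enter(ht, &iter);
            rhashtable_walk_start(&iter);

            while ((obj = rhashtable_walk_next(&iter)) != NULL) {
                    if (IS_ERR(obj)) {
                            if (PTR_ERR(obj) == -EAGAIN)
                                    continue;       /* table was resized */
                            break;
                    }
                    /* inspect obj; sleeping requires a stop/start pair */
            }

            rhashtable_walk_stop(&iter);
            rhashtable_walk_exit(&iter);
    }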
     

16 Jul, 2017

1 commit

  • Pull random updates from Ted Ts'o:
    "Add wait_for_random_bytes() and get_random_*_wait() functions so that
    callers can more safely get random bytes if they can block until the
    CRNG is initialized.

    Also print a warning if get_random_*() is called before the CRNG is
    initialized. By default, only one single-line warning will be printed
    per boot. If CONFIG_WARN_ALL_UNSEEDED_RANDOM is defined, then a
    warning will be printed for each function which tries to get random
    bytes before the CRNG is initialized. This can get spammy for certain
    architecture types, so it is not enabled by default"

    * tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
    random: reorder READ_ONCE() in get_random_uXX
    random: suppress spammy warnings about unseeded randomness
    random: warn when kernel uses unseeded randomness
    net/route: use get_random_int for random counter
    net/neighbor: use get_random_u32 for 32-bit hash random
    rhashtable: use get_random_u32 for hash_rnd
    ceph: ensure RNG is seeded before using
    iscsi: ensure RNG is seeded before use
    cifs: use get_random_u32 for 32-bit lock random
    random: add get_random_{bytes,u32,u64,int,long,once}_wait family
    random: add wait_for_random_bytes() API

    Linus Torvalds
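
    A minimal sketch of the calling pattern the new API enables, assuming
    a sleepable context: wait_for_random_bytes() blocks until the CRNG is
    ready, after which the plain getters are safe to use.

    #include <linux/random.h>
    #include <linux/types.h>

    static int example_pick_seed(u32 *seed)
    {
            int err;

            err = wait_for_random_bytes();  /* may sleep; <0 on signal */
            if (err)
                    return err;

            *seed = get_random_u32();
            return 0;
    }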
     

11 Jul, 2017

1 commit

  • bucket_table_alloc() can currently be called with GFP_KERNEL or
    GFP_ATOMIC. For the former we basically have an open-coded kvzalloc()
    while the latter only uses kzalloc(). Let's simplify the code a bit
    by dropping the open-coded path and replacing it with kvzalloc().

    Link: http://lkml.kernel.org/r/20170531155145.17111-3-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Cc: Thomas Graf
    Cc: Herbert Xu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
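
    Roughly what the simplification amounts to, as a sketch rather than
    the exact upstream hunk: kvzalloc() does the kmalloc-then-vmalloc
    dance itself, while atomic callers stay on plain kzalloc() because
    the vmalloc fallback can sleep.

    #include <linux/mm.h>
    #include <linux/slab.h>

    static void *example_table_alloc(size_t size, gfp_t gfp)
    {
            if (gfp != GFP_KERNEL)
                    return kzalloc(size, gfp | __GFP_NOWARN | __GFP_NORETRY);

            return kvzalloc(size, GFP_KERNEL);
    }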
     

20 Jun, 2017

1 commit


09 May, 2017

1 commit

  • The alloc_bucket_locks allocation pattern is quite unusual. We prefer
    vmalloc when CONFIG_NUMA is enabled. The rationale is that vmalloc
    will respect the memory policy of the current process and so the
    backing memory will get distributed over multiple nodes if the
    requester is configured properly. At least that is the intention; in
    reality the rhashtable is shrunk and expanded from a kernel worker,
    so no mempolicy can be assumed.

    Let's just simplify the code and use the kvmalloc helper, which
    transparently falls back from kmalloc to vmalloc when the caller is
    allowed to block, and uses plain kmalloc with the given flags
    otherwise.

    Link: http://lkml.kernel.org/r/20170306103032.2540-4-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Acked-by: Vlastimil Babka
    Cc: Tom Herbert
    Cc: Eric Dumazet
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
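
    The same pattern for the lock array, sketched with kvmalloc_array()
    (illustrative, not the upstream hunk): only sleepable callers get the
    vmalloc fallback.

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    static spinlock_t *example_alloc_locks(unsigned int nr, gfp_t gfp)
    {
            if (gfp == GFP_KERNEL)
                    return kvmalloc_array(nr, sizeof(spinlock_t), gfp);

            return kmalloc_array(nr, sizeof(spinlock_t), gfp | __GFP_NOWARN);
    }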
     

02 May, 2017

1 commit

  • By using smaller datatypes this (rather large) struct shrinks considerably
    (80 -> 48 bytes on x86_64).

    As this is embedded in other structs, this also reduces the size of
    several others, e.g. cls_fl_head or nft_hash.

    Signed-off-by: Florian Westphal
    Signed-off-by: David S. Miller

    Florian Westphal
     

28 Apr, 2017

1 commit

  • The commit 6d684e54690c ("rhashtable: Cap total number of entries
    to 2^31") breaks rhashtable users that do not set max_size. This
    is because when max_size is zero max_elems is also incorrectly set
    to zero instead of 2^31.

    This patch fixes it by only lowering max_elems when max_size is not
    zero.

    Fixes: 6d684e54690c ("rhashtable: Cap total number of entries to 2^31")
    Reported-by: Florian Fainelli
    Reported-by: kernel test robot
    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
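
    A sketch of the corrected setup described above (field names follow
    struct rhashtable, but treat the exact shape as illustrative): the
    2^31 default cap is only lowered when a real max_size was supplied.

    #include <linux/log2.h>
    #include <linux/rhashtable.h>

    static void example_set_caps(struct rhashtable *ht,
                                 const struct rhashtable_params *params)
    {
            ht->max_elems = 1u << 31;       /* default cap */

            if (params->max_size) {
                    ht->p.max_size = rounddown_pow_of_two(params->max_size);
                    if (ht->p.max_size < ht->max_elems / 2)
                            ht->max_elems = ht->p.max_size * 2;
            }
    }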
     

27 Apr, 2017

2 commits

  • When max_size is not set, or if it is set to a sufficiently large
    value, the nelems counter can overflow. This would cause havoc
    with the automatic shrinking as it would then attempt to fit a
    huge number of entries into a tiny hash table.

    This patch fixes this by adding max_elems to struct rhashtable
    to cap the number of elements. This is set to 2^31 as nelems is
    not a precise count. This is sufficiently smaller than UINT_MAX
    that it should be safe.

    When max_size is set, max_elems will be lowered to at most twice
    max_size, as is the status quo.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • There are no users in the tree; insecure_max_entries is always set
    to ht->p.max_size * 2 in rhashtable_init().

    Replace the only spot that uses it with a ht->p.max_size check.

    Signed-off-by: Florian Westphal
    Signed-off-by: David S. Miller

    Florian Westphal
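
    The cap added by the first of the two commits above boils down to a
    cheap check on the insert path; a sketch of such a helper (the
    upstream helper is rht_grow_above_max()):

    #include <linux/atomic.h>
    #include <linux/rhashtable.h>

    static inline bool example_above_max(const struct rhashtable *ht)
    {
            /* nelems is approximate, hence the generous 2^31 cap */
            return atomic_read(&ht->nelems) >= ht->max_elems;
    }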
     

19 Apr, 2017

1 commit

  • commit 83e7e4ce9e93c3 ("mac80211: Use rhltable instead of rhashtable")
    removed the last user that made use of 'insecure_elasticity' parameter,
    i.e. the default of 16 is used everywhere.

    Replace it with a constant.

    Signed-off-by: Florian Westphal
    Signed-off-by: David S. Miller

    Florian Westphal
     

02 Mar, 2017

1 commit


27 Feb, 2017

2 commits


18 Feb, 2017

1 commit

  • This patch adds code that handles GFP_ATOMIC kmalloc failure on
    insertion. As we cannot use vmalloc, we solve it by making our
    hash table nested. That is, we allocate single pages at each level
    and reach our desired table size by nesting them.

    When a nested table is created, only a single page is allocated
    at the top-level. Lower levels are allocated on demand during
    insertion. Therefore for each insertion to succeed, only two
    (non-consecutive) pages are needed.

    After a nested table is created, a rehash will be scheduled in
    order to switch to a vmalloced table as soon as possible. Also,
    the rehash code will never rehash into a nested table. If we
    detect a nested table during a rehash, the rehash will be aborted
    and a new rehash will be scheduled.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
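
    Conceptually the nested layout is a small radix tree of page-sized
    pointer arrays; a sketch of the shape (the real union in
    lib/rhashtable.c is called nested_table):

    #include <linux/rhashtable.h>

    /* Each level is one page of these unions: either a pointer down to
     * the next level, allocated on demand at insert time, or, at the
     * leaves, an ordinary hash bucket. */
    union example_nested_table {
            union example_nested_table __rcu *table;
            struct rhash_head __rcu *bucket;
    };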
     

20 Sep, 2016

1 commit

  • The insecure_elasticity setting is an ugly wart brought out by
    users who need to insert duplicate objects (that is, distinct
    objects with identical keys) into the same table.

    In fact, those users have a much bigger problem. Once those
    duplicate objects are inserted, they don't have an interface to
    find them (unless you count the walker interface which walks
    over the entire table).

    Some users have resorted to doing a manual walk over the hash
    table which is of course broken because they don't handle the
    potential existence of multiple hash tables. The result is that
    they will break sporadically when they encounter a hash table
    resize/rehash.

    This patch provides a way out for those users, at the expense
    of an extra pointer per object. Essentially each object is now
    a list of objects carrying the same key. The hash table will
    only see the lists so nothing changes as far as rhashtable is
    concerned.

    To use this new interface, you need to insert a struct rhlist_head
    into your objects instead of struct rhash_head. While the hash
    table is unchanged, for type-safety you'll need to use struct
    rhltable instead of struct rhashtable. All the existing interfaces
    have been duplicated for rhlist, including the hash table walker.

    One missing feature is nulls marking because AFAIK the only potential
    user of it does not need duplicate objects. Should anyone need
    this it shouldn't be too hard to add.

    Signed-off-by: Herbert Xu
    Acked-by: Thomas Graf
    Signed-off-by: David S. Miller

    Herbert Xu
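
    A minimal usage sketch of the new interface (the struct, table and
    field names are invented for the example): the object embeds a
    struct rhlist_head, the table is a struct rhltable, and a lookup
    yields the whole list of duplicates for a key.

    #include <linux/printk.h>
    #include <linux/rhashtable.h>

    struct flow {
            u32 key;
            struct rhlist_head node;        /* instead of rhash_head */
    };

    static const struct rhashtable_params flow_params = {
            .key_len        = sizeof(u32),
            .key_offset     = offsetof(struct flow, key),
            .head_offset    = offsetof(struct flow, node),
    };

    static struct rhltable flow_table;      /* instead of rhashtable */

    static int flow_table_init(void)
    {
            return rhltable_init(&flow_table, &flow_params);
    }

    static int flow_add(struct flow *f)
    {
            return rhltable_insert(&flow_table, &f->node, flow_params);
    }

    static void flow_visit_all(u32 key)
    {
            struct rhlist_head *list, *pos;
            struct flow *f;

            rcu_read_lock();
            list = rhltable_lookup(&flow_table, &key, flow_params);
            rhl_for_each_entry_rcu(f, pos, list, node)
                    pr_debug("duplicate %p for key %u\n", f, key);
            rcu_read_unlock();
    }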
     

07 Sep, 2016

1 commit

  • Pablo Neira Ayuso says:

    ====================
    Netfilter updates for net-next

    The following patchset contains Netfilter updates for your net-next
    tree. Most relevant updates are the removal of per-conntrack timers to
    use a workqueue/garbage collection approach instead from Florian
    Westphal, the hash and numgen expression for nf_tables from Laura
    Garcia, updates on nf_tables hash set to honor the NLM_F_EXCL flag,
    removal of ip_conntrack sysctl and many other incremental updates on our
    Netfilter codebase.

    More specifically, they are:

    1) Retrieve only 4 bytes to fetch ports in case of non-linear skb
    transport area in dccp, sctp, tcp, udp and udplite protocol
    conntrackers, from Gao Feng.

    2) Missing whitespace on error message in physdev match, from Hangbin Liu.

    3) Skip redundant IPv4 checksum calculation in nf_dup_ipv4, from Liping Zhang.

    4) Add nf_ct_expires() helper function and use it, from Florian Westphal.

    5) Replace opencoded nf_ct_kill() call in IPVS conntrack support, also
    from Florian.

    6) Rename nf_tables set implementation to nft_set_{name}.c

    7) Introduce the hash expression to allow arbitrary hashing of selector
    concatenations, from Laura Garcia Liebana.

    8) Remove ip_conntrack sysctl backward compatibility code, this code has
    been around for long time already, and we have two interfaces to do
    this already: nf_conntrack sysctl and ctnetlink.

    9) Use nf_conntrack_get_ht() helper function whenever possible, instead
    of opencoding fetch of hashtable pointer and size, patch from Liping Zhang.

    10) Add quota expression for nf_tables.

    11) Add number generator expression for nf_tables, this supports
    incremental and random generators that can be combined with maps,
    very useful for load balancing purpose, again from Laura Garcia Liebana.

    12) Fix a typo in a debug message in FTP conntrack helper, from Colin Ian King.

    13) Introduce a nft_chain_parse_hook() helper function to parse chain hook
    configuration, this is used by a follow up patch to perform better chain
    update validation.

    14) Add rhashtable_lookup_get_insert_key() to rhashtable and use it from the
    nft_set_hash implementation to honor the NLM_F_EXCL flag.

    15) Missing nulls check in nf_conntrack from nf_conntrack_tuple_taken(),
    patch from Florian Westphal.

    16) Don't use the DYING bit to know if the conntrack event has been already
    delivered, instead a state variable to track event re-delivery
    states, also from Florian.

    17) Remove the per-conntrack timer, use the workqueue approach that was
    discussed during the NFWS, from Florian Westphal.

    18) Use the netlink conntrack table dump path to kill stale entries,
    again from Florian.

    19) Add a garbage collector to get rid of stale conntracks, from
    Florian.

    20) Reschedule garbage collector if eviction rate is high.

    21) Get rid of the __nf_ct_kill_acct() helper.

    22) Use ARPHRD_ETHER instead of hardcoded 1 from ARP logger.

    23) Make nf_log_set() interface assertive on unsupported families.
    ====================

    Signed-off-by: David S. Miller

    David S. Miller
     

30 Aug, 2016

1 commit


27 Aug, 2016

1 commit


26 Aug, 2016

1 commit

  • This patch modifies __rhashtable_insert_fast() so it returns the
    existing object that clashes with the one that you want to insert.
    In case the object is successfully inserted, NULL is returned.
    Otherwise, you get an error via ERR_PTR().

    This patch adapts the existing callers of __rhashtable_insert_fast()
    so they handle this new logic, and it adds a new
    rhashtable_lookup_get_insert_key() interface to fetch this existing
    object.

    nf_tables needs this change to improve handling of EEXIST cases via
    honoring the NLM_F_EXCL flag and by checking if the data part of the
    mapping matches what we have.

    Cc: Herbert Xu
    Cc: Thomas Graf
    Signed-off-by: Pablo Neira Ayuso
    Acked-by: Herbert Xu

    Pablo Neira Ayuso
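
    A sketch of how a caller can honour an exclusive-insert flag with the
    new helper; struct my_elem, my_params and the update step are made up
    for the example. On success the helper returns NULL, on a key clash
    it returns the existing object, and genuine failures come back as
    ERR_PTR().

    #include <linux/err.h>
    #include <linux/rhashtable.h>

    struct my_elem {
            u32 key;
            u32 data;
            struct rhash_head node;
    };

    static const struct rhashtable_params my_params = {
            .key_len        = sizeof(u32),
            .key_offset     = offsetof(struct my_elem, key),
            .head_offset    = offsetof(struct my_elem, node),
    };

    static int example_insert(struct rhashtable *ht, struct my_elem *new,
                              bool excl)
    {
            struct my_elem *old;

            old = rhashtable_lookup_get_insert_key(ht, &new->key,
                                                   &new->node, my_params);
            if (IS_ERR(old))
                    return PTR_ERR(old);    /* e.g. -ENOMEM */
            if (!old)
                    return 0;               /* inserted, no clash */
            if (excl)
                    return -EEXIST;         /* NLM_F_EXCL-style semantics */

            old->data = new->data;          /* stand-in for an update */
            return 0;
    }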
     

20 Aug, 2016

1 commit

  • The commit 8f6fd83c6c5ec66a4a70c728535ddcdfef4f3697 ("rhashtable:
    accept GFP flags in rhashtable_walk_init") added a GFP flag argument
    to rhashtable_walk_init because some users wish to use the walker
    in an unsleepable context.

    In fact we don't need to allocate memory in rhashtable_walk_init
    at all. The walker is always paired with an iterator so we could
    just stash ourselves there.

    This patch does that by introducing a new enter function to replace
    the existing init function. This way we don't have to churn all
    the existing users again.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

16 Aug, 2016

1 commit

  • I got this:

    ================================================================================
    UBSAN: Undefined behaviour in ./include/linux/log2.h:63:13
    shift exponent 64 is too large for 64-bit type 'long unsigned int'
    CPU: 1 PID: 721 Comm: kworker/1:1 Not tainted 4.8.0-rc1+ #87
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.3-0-ge2fc41e-prebuilt.qemu-project.org 04/01/2014
    Workqueue: events rht_deferred_worker
    0000000000000000 ffff88011661f8d8 ffffffff82344f50 0000000041b58ab3
    ffffffff84f98000 ffffffff82344ea4 ffff88011661f900 ffff88011661f8b0
    0000000000000001 ffff88011661f6b8 dffffc0000000000 ffffffff867f7640
    Call Trace:
    [] dump_stack+0xac/0xfc
    [] ? _atomic_dec_and_lock+0xc4/0xc4
    [] ubsan_epilogue+0xd/0x8a
    [] __ubsan_handle_shift_out_of_bounds+0x255/0x29a
    [] ? __ubsan_handle_out_of_bounds+0x180/0x180
    [] ? nl80211_req_set_reg+0x256/0x2f0
    [] ? print_context_stack+0x8a/0x160
    [] ? amd_pmu_reset+0x341/0x380
    [] rht_deferred_worker+0x1618/0x1790
    [] ? rht_deferred_worker+0x1618/0x1790
    [] ? rhashtable_jhash2+0x370/0x370
    [] ? process_one_work+0x6fd/0x1970
    [] process_one_work+0x79f/0x1970
    [] ? process_one_work+0x6fd/0x1970
    [] ? try_to_grab_pending+0x4c0/0x4c0
    [] ? worker_thread+0x1c4/0x1340
    [] worker_thread+0x55f/0x1340
    [] ? __schedule+0x4df/0x1d40
    [] ? process_one_work+0x1970/0x1970
    [] ? process_one_work+0x1970/0x1970
    [] kthread+0x237/0x390
    [] ? __kthread_parkme+0x280/0x280
    [] ? _raw_spin_unlock_irq+0x33/0x50
    [] ret_from_fork+0x1f/0x40
    [] ? __kthread_parkme+0x280/0x280
    ================================================================================

    roundup_pow_of_two() is undefined when called with an argument of 0, so
    let's avoid the call and just fall back to ht->p.min_size (which should
    never be smaller than HASH_MIN_SIZE).

    Cc: Herbert Xu
    Signed-off-by: Vegard Nossum
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller

    Vegard Nossum
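
    The guard itself is tiny; a sketch along the lines described above,
    simplified from the shrink path (names illustrative):

    #include <linux/log2.h>

    static unsigned int example_shrink_size(unsigned int nelems,
                                            unsigned int min_size)
    {
            unsigned int size = 0;

            if (nelems) /* roundup_pow_of_two(0) is undefined behaviour */
                    size = roundup_pow_of_two(nelems * 3 / 2);
            if (size < min_size)
                    size = min_size;

            return size;
    }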
     

15 Aug, 2016

1 commit

  • Sander reports following splat after netfilter nat bysrc table got
    converted to rhashtable:

    swapper/0: page allocation failure: order:3, mode:0x2084020(GFP_ATOMIC|__GFP_COMP)
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.8.0-rc1 [..]
    [] warn_alloc_failed+0xdd/0x140
    [] __alloc_pages_nodemask+0x3e1/0xcf0
    [] alloc_pages_current+0x8d/0x110
    [] kmalloc_order+0x1f/0x70
    [] __kmalloc+0x129/0x140
    [] bucket_table_alloc+0xc1/0x1d0
    [] rhashtable_insert_rehash+0x5d/0xe0
    [] nf_nat_setup_info+0x2ef/0x400

    The failure happens when allocating the spinlock array.
    Even with GFP_KERNEL it's unlikely for such a large allocation
    to succeed.

    Thomas Graf pointed me at inet_ehash_locks_alloc(), so in addition
    to adding NOWARN for atomic allocations this also makes the bucket-array
    sizing more conservative.

    In commit 095dc8e0c3686 ("tcp: fix/cleanup inet_ehash_locks_alloc()"),
    Eric Dumazet says: "Budget 2 cache lines per cpu worth of 'spinlocks'".
    IOW, consider the size needed by a single spinlock when determining
    the number of locks per cpu. So with 64 bytes per cacheline and 4
    bytes per spinlock this gives 32 locks per cpu.

    Resulting size of the lock-array (sizeof(spinlock) == 4):

    cpus:    1    2    4    8   16   32   64
    old:    1k   1k   4k   8k  16k  16k  16k
    new:   128  256  512   1k   2k   4k   8k

    An 8k allocation should have a decent chance of success even
    with GFP_ATOMIC, and should not fail with GFP_KERNEL.

    With 72-byte spinlock (LOCKDEP):
    cpus:    1    2
    old:    9k  18k
    new:   ~2k  ~4k

    Reported-by: Sander Eikelenboom
    Suggested-by: Thomas Graf
    Signed-off-by: Florian Westphal
    Signed-off-by: David S. Miller

    Florian Westphal
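
    The sizing arithmetic from the commit message as a sketch, assuming
    64-byte cache lines and 4-byte spinlocks; the cap against the bucket
    count is illustrative.

    #include <linux/cache.h>
    #include <linux/kernel.h>
    #include <linux/log2.h>
    #include <linux/spinlock.h>

    static unsigned int example_nr_bucket_locks(unsigned int nr_buckets,
                                                unsigned int nr_cpus)
    {
            /* two cache lines worth of spinlocks per cpu: 2 * 64 / 4 = 32 */
            unsigned int per_cpu = 2 * L1_CACHE_BYTES / sizeof(spinlock_t);
            unsigned int nr = roundup_pow_of_two(nr_cpus * per_cpu);

            return min(nr, nr_buckets);
    }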
     

05 Apr, 2016

1 commit

  • In certain cases, the 802.11 mesh pathtable code wants to
    iterate over all of the entries in the forwarding table from
    the receive path, which is inside an RCU read-side critical
    section. Enable walks inside atomic sections by allowing
    GFP_ATOMIC allocations for the walker state.

    Change all existing callsites to pass in GFP_KERNEL.

    Acked-by: Thomas Graf
    Signed-off-by: Bob Copeland
    [also adjust gfs2/glock.c and rhashtable tests]
    Signed-off-by: Johannes Berg

    Bob Copeland
     

01 Jan, 2016

1 commit


19 Dec, 2015

1 commit

  • The commit c6ff5268293ef98e48a99597e765ffc417e39fa5 ("rhashtable:
    Fix walker list corruption") causes a suspicious RCU usage warning
    because we no longer hold ht->mutex when we dereference ht->tbl.

    However, this is a false positive because we now hold ht->lock
    which also guarantees that ht->tbl won't disappear from under us.

    This patch kills the warning by using rcu_dereference_protected.

    Reported-by: kernel test robot
    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
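
    The pattern in question, sketched generically with invented struct
    names: when a spinlock rather than the RCU read lock protects the
    pointer, rcu_dereference_protected() with a matching lockdep
    condition documents that and silences the false positive.

    #include <linux/lockdep.h>
    #include <linux/rcupdate.h>
    #include <linux/spinlock.h>

    struct example_tbl;

    struct example_ht {
            spinlock_t lock;
            struct example_tbl __rcu *tbl;
    };

    /* Caller must hold ht->lock, which also keeps ht->tbl alive. */
    static struct example_tbl *example_get_tbl(struct example_ht *ht)
    {
            return rcu_dereference_protected(ht->tbl,
                                             lockdep_is_held(&ht->lock));
    }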
     

18 Dec, 2015

1 commit


17 Dec, 2015

1 commit

  • The commit ba7c95ea3870fe7b847466d39a049ab6f156aa2c ("rhashtable:
    Fix sleeping inside RCU critical section in walk_stop") introduced
    a new spinlock for the walker list. However, it did not convert
    all existing users of the list over to the new spin lock. Some
    continued to use the old mutex for this purpose. This obviously
    led to corruption of the list.

    The fix is to use the spin lock everywhere where we touch the list.

    This also allows us to do rcu_read_lock before we take the lock in
    rhashtable_walk_start. With the old mutex this would've deadlocked
    but it's safe with the new spin lock.

    Fixes: ba7c95ea3870 ("rhashtable: Fix sleeping inside RCU...")
    Reported-by: Colin Ian King
    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

16 Dec, 2015

1 commit

  • William Hua wrote:
    >
    > I wasn't aware there was an enforced minimum size. I simply set the
    > nelem_hint in the rhashtable_params struct to 1, expecting it to grow as
    > needed. This caused a segfault afterwards when trying to insert an
    > element.

    OK we're doing the size computation before we enforce the limit
    on min_size.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     

09 Dec, 2015

1 commit

  • The patch 9497df88ab5567daa001829051c5f87161a81ff0 ("rhashtable:
    Fix reader/rehash race") added a pair of barriers. In fact the
    wmb is superfluous because every subsequent write to the old or
    new hash table uses rcu_assign_pointer, which itself carries a
    full barrier prior to the assignment.

    Therefore we may remove the explicit wmb.

    Signed-off-by: Herbert Xu
    Acked-by: Thomas Graf
    Signed-off-by: David S. Miller

    Herbert Xu
     

06 Dec, 2015

1 commit


05 Dec, 2015

2 commits

  • When an rhashtable user pounds rhashtable hard with back-to-back
    insertions we may end up growing the table in GFP_ATOMIC context.
    Unfortunately when the table reaches a certain size this often
    fails because we don't have enough physically contiguous pages
    to hold the new table.

    Eric Dumazet suggested (and in fact wrote this patch) using
    __vmalloc instead which can be used in GFP_ATOMIC context.

    Reported-by: Phil Sutter
    Suggested-by: Eric Dumazet
    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • Thomas and Phil observed that under stress rhashtable insertion
    sometimes failed with EBUSY, even though this error should only
    ever be seen when we're under attack and our hash chain length
    has grown to an unacceptable level, even after a rehash.

    It turns out that the logic for detecting whether there is an
    existing rehash is faulty. In particular, when two threads both
    try to grow the same table at the same time, one of them may see
    the newly grown table and thus erroneously conclude that it had
    been rehashed. This is what leads to the EBUSY error.

    This patch fixes this by remembering the current last table we
    used during insertion so that rhashtable_insert_rehash can detect
    when another thread has also done a resize/rehash. When this is
    detected we will give up our resize/rehash and simply retry the
    insertion with the new table.

    Reported-by: Thomas Graf
    Reported-by: Phil Sutter
    Signed-off-by: Herbert Xu
    Tested-by: Phil Sutter
    Signed-off-by: David S. Miller

    Herbert Xu
     

23 Sep, 2015

1 commit

  • rhashtable_rehash_one() uses complex logic to update the entry->next
    field; after INIT_RHT_NULLS_HEAD and NULLS_MARKER expansion it
    amounts to:

    entry->next = 1 | ((base + off) << 1)

    This can be compiled along the lines of:

    entry->next = base + off
    entry->next |= 1

    Which will break concurrent readers.

    NULLS value recomputation is not needed here, so just remove
    the complex logic.

    The data race was found with KernelThreadSanitizer (KTSAN).

    Signed-off-by: Dmitry Vyukov
    Acked-by: Eric Dumazet
    Acked-by: Thomas Graf
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller

    Dmitriy Vyukov
     

09 Jul, 2015

1 commit

  • If rhashtable_walk_next detects a resize operation in progress, it
    jumps to the new table and continues walking that one. But it fails
    to drop the reference to its current item, so it continues traversing
    the new table's bucket into which the current item is sorted and,
    after reaching that bucket's end, moves on to the new table's second
    bucket instead of the first one, thereby potentially missing items.

    This fixes the rhashtable runtime test for me. Bug probably introduced
    by Herbert Xu's patch eddee5ba ("rhashtable: Fix walker behaviour during
    rehash") although not explicitly tested.

    Fixes: eddee5ba ("rhashtable: Fix walker behaviour during rehash")
    Signed-off-by: Phil Sutter
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller

    Phil Sutter
     

09 Jun, 2015

1 commit


07 Jun, 2015

1 commit


23 May, 2015

1 commit

  • Conflicts:
    drivers/net/ethernet/cadence/macb.c
    drivers/net/phy/phy.c
    include/linux/skbuff.h
    net/ipv4/tcp.c
    net/switchdev/switchdev.c

    Switchdev was a case of RTNH_F_{EXTERNAL --> OFFLOAD}
    renaming overlapping with net-next changes of various
    sorts.

    phy.c was a case of two changes, one adding a local
    variable to a function whilst the second was removing
    one.

    tcp.c overlapped a deadlock fix with the addition of new tcp_info
    statistic values.

    macb.c involved the addition of two zyncq device entries.

    skbuff.h involved adding back ipv4_daddr to nf_bridge_info
    whilst net-next changes put two other existing members of
    that struct into a union.

    Signed-off-by: David S. Miller

    David S. Miller