30 May, 2018

1 commit


24 Nov, 2017

1 commit

  • commit 135bd1a230bb69a68c9808a7d25467318900b80a upstream.

    The pending-callbacks check in rcu_prepare_for_idle() is backwards.
    It should accelerate if there are pending callbacks, but the check
    rather uselessly accelerates only if there are no callbacks. This commit
    therefore inverts this check.
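
    The shape of the fix, as a condensed sketch (loop and locking details of
    rcu_prepare_for_idle() are omitted; rcu_segcblist_pend_cbs() comes from the
    callback-list abstraction named in the Fixes tag below):

    for_each_rcu_flavor(rsp) {
            rdp = this_cpu_ptr(rsp->rda);
            if (!rcu_segcblist_pend_cbs(&rdp->cblist))      /* check now inverted */
                    continue;       /* No pending callbacks: skip acceleration. */
            /* ...accelerate rdp's callbacks so the next grace period covers them... */
    }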

    Fixes: 15fecf89e46a ("srcu: Abstract multi-tail callback list handling")
    Signed-off-by: Neeraj Upadhyay
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Greg Kroah-Hartman

    Neeraj Upadhyay
     

09 Sep, 2017

1 commit

  • First, the number of CPUs can't be a negative number.

    Second, different signedness leads to suboptimal code in the following
    cases:

    1)
    kmalloc(nr_cpu_ids * sizeof(X));

    "int" has to be sign extended to size_t.

    2)
    while (*pos < nr_cpu_ids)       /* pos is a pointer to loff_t */

    MOVSXD is 1 byte longer than the plain MOV.

    Other cases exist as well. Basically, the compiler is told that nr_cpu_ids
    can't be negative, which can't be deduced if it is "int".
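
    For illustration, with the declaration changed (struct foo and buf are just
    placeholders here):

    extern unsigned int nr_cpu_ids;         /* was: extern int nr_cpu_ids; */

    buf = kmalloc(nr_cpu_ids * sizeof(struct foo), GFP_KERNEL);
    /* Zero-extending an unsigned int to size_t is free on x86-64 (a 32-bit MOV
       already clears the upper bits), so no MOVSXD is emitted. */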

    Code savings on allyesconfig kernel: -3KB

    add/remove: 0/0 grow/shrink: 25/264 up/down: 261/-3631 (-3370)
    function old new delta
    coretemp_cpu_online 450 512 +62
    rcu_init_one 1234 1272 +38
    pci_device_probe 374 399 +25

    ...

    pgdat_reclaimable_pages 628 556 -72
    select_fallback_rq 446 369 -77
    task_numa_find_cpu 1923 1807 -116

    Link: http://lkml.kernel.org/r/20170819114959.GA30580@avx2
    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

17 Aug, 2017

4 commits

  • …isc.2017.08.17a', 'spin_unlock_wait_no.2017.08.17a', 'srcu.2017.07.27c' and 'torture.2017.07.24c' into HEAD

    doc.2017.08.17a: Documentation updates.
    fixes.2017.08.17a: RCU fixes.
    hotplug.2017.07.25b: CPU-hotplug updates.
    misc.2017.08.17a: Miscellaneous fixes outside of RCU (give or take conflicts).
    spin_unlock_wait_no.2017.08.17a: Remove spin_unlock_wait().
    srcu.2017.07.27c: SRCU updates.
    torture.2017.07.24c: Torture-test updates.

    Paul E. McKenney
     
  • This commit adds assertions verifying the consistency of the rcu_node
    structure's ->blkd_tasks list and its ->gp_tasks, ->exp_tasks, and
    ->boost_tasks pointers. In particular, the ->blkd_tasks lists must be
    empty except for leaf rcu_node structures.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • There is currently event tracing to track when a task is preempted
    within a preemptible RCU read-side critical section, and also when that
    task subsequently reaches its outermost rcu_read_unlock(), but none
    indicating when a new grace period starts when that grace period must
    wait on pre-existing readers that have been preempted at least once
    since the beginning of their current RCU read-side critical sections.

    This commit therefore adds an event trace at grace-period start in
    the case where there are such readers. Note that only the first
    reader in the list is traced.

    Signed-off-by: Paul E. McKenney
    Acked-by: Steven Rostedt (VMware)

    Paul E. McKenney
     
  • Strings used in event tracing need to be specially handled, for example,
    using the TPS() macro. Without the TPS() macro, although output looks
    fine from within a running kernel, extracting traces from a crash dump
    produces garbage instead of strings. This commit therefore adds the TPS()
    macro to some unadorned strings that were passed to event-tracing macros.
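
    For example (illustrative; trace_rcu_grace_period() is one of the affected
    tracepoints, and "newreq" one of its event strings):

    /* Before: only the string's address reaches the trace buffer, which a
     * crash-dump extractor cannot map back to text. */
    trace_rcu_grace_period(rsp->name, rsp->gpnum, "newreq");

    /* After: TPS() (tracepoint_string()) registers the string so that trace
     * extraction tools can resolve it. */
    trace_rcu_grace_period(rsp->name, rsp->gpnum, TPS("newreq"));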

    Signed-off-by: Paul E. McKenney
    Acked-by: Steven Rostedt (VMware)

    Paul E. McKenney
     

26 Jul, 2017

2 commits

  • RCU's CPU-hotplug callback-migration code first moves the outgoing
    CPU's callbacks to ->orphan_done and ->orphan_pend, and only then
    moves them to the NOCB callback list. This commit avoids the
    extra step (and simplifies the code) by moving the callbacks directly
    from the outgoing CPU's callback list to the NOCB callback list.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The handling of RCU's no-CBs CPUs has a maintenance headache, namely
    that if call_rcu() is invoked with interrupts disabled, the rcuo kthread
    wakeup must be deferred to a point where we can be sure that scheduler
    locks are not held. Of course, there are a lot of code paths leading
    from an interrupts-disabled invocation of call_rcu(), and missing any
    one of these can result in excessive callback-invocation latency, and
    potentially even system hangs.

    This commit therefore uses a timer to guarantee that the wakeup will
    eventually occur. If one of the deferred-wakeup points kicks in, then
    the timer is simply cancelled.
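
    A minimal sketch of the backstop pattern (field and helper names simplified,
    not the exact kernel code):

    /* Deferring: record the reason and arm a backstop timer. */
    WRITE_ONCE(rdp->nocb_defer_wakeup, waketype);
    mod_timer(&rdp->nocb_timer, jiffies + 1);

    /* A deferred-wakeup point that runs first does the wakeup itself and
     * cancels the backstop. */
    del_timer(&rdp->nocb_timer);
    wake_nocb_leader(rdp, false);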

    This commit also fixes up an incomplete removal of commits that were
    intended to plug remaining exit paths, which should have the added
    benefit of reducing the overhead of RCU's context-switch hooks. In
    addition, it simplifies leader-to-follower callback-list handoff by
    introducing locking. The call_rcu()-to-leader handoff continues to
    use atomic operations in order to maintain good real-time latency for
    common-case use of call_rcu().

    Signed-off-by: Paul E. McKenney
    [ paulmck: Dan Carpenter fix for mod_timer() usage bug found by smatch. ]

    Paul E. McKenney
     

09 Jun, 2017

7 commits

  • The CONFIG_RCU_NOCB_CPU_ALL, CONFIG_RCU_NOCB_CPU_NONE, and
    CONFIG_RCU_NOCB_CPU_ZERO Kconfig options are used only in testing and
    are redundant with the rcu_nocbs= boot parameter. This commit therefore
    removes these three Kconfig options and adjusts the rcutorture scripts
    to use the boot parameter instead.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • RCU's debugfs tracing used to be the only reasonable low-level debug
    information available, but ftrace and event tracing have since surpassed
    the RCU debugfs level of usefulness. This commit therefore removes
    RCU's debugfs tracing.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The PROVE_RCU_REPEATEDLY Kconfig option was initially added due to
    the volume of messages from PROVE_RCU: Doing just one per boot would
    have required excessive numbers of boots to locate them all. However,
    PROVE_RCU messages are now relatively rare, so there is no longer any
    reason to need more than one such message per boot. This commit therefore
    removes the PROVE_RCU_REPEATEDLY Kconfig option.

    Signed-off-by: Paul E. McKenney
    Cc: Ingo Molnar

    Paul E. McKenney
     
  • The NO_HZ_FULL_SYSIDLE full-system-idle capability was added in 2013
    by commit 0edd1b1784cb ("nohz_full: Add full-system-idle state machine"),
    but has not been used. This commit therefore removes it.

    If it turns out to be needed later, this commit can always be reverted.

    Signed-off-by: Paul E. McKenney
    Cc: Frederic Weisbecker
    Cc: Rik van Riel
    Cc: Ingo Molnar
    Acked-by: Linus Torvalds

    Paul E. McKenney
     
  • The RCU_TORTURE_TEST_SLOW_PREINIT, RCU_TORTURE_TEST_SLOW_PREINIT_DELAY,
    RCU_TORTURE_TEST_SLOW_INIT, RCU_TORTURE_TEST_SLOW_INIT_DELAY,
    RCU_TORTURE_TEST_SLOW_CLEANUP, and RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY
    Kconfig options are only
    useful for torture testing, and there are the rcutree.gp_cleanup_delay,
    rcutree.gp_init_delay, and rcutree.gp_preinit_delay kernel boot parameters
    that rcutorture can use instead. The effect of these parameters is to
    artificially slow down grace period initialization and cleanup in order
    to make some types of race conditions happen more often.

    This commit therefore simplifies Tree RCU a bit by removing the Kconfig
    options and adding the corresponding kernel parameters to rcutorture's
    .boot files instead. However, this commit also leaves out the kernel
    parameters for TREE02, TREE04, and TREE07 in order to have about the
    same number of tests slowed as not slowed. TREE01, TREE03, TREE05,
    and TREE06 are slowed, and the rest are not slowed.

    Reported-by: Linus Torvalds
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The include/linux/rcupdate.h file is included by more than 200
    files, so shrinking it should provide some build-time benefits.
    This commit therefore moves several docbook comments from rcupdate.h to
    kernel/rcu/update.c, kernel/rcu/tree.c, and kernel/rcu/tree_plugin.h, thus
    reducing the number of times that the compiler has to scan these comments.
    This likely provides only a small benefit, but every little bit helps.

    This commit also fixes a malformed bulleted list noted by the 0day
    Test Robot.

    Reported-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Wait/wakeup operations do not guarantee ordering on their own. Instead,
    either locking or memory barriers are required. This commit therefore
    adds memory barriers to wake_nocb_leader() and nocb_leader_wait().
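
    The pattern being enforced, as a generic sketch (the "leader" fields and
    helper functions below are placeholders, not the actual rcu_data layout):

    /* Waker: make the enqueued callbacks visible before the wakeup. */
    enqueue_callbacks(leader);
    smp_mb();                       /* enqueue before sleep-flag clear and wakeup */
    WRITE_ONCE(leader->sleeping, false);
    wake_up(&leader->wq);

    /* Sleeper: order the wakeup before rescanning for work. */
    wait_event_interruptible(leader->wq, !READ_ONCE(leader->sleeping));
    smp_mb();                       /* wakeup before scanning the callback lists */
    scan_follower_lists(leader);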

    Signed-off-by: Paul E. McKenney
    Tested-by: Krister Johansen
    Cc: # 4.6.x

    Paul E. McKenney
     

08 Jun, 2017

6 commits

  • The RCU_NOGP_WAKE_NOT, RCU_NOGP_WAKE, and RCU_NOGP_WAKE_FORCE flags
    are used to mediate wakeups for the no-CBs CPU kthreads. The "NOGP"
    really doesn't make any sense, so this commit does s/NOGP/NOCB/.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Comments can be helpful, but assertions carry more force. This commit
    therefore adds lockdep_assert_held() and RCU_LOCKDEP_WARN() calls to
    enforce lock-held and interrupt-disabled preconditions.
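
    For example (illustrative usage of the two primitives; rnp stands for the
    rcu_node structure whose lock the caller must hold):

    lockdep_assert_held(&rnp->lock);        /* splat if the caller dropped ->lock */
    RCU_LOCKDEP_WARN(!irqs_disabled(),
                     "caller invoked with interrupts enabled");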

    Reported-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This commit updates rcu_bootup_announce_oddness() to check additional
    Kconfig options and module/boot parameters.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This commit adds a rcupdate_announce_bootup_oddness() function to
    print out non-default values of significant kernel boot parameter
    settings to aid in debugging.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The synchronize_kernel() primitive was removed in favor of
    synchronize_sched() more than a decade ago, and it seems likely that
    rather few kernel hackers are familiar with it. Its continued presence
    is therefore providing more confusion than enlightenment. This commit
    therefore removes the reference from the synchronize_sched() header
    comment, and adds the corresponding information to the synchronize_rcu()
    header comment.

    Reported-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Although preemptible RCU allows its read-side critical sections to be
    preempted, general blocking is forbidden. The reason for this is that
    excessive preemption times can be handled by CONFIG_RCU_BOOST=y, but a
    voluntarily blocked task doesn't care how high you boost its priority.
    Because preemptible RCU is a global mechanism, one ill-behaved reader
    hurts everyone. Hence the prohibition against general blocking in
    RCU-preempt read-side critical sections. Preemption yes, blocking no.
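
    For example, the following is forbidden in an RCU-preempt reader (gp, p, and
    do_something_with() are placeholders):

    rcu_read_lock();
    p = rcu_dereference(gp);                /* OK: this reader may be preempted */
    schedule_timeout_uninterruptible(1);    /* BUG: voluntary blocking in the reader */
    do_something_with(p);
    rcu_read_unlock();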

    This commit enforces this prohibition.

    There is a special exception for the -rt patchset (which its developers kindly
    volunteered to implement): It is OK to block (as opposed to merely being
    preempted) within an RCU-preempt read-side critical section, but only if
    the blocking is subject to priority inheritance. This exception permits
    CONFIG_RCU_BOOST=y to get -rt RCU readers out of trouble.

    Why doesn't this exception also apply to mainline's rt_mutex? Because
    of the possibility that someone does general blocking while holding
    an rt_mutex. Yes, the priority boosting will affect the rt_mutex,
    but it won't help with the task doing general blocking while holding
    that rt_mutex.

    Reported-by: Thomas Gleixner
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

03 May, 2017

2 commits

  • Because the rcu_cblist_n_lazy_cbs() just samples the ->len_lazy counter,
    and because the rcu_cblist structure is quite straightforward, it makes
    sense to open-code rcu_cblist_n_lazy_cbs(p) as p->len_lazy, cutting out
    a level of indirection. This commit makes this change.

    Reported-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Linus Torvalds

    Paul E. McKenney
     
  • Because the rcu_cblist_n_cbs() just samples the ->len counter, and
    because the rcu_cblist structure is quite straightforward, it makes
    sense to open-code rcu_cblist_n_cbs(p) as p->len, cutting out a level
    of indirection. This commit makes this change.

    Reported-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Linus Torvalds

    Paul E. McKenney
     

02 May, 2017

1 commit

  • Because the rcu_cblist_empty() just samples the ->head pointer, and
    because the rcu_cblist structure is quite straightforward, it makes
    sense to open-code rcu_cblist_empty(p) as !p->head, cutting out a
    level of indirection. This commit makes this change.

    Reported-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Linus Torvalds

    Paul E. McKenney
     

21 Apr, 2017

1 commit


20 Apr, 2017

1 commit


19 Apr, 2017

2 commits

  • RCU has only one multi-tail callback list, which is implemented via
    the nxtlist, nxttail, nxtcompleted, qlen_lazy, and qlen fields in the
    rcu_data structure, and whose operations are open-coded throughout the
    Tree RCU implementation. This has been more or less OK in the past,
    but upcoming callback-list optimizations in SRCU could really use
    a multi-tail callback list there as well.

    This commit therefore abstracts the multi-tail callback list handling
    into a new kernel/rcu/rcu_segcblist.h file, and uses this new API.
    The simple head-and-tail pointer callback list is also abstracted and
    applied everywhere except for the NOCB callback-offload lists. (Yes,
    the plan is to apply them there as well, but this commit is already
    bigger than would be good.)
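
    Roughly, the two abstractions look like this (condensed; see
    kernel/rcu/rcu_segcblist.h for the authoritative definitions):

    /* Simple head-and-tail callback list. */
    struct rcu_cblist {
            struct rcu_head *head;
            struct rcu_head **tail;
            long len;
            long len_lazy;
    };

    /* Segmented callback list: one tail pointer per grace-period segment. */
    struct rcu_segcblist {
            struct rcu_head *head;
            struct rcu_head **tails[RCU_CBLIST_NSEGS];
            unsigned long gp_seq[RCU_CBLIST_NSEGS];
            long len;
            long len_lazy;
    };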

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The rcu_all_qs() and rcu_note_context_switch() functions do a series of checks,
    taking various actions to supply RCU with quiescent states, depending
    on the outcomes of the various checks. This is a bit much for scheduling
    fastpaths, so this commit creates a separate ->rcu_urgent_qs field in
    the rcu_dynticks structure that acts as a global guard for these checks.
    Thus, in the common case, rcu_all_qs() and rcu_note_context_switch()
    check the ->rcu_urgent_qs field, find it false, and simply return.
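
    A condensed sketch of the resulting fast path (simplified from the tree.c
    code):

    void rcu_all_qs(void)
    {
            if (!raw_cpu_read(rcu_dynticks.rcu_urgent_qs))
                    return;         /* Common case: nothing urgent, get out fast. */
            /* ...otherwise fall through to the heavier quiescent-state work... */
    }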

    Signed-off-by: Paul E. McKenney
    Cc: Peter Zijlstra

    Paul E. McKenney
     

02 Mar, 2017

2 commits


26 Jan, 2017

1 commit


24 Jan, 2017

2 commits

  • This commit is the fourth step towards full abstraction of all accesses
    to the ->dynticks counter, implementing previously open-coded checks and
    comparisons in new rcu_dynticks_in_eqs() and rcu_dynticks_in_eqs_since()
    functions. This abstraction will ease changes to the ->dynticks counter
    operation.
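
    Roughly, the new helpers look like this (a sketch assuming the usual
    even-means-idle convention for the ->dynticks counter):

    /* An even snapshot means the CPU was in an extended quiescent state. */
    static bool rcu_dynticks_in_eqs(int snap)
    {
            return !(snap & 0x1);
    }

    /* Has the CPU passed through an EQS since the snapshot was taken? */
    static bool rcu_dynticks_in_eqs_since(struct rcu_dynticks *rdtp, int snap)
    {
            return snap != rcu_dynticks_snap(rdtp);
    }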

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • It used to be that the rcuo callback-offload kthreads were spawned
    in rcu_organize_nocb_kthreads(), and the comment before the "for"
    loop says as much. However, this spawning has long since moved to
    the CPU-hotplug code, so this commit fixes this comment.

    Reported-by: Michalis Kokologiannakis
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     

15 Jan, 2017

1 commit

  • The current preemptible RCU implementation goes through three phases
    during bootup. In the first phase, there is only one CPU that is running
    with preemption disabled, so that a synchronous grace period is a no-op.
    In the second mid-boot phase, the scheduler is running, but RCU has
    not yet gotten its kthreads spawned (and, for expedited grace periods,
    workqueues are not yet running). During this time, any attempt to do
    a synchronous grace period will hang the system (or complain bitterly,
    depending). In the third and final phase, RCU is fully operational and
    everything works normally.

    This has been OK for some time, but there have recently been some
    synchronous grace periods showing up during the second mid-boot phase.
    This code worked "by accident" for a while, but started failing as soon
    as expedited RCU grace periods switched over to workqueues in commit
    8b355e3bc140 ("rcu: Drive expedited grace periods from workqueue").
    Note that the code was buggy even before this commit, as it was subject
    to failure on real-time systems that forced all expedited grace periods
    to run as normal grace periods (for example, using the rcu_normal ksysfs
    parameter). The callchain from the failure case is as follows:

    early_amd_iommu_init()
    |-> acpi_put_table(ivrs_base);
    |-> acpi_tb_put_table(table_desc);
    |-> acpi_tb_invalidate_table(table_desc);
    |-> acpi_tb_release_table(...)
    |-> acpi_os_unmap_memory
    |-> acpi_os_unmap_iomem
    |-> acpi_os_map_cleanup
    |-> synchronize_rcu_expedited

    The kernel showing this callchain was built with CONFIG_PREEMPT_RCU=y,
    which caused the code to try using workqueues before they were
    initialized, which did not go well.

    This commit therefore reworks RCU to permit synchronous grace periods
    to proceed during this mid-boot phase. This commit is therefore a
    fix to a regression introduced in v4.9, and is therefore being put
    forward post-merge-window in v4.10.

    This commit sets a flag from the existing rcu_scheduler_starting()
    function which causes all synchronous grace periods to take the expedited
    path. The expedited path now checks this flag, using the requesting task
    to drive the expedited grace period forward during the mid-boot phase.
    Finally, this flag is updated by a core_initcall() function named
    rcu_exp_runtime_mode(), which causes the runtime codepaths to be used.
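
    In outline (a simplified sketch of the sequencing; the RCU_SCHEDULER_*
    stage names are illustrative):

    /* Scheduler is up, but RCU's kthreads/workqueues are not: go expedited. */
    void __init rcu_scheduler_starting(void)
    {
            rcu_scheduler_active = RCU_SCHEDULER_INIT;
    }

    /* Much later in boot: switch to the normal runtime code paths. */
    static int __init rcu_exp_runtime_mode(void)
    {
            rcu_scheduler_active = RCU_SCHEDULER_RUNNING;
            return 0;
    }
    core_initcall(rcu_exp_runtime_mode);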

    Note that this arrangement assumes that tasks are not sent POSIX signals
    (or anything similar) from the time that the first task is spawned
    through core_initcall() time.

    Fixes: 8b355e3bc140 ("rcu: Drive expedited grace periods from workqueue")
    Reported-by: "Zheng, Lv"
    Reported-by: Borislav Petkov
    Signed-off-by: Paul E. McKenney
    Tested-by: Stan Kain
    Tested-by: Ivan
    Tested-by: Emanuel Castelo
    Tested-by: Bruno Pesavento
    Tested-by: Borislav Petkov
    Tested-by: Frederic Bezies
    Cc: # 4.9.0-

    Paul E. McKenney
     

22 Aug, 2016

1 commit

  • Carrying out the following steps results in a softlockup in the
    RCU callback-offload (rcuo) kthreads:

    1. Connect to ixgbevf, and set the speed to 10Gb/s.
    2. Use ifconfig to bring the nic up and down repeatedly.

    [ 317.005148] IPv6: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
    [ 368.106005] BUG: soft lockup - CPU#1 stuck for 22s! [rcuos/1:15]
    [ 368.106005] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
    [ 368.106005] task: ffff88057dd8a220 ti: ffff88057dd9c000 task.ti: ffff88057dd9c000
    [ 368.106005] RIP: 0010:[] [] fib_table_lookup+0x14/0x390
    [ 368.106005] RSP: 0018:ffff88061fc83ce8 EFLAGS: 00000286
    [ 368.106005] RAX: 0000000000000001 RBX: 00000000020155c0 RCX: 0000000000000001
    [ 368.106005] RDX: ffff88061fc83d50 RSI: ffff88061fc83d70 RDI: ffff880036d11a00
    [ 368.106005] RBP: ffff88061fc83d08 R08: 0000000000000001 R09: 0000000000000000
    [ 368.106005] R10: ffff880036d11a00 R11: ffffffff819e0900 R12: ffff88061fc83c58
    [ 368.106005] R13: ffffffff816154dd R14: ffff88061fc83d08 R15: 00000000020155c0
    [ 368.106005] FS: 0000000000000000(0000) GS:ffff88061fc80000(0000) knlGS:0000000000000000
    [ 368.106005] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 368.106005] CR2: 00007f8c2aee9c40 CR3: 000000057b222000 CR4: 00000000000407e0
    [ 368.106005] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [ 368.106005] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [ 368.106005] Stack:
    [ 368.106005] 00000000010000c0 ffff88057b766000 ffff8802e380b000 ffff88057af03e00
    [ 368.106005] ffff88061fc83dc0 ffffffff815349a6 ffff88061fc83d40 ffffffff814ee146
    [ 368.106005] ffff8802e380af00 00000000e380af00 ffffffff819e0900 020155c0010000c0
    [ 368.106005] Call Trace:
    [ 368.106005]
    [ 368.106005]
    [ 368.106005] [] ip_route_input_noref+0x516/0xbd0
    [ 368.106005] [] ? skb_release_data+0xd6/0x110
    [ 368.106005] [] ? kfree_skb+0x3a/0xa0
    [ 368.106005] [] ip_rcv_finish+0x29f/0x350
    [ 368.106005] [] ip_rcv+0x234/0x380
    [ 368.106005] [] __netif_receive_skb_core+0x676/0x870
    [ 368.106005] [] __netif_receive_skb+0x18/0x60
    [ 368.106005] [] process_backlog+0xae/0x180
    [ 368.106005] [] net_rx_action+0x152/0x240
    [ 368.106005] [] __do_softirq+0xef/0x280
    [ 368.106005] [] call_softirq+0x1c/0x30
    [ 368.106005]
    [ 368.106005]
    [ 368.106005] [] do_softirq+0x65/0xa0
    [ 368.106005] [] local_bh_enable+0x94/0xa0
    [ 368.106005] [] rcu_nocb_kthread+0x232/0x370
    [ 368.106005] [] ? wake_up_bit+0x30/0x30
    [ 368.106005] [] ? rcu_start_gp+0x40/0x40
    [ 368.106005] [] kthread+0xcf/0xe0
    [ 368.106005] [] ? kthread_create_on_node+0x140/0x140
    [ 368.106005] [] ret_from_fork+0x58/0x90
    [ 368.106005] [] ? kthread_create_on_node+0x140/0x140

    ==================================cut here==============================

    It turns out that the rcuos callback-offload kthread is busy processing
    a very large quantity of RCU callbacks, and it is not relinquishing the
    CPU while doing so. This commit therefore adds a cond_resched_rcu_qs()
    within the loop to allow other tasks to run.
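
    Sketch of the change in the rcuo callback-invocation loop (loop structure
    simplified):

    while (list) {
            next = list->next;
            /* ...invoke the callback at "list"... */
            cond_resched_rcu_qs();  /* yield the CPU and supply a quiescent state */
            list = next;
    }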

    Signed-off-by: Ding Tianhong
    [ paulmck: Substituted cond_resched_rcu_qs for cond_resched. ]
    Signed-off-by: Paul E. McKenney

    Ding Tianhong
     

16 Jun, 2016

2 commits

  • doc.2016.06.15a: Documentation updates
    fixes.2016.06.15b: Miscellaneous fixes
    torture.2016.06.14a: Torture-test updates

    Paul E. McKenney
     
  • In many cases in the RCU tree code, we iterate over the set of cpus for
    a leaf node described by rcu_node::grplo and rcu_node::grphi, checking
    per-cpu data for each cpu in this range. However, if the set of possible
    cpus is sparse, some cpus described in this range are not possible, and
    thus no per-cpu region will have been allocated (or initialised) for
    them by the generic percpu code.

    Erroneous accesses to a per-cpu area for these !possible cpus may fault
    or may hit other data depending on the address generated when the
    erroneous per cpu offset is applied. In practice, both cases have been
    observed on arm64 hardware (the former being silent, but detectable with
    additional patches).

    To avoid issues resulting from this, we must iterate over the set of
    *possible* cpus for a given leaf node. This patch adds a new helper,
    for_each_leaf_node_possible_cpu, to enable this. As iteration is often
    intertwined with rcu_node local bitmask manipulation, a new
    leaf_node_cpu_bit helper is added to make this simpler and more
    consistent. The RCU tree code is made to use both of these where
    appropriate.
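
    The new helpers, roughly (condensed from the tree.h additions):

    /* Iterate only over the *possible* CPUs covered by a leaf rcu_node. */
    #define for_each_leaf_node_possible_cpu(rnp, cpu) \
            for ((cpu) = cpumask_next((rnp)->grplo - 1, cpu_possible_mask); \
                 (cpu) <= (rnp)->grphi; \
                 (cpu) = cpumask_next((cpu), cpu_possible_mask))

    /* Bit for this CPU within the leaf node's bitmasks (e.g. ->qsmask). */
    static inline unsigned long leaf_node_cpu_bit(struct rcu_node *rnp, int cpu)
    {
            return 1UL << (cpu - rnp->grplo);
    }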

    Without this patch, running reboot at a shell can result in an oops
    like:

    [ 3369.075979] Unable to handle kernel paging request at virtual address ffffff8008b21b4c
    [ 3369.083881] pgd = ffffffc3ecdda000
    [ 3369.087270] [ffffff8008b21b4c] *pgd=00000083eca48003, *pud=00000083eca48003, *pmd=0000000000000000
    [ 3369.096222] Internal error: Oops: 96000007 [#1] PREEMPT SMP
    [ 3369.101781] Modules linked in:
    [ 3369.104825] CPU: 2 PID: 1817 Comm: NetworkManager Tainted: G W 4.6.0+ #3
    [ 3369.121239] task: ffffffc0fa13e000 ti: ffffffc3eb940000 task.ti: ffffffc3eb940000
    [ 3369.128708] PC is at sync_rcu_exp_select_cpus+0x188/0x510
    [ 3369.134094] LR is at sync_rcu_exp_select_cpus+0x104/0x510
    [ 3369.139479] pc : [] lr : [] pstate: 200001c5
    [ 3369.146860] sp : ffffffc3eb9435a0
    [ 3369.150162] x29: ffffffc3eb9435a0 x28: ffffff8008be4f88
    [ 3369.155465] x27: ffffff8008b66c80 x26: ffffffc3eceb2600
    [ 3369.160767] x25: 0000000000000001 x24: ffffff8008be4f88
    [ 3369.166070] x23: ffffff8008b51c3c x22: ffffff8008b66c80
    [ 3369.171371] x21: 0000000000000001 x20: ffffff8008b21b40
    [ 3369.176673] x19: ffffff8008b66c80 x18: 0000000000000000
    [ 3369.181975] x17: 0000007fa951a010 x16: ffffff80086a30f0
    [ 3369.187278] x15: 0000007fa9505590 x14: 0000000000000000
    [ 3369.192580] x13: ffffff8008b51000 x12: ffffffc3eb940000
    [ 3369.197882] x11: 0000000000000006 x10: ffffff8008b51b78
    [ 3369.203184] x9 : 0000000000000001 x8 : ffffff8008be4000
    [ 3369.208486] x7 : ffffff8008b21b40 x6 : 0000000000001003
    [ 3369.213788] x5 : 0000000000000000 x4 : ffffff8008b27280
    [ 3369.219090] x3 : ffffff8008b21b4c x2 : 0000000000000001
    [ 3369.224406] x1 : 0000000000000001 x0 : 0000000000000140
    ...
    [ 3369.972257] [] sync_rcu_exp_select_cpus+0x188/0x510
    [ 3369.978685] [] synchronize_rcu_expedited+0x64/0xa8
    [ 3369.985026] [] synchronize_net+0x24/0x30
    [ 3369.990499] [] dev_deactivate_many+0x28c/0x298
    [ 3369.996493] [] __dev_close_many+0x60/0xd0
    [ 3370.002052] [] __dev_close+0x28/0x40
    [ 3370.007178] [] __dev_change_flags+0x8c/0x158
    [ 3370.012999] [] dev_change_flags+0x20/0x60
    [ 3370.018558] [] do_setlink+0x288/0x918
    [ 3370.023771] [] rtnl_newlink+0x398/0x6a8
    [ 3370.029158] [] rtnetlink_rcv_msg+0xe4/0x220
    [ 3370.034891] [] netlink_rcv_skb+0xc4/0xf8
    [ 3370.040364] [] rtnetlink_rcv+0x2c/0x40
    [ 3370.045663] [] netlink_unicast+0x160/0x238
    [ 3370.051309] [] netlink_sendmsg+0x2f0/0x358
    [ 3370.056956] [] sock_sendmsg+0x18/0x30
    [ 3370.062168] [] ___sys_sendmsg+0x26c/0x280
    [ 3370.067728] [] __sys_sendmsg+0x44/0x88
    [ 3370.073027] [] SyS_sendmsg+0x10/0x20
    [ 3370.078153] [] el0_svc_naked+0x24/0x28

    Signed-off-by: Mark Rutland
    Reported-by: Dennis Chen
    Cc: Catalin Marinas
    Cc: Josh Triplett
    Cc: Lai Jiangshan
    Cc: Mathieu Desnoyers
    Cc: Steve Capper
    Cc: Steven Rostedt
    Cc: Will Deacon
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Paul E. McKenney

    Mark Rutland
     

15 Jun, 2016

2 commits