01 Sep, 2020

1 commit

  • The current notifiers have the following error handling pattern all
    over the place:

    int err, nr;

    err = __foo_notifier_call_chain(&chain, val_up, v, -1, &nr);
    if (err & NOTIFY_STOP_MASK)
            __foo_notifier_call_chain(&chain, val_down, v, nr-1, NULL);

    And aside from the endless repetition thereof, it is broken. Consider
    blocking notifiers; both calls take and drop the rwsem, this means
    that the notifier list can change in between the two calls, making @nr
    meaningless.

    Fix this by replacing all the __foo_notifier_call_chain() functions
    with foo_notifier_call_chain_robust() that embeds the above pattern,
    but ensures it is inside a single lock region.

    Note: I switched atomic_notifier_call_chain_robust() to use
    the spinlock, since RCU cannot provide the guarantee
    required for the recovery.

    Note: software_resume() error handling was broken afaict.
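    The pattern the helper embeds can be sketched as a small userspace
    model (a hypothetical, simplified analogue; the real implementation
    lives in kernel/notifier.c and holds the chain's lock around both
    walks):

    ```c
    #include <stdio.h>

    #define NOTIFY_OK        0x0000
    #define NOTIFY_STOP_MASK 0x8000
    #define NOTIFY_BAD       (NOTIFY_STOP_MASK | 0x0002)

    struct notifier_block {
            int (*call)(unsigned long val, void *v);
            struct notifier_block *next;
    };

    static int calls_up, calls_down;

    static int ok_call(unsigned long val, void *v)
    {
            if (val == 1)
                    calls_up++;
            else
                    calls_down++;
            return NOTIFY_OK;
    }

    static int bad_call(unsigned long val, void *v)
    {
            return NOTIFY_BAD;      /* refuse the "up" transition */
    }

    static struct notifier_block bad_nb = { bad_call, NULL };
    static struct notifier_block ok_nb  = { ok_call, &bad_nb };

    /* Walk at most nr_to_call entries; count the calls made in *nr_calls. */
    static int notifier_call_chain(struct notifier_block *nb, unsigned long val,
                                   void *v, int nr_to_call, int *nr_calls)
    {
            int ret = NOTIFY_OK;

            while (nb && nr_to_call) {
                    ret = nb->call(val, v);
                    if (nr_calls)
                            (*nr_calls)++;
                    if (ret & NOTIFY_STOP_MASK)
                            break;
                    nb = nb->next;
                    nr_to_call--;
            }
            return ret;
    }

    /*
     * The "robust" pattern: call the chain up and, on failure, unwind
     * the entries that already ran.  In the kernel both walks happen
     * inside one lock region so the list cannot change between them.
     */
    static int notifier_call_chain_robust(struct notifier_block *nl,
                                          unsigned long val_up,
                                          unsigned long val_down, void *v)
    {
            int ret, nr = 0;

            ret = notifier_call_chain(nl, val_up, v, -1, &nr);
            if (ret & NOTIFY_STOP_MASK)
                    notifier_call_chain(nl, val_down, v, nr - 1, NULL);

            return ret;
    }

    int main(void)
    {
            int ret = notifier_call_chain_robust(&ok_nb, 1, 2, NULL);

            printf("stopped=%d up=%d down=%d\n",
                   !!(ret & NOTIFY_STOP_MASK), calls_up, calls_down);
            return 0;
    }
    ```

    With two entries where the second rejects the "up" event, only the
    first entry receives the "down" rollback, exactly once.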

    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Ingo Molnar
    Acked-by: Rafael J. Wysocki
    Link: https://lore.kernel.org/r/20200818135804.325626653@infradead.org

    Peter Zijlstra
     

16 May, 2020

1 commit

  • cpu_pm_notify() is basically a wrapper of notifier_call_chain().
    notifier_call_chain() doesn't initialize *nr_calls to 0 before it
    starts incrementing it--presumably it's up to the callers to do this.

    Unfortunately the callers of cpu_pm_notify() don't init *nr_calls.
    This potentially means you could get too many or too few calls to
    CPU_PM_ENTER_FAILED or CPU_CLUSTER_PM_ENTER_FAILED depending on the
    luck of the stack.

    Let's fix this.
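    A toy model of the counting contract (names hypothetical, not the
    kernel functions): the chain walk only increments the counter and
    never zeroes it, so a caller that leaves the variable uninitialized
    reads stack garbage when deciding how many _FAILED notifications to
    send on unwind. The fix is simply to start the counter at zero:

    ```c
    #include <stdio.h>

    /*
     * Toy analogue of notifier_call_chain(): it only increments
     * *nr_calls, it never zeroes it -- initialization is the
     * caller's job.
     */
    static int toy_call_chain(int chain_len, int nr_to_call, int *nr_calls)
    {
            for (int i = 0; i < chain_len && nr_to_call; i++, nr_to_call--)
                    if (nr_calls)
                            (*nr_calls)++;
            return 0;
    }

    int main(void)
    {
            int nr_calls = 0;       /* the fix: don't leave this uninitialized */

            toy_call_chain(3, -1, &nr_calls);
            printf("%d entries notified\n", nr_calls);

            /* on failure we would now unwind exactly nr_calls - 1 entries */
            return 0;
    }
    ```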

    Fixes: ab10023e0088 ("cpu_pm: Add cpu power management notifiers")
    Cc: stable@vger.kernel.org
    Cc: Rafael J. Wysocki
    Reviewed-by: Stephen Boyd
    Reviewed-by: Greg Kroah-Hartman
    Signed-off-by: Douglas Anderson
    Link: https://lore.kernel.org/r/20200504104917.v6.3.I2d44fc0053d019f239527a4e5829416714b7e299@changeid
    Signed-off-by: Bjorn Andersson

    Douglas Anderson
     

05 Jun, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this software is licensed under the terms of the gnu general public
    license version 2 as published by the free software foundation and
    may be copied distributed and modified under those terms this
    program is distributed in the hope that it will be useful but
    without any warranty without even the implied warranty of
    merchantability or fitness for a particular purpose see the gnu
    general public license for more details

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 285 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Alexios Zavras
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190529141900.642774971@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

31 Jul, 2017

1 commit

  • This patch replaces an rwlock and raw notifier by an atomic notifier
    protected by a spin_lock and RCU.

    The main reason for this change is a 'scheduling while atomic' bug
    seen with RT kernels on ARM/ARM64: taking the rwlock
    cpu_pm_notifier_lock in cpu_pm_enter/exit() can schedule after IRQs
    have been disabled in the idle call chain:

    cpu_startup_entry
      cpu_idle_loop
        local_irq_disable()
        cpuidle_idle_call
          call_cpuidle
            cpuidle_enter
              cpuidle_enter_state
                ->enter :arm_enter_idle_state
                  cpu_pm_enter/exit
                    CPU_PM_CPU_IDLE_ENTER
                      read_lock(&cpu_pm_notifier_lock);

    Backtrace excerpt (trimmed):

    [] arm_enter_idle_state+0x18/0x70
    [ 4.609614] Modules linked in:
    [ 4.609615] [] cpuidle_enter_state+0xf0/0x218
    [ 4.609620] [] cpuidle_enter+0x18/0x20
    [ 4.609626] Preemption disabled at:
    [ 4.609627] [] call_cpuidle+0x24/0x40
    [ 4.609635] [] schedule_preempt_disabled+0x1c/0x28
    [ 4.609639] [] cpu_startup_entry+0x154/0x1f8
    [ 4.609645] [] secondary_start_kernel+0x15c/0x1a0

    Daniel Lezcano said this notification is needed on ARM/ARM64 platforms.
    Sebastian suggested using an atomic_notifier instead of the rwlock,
    which not only removes the sleeping lock from the idle path but also
    improves latency.

    Tony Lindgren found a misuse where rcu_read_lock() was used after
    rcu_idle_enter(); Paul McKenney suggested trying RCU_NONIDLE.
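    As a rough userspace illustration of why the atomic notifier helps
    (a hypothetical model using C11 atomics in place of the kernel's
    spinlock-plus-RCU scheme: writers serialize and publish with atomic
    stores, while readers traverse lock-free and therefore never sleep
    in the idle path):

    ```c
    #include <stdatomic.h>
    #include <stdio.h>

    struct notifier_block {
            int (*call)(unsigned long val, void *v);
            struct notifier_block *_Atomic next;
    };

    static struct notifier_block *_Atomic chain_head;

    /*
     * In the kernel, registration runs under the chain's spinlock; this
     * single-threaded model only shows the publish order that lets
     * readers traverse without taking any lock.
     */
    static void chain_register(struct notifier_block *nb)
    {
            atomic_store(&nb->next, atomic_load(&chain_head));
            atomic_store(&chain_head, nb);  /* publish last */
    }

    /* Lock-free walk: safe to run with IRQs off, nothing here can sleep. */
    static int chain_call(unsigned long val, void *v)
    {
            struct notifier_block *nb = atomic_load(&chain_head);
            int ret = 0;

            while (nb) {
                    ret = nb->call(val, v);
                    nb = atomic_load(&nb->next);
            }
            return ret;
    }

    static int hits;

    static int count_call(unsigned long val, void *v)
    {
            hits++;
            return 0;
    }

    static struct notifier_block a = { count_call, NULL };
    static struct notifier_block b = { count_call, NULL };

    static int registered;

    static int demo(void)
    {
            if (!registered) {      /* register each block only once */
                    chain_register(&a);
                    chain_register(&b);
                    registered = 1;
            }
            hits = 0;
            chain_call(0, NULL);    /* e.g. a CPU_PM_ENTER-style event */
            return hits;
    }

    int main(void)
    {
            printf("notifiers called: %d\n", demo());
            return 0;
    }
    ```

    The kernel additionally needs RCU (and here RCU_NONIDLE) to make
    unregistration safe against concurrent walkers; this sketch leaves
    that part out.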

    Signed-off-by: Alex Shi
    Tested-by: Tony Lindgren
    Acked-by: Sebastian Andrzej Siewior
    [ rjw: Subject & changelog ]
    Signed-off-by: Rafael J. Wysocki

    Alex Shi
     

03 Sep, 2015

1 commit

  • cpu_cluster_pm_exit() must be sent after cpu_cluster_pm_enter() has been
    sent for the cluster and before any cpu_pm_exit() notifications are sent
    for any CPU.

    Cc: Nicolas Pitre
    Acked-by: Kevin Hilman
    Signed-off-by: Lina Iyer
    Signed-off-by: Rafael J. Wysocki

    Lina Iyer
     

01 Jun, 2012

1 commit


23 Sep, 2011

2 commits

    Implement syscore_ops in cpu_pm to call the CPU and CPU-cluster
    notifiers during suspend and resume, allowing drivers that receive
    the notifications to avoid implementing syscore_ops themselves.

    Signed-off-by: Colin Cross
    Signed-off-by: Santosh Shilimkar
    Reviewed-by: Kevin Hilman
    Tested-and-Acked-by: Shawn Guo
    Tested-by: Vishwanath BS

    Colin Cross
     
  • During some CPU power modes entered during idle, hotplug and
    suspend, peripherals located in the CPU power domain, such as
    the GIC, localtimers, and VFP, may be powered down. Add a
    notifier chain that allows drivers for those peripherals to
    be notified before and after they may be reset.

    Notified drivers can include the VFP co-processor, the interrupt
    controller and its PM extensions, and local CPU timers whose context
    save/restore must not be interrupted. Hence the CPU PM event APIs
    must be called with interrupts disabled.

    Signed-off-by: Colin Cross
    Signed-off-by: Santosh Shilimkar
    Reviewed-by: Kevin Hilman
    Tested-and-Acked-by: Shawn Guo
    Tested-by: Kevin Hilman
    Tested-by: Vishwanath BS

    Colin Cross