Commit 39ed853a2447ce85cf29b3c0357998ff968beeb5

Authored by Linus Torvalds

Merge tag 'pm+acpi-4.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI fixes from Rafael Wysocki:
 "These are fixes for recent regressions (ACPI resources management,
  suspend-to-idle), stable-candidate fixes (ACPI backlight), fixes
  related to the wakeup IRQ management changes made in v3.18, other
  fixes (suspend-to-idle, cpufreq ppc driver) and a couple of cleanups
  (suspend-to-idle, generic power domains, ACPI backlight).

  Specifics:

   - Fix ACPI resources management problems introduced by the recent
     rework of the code in question (Jiang Liu) and a build issue
     introduced by those changes (Joachim Nilsson).

   - Fix a recent suspend-to-idle regression on systems where entering
     idle states causes local timers to stop, prevent suspend-to-idle
     from crashing in restricted configurations (no cpuidle driver,
     cpuidle disabled, etc.) and clean up the idle loop somewhat while
     at it (Rafael J Wysocki).

   - Fix a build problem in the cpufreq ppc driver (Geert Uytterhoeven).

   - Allow the ACPI backlight driver module to be loaded if ACPI is
     disabled, which helps the i915 driver in those configurations
     (stable-candidate), and change the code to help debug unusual use
     cases (Chris Wilson).

   - Wakeup IRQ management changes in v3.18 caused some drivers on the
     at91 platform to trigger a warning from the IRQ core related to an
     unexpected combination of interrupt action handler flags.  However,
     on at91 a timer IRQ is shared with some other devices (including
     system wakeup ones) and that leads to the unusual combination of
     flags in question.

     To make it possible to avoid the warning, introduce a new interrupt
     action handler flag (which can be used by drivers to indicate the
     special case to the core) and rework the problematic at91 drivers
     to use it so that they work as expected during system
     suspend/resume.  From Boris Brezillon, Rafael J Wysocki and Mark
     Rutland.

   - Clean up the generic power domains subsystem's debugfs interface
     (Kevin Hilman)"

* tag 'pm+acpi-4.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  genirq / PM: describe IRQF_COND_SUSPEND
  tty: serial: atmel: rework interrupt and wakeup handling
  watchdog: at91sam9: request the irq with IRQF_NO_SUSPEND
  cpuidle / sleep: Use broadcast timer for states that stop local timer
  clk: at91: implement suspend/resume for the PMC irqchip
  rtc: at91rm9200: rework wakeup and interrupt handling
  rtc: at91sam9: rework wakeup and interrupt handling
  PM / wakeup: export pm_system_wakeup symbol
  genirq / PM: Add flag for shared NO_SUSPEND interrupt lines
  ACPI / video: Propagate the error code for acpi_video_register
  ACPI / video: Load the module even if ACPI is disabled
  PM / Domains: cleanup: rename gpd -> genpd in debugfs interface
  cpufreq: ppc: Add missing #include <asm/smp.h>
  x86/PCI/ACPI: Relax ACPI resource descriptor checks to work around BIOS bugs
  x86/PCI/ACPI: Ignore resources consumed by host bridge itself
  cpuidle: Clean up fallback handling in cpuidle_idle_call()
  cpuidle / sleep: Do sanity checks in cpuidle_enter_freeze() too
  idle / sleep: Avoid excessive disabling and enabling interrupts
  PCI: versatile: Update for list_for_each_entry() API change
  genirq / PM: better describe IRQF_NO_SUSPEND semantics

Showing 21 changed files

Documentation/power/suspend-and-interrupts.txt
... ... @@ -40,8 +40,10 @@
40 40  
41 41 The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
42 42 requesting a special-purpose interrupt. It causes suspend_device_irqs() to
43   -leave the corresponding IRQ enabled so as to allow the interrupt to work all
44   -the time as expected.
  43 +leave the corresponding IRQ enabled so as to allow the interrupt to work as
  44 +expected during the suspend-resume cycle, but does not guarantee that the
  45 +interrupt will wake the system from a suspended state -- for such cases it is
  46 +necessary to use enable_irq_wake().
45 47  
46 48 Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
47 49 user of it. Thus, if the IRQ is shared, all of the interrupt handlers installed
... ... @@ -110,8 +112,9 @@
110 112 IRQF_NO_SUSPEND and enable_irq_wake()
111 113 -------------------------------------
112 114  
113   -There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND
114   -flag on the same IRQ.
  115 +There are very few valid reasons to use both enable_irq_wake() and the
  116 +IRQF_NO_SUSPEND flag on the same IRQ, and it is never valid to use both for the
  117 +same device.
115 118  
116 119 First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
117 120 interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
... ... @@ -120,5 +123,14 @@
120 123  
121 124 Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not
122 125 to individual interrupt handlers, so sharing an IRQ between a system wakeup
123   -interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
  126 +interrupt source and an IRQF_NO_SUSPEND interrupt source does not generally
  127 +make sense.
  128 +
  129 +In rare cases an IRQ can be shared between a wakeup device driver and an
  130 +IRQF_NO_SUSPEND user. In order for this to be safe, the wakeup device driver
  131 +must be able to discern spurious IRQs from genuine wakeup events (signalling
  132 +the latter to the core with pm_system_wakeup()), must use enable_irq_wake() to
  133 +ensure that the IRQ will function as a wakeup source, and must request the IRQ
  134 +with IRQF_COND_SUSPEND to tell the core that it meets these requirements. If
  135 +these requirements are not met, it is not valid to use IRQF_COND_SUSPEND.
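
For orientation, the requirements above reduce to the following minimal
sketch, distilled from the at91 drivers reworked in this merge (the foo_*
names are hypothetical placeholders, not a real driver; the calls need
<linux/interrupt.h> and <linux/suspend.h>):

	static bool suspended;		/* set by the driver across suspend */
	static unsigned long cached_events;

	static irqreturn_t foo_interrupt(int irq, void *dev_id)
	{
		struct foo_device *dev = dev_id;
		u32 status = foo_read_and_clear_status(dev);	/* hypothetical */

		if (!status)
			return IRQ_NONE;	/* shared line, not ours */

		if (suspended) {
			/*
			 * We may run after suspend_device_irqs() because a
			 * NO_SUSPEND user shares the line: cache the events,
			 * mask our sources and signal a genuine wakeup.
			 */
			cached_events |= status;
			foo_mask_irqs(dev);		/* hypothetical */
			pm_system_wakeup();
		} else {
			foo_handle_events(dev, status);	/* hypothetical */
		}
		return IRQ_HANDLED;
	}

	/* probe: the flag promises the core the handler copes with all that */
	ret = request_irq(irq, foo_interrupt,
			  IRQF_SHARED | IRQF_COND_SUSPEND, "foo", dev);

	/* suspend path: arm the line as a wakeup source */
	if (device_may_wakeup(dev))
		enable_irq_wake(irq);

The rtc-at91rm9200 and rtc-at91sam9 diffs below additionally replay the
cached events to the RTC core on resume.
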
arch/x86/pci/acpi.c
... ... @@ -331,7 +331,7 @@
331 331 struct list_head *list)
332 332 {
333 333 int ret;
334   - struct resource_entry *entry;
  334 + struct resource_entry *entry, *tmp;
335 335  
336 336 sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
337 337 info->bridge = device;
... ... @@ -345,8 +345,13 @@
345 345 dev_dbg(&device->dev,
346 346 "no IO and memory resources present in _CRS\n");
347 347 else
348   - resource_list_for_each_entry(entry, list)
349   - entry->res->name = info->name;
  348 + resource_list_for_each_entry_safe(entry, tmp, list) {
  349 + if ((entry->res->flags & IORESOURCE_WINDOW) == 0 ||
  350 + (entry->res->flags & IORESOURCE_DISABLED))
  351 + resource_list_destroy_entry(entry);
  352 + else
  353 + entry->res->name = info->name;
  354 + }
350 355 }
351 356  
352 357 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
drivers/acpi/resource.c
... ... @@ -42,8 +42,10 @@
42 42 * CHECKME: len might be required to check versus a minimum
43 43 * length as well. 1 for io is fine, but for memory it does
44 44 * not make any sense at all.
  45 + * Note: some BIOSes report incorrect length for ACPI address space
  46 + * descriptor, so remove check of 'reslen == len' to avoid regression.
45 47 */
46   - if (len && reslen && reslen == len && start <= end)
  48 + if (len && reslen && start <= end)
47 49 return true;
48 50  
49 51 pr_debug("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
drivers/acpi/video.c
... ... @@ -2110,7 +2110,8 @@
2110 2110  
2111 2111 int acpi_video_register(void)
2112 2112 {
2113   - int result = 0;
  2113 + int ret;
  2114 +
2114 2115 if (register_count) {
2115 2116 /*
2116 2117 * if the function of acpi_video_register is already called,
... ... @@ -2122,9 +2123,9 @@
2122 2123 mutex_init(&video_list_lock);
2123 2124 INIT_LIST_HEAD(&video_bus_head);
2124 2125  
2125   - result = acpi_bus_register_driver(&acpi_video_bus);
2126   - if (result < 0)
2127   - return -ENODEV;
  2126 + ret = acpi_bus_register_driver(&acpi_video_bus);
  2127 + if (ret)
  2128 + return ret;
2128 2129  
2129 2130 /*
2130 2131 * When the acpi_video_bus is loaded successfully, increase
... ... @@ -2176,6 +2177,17 @@
2176 2177  
2177 2178 static int __init acpi_video_init(void)
2178 2179 {
  2180 + /*
  2181 + * Let the module load even if ACPI is disabled (e.g. due to
  2182 + * a broken BIOS) so that i915.ko can still be loaded on such
  2183 + * old systems without an AcpiOpRegion.
  2184 + *
  2185 + * acpi_video_register() will report -ENODEV later as well due
  2186 + * to acpi_disabled when i915.ko tries to register itself afterwards.
  2187 + */
  2188 + if (acpi_disabled)
  2189 + return 0;
  2190 +
2179 2191 dmi_check_system(video_dmi_table);
2180 2192  
2181 2193 if (intel_opregion_present())
drivers/base/power/domain.c
... ... @@ -2242,7 +2242,7 @@
2242 2242 }
2243 2243  
2244 2244 static int pm_genpd_summary_one(struct seq_file *s,
2245   - struct generic_pm_domain *gpd)
  2245 + struct generic_pm_domain *genpd)
2246 2246 {
2247 2247 static const char * const status_lookup[] = {
2248 2248 [GPD_STATE_ACTIVE] = "on",
... ... @@ -2256,26 +2256,26 @@
2256 2256 struct gpd_link *link;
2257 2257 int ret;
2258 2258  
2259   - ret = mutex_lock_interruptible(&gpd->lock);
  2259 + ret = mutex_lock_interruptible(&genpd->lock);
2260 2260 if (ret)
2261 2261 return -ERESTARTSYS;
2262 2262  
2263   - if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup)))
  2263 + if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup)))
2264 2264 goto exit;
2265   - seq_printf(s, "%-30s %-15s ", gpd->name, status_lookup[gpd->status]);
  2265 + seq_printf(s, "%-30s %-15s ", genpd->name, status_lookup[genpd->status]);
2266 2266  
2267 2267 /*
2268 2268 * Modifications on the list require holding locks on both
2269 2269 * master and slave, so we are safe.
2270   - * Also gpd->name is immutable.
  2270 + * Also genpd->name is immutable.
2271 2271 */
2272   - list_for_each_entry(link, &gpd->master_links, master_node) {
  2272 + list_for_each_entry(link, &genpd->master_links, master_node) {
2273 2273 seq_printf(s, "%s", link->slave->name);
2274   - if (!list_is_last(&link->master_node, &gpd->master_links))
  2274 + if (!list_is_last(&link->master_node, &genpd->master_links))
2275 2275 seq_puts(s, ", ");
2276 2276 }
2277 2277  
2278   - list_for_each_entry(pm_data, &gpd->dev_list, list_node) {
  2278 + list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
2279 2279 kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
2280 2280 if (kobj_path == NULL)
2281 2281 continue;
2282 2282  
... ... @@ -2287,14 +2287,14 @@
2287 2287  
2288 2288 seq_puts(s, "\n");
2289 2289 exit:
2290   - mutex_unlock(&gpd->lock);
  2290 + mutex_unlock(&genpd->lock);
2291 2291  
2292 2292 return 0;
2293 2293 }
2294 2294  
2295 2295 static int pm_genpd_summary_show(struct seq_file *s, void *data)
2296 2296 {
2297   - struct generic_pm_domain *gpd;
  2297 + struct generic_pm_domain *genpd;
2298 2298 int ret = 0;
2299 2299  
2300 2300 seq_puts(s, " domain status slaves\n");
... ... @@ -2305,8 +2305,8 @@
2305 2305 if (ret)
2306 2306 return -ERESTARTSYS;
2307 2307  
2308   - list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
2309   - ret = pm_genpd_summary_one(s, gpd);
  2308 + list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
  2309 + ret = pm_genpd_summary_one(s, genpd);
2310 2310 if (ret)
2311 2311 break;
2312 2312 }
drivers/base/power/wakeup.c
... ... @@ -730,6 +730,7 @@
730 730 pm_abort_suspend = true;
731 731 freeze_wake();
732 732 }
  733 +EXPORT_SYMBOL_GPL(pm_system_wakeup);
733 734  
734 735 void pm_wakeup_clear(void)
735 736 {
drivers/clk/at91/pmc.c
... ... @@ -89,12 +89,29 @@
89 89 return 0;
90 90 }
91 91  
  92 +static void pmc_irq_suspend(struct irq_data *d)
  93 +{
  94 + struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
  95 +
  96 + pmc->imr = pmc_read(pmc, AT91_PMC_IMR);
  97 + pmc_write(pmc, AT91_PMC_IDR, pmc->imr);
  98 +}
  99 +
  100 +static void pmc_irq_resume(struct irq_data *d)
  101 +{
  102 + struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
  103 +
  104 + pmc_write(pmc, AT91_PMC_IER, pmc->imr);
  105 +}
  106 +
92 107 static struct irq_chip pmc_irq = {
93 108 .name = "PMC",
94 109 .irq_disable = pmc_irq_mask,
95 110 .irq_mask = pmc_irq_mask,
96 111 .irq_unmask = pmc_irq_unmask,
97 112 .irq_set_type = pmc_irq_set_type,
  113 + .irq_suspend = pmc_irq_suspend,
  114 + .irq_resume = pmc_irq_resume,
98 115 };
99 116  
100 117 static struct lock_class_key pmc_lock_class;
... ... @@ -224,7 +241,8 @@
224 241 goto out_free_pmc;
225 242  
226 243 pmc_write(pmc, AT91_PMC_IDR, 0xffffffff);
227   - if (request_irq(pmc->virq, pmc_irq_handler, IRQF_SHARED, "pmc", pmc))
  244 + if (request_irq(pmc->virq, pmc_irq_handler,
  245 + IRQF_SHARED | IRQF_COND_SUSPEND, "pmc", pmc))
228 246 goto out_remove_irqdomain;
229 247  
230 248 return pmc;
drivers/clk/at91/pmc.h
... ... @@ -33,6 +33,7 @@
33 33 spinlock_t lock;
34 34 const struct at91_pmc_caps *caps;
35 35 struct irq_domain *irqdomain;
  36 + u32 imr;
36 37 };
37 38  
38 39 static inline void pmc_lock(struct at91_pmc *pmc)
drivers/cpufreq/ppc-corenet-cpufreq.c
... ... @@ -22,6 +22,8 @@
22 22 #include <linux/smp.h>
23 23 #include <sysdev/fsl_soc.h>
24 24  
  25 +#include <asm/smp.h> /* for get_hard_smp_processor_id() in UP configs */
  26 +
25 27 /**
26 28 * struct cpu_data - per CPU data struct
27 29 * @parent: the parent node of cpu clock
drivers/cpuidle/cpuidle.c
... ... @@ -44,6 +44,12 @@
44 44 off = 1;
45 45 }
46 46  
  47 +bool cpuidle_not_available(struct cpuidle_driver *drv,
  48 + struct cpuidle_device *dev)
  49 +{
  50 + return off || !initialized || !drv || !dev || !dev->enabled;
  51 +}
  52 +
47 53 /**
48 54 * cpuidle_play_dead - cpu off-lining
49 55 *
... ... @@ -66,14 +72,8 @@
66 72 return -ENODEV;
67 73 }
68 74  
69   -/**
70   - * cpuidle_find_deepest_state - Find deepest state meeting specific conditions.
71   - * @drv: cpuidle driver for the given CPU.
72   - * @dev: cpuidle device for the given CPU.
73   - * @freeze: Whether or not the state should be suitable for suspend-to-idle.
74   - */
75   -static int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
76   - struct cpuidle_device *dev, bool freeze)
  75 +static int find_deepest_state(struct cpuidle_driver *drv,
  76 + struct cpuidle_device *dev, bool freeze)
77 77 {
78 78 unsigned int latency_req = 0;
79 79 int i, ret = freeze ? -1 : CPUIDLE_DRIVER_STATE_START - 1;
... ... @@ -92,6 +92,17 @@
92 92 return ret;
93 93 }
94 94  
  95 +/**
  96 + * cpuidle_find_deepest_state - Find the deepest available idle state.
  97 + * @drv: cpuidle driver for the given CPU.
  98 + * @dev: cpuidle device for the given CPU.
  99 + */
  100 +int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
  101 + struct cpuidle_device *dev)
  102 +{
  103 + return find_deepest_state(drv, dev, false);
  104 +}
  105 +
95 106 static void enter_freeze_proper(struct cpuidle_driver *drv,
96 107 struct cpuidle_device *dev, int index)
97 108 {
... ... @@ -113,15 +124,14 @@
113 124  
114 125 /**
115 126 * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle.
  127 + * @drv: cpuidle driver for the given CPU.
  128 + * @dev: cpuidle device for the given CPU.
116 129 *
117 130 * If there are states with the ->enter_freeze callback, find the deepest of
118   - * them and enter it with frozen tick. Otherwise, find the deepest state
119   - * available and enter it normally.
  131 + * them and enter it with frozen tick.
120 132 */
121   -void cpuidle_enter_freeze(void)
  133 +int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev)
122 134 {
123   - struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
124   - struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
125 135 int index;
126 136  
127 137 /*
... ... @@ -129,24 +139,11 @@
129 139 * that interrupts won't be enabled when it exits and allows the tick to
130 140 * be frozen safely.
131 141 */
132   - index = cpuidle_find_deepest_state(drv, dev, true);
133   - if (index >= 0) {
  142 + index = find_deepest_state(drv, dev, true);
  143 + if (index >= 0)
134 144 enter_freeze_proper(drv, dev, index);
135   - return;
136   - }
137 145  
138   - /*
139   - * It is not safe to freeze the tick, find the deepest state available
140   - * at all and try to enter it normally.
141   - */
142   - index = cpuidle_find_deepest_state(drv, dev, false);
143   - if (index >= 0)
144   - cpuidle_enter(drv, dev, index);
145   - else
146   - arch_cpu_idle();
147   -
148   - /* Interrupts are enabled again here. */
149   - local_irq_disable();
  146 + return index;
150 147 }
151 148  
152 149 /**
... ... @@ -205,12 +202,6 @@
205 202 */
206 203 int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
207 204 {
208   - if (off || !initialized)
209   - return -ENODEV;
210   -
211   - if (!drv || !dev || !dev->enabled)
212   - return -EBUSY;
213   -
214 205 return cpuidle_curr_governor->select(drv, dev);
215 206 }
216 207  
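
The net effect on callers is easiest to see as a condensed sketch of the
reworked idle loop (the real code is in the kernel/sched/idle.c hunks at
the end of this diff; the labels are elided here):

	if (cpuidle_not_available(drv, dev))
		goto use_default;		/* arch_cpu_idle() fallback */

	if (idle_should_freeze()) {
		/* suspend-to-idle: only ->enter_freeze states are suitable */
		entered_state = cpuidle_enter_freeze(drv, dev);
		if (entered_state >= 0)
			goto exit_idle;
		/* none available: use the deepest regular state instead */
		next_state = cpuidle_find_deepest_state(drv, dev);
	} else {
		next_state = cpuidle_select(drv, dev);	/* ask the governor */
	}
	if (next_state < 0)
		goto use_default;
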
drivers/pci/host/pci-versatile.c
... ... @@ -80,7 +80,7 @@
80 80 if (err)
81 81 return err;
82 82  
83   - resource_list_for_each_entry(win, res, list) {
  83 + resource_list_for_each_entry(win, res) {
84 84 struct resource *parent, *res = win->res;
85 85  
86 86 switch (resource_type(res)) {
drivers/rtc/rtc-at91rm9200.c
... ... @@ -31,6 +31,7 @@
31 31 #include <linux/io.h>
32 32 #include <linux/of.h>
33 33 #include <linux/of_device.h>
  34 +#include <linux/suspend.h>
34 35 #include <linux/uaccess.h>
35 36  
36 37 #include "rtc-at91rm9200.h"
... ... @@ -54,6 +55,10 @@
54 55 static int irq;
55 56 static DEFINE_SPINLOCK(at91_rtc_lock);
56 57 static u32 at91_rtc_shadow_imr;
  58 +static bool suspended;
  59 +static DEFINE_SPINLOCK(suspended_lock);
  60 +static unsigned long cached_events;
  61 +static u32 at91_rtc_imr;
57 62  
58 63 static void at91_rtc_write_ier(u32 mask)
59 64 {
60 65  
... ... @@ -290,7 +295,9 @@
290 295 struct rtc_device *rtc = platform_get_drvdata(pdev);
291 296 unsigned int rtsr;
292 297 unsigned long events = 0;
  298 + int ret = IRQ_NONE;
293 299  
  300 + spin_lock(&suspended_lock);
294 301 rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr();
295 302 if (rtsr) { /* this interrupt is shared! Is it ours? */
296 303 if (rtsr & AT91_RTC_ALARM)
... ... @@ -304,14 +311,22 @@
304 311  
305 312 at91_rtc_write(AT91_RTC_SCCR, rtsr); /* clear status reg */
306 313  
307   - rtc_update_irq(rtc, 1, events);
  314 + if (!suspended) {
  315 + rtc_update_irq(rtc, 1, events);
308 316  
309   - dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n", __func__,
310   - events >> 8, events & 0x000000FF);
  317 + dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n",
  318 + __func__, events >> 8, events & 0x000000FF);
  319 + } else {
  320 + cached_events |= events;
  321 + at91_rtc_write_idr(at91_rtc_imr);
  322 + pm_system_wakeup();
  323 + }
311 324  
312   - return IRQ_HANDLED;
  325 + ret = IRQ_HANDLED;
313 326 }
314   - return IRQ_NONE; /* not handled */
  327 + spin_unlock(&suspended_lock);
  328 +
  329 + return ret;
315 330 }
316 331  
317 332 static const struct at91_rtc_config at91rm9200_config = {
... ... @@ -401,8 +416,8 @@
401 416 AT91_RTC_CALEV);
402 417  
403 418 ret = devm_request_irq(&pdev->dev, irq, at91_rtc_interrupt,
404   - IRQF_SHARED,
405   - "at91_rtc", pdev);
  419 + IRQF_SHARED | IRQF_COND_SUSPEND,
  420 + "at91_rtc", pdev);
406 421 if (ret) {
407 422 dev_err(&pdev->dev, "IRQ %d already in use.\n", irq);
408 423 return ret;
... ... @@ -454,8 +469,6 @@
454 469  
455 470 /* AT91RM9200 RTC Power management control */
456 471  
457   -static u32 at91_rtc_imr;
458   -
459 472 static int at91_rtc_suspend(struct device *dev)
460 473 {
461 474 /* this IRQ is shared with DBGU and other hardware which isn't
... ... @@ -464,21 +477,42 @@
464 477 at91_rtc_imr = at91_rtc_read_imr()
465 478 & (AT91_RTC_ALARM|AT91_RTC_SECEV);
466 479 if (at91_rtc_imr) {
467   - if (device_may_wakeup(dev))
  480 + if (device_may_wakeup(dev)) {
  481 + unsigned long flags;
  482 +
468 483 enable_irq_wake(irq);
469   - else
  484 +
  485 + spin_lock_irqsave(&suspended_lock, flags);
  486 + suspended = true;
  487 + spin_unlock_irqrestore(&suspended_lock, flags);
  488 + } else {
470 489 at91_rtc_write_idr(at91_rtc_imr);
  490 + }
471 491 }
472 492 return 0;
473 493 }
474 494  
475 495 static int at91_rtc_resume(struct device *dev)
476 496 {
  497 + struct rtc_device *rtc = dev_get_drvdata(dev);
  498 +
477 499 if (at91_rtc_imr) {
478   - if (device_may_wakeup(dev))
  500 + if (device_may_wakeup(dev)) {
  501 + unsigned long flags;
  502 +
  503 + spin_lock_irqsave(&suspended_lock, flags);
  504 +
  505 + if (cached_events) {
  506 + rtc_update_irq(rtc, 1, cached_events);
  507 + cached_events = 0;
  508 + }
  509 +
  510 + suspended = false;
  511 + spin_unlock_irqrestore(&suspended_lock, flags);
  512 +
479 513 disable_irq_wake(irq);
480   - else
481   - at91_rtc_write_ier(at91_rtc_imr);
  514 + }
  515 + at91_rtc_write_ier(at91_rtc_imr);
482 516 }
483 517 return 0;
484 518 }
drivers/rtc/rtc-at91sam9.c
... ... @@ -23,6 +23,7 @@
23 23 #include <linux/io.h>
24 24 #include <linux/mfd/syscon.h>
25 25 #include <linux/regmap.h>
  26 +#include <linux/suspend.h>
26 27 #include <linux/clk.h>
27 28  
28 29 /*
... ... @@ -77,6 +78,9 @@
77 78 unsigned int gpbr_offset;
78 79 int irq;
79 80 struct clk *sclk;
  81 + bool suspended;
  82 + unsigned long events;
  83 + spinlock_t lock;
80 84 };
81 85  
82 86 #define rtt_readl(rtc, field) \
... ... @@ -271,14 +275,9 @@
271 275 return 0;
272 276 }
273 277  
274   -/*
275   - * IRQ handler for the RTC
276   - */
277   -static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
  278 +static irqreturn_t at91_rtc_cache_events(struct sam9_rtc *rtc)
278 279 {
279   - struct sam9_rtc *rtc = _rtc;
280 280 u32 sr, mr;
281   - unsigned long events = 0;
282 281  
283 282 /* Shared interrupt may be for another device. Note: reading
284 283 * SR clears it, so we must only read it in this irq handler!
... ... @@ -290,18 +289,54 @@
290 289  
291 290 /* alarm status */
292 291 if (sr & AT91_RTT_ALMS)
293   - events |= (RTC_AF | RTC_IRQF);
  292 + rtc->events |= (RTC_AF | RTC_IRQF);
294 293  
295 294 /* timer update/increment */
296 295 if (sr & AT91_RTT_RTTINC)
297   - events |= (RTC_UF | RTC_IRQF);
  296 + rtc->events |= (RTC_UF | RTC_IRQF);
298 297  
299   - rtc_update_irq(rtc->rtcdev, 1, events);
  298 + return IRQ_HANDLED;
  299 +}
300 300  
  301 +static void at91_rtc_flush_events(struct sam9_rtc *rtc)
  302 +{
  303 + if (!rtc->events)
  304 + return;
  305 +
  306 + rtc_update_irq(rtc->rtcdev, 1, rtc->events);
  307 + rtc->events = 0;
  308 +
301 309 pr_debug("%s: num=%ld, events=0x%02lx\n", __func__,
302   - events >> 8, events & 0x000000FF);
  310 + rtc->events >> 8, rtc->events & 0x000000FF);
  311 +}
303 312  
304   - return IRQ_HANDLED;
  313 +/*
  314 + * IRQ handler for the RTC
  315 + */
  316 +static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
  317 +{
  318 + struct sam9_rtc *rtc = _rtc;
  319 + int ret;
  320 +
  321 + spin_lock(&rtc->lock);
  322 +
  323 + ret = at91_rtc_cache_events(rtc);
  324 +
  325 + /* We're called in suspended state */
  326 + if (rtc->suspended) {
  327 + /* Mask irqs coming from this peripheral */
  328 + rtt_writel(rtc, MR,
  329 + rtt_readl(rtc, MR) &
  330 + ~(AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN));
  331 + /* Trigger a system wakeup */
  332 + pm_system_wakeup();
  333 + } else {
  334 + at91_rtc_flush_events(rtc);
  335 + }
  336 +
  337 + spin_unlock(&rtc->lock);
  338 +
  339 + return ret;
305 340 }
306 341  
307 342 static const struct rtc_class_ops at91_rtc_ops = {
... ... @@ -421,7 +456,8 @@
421 456  
422 457 /* register irq handler after we know what name we'll use */
423 458 ret = devm_request_irq(&pdev->dev, rtc->irq, at91_rtc_interrupt,
424   - IRQF_SHARED, dev_name(&rtc->rtcdev->dev), rtc);
  459 + IRQF_SHARED | IRQF_COND_SUSPEND,
  460 + dev_name(&rtc->rtcdev->dev), rtc);
425 461 if (ret) {
426 462 dev_dbg(&pdev->dev, "can't share IRQ %d?\n", rtc->irq);
427 463 return ret;
428 464  
... ... @@ -482,7 +518,12 @@
482 518 rtc->imr = mr & (AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN);
483 519 if (rtc->imr) {
484 520 if (device_may_wakeup(dev) && (mr & AT91_RTT_ALMIEN)) {
  521 + unsigned long flags;
  522 +
485 523 enable_irq_wake(rtc->irq);
  524 + spin_lock_irqsave(&rtc->lock, flags);
  525 + rtc->suspended = true;
  526 + spin_unlock_irqrestore(&rtc->lock, flags);
486 527 /* don't let RTTINC cause wakeups */
487 528 if (mr & AT91_RTT_RTTINCIEN)
488 529 rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN);
489 530  
... ... @@ -499,10 +540,18 @@
499 540 u32 mr;
500 541  
501 542 if (rtc->imr) {
  543 + unsigned long flags;
  544 +
502 545 if (device_may_wakeup(dev))
503 546 disable_irq_wake(rtc->irq);
504 547 mr = rtt_readl(rtc, MR);
505 548 rtt_writel(rtc, MR, mr | rtc->imr);
  549 +
  550 + spin_lock_irqsave(&rtc->lock, flags);
  551 + rtc->suspended = false;
  552 + at91_rtc_cache_events(rtc);
  553 + at91_rtc_flush_events(rtc);
  554 + spin_unlock_irqrestore(&rtc->lock, flags);
506 555 }
507 556  
508 557 return 0;
drivers/tty/serial/atmel_serial.c
... ... @@ -47,6 +47,7 @@
47 47 #include <linux/gpio/consumer.h>
48 48 #include <linux/err.h>
49 49 #include <linux/irq.h>
  50 +#include <linux/suspend.h>
50 51  
51 52 #include <asm/io.h>
52 53 #include <asm/ioctls.h>
... ... @@ -173,6 +174,12 @@
173 174 bool ms_irq_enabled;
174 175 bool is_usart; /* usart or uart */
175 176 struct timer_list uart_timer; /* uart timer */
  177 +
  178 + bool suspended;
  179 + unsigned int pending;
  180 + unsigned int pending_status;
  181 + spinlock_t lock_suspended;
  182 +
176 183 int (*prepare_rx)(struct uart_port *port);
177 184 int (*prepare_tx)(struct uart_port *port);
178 185 void (*schedule_rx)(struct uart_port *port);
... ... @@ -1179,12 +1186,15 @@
1179 1186 {
1180 1187 struct uart_port *port = dev_id;
1181 1188 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
1182   - unsigned int status, pending, pass_counter = 0;
  1189 + unsigned int status, pending, mask, pass_counter = 0;
1183 1190 bool gpio_handled = false;
1184 1191  
  1192 + spin_lock(&atmel_port->lock_suspended);
  1193 +
1185 1194 do {
1186 1195 status = atmel_get_lines_status(port);
1187   - pending = status & UART_GET_IMR(port);
  1196 + mask = UART_GET_IMR(port);
  1197 + pending = status & mask;
1188 1198 if (!gpio_handled) {
1189 1199 /*
1190 1200 * Dealing with GPIO interrupt
1191 1201  
... ... @@ -1206,11 +1216,21 @@
1206 1216 if (!pending)
1207 1217 break;
1208 1218  
  1219 + if (atmel_port->suspended) {
  1220 + atmel_port->pending |= pending;
  1221 + atmel_port->pending_status = status;
  1222 + UART_PUT_IDR(port, mask);
  1223 + pm_system_wakeup();
  1224 + break;
  1225 + }
  1226 +
1209 1227 atmel_handle_receive(port, pending);
1210 1228 atmel_handle_status(port, pending, status);
1211 1229 atmel_handle_transmit(port, pending);
1212 1230 } while (pass_counter++ < ATMEL_ISR_PASS_LIMIT);
1213 1231  
  1232 + spin_unlock(&atmel_port->lock_suspended);
  1233 +
1214 1234 return pass_counter ? IRQ_HANDLED : IRQ_NONE;
1215 1235 }
1216 1236  
... ... @@ -1742,7 +1762,8 @@
1742 1762 /*
1743 1763 * Allocate the IRQ
1744 1764 */
1745   - retval = request_irq(port->irq, atmel_interrupt, IRQF_SHARED,
  1765 + retval = request_irq(port->irq, atmel_interrupt,
  1766 + IRQF_SHARED | IRQF_COND_SUSPEND,
1746 1767 tty ? tty->name : "atmel_serial", port);
1747 1768 if (retval) {
1748 1769 dev_err(port->dev, "atmel_startup - Can't get irq\n");
1749 1770  
... ... @@ -2513,8 +2534,14 @@
2513 2534  
2514 2535 /* we can not wake up if we're running on slow clock */
2515 2536 atmel_port->may_wakeup = device_may_wakeup(&pdev->dev);
2516   - if (atmel_serial_clk_will_stop())
  2537 + if (atmel_serial_clk_will_stop()) {
  2538 + unsigned long flags;
  2539 +
  2540 + spin_lock_irqsave(&atmel_port->lock_suspended, flags);
  2541 + atmel_port->suspended = true;
  2542 + spin_unlock_irqrestore(&atmel_port->lock_suspended, flags);
2517 2543 device_set_wakeup_enable(&pdev->dev, 0);
  2544 + }
2518 2545  
2519 2546 uart_suspend_port(&atmel_uart, port);
... ... @@ -2525,7 +2552,19 @@
2525 2552 {
2526 2553 struct uart_port *port = platform_get_drvdata(pdev);
2527 2554 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
  2555 + unsigned long flags;
2528 2556  
  2557 + spin_lock_irqsave(&atmel_port->lock_suspended, flags);
  2558 + if (atmel_port->pending) {
  2559 + atmel_handle_receive(port, atmel_port->pending);
  2560 + atmel_handle_status(port, atmel_port->pending,
  2561 + atmel_port->pending_status);
  2562 + atmel_handle_transmit(port, atmel_port->pending);
  2563 + atmel_port->pending = 0;
  2564 + }
  2565 + atmel_port->suspended = false;
  2566 + spin_unlock_irqrestore(&atmel_port->lock_suspended, flags);
  2567 +
2529 2568 uart_resume_port(&atmel_uart, port);
2530 2569 device_set_wakeup_enable(&pdev->dev, atmel_port->may_wakeup);
2531 2570  
... ... @@ -2592,6 +2631,8 @@
2592 2631 port = &atmel_ports[ret];
2593 2632 port->backup_imr = 0;
2594 2633 port->uart.line = ret;
  2634 +
  2635 + spin_lock_init(&port->lock_suspended);
2595 2636  
2596 2637 ret = atmel_init_gpios(port, &pdev->dev);
2597 2638 if (ret < 0)
drivers/watchdog/at91sam9_wdt.c
... ... @@ -208,7 +208,8 @@
208 208  
209 209 if ((tmp & AT91_WDT_WDFIEN) && wdt->irq) {
210 210 err = request_irq(wdt->irq, wdt_interrupt,
211   - IRQF_SHARED | IRQF_IRQPOLL,
  211 + IRQF_SHARED | IRQF_IRQPOLL |
  212 + IRQF_NO_SUSPEND,
212 213 pdev->name, wdt);
213 214 if (err)
214 215 return err;
include/linux/cpuidle.h
... ... @@ -126,6 +126,8 @@
126 126  
127 127 #ifdef CONFIG_CPU_IDLE
128 128 extern void disable_cpuidle(void);
  129 +extern bool cpuidle_not_available(struct cpuidle_driver *drv,
  130 + struct cpuidle_device *dev);
129 131  
130 132 extern int cpuidle_select(struct cpuidle_driver *drv,
131 133 struct cpuidle_device *dev);
132 134  
... ... @@ -150,11 +152,17 @@
150 152 extern int cpuidle_enable_device(struct cpuidle_device *dev);
151 153 extern void cpuidle_disable_device(struct cpuidle_device *dev);
152 154 extern int cpuidle_play_dead(void);
153   -extern void cpuidle_enter_freeze(void);
  155 +extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
  156 + struct cpuidle_device *dev);
  157 +extern int cpuidle_enter_freeze(struct cpuidle_driver *drv,
  158 + struct cpuidle_device *dev);
154 159  
155 160 extern struct cpuidle_driver *cpuidle_get_cpu_driver(struct cpuidle_device *dev);
156 161 #else
157 162 static inline void disable_cpuidle(void) { }
  163 +static inline bool cpuidle_not_available(struct cpuidle_driver *drv,
  164 + struct cpuidle_device *dev)
  165 +{return true; }
158 166 static inline int cpuidle_select(struct cpuidle_driver *drv,
159 167 struct cpuidle_device *dev)
160 168 {return -ENODEV; }
... ... @@ -183,7 +191,12 @@
183 191 {return -ENODEV; }
184 192 static inline void cpuidle_disable_device(struct cpuidle_device *dev) { }
185 193 static inline int cpuidle_play_dead(void) {return -ENODEV; }
186   -static inline void cpuidle_enter_freeze(void) { }
  194 +static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
  195 + struct cpuidle_device *dev)
  196 +{return -ENODEV; }
  197 +static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv,
  198 + struct cpuidle_device *dev)
  199 +{return -ENODEV; }
187 200 static inline struct cpuidle_driver *cpuidle_get_cpu_driver(
188 201 struct cpuidle_device *dev) {return NULL; }
189 202 #endif
include/linux/interrupt.h
... ... @@ -52,11 +52,17 @@
52 52 * IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished.
53 53 * Used by threaded interrupts which need to keep the
54 54 * irq line disabled until the threaded handler has been run.
55   - * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
  55 + * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend. Does not guarantee
  56 + * that this interrupt will wake the system from a suspended
  57 + * state. See Documentation/power/suspend-and-interrupts.txt
56 58 * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set
57 59 * IRQF_NO_THREAD - Interrupt cannot be threaded
58 60 * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device
59 61 * resume time.
  62 + * IRQF_COND_SUSPEND - If the IRQ is shared with a NO_SUSPEND user, execute this
  63 + * interrupt handler after suspending interrupts. For system
  64 + * wakeup devices users need to implement wakeup detection in
  65 + * their interrupt handlers.
60 66 */
61 67 #define IRQF_DISABLED 0x00000020
62 68 #define IRQF_SHARED 0x00000080
... ... @@ -70,6 +76,7 @@
70 76 #define IRQF_FORCE_RESUME 0x00008000
71 77 #define IRQF_NO_THREAD 0x00010000
72 78 #define IRQF_EARLY_RESUME 0x00020000
  79 +#define IRQF_COND_SUSPEND 0x00040000
73 80  
74 81 #define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
75 82  
include/linux/irqdesc.h
... ... @@ -78,6 +78,7 @@
78 78 #ifdef CONFIG_PM_SLEEP
79 79 unsigned int nr_actions;
80 80 unsigned int no_suspend_depth;
  81 + unsigned int cond_suspend_depth;
81 82 unsigned int force_resume_depth;
82 83 #endif
83 84 #ifdef CONFIG_PROC_FS
kernel/irq/manage.c
... ... @@ -1474,8 +1474,13 @@
1474 1474 * otherwise we'll have trouble later trying to figure out
1475 1475 * which interrupt is which (messes up the interrupt freeing
1476 1476 * logic etc).
  1477 + *
  1478 + * Also IRQF_COND_SUSPEND only makes sense for shared interrupts and
  1479 + * it cannot be set along with IRQF_NO_SUSPEND.
1477 1480 */
1478   - if ((irqflags & IRQF_SHARED) && !dev_id)
  1481 + if (((irqflags & IRQF_SHARED) && !dev_id) ||
  1482 + (!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) ||
  1483 + ((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND)))
1479 1484 return -EINVAL;
1480 1485  
1481 1486 desc = irq_to_desc(irq);
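
In short, __setup_irq() now enforces these combinations (a sketch; h and
dev are placeholders):

	request_irq(irq, h, IRQF_SHARED | IRQF_COND_SUSPEND, "x", dev);	/* ok */
	request_irq(irq, h, IRQF_COND_SUSPEND, "x", dev);	/* -EINVAL: needs IRQF_SHARED */
	request_irq(irq, h, IRQF_SHARED | IRQF_NO_SUSPEND
			    | IRQF_COND_SUSPEND, "x", dev);	/* -EINVAL: mutually exclusive */
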
kernel/irq/pm.c
... ... @@ -43,9 +43,12 @@
43 43  
44 44 if (action->flags & IRQF_NO_SUSPEND)
45 45 desc->no_suspend_depth++;
  46 + else if (action->flags & IRQF_COND_SUSPEND)
  47 + desc->cond_suspend_depth++;
46 48  
47 49 WARN_ON_ONCE(desc->no_suspend_depth &&
48   - desc->no_suspend_depth != desc->nr_actions);
  50 + (desc->no_suspend_depth +
  51 + desc->cond_suspend_depth) != desc->nr_actions);
49 52 }
50 53  
51 54 /*
... ... @@ -61,6 +64,8 @@
61 64  
62 65 if (action->flags & IRQF_NO_SUSPEND)
63 66 desc->no_suspend_depth--;
  67 + else if (action->flags & IRQF_COND_SUSPEND)
  68 + desc->cond_suspend_depth--;
64 69 }
65 70  
66 71 static bool suspend_device_irq(struct irq_desc *desc, int irq)
kernel/sched/idle.c
... ... @@ -82,6 +82,7 @@
82 82 struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
83 83 int next_state, entered_state;
84 84 unsigned int broadcast;
  85 + bool reflect;
85 86  
86 87 /*
87 88 * Check if the idle task must be rescheduled. If it is the
... ... @@ -105,6 +106,9 @@
105 106 */
106 107 rcu_idle_enter();
107 108  
  109 + if (cpuidle_not_available(drv, dev))
  110 + goto use_default;
  111 +
108 112 /*
109 113 * Suspend-to-idle ("freeze") is a system state in which all user space
110 114 * has been frozen, all I/O devices have been suspended and the only
... ... @@ -115,31 +119,25 @@
115 119 * until a proper wakeup interrupt happens.
116 120 */
117 121 if (idle_should_freeze()) {
118   - cpuidle_enter_freeze();
119   - local_irq_enable();
120   - goto exit_idle;
121   - }
  122 + entered_state = cpuidle_enter_freeze(drv, dev);
  123 + if (entered_state >= 0) {
  124 + local_irq_enable();
  125 + goto exit_idle;
  126 + }
122 127  
123   - /*
124   - * Ask the cpuidle framework to choose a convenient idle state.
125   - * Fall back to the default arch idle method on errors.
126   - */
127   - next_state = cpuidle_select(drv, dev);
128   - if (next_state < 0) {
129   -use_default:
  128 + reflect = false;
  129 + next_state = cpuidle_find_deepest_state(drv, dev);
  130 + } else {
  131 + reflect = true;
130 132 /*
131   - * We can't use the cpuidle framework, let's use the default
132   - * idle routine.
  133 + * Ask the cpuidle framework to choose a convenient idle state.
133 134 */
134   - if (current_clr_polling_and_test())
135   - local_irq_enable();
136   - else
137   - arch_cpu_idle();
138   -
139   - goto exit_idle;
  135 + next_state = cpuidle_select(drv, dev);
140 136 }
  137 + /* Fall back to the default arch idle method on errors. */
  138 + if (next_state < 0)
  139 + goto use_default;
141 140  
142   -
143 141 /*
144 142 * The idle task must be scheduled, it is pointless to
145 143 * go to idle, just update no idle residency and get
... ... @@ -183,7 +181,8 @@
183 181 /*
184 182 * Give the governor an opportunity to reflect on the outcome
185 183 */
186   - cpuidle_reflect(dev, entered_state);
  184 + if (reflect)
  185 + cpuidle_reflect(dev, entered_state);
187 186  
188 187 exit_idle:
189 188 __current_set_polling();
... ... @@ -196,6 +195,19 @@
196 195  
197 196 rcu_idle_exit();
198 197 start_critical_timings();
  198 + return;
  199 +
  200 +use_default:
  201 + /*
  202 + * We can't use the cpuidle framework, let's use the default
  203 + * idle routine.
  204 + */
  205 + if (current_clr_polling_and_test())
  206 + local_irq_enable();
  207 + else
  208 + arch_cpu_idle();
  209 +
  210 + goto exit_idle;
199 211 }
200 212  
201 213 /*