Commit 65a64464349883891e21e74af16c05d6e1eeb4e9

Authored by Andi Kleen
1 parent 5d5429af06

HWPOISON: Allow schedule_on_each_cpu() from keventd

Right now, calling schedule_on_each_cpu() from keventd deadlocks: the
function schedules a work item on every CPU, including the current one,
and then waits for all of them — but the current keventd thread is the
very worker that would have to run its own item. This is triggered via
lru_add_drain_all() in hwpoison.

In that case, just call the function directly for the current CPU. This
is actually faster, too.

Debugged with Fengguang Wu & Max Asbock.

Signed-off-by: Andi Kleen <ak@linux.intel.com>

1 changed file, 19 additions, 2 deletions

@@ -667,21 +667,38 @@
 int schedule_on_each_cpu(work_func_t func)
 {
 	int cpu;
+	int orig = -1;
 	struct work_struct *works;

 	works = alloc_percpu(struct work_struct);
 	if (!works)
 		return -ENOMEM;

+	/*
+	 * when running in keventd don't schedule a work item on itself.
+	 * Can just call directly because the work queue is already bound.
+	 * This also is faster.
+	 * Make this a generic parameter for other workqueues?
+	 */
+	if (current_is_keventd()) {
+		orig = raw_smp_processor_id();
+		INIT_WORK(per_cpu_ptr(works, orig), func);
+		func(per_cpu_ptr(works, orig));
+	}
+
 	get_online_cpus();
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = per_cpu_ptr(works, cpu);

+		if (cpu == orig)
+			continue;
 		INIT_WORK(work, func);
 		schedule_work_on(cpu, work);
 	}
-	for_each_online_cpu(cpu)
-		flush_work(per_cpu_ptr(works, cpu));
+	for_each_online_cpu(cpu) {
+		if (cpu != orig)
+			flush_work(per_cpu_ptr(works, cpu));
+	}
 	put_online_cpus();
 	free_percpu(works);
 	return 0;