08 Jan, 2021

1 commit

  • Use scs_alloc() to allocate the IRQ and SDEI shadow stacks as well,
    instead of using statically allocated stacks.
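
    For reference, a minimal sketch of the resulting allocation path, assuming
    the per-cpu pointer name used by the patch (irq_shadow_call_stack_ptr) and
    the generic scs_alloc(node) helper:

        #include <linux/cpumask.h>
        #include <linux/percpu.h>
        #include <linux/scs.h>
        #include <linux/topology.h>

        DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);

        static void init_irq_scs(void)
        {
                int cpu;

                /* Nothing to do unless shadow call stacks are configured in. */
                if (!IS_ENABLED(CONFIG_SHADOW_CALL_STACK))
                        return;

                for_each_possible_cpu(cpu)
                        per_cpu(irq_shadow_call_stack_ptr, cpu) =
                                scs_alloc(cpu_to_node(cpu));
        }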

    Bug: 169781940
    Change-Id: If3f38d603a7c1e8ebcf1e8655b70fa6bfde7c48d
    (cherry picked from commit ac20ffbb0279aae7be48567fb734eae7d050769e)
    Signed-off-by: Sami Tolvanen
    Acked-by: Will Deacon
    Link: https://lore.kernel.org/r/20201130233442.2562064-3-samitolvanen@google.com
    [will: Move CONFIG_SHADOW_CALL_STACK check into init_irq_scs()]
    Signed-off-by: Will Deacon

    Sami Tolvanen
     

30 Nov, 2020

1 commit

  • In preparation for reworking the EL1 irq/nmi entry code, move the
    existing logic to C. We no longer need the asm_nmi_enter() and
    asm_nmi_exit() wrappers, so these are removed. The new C functions are
    marked noinstr, which prevents compiler instrumentation and runtime
    probing.

    In subsequent patches we'll want the new C helpers to be called in all
    cases, so we don't bother wrapping the calls with ifdeferry. Even when
    the new C functions are stubs the trivial calls are unlikely to have a
    measurable impact on the IRQ or NMI paths anyway.

    Prototypes are added to <asm/exception.h> as otherwise (in some
    configurations) GCC will complain about the lack of a forward
    declaration. We already do this for existing functions, e.g.
    enter_from_user_mode().

    The new helpers are marked as noinstr (which prevents all
    instrumentation, tracing, and kprobes). Otherwise, there should be no
    functional change as a result of this patch.
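
    A sketch of the shape of the new C helpers (the function names and the
    pseudo-NMI guard are assumed from the patch description, so treat this as
    illustrative):

        #include <linux/hardirq.h>
        #include <asm/ptrace.h>

        /* Called from the EL1 IRQ vectors; noinstr keeps instrumentation out. */
        asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
        {
                if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
                        nmi_enter();
        }

        asmlinkage void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
        {
                if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
                        nmi_exit();
        }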

    Signed-off-by: Mark Rutland
    Cc: Catalin Marinas
    Cc: James Morse
    Cc: Will Deacon
    Link: https://lore.kernel.org/r/20201130115950.22492-7-mark.rutland@arm.com
    Signed-off-by: Will Deacon

    Mark Rutland
     

17 Sep, 2020

1 commit


09 Jul, 2019

1 commit

  • Pull arm64 updates from Catalin Marinas:

    - arm64 support for syscall emulation via PTRACE_SYSEMU{,_SINGLESTEP}

    - Wire up VM_FLUSH_RESET_PERMS for arm64, allowing the core code to
    manage the permissions of executable vmalloc regions more strictly

    - Slight performance improvement by keeping softirqs enabled while
    touching the FPSIMD/SVE state (kernel_neon_begin/end)

    - Expose a couple of ARMv8.5 features to user (HWCAP): CondM (new
    XAFLAG and AXFLAG instructions for floating point comparison flags
    manipulation) and FRINT (rounding floating point numbers to integers)

    - Re-instate ARM64_PSEUDO_NMI support which was previously marked as
    BROKEN due to some bugs (now fixed)

    - Improve parking of stopped CPUs and implement an arm64-specific
    panic_smp_self_stop() to avoid warning on not being able to stop
    secondary CPUs during panic

    - perf: enable the ARM Statistical Profiling Extensions (SPE) on ACPI
    platforms

    - perf: DDR performance monitor support for iMX8QXP

    - cache_line_size() can now be set from DT or ACPI/PPTT if provided to
    cope with a system cache info not exposed via the CPUID registers

    - Avoid warning on hardware cache line size greater than
    ARCH_DMA_MINALIGN if the system is fully coherent

    - arm64 do_page_fault() and hugetlb cleanups

    - Refactor set_pte_at() to avoid redundant READ_ONCE(*ptep)

    - Ignore ACPI 5.1 FADTs reported as 5.0 (infer from the
    'arm_boot_flags' introduced in 5.1)

    - CONFIG_RANDOMIZE_BASE now enabled in defconfig

    - Allow the selection of ARM64_MODULE_PLTS, currently only done via
    RANDOMIZE_BASE (and an erratum workaround), allowing modules to spill
    over into the vmalloc area

    - Make ZONE_DMA32 configurable

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (54 commits)
    perf: arm_spe: Enable ACPI/Platform automatic module loading
    arm_pmu: acpi: spe: Add initial MADT/SPE probing
    ACPI/PPTT: Add function to return ACPI 6.3 Identical tokens
    ACPI/PPTT: Modify node flag detection to find last IDENTICAL
    x86/entry: Simplify _TIF_SYSCALL_EMU handling
    arm64: rename dump_instr as dump_kernel_instr
    arm64/mm: Drop [PTE|PMD]_TYPE_FAULT
    arm64: Implement panic_smp_self_stop()
    arm64: Improve parking of stopped CPUs
    arm64: Expose FRINT capabilities to userspace
    arm64: Expose ARMv8.5 CondM capability to userspace
    arm64: defconfig: enable CONFIG_RANDOMIZE_BASE
    arm64: ARM64_MODULES_PLTS must depend on MODULES
    arm64: bpf: do not allocate executable memory
    arm64/kprobes: set VM_FLUSH_RESET_PERMS on kprobe instruction pages
    arm64/mm: wire up CONFIG_ARCH_HAS_SET_DIRECT_MAP
    arm64: module: create module allocations without exec permissions
    arm64: Allow user selection of ARM64_MODULE_PLTS
    acpi/arm64: ignore 5.1 FADTs that are reported as 5.0
    arm64: Allow selecting Pseudo-NMI again
    ...

    Linus Torvalds
     

21 Jun, 2019

2 commits

  • When enabling the ARM64_PSEUDO_NMI feature in the kdump capture kernel,
    it reports a kernel stack overflow exception:

    [ 0.000000] CPU features: detected: IRQ priority masking
    [ 0.000000] alternatives: patching kernel code
    [ 0.000000] Insufficient stack space to handle exception!
    [ 0.000000] ESR: 0x96000044 -- DABT (current EL)
    [ 0.000000] FAR: 0x0000000000000040
    [ 0.000000] Task stack: [0xffff0000097f0000..0xffff0000097f4000]
    [ 0.000000] IRQ stack: [0x0000000000000000..0x0000000000004000]
    [ 0.000000] Overflow stack: [0xffff80002b7cf290..0xffff80002b7d0290]
    [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.34-lw+ #3
    [ 0.000000] pstate: 400003c5 (nZcv DAIF -PAN -UAO)
    [ 0.000000] pc : el1_sync+0x0/0xb8
    [ 0.000000] lr : el1_irq+0xb8/0x140
    [ 0.000000] sp : 0000000000000040
    [ 0.000000] pmr_save: 00000070
    [ 0.000000] x29: ffff0000097f3f60 x28: ffff000009806240
    [ 0.000000] x27: 0000000080000000 x26: 0000000000004000
    [ 0.000000] x25: 0000000000000000 x24: ffff000009329028
    [ 0.000000] x23: 0000000040000005 x22: ffff000008095c6c
    [ 0.000000] x21: ffff0000097f3f70 x20: 0000000000000070
    [ 0.000000] x19: ffff0000097f3e30 x18: ffffffffffffffff
    [ 0.000000] x17: 0000000000000000 x16: 0000000000000000
    [ 0.000000] x15: ffff0000097f9708 x14: ffff000089a382ef
    [ 0.000000] x13: ffff000009a382fd x12: ffff000009824000
    [ 0.000000] x11: ffff0000097fb7b0 x10: ffff000008730028
    [ 0.000000] x9 : ffff000009440018 x8 : 000000000000000d
    [ 0.000000] x7 : 6b20676e69686374 x6 : 000000000000003b
    [ 0.000000] x5 : 0000000000000000 x4 : ffff000008093600
    [ 0.000000] x3 : 0000000400000008 x2 : 7db2e689fc2b8e00
    [ 0.000000] x1 : 0000000000000000 x0 : ffff0000097f3e30
    [ 0.000000] Kernel panic - not syncing: kernel stack overflow
    [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.34-lw+ #3
    [ 0.000000] Call trace:
    [ 0.000000] dump_backtrace+0x0/0x1b8
    [ 0.000000] show_stack+0x24/0x30
    [ 0.000000] dump_stack+0xa8/0xcc
    [ 0.000000] panic+0x134/0x30c
    [ 0.000000] __stack_chk_fail+0x0/0x28
    [ 0.000000] handle_bad_stack+0xfc/0x108
    [ 0.000000] __bad_stack+0x90/0x94
    [ 0.000000] el1_sync+0x0/0xb8
    [ 0.000000] init_gic_priority_masking+0x4c/0x70
    [ 0.000000] smp_prepare_boot_cpu+0x60/0x68
    [ 0.000000] start_kernel+0x1e8/0x53c
    [ 0.000000] ---[ end Kernel panic - not syncing: kernel stack overflow ]---

    The reason is that init_gic_priority_masking() may unmask PSR.I while the
    irq stacks are not yet initialised. An "NMI" could then be raised and
    would land in this exception.

    In this patch, we only write the PMR in smp_prepare_boot_cpu(), and delay
    unmasking PSR.I until after the irq stacks have been initialised in init_IRQ().
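
    A sketch of the resulting ordering in init_IRQ(), assuming the helpers
    available in that era's arm64 tree (system_uses_irq_prio_masking(),
    local_daif_restore()):

        #include <linux/irqchip.h>
        #include <asm/daifflags.h>

        void __init init_IRQ(void)
        {
                init_irq_stacks();
                irqchip_init();
                if (!handle_arch_irq)
                        panic("No interrupt controller found.");

                if (system_uses_irq_prio_masking()) {
                        /*
                         * Now that we have a stack for our IRQ handler, set
                         * the PMR/PSR pair to a consistent state.
                         */
                        WARN_ON(read_sysreg(daif) & PSR_A_BIT);
                        local_daif_restore(DAIF_PROCCTX_NOIRQ);
                }
        }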

    Fixes: e79321883842 ("arm64: Switch to PMR masking when starting CPUs")
    Cc: Will Deacon
    Reviewed-by: Marc Zyngier
    Signed-off-by: Wei Li
    [JT: make init_gic_priority_masking() not modify daif, rebase on other
    priority masking fixes]
    Signed-off-by: Julien Thierry
    Signed-off-by: Catalin Marinas

    Wei Li
     
  • In the presence of any form of instrumentation, nmi_enter() should be
    done before calling any traceable code and any instrumentation code.

    Currently, nmi_enter() is done in handle_domain_nmi(), which is much
    too late as instrumentation code might get called before. Move the
    nmi_enter/exit() calls to the arch IRQ vector handler.

    On arm64, it is not possible to know if the IRQ vector handler was
    called because of an NMI before acknowledging the interrupt. However, it
    is possible to know whether normal interrupts could be taken in the
    interrupted context (i.e. if taking an NMI in that context could
    introduce a potential race condition).

    When interrupting a context with IRQs disabled, call nmi_enter() as soon
    as possible. In contexts with IRQs enabled, defer this to the interrupt
    controller, which is in a better position to know if an interrupt taken
    is an NMI.
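
    Conceptually (a C sketch rather than the literal entry code; the wrapper
    name is hypothetical), the root IRQ handler ends up doing:

        #include <linux/hardirq.h>
        #include <asm/ptrace.h>

        extern void (*handle_arch_irq)(struct pt_regs *);

        static void do_el1_irq(struct pt_regs *regs)
        {
                /* IRQs were masked in the interrupted context: possibly an NMI. */
                bool maybe_nmi = !interrupts_enabled(regs);

                if (maybe_nmi)
                        nmi_enter();

                handle_arch_irq(regs);  /* irqchip sorts NMI vs IRQ when unmasked */

                if (maybe_nmi)
                        nmi_exit();
        }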

    Fixes: bc3c03ccb464 ("arm64: Enable the support of pseudo-NMIs")
    Cc: # 5.1.x-
    Cc: Will Deacon
    Cc: Thomas Gleixner
    Cc: Jason Cooper
    Cc: Mark Rutland
    Reviewed-by: Marc Zyngier
    Signed-off-by: Julien Thierry
    Signed-off-by: Catalin Marinas

    Julien Thierry
     

19 Jun, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation this program is
    distributed in the hope that it will be useful but without any
    warranty without even the implied warranty of merchantability or
    fitness for a particular purpose see the gnu general public license
    for more details you should have received a copy of the gnu general
    public license along with this program if not see http www gnu org
    licenses

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 503 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Alexios Zavras
    Reviewed-by: Allison Randal
    Reviewed-by: Enrico Weigelt
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

06 Feb, 2019

1 commit

  • When using VHE, the host needs to clear HCR_EL2.TGE bit in order
    to interact with guest TLBs, switching from EL2&0 translation regime
    to EL1&0.

    However, a non-maskable asynchronous event, such as an SDEI event, could
    happen while TGE is cleared. Because of this, address translation
    operations relying on the EL2&0 translation regime could fail (TLB
    invalidation, userspace access, ...).

    Fix this by properly setting HCR_EL2.TGE when entering NMI context and
    clearing it, if necessary, when returning to the interrupted context.
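
    A sketch of the mechanism, with names largely as introduced by the patch
    (the exact per-cpu bookkeeping is assumed):

        /* Saved HCR_EL2, so the interrupted context's value can be restored. */
        struct nmi_ctx {
                u64 hcr;
        };
        DECLARE_PER_CPU(struct nmi_ctx, nmi_contexts);

        #define arch_nmi_enter()                                                \
        do {                                                                    \
                if (is_kernel_in_hyp_mode()) {                                  \
                        struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts);  \
                        nmi_ctx->hcr = read_sysreg(hcr_el2);                    \
                        if (!(nmi_ctx->hcr & HCR_TGE)) {                        \
                                write_sysreg(nmi_ctx->hcr | HCR_TGE, hcr_el2);  \
                                isb();                                          \
                        }                                                       \
                }                                                               \
        } while (0)

    The matching arch_nmi_exit() writes the saved value back (followed by an
    isb()) whenever TGE had to be set here.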

    Signed-off-by: Julien Thierry
    Suggested-by: Marc Zyngier
    Reviewed-by: Marc Zyngier
    Reviewed-by: James Morse
    Cc: Arnd Bergmann
    Cc: Will Deacon
    Cc: Marc Zyngier
    Cc: James Morse
    Cc: linux-arch@vger.kernel.org
    Cc: stable@vger.kernel.org
    Signed-off-by: Catalin Marinas

    Julien Thierry
     

03 Aug, 2018

1 commit

  • It appears arm64 copied arm's GENERIC_IRQ_MULTI_HANDLER code, but made
    it unconditional.

    Convert the arm64 code to use the new generic code, which simply consists
    of deleting the arm64 copy and setting MULTI_IRQ_HANDLER instead.

    Signed-off-by: Palmer Dabbelt
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Christoph Hellwig
    Cc: linux@armlinux.org.uk
    Cc: catalin.marinas@arm.com
    Cc: Will Deacon
    Cc: jonas@southpole.se
    Cc: stefan.kristiansson@saunalahti.fi
    Cc: shorne@gmail.com
    Cc: jason@lakedaemon.net
    Cc: marc.zyngier@arm.com
    Cc: Arnd Bergmann
    Cc: nicolas.pitre@linaro.org
    Cc: vladimir.murzin@arm.com
    Cc: keescook@chromium.org
    Cc: jinb.park7@gmail.com
    Cc: yamada.masahiro@socionext.com
    Cc: alexandre.belloni@bootlin.com
    Cc: pombredanne@nexb.com
    Cc: Greg KH
    Cc: kstewart@linuxfoundation.org
    Cc: jhogan@kernel.org
    Cc: mark.rutland@arm.com
    Cc: ard.biesheuvel@linaro.org
    Cc: james.morse@arm.com
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: openrisc@lists.librecores.org
    Link: https://lkml.kernel.org/r/20180622170126.6308-4-palmer@sifive.com

    Palmer Dabbelt
     

13 Jan, 2018

1 commit

  • Today the arm64 arch code allocates an extra IRQ stack per-cpu. If we
    also have SDEI and VMAP stacks we need two extra per-cpu VMAP stacks.

    Move the VMAP stack allocation out to a helper in a new header file.
    This avoids missing THREADINFO_GFP, or getting the all-important alignment
    wrong.
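
    A sketch of the helper described above, assuming the usual kernel-stack
    vmalloc parameters (THREAD_ALIGN, THREADINFO_GFP) and that era's
    __vmalloc_node_range() signature:

        #include <linux/thread_info.h>
        #include <linux/vmalloc.h>

        static inline unsigned long *arch_alloc_vmap_stack(size_t stack_size, int node)
        {
                BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));

                return __vmalloc_node_range(stack_size, THREAD_ALIGN,
                                            VMALLOC_START, VMALLOC_END,
                                            THREADINFO_GFP, PAGE_KERNEL, 0,
                                            node, __builtin_return_address(0));
        }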

    Signed-off-by: James Morse
    Reviewed-by: Catalin Marinas
    Reviewed-by: Mark Rutland
    Signed-off-by: Catalin Marinas

    James Morse
     

16 Aug, 2017

2 commits

  • This patch enables arm64 to be built with vmap'd task and IRQ stacks.

    As vmap'd stacks are mapped at page granularity, stacks must be a multiple of
    PAGE_SIZE. This means that a 64K page kernel must use stacks of at least 64K in
    size.
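
    In practice the sizing constraint boils down to something like the
    following THREAD_SHIFT selection (a sketch along the lines of arm64's
    asm/memory.h of that era):

        #if defined(CONFIG_VMAP_STACK) && (MIN_THREAD_SHIFT < PAGE_SHIFT)
        #define THREAD_SHIFT    PAGE_SHIFT      /* round stacks up to a whole page */
        #else
        #define THREAD_SHIFT    MIN_THREAD_SHIFT
        #endif

        #define THREAD_SIZE     (UL(1) << THREAD_SHIFT)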

    To minimize the increase in Image size, IRQ stacks are dynamically allocated at
    boot time, rather than embedding the boot CPU's IRQ stack in the kernel image.

    This patch was co-authored by Ard Biesheuvel and Mark Rutland.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Mark Rutland
    Reviewed-by: Will Deacon
    Tested-by: Laura Abbott
    Cc: Catalin Marinas
    Cc: James Morse

    Mark Rutland
     
  • We allocate our IRQ stacks using a percpu array. This allows us to generate our
    IRQ stack pointers with adr_this_cpu, but bloats the kernel Image with the boot
    CPU's IRQ stack. Additionally, these are packed with other percpu variables,
    and aren't guaranteed to have guard pages.

    When we enable VMAP_STACK we'll want to vmap our IRQ stacks also, in order to
    provide guard pages and to permit more stringent alignment requirements. Doing
    so will require that we use a percpu pointer to each IRQ stack, rather than
    allocating a percpu IRQ stack in the kernel image.

    This patch updates our IRQ stack code to use a percpu pointer to the base of
    each IRQ stack. This will allow us to change the way the stack is allocated
    with minimal changes elsewhere. In some cases we may try to backtrace before
    the IRQ stack pointers are initialised, so on_irq_stack() is updated to account
    for this.

    In testing with cyclictest, there was no measurable difference between using
    adr_this_cpu (for irq_stack) and ldr_this_cpu (for irq_stack_ptr) in the IRQ
    entry path.
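
    A sketch of the resulting shape: a per-cpu pointer plus an on_irq_stack()
    check that tolerates early boot (IRQ_STACK_SIZE assumed from asm/irq.h):

        #include <linux/percpu.h>

        DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);

        static inline bool on_irq_stack(unsigned long sp)
        {
                unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
                unsigned long high = low + IRQ_STACK_SIZE;

                /* The pointer may not be set up yet very early in boot. */
                if (!low)
                        return false;

                return (low <= sp && sp < high);
        }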

    Signed-off-by: Mark Rutland
    Reviewed-by: Will Deacon
    Tested-by: Laura Abbott
    Cc: Ard Biesheuvel
    Cc: Catalin Marinas
    Cc: James Morse

    Mark Rutland
     

22 Dec, 2015

1 commit

  • sysrq_handle_reboot() re-enables interrupts while on the irq stack. The
    irq_stack implementation wrongly assumed this would only ever happen
    via the softirq path, allowing it to update irq_count late, in
    do_softirq_own_stack().

    This means if an irq occurs in sysrq_handle_reboot(), during
    emergency_restart() the stack will be corrupted, as irq_count wasn't
    updated.

    Lose the optimisation, and instead of moving the adding/subtracting of
    irq_count into irq_stack_entry/irq_stack_exit, remove it, and compare
    sp_el0 (struct thread_info) with sp & ~(THREAD_SIZE - 1). This tells us
    whether we are on a task stack; if so, we can safely switch to the irq
    stack. Finally, remove do_softirq_own_stack(); we don't need it anymore.
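
    A conceptual C equivalent of the new check (the real change is an entry.S
    macro; the helper name here is hypothetical):

        /*
         * sp_el0 holds the current struct thread_info pointer, which sits at
         * the base of the task stack, so masking sp down to THREAD_SIZE tells
         * us whether we are still on the task stack.
         */
        static inline bool on_task_stack(unsigned long sp, unsigned long sp_el0)
        {
                return (sp & ~(THREAD_SIZE - 1)) == sp_el0;
        }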

    Reported-by: Will Deacon
    Signed-off-by: James Morse
    [will: use get_thread_info macro]
    Signed-off-by: Will Deacon

    James Morse
     

08 Dec, 2015

2 commits

  • entry.S is modified to switch to the per_cpu irq_stack during el{0,1}_irq.
    irq_count is used to detect recursive interrupts on the irq_stack; it is
    updated late by do_softirq_own_stack(), when called on the irq_stack, before
    __do_softirq() re-enables interrupts to process softirqs.

    do_softirq_own_stack() is added by this patch, but does not yet switch
    stack.

    This patch adds the dummy stack frame and data needed by the previous
    stack tracing patches.
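
    The stack itself is plain per-cpu data; a sketch of the definition this
    patch introduces (size and alignment as described in the series):

        /* 16-byte aligned per-cpu IRQ stack, switched to by the entry code. */
        DEFINE_PER_CPU(unsigned long [IRQ_STACK_SIZE/sizeof(long)], irq_stack) __aligned(16);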

    Reviewed-by: Catalin Marinas
    Signed-off-by: James Morse
    Signed-off-by: Will Deacon

    James Morse
     
  • This patch allows unwind_frame() to traverse from interrupt stack to task
    stack correctly. It requires data from a dummy stack frame, created
    during irq_stack_entry(), added by a later patch.

    A similar approach is taken to modify dump_backtrace(), which expects to
    find struct pt_regs underneath any call to functions marked __exception.
    When on an irq_stack, the struct pt_regs is stored on the old task stack,
    the location of which is stored in the dummy stack frame.
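
    Conceptually the stack transition in the unwinder looks like the sketch
    below; the accessor names are hypothetical stand-ins for the dummy-frame
    reads the patch performs:

        static bool unwind_cross_to_task_stack(struct stackframe *frame)
        {
                /* Only the top of the irq_stack carries the hand-crafted frame. */
                if (frame->fp != irq_stack_dummy_frame_fp())    /* hypothetical */
                        return false;

                /*
                 * The dummy frame records where the interrupted task stack
                 * (and its saved struct pt_regs) lives; continue unwinding there.
                 */
                frame->fp = dummy_frame_saved_fp();             /* hypothetical */
                frame->sp = dummy_frame_saved_sp();             /* hypothetical */
                return true;
        }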

    Reviewed-by: Catalin Marinas
    Signed-off-by: AKASHI Takahiro
    [james.morse: merged two patches, reworked for per_cpu irq_stacks, and
    no alignment guarantees, added irq_stack definitions]
    Signed-off-by: James Morse
    Signed-off-by: Will Deacon

    AKASHI Takahiro
     

10 Oct, 2015

1 commit

  • When a CPU is disabled, all of its IRQs are migrated to another CPU. In
    some cases the new affinity differs from the old one; the old affinity
    then needs to be updated, but if irq_set_affinity()'s return value is
    IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix it by using
    irq_do_set_affinity().

    Migrating interrupts is a core code matter, so use the generic
    function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c
    to migrate the interrupts.
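
    A hypothetical excerpt of the hotplug path after this change (the
    surrounding contents of __cpu_disable() are assumed):

        #include <linux/irq.h>

        int __cpu_disable(void)
        {
                /* ... take the CPU out of the online masks, migrate timers ... */

                /* The arch-specific migrate_irqs() is gone; the core helper does it. */
                irq_migrate_all_off_this_cpu();
                return 0;
        }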

    Cc: Jiang Liu
    Cc: Thomas Gleixner
    Cc: Mark Rutland
    Cc: Will Deacon
    Cc: Russell King - ARM Linux
    Cc: Hanjun Guo
    Acked-by: Marc Zyngier
    Signed-off-by: Yang Yingliang
    Signed-off-by: Catalin Marinas

    Yang Yingliang
     

27 Jul, 2015

1 commit


22 Jul, 2015

1 commit


25 Nov, 2014

1 commit

  • handle_arch_irq isn't actually text; it's just a function pointer. It
    doesn't need to be stored in the text section, and doing so causes
    problems if we ever want to make the kernel text read only.
    Declare handle_arch_irq as a proper function pointer stored in
    the data section.
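
    A sketch of the resulting C definition (the exact guard against double
    registration is an assumption):

        #include <asm/ptrace.h>

        /* Lives in the data section, so the kernel text can stay read-only. */
        void (*handle_arch_irq)(struct pt_regs *) = NULL;

        void __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
        {
                if (WARN_ON(handle_arch_irq))
                        return;
                handle_arch_irq = handle_irq;
        }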

    Reviewed-by: Kees Cook
    Reviewed-by: Mark Rutland
    Acked-by: Ard Biesheuvel
    Tested-by: Mark Rutland
    Tested-by: Kees Cook
    Signed-off-by: Laura Abbott
    Signed-off-by: Will Deacon

    Laura Abbott
     

09 Oct, 2014

1 commit

  • Pull irq updates from Thomas Gleixner:
    "The irq departement delivers:

    - a cleanup series to get rid of mindlessly copied code.

    - another bunch of new pointlessly different interrupt chip drivers.

    Adding homebrewn irq chips (and timers) to SoCs must provide a
    value add which is beyond the imagination of mere mortals.

    - the usual SoC irq controller updates, IOW my second cat herding
    project"

    * 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (44 commits)
    irqchip: gic-v3: Implement CPU PM notifier
    irqchip: gic-v3: Refactor gic_enable_redist to support both enabling and disabling
    irqchip: renesas-intc-irqpin: Add minimal runtime PM support
    irqchip: renesas-intc-irqpin: Add helper variable dev = &pdev->dev
    irqchip: atmel-aic5: Add sama5d4 support
    irqchip: atmel-aic5: The sama5d3 has 48 IRQs
    Documentation: bcm7120-l2: Add Broadcom BCM7120-style L2 binding
    irqchip: bcm7120-l2: Add Broadcom BCM7120-style Level 2 interrupt controller
    irqchip: renesas-irqc: Add binding docs for new R-Car Gen2 SoCs
    irqchip: renesas-irqc: Add DT binding documentation
    irqchip: renesas-intc-irqpin: Document SoC-specific bindings
    openrisc: Get rid of handle_IRQ
    arm64: Get rid of handle_IRQ
    ARM: omap2: irq: Convert to handle_domain_irq
    ARM: imx: tzic: Convert to handle_domain_irq
    ARM: imx: avic: Convert to handle_domain_irq
    irqchip: or1k-pic: Convert to handle_domain_irq
    irqchip: atmel-aic5: Convert to handle_domain_irq
    irqchip: atmel-aic: Convert to handle_domain_irq
    irqchip: gic-v3: Convert to handle_domain_irq
    ...

    Linus Torvalds
     

04 Sep, 2014

1 commit

  • The arm64 interrupt migration code on cpu offline calls
    irqchip.irq_set_affinity() with the argument force=true. Originally
    this argument had no effect because it was not used by any interrupt
    chip driver and there was no semantics defined.

    This changed with commit 01f8fa4f01d8 ("genirq: Allow forcing cpu
    affinity of interrupts") which made the force argument useful to route
    interrupts to not yet online cpus without checking the target cpu
    against the cpu online mask. The following commit ffde1de64012
    ("irqchip: gic: Support forced affinity setting") implemented this for
    the GIC interrupt controller.

    As a consequence the cpu offline irq migration fails if CPU0 is
    offlined, because CPU0 is still set in the affinity mask and the
    validation against the cpu online mask is skipped due to the force
    argument being true. The following first_cpu(mask) selection always selects
    CPU0 as the target.

    Commit 601c942176d8 ("arm64: use cpu_online_mask when using forced
    irq_set_affinity") intended to fix the above-mentioned issue but
    introduced another issue where affinity can be migrated to the wrong
    CPU due to the unconditional copy of cpu_online_mask.

    As with arm, solve the issue by calling irq_set_affinity() with
    force=false from the CPU offline irq migration code so the GIC driver
    validates the affinity mask against the CPU online mask and therefore
    removes CPU0 from the possible target candidates. Also revert the
    changes done in commit 601c942176d8 as they are no longer needed.

    Tested on Juno platform.
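
    A sketch of the relevant call after the fix, with the surrounding
    migrate_one_irq() shape reduced and partly assumed:

        #include <linux/irq.h>
        #include <linux/irqdesc.h>

        static bool migrate_one_irq_affinity(struct irq_desc *desc)
        {
                struct irq_data *d = irq_desc_get_irq_data(desc);
                const struct cpumask *affinity = irq_data_get_affinity_mask(d);
                struct irq_chip *c = irq_data_get_irq_chip(d);

                if (!c->irq_set_affinity)
                        return false;

                /* force=false: the GIC validates affinity against cpu_online_mask. */
                return c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK;
        }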

    Fixes: 601c942176d8 ("arm64: use cpu_online_mask when using forced irq_set_affinity")
    Signed-off-by: Sudeep Holla
    Acked-by: Mark Rutland
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: # 3.10.x
    Signed-off-by: Will Deacon

    Sudeep Holla
     

03 Sep, 2014

2 commits

  • All the arm64 irqchip drivers have been converted to handle_domain_irq,
    making it possible to remove the handle_IRQ stub entirely.

    Signed-off-by: Marc Zyngier
    Acked-by: Catalin Marinas
    Link: https://lkml.kernel.org/r/1409047421-27649-26-git-send-email-marc.zyngier@arm.com
    Signed-off-by: Jason Cooper

    Marc Zyngier
     
  • In order to limit code duplication, convert the architecture specific
    handle_IRQ to use the generic __handle_domain_irq function.
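
    The conversion itself is tiny; a sketch assuming the __handle_domain_irq()
    signature of the time:

        #include <linux/irqdesc.h>
        #include <asm/ptrace.h>

        void handle_IRQ(unsigned int irq, struct pt_regs *regs)
        {
                /*
                 * No irq_domain here, and "irq" is already a Linux IRQ number,
                 * so pass lookup=false.
                 */
                __handle_domain_irq(NULL, irq, false, regs);
        }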

    Signed-off-by: Marc Zyngier
    Acked-by: Catalin Marinas
    Link: https://lkml.kernel.org/r/1409047421-27649-3-git-send-email-marc.zyngier@arm.com
    Signed-off-by: Jason Cooper

    Marc Zyngier
     

12 May, 2014

1 commit

  • Commit 01f8fa4f01d8 ("genirq: Allow forcing cpu affinity of interrupts")
    enabled the forced irq_set_affinity which previously refused to route an
    interrupt to an offline cpu.

    Commit ffde1de64012 ("irqchip: Gic: Support forced affinity setting")
    implements this force logic and disables the cpu online check for GIC
    interrupt controller.

    When __cpu_disable calls migrate_irqs, it disables the current cpu in
    cpu_online_mask and uses forced irq_set_affinity to migrate the IRQs
    away from the cpu but passes affinity mask with the cpu being offlined
    also included in it.

    When calling irq_set_affinity with force == true in a cpu hotplug path,
    the caller must ensure that the cpu being offlined is not present in the
    affinity mask or it may be selected as the target CPU, leading to the
    interrupt not being migrated.

    This patch uses cpu_online_mask when using forced irq_set_affinity so
    that the IRQs are properly migrated away.

    Signed-off-by: Sudeep Holla
    Acked-by: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Catalin Marinas

    Sudeep Holla
     

25 Oct, 2013

1 commit

  • This patch adds the basic infrastructure necessary to support
    CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug
    support will depend on an implementation's cpu_operations (e.g. PSCI).

    Signed-off-by: Mark Rutland
    Signed-off-by: Catalin Marinas

    Mark Rutland
     

27 Mar, 2013

1 commit


17 Sep, 2012

1 commit

  • This patch adds support for IRQ handling. The actual interrupt
    controller will be part of a separate patch (going into
    drivers/irqchip/).
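
    A sketch of the basic handler added here (mirroring arm's handle_IRQ() of
    the time; the bad-IRQ check is an assumption):

        #include <linux/hardirq.h>
        #include <linux/irq.h>
        #include <linux/irqnr.h>
        #include <asm/irq_regs.h>

        void handle_IRQ(unsigned int irq, struct pt_regs *regs)
        {
                struct pt_regs *old_regs = set_irq_regs(regs);

                irq_enter();

                if (unlikely(irq >= nr_irqs))
                        pr_warn("Bad IRQ%u\n", irq);
                else
                        generic_handle_irq(irq);

                irq_exit();
                set_irq_regs(old_regs);
        }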

    Signed-off-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Acked-by: Tony Lindgren
    Acked-by: Nicolas Pitre
    Acked-by: Olof Johansson
    Acked-by: Santosh Shilimkar

    Marc Zyngier