13 Jun, 2018

4 commits

  • Pull more overflow updates from Kees Cook:
    "The rest of the overflow changes for v4.18-rc1.

    This includes the explicit overflow fixes from Silvio, further
    struct_size() conversions from Matthew, and a bug fix from Dan.

    But the bulk of it is the treewide conversions to use either the
    2-factor argument allocators (e.g. kmalloc(a * b, ...) into
    kmalloc_array(a, b, ...)) or the array_size() macros (e.g.
    vmalloc(a * b) into vmalloc(array_size(a, b))).

    Coccinelle was fighting me on several fronts, so I've done a bunch of
    manual whitespace updates in the patches as well.

    Summary:

    - Error path bug fix for overflow tests (Dan)

    - Additional struct_size() conversions (Matthew, Kees)

    - Explicitly reported overflow fixes (Silvio, Kees)

    - Add missing kvcalloc() function (Kees)

    - Treewide conversions of allocators to use either the 2-factor
    argument variant when available, or array_size() and array3_size()
    as needed (Kees)"

    * tag 'overflow-v4.18-rc1-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (26 commits)
    treewide: Use array_size() in f2fs_kvzalloc()
    treewide: Use array_size() in f2fs_kzalloc()
    treewide: Use array_size() in f2fs_kmalloc()
    treewide: Use array_size() in sock_kmalloc()
    treewide: Use array_size() in kvzalloc_node()
    treewide: Use array_size() in vzalloc_node()
    treewide: Use array_size() in vzalloc()
    treewide: Use array_size() in vmalloc()
    treewide: devm_kzalloc() -> devm_kcalloc()
    treewide: devm_kmalloc() -> devm_kmalloc_array()
    treewide: kvzalloc() -> kvcalloc()
    treewide: kvmalloc() -> kvmalloc_array()
    treewide: kzalloc_node() -> kcalloc_node()
    treewide: kzalloc() -> kcalloc()
    treewide: kmalloc() -> kmalloc_array()
    mm: Introduce kvcalloc()
    video: uvesafb: Fix integer overflow in allocation
    UBIFS: Fix potential integer overflow in allocation
    leds: Use struct_size() in allocation
    Convert intel uncore to struct_size
    ...

    Linus Torvalds
     
  • The vmalloc() function has no 2-factor argument form, so multiplication
    factors need to be wrapped in array_size(). This patch replaces cases of:

    vmalloc(a * b)

    with:
    vmalloc(array_size(a, b))

    as well as handling cases of:

    vmalloc(a * b * c)

    with:

    vmalloc(array3_size(a, b, c))

    This does, however, attempt to ignore constant size factors like:

    vmalloc(4 * 1024)

    though any constants defined via macros get caught up in the conversion.

    Any factors with a sizeof() of "unsigned char", "char", and "u8" were
    dropped, since they're redundant.
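
    For a sense of the effect, here is an illustrative before/after (a
    sketch, not a hunk from this patch; 'entries' and 'count' are
    hypothetical):

    /* Before: a large, attacker-influenced 'count' can overflow the
     * multiplication and yield an under-sized allocation. */
    entries = vmalloc(count * sizeof(*entries));

    /* After: array_size() saturates to SIZE_MAX on overflow, and a
     * SIZE_MAX allocation fails, so the overflow cannot lead to an
     * under-allocation. */
    entries = vmalloc(array_size(count, sizeof(*entries)));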

    The Coccinelle script used for this was:

    // Fix redundant parens around sizeof().
    @@
    type TYPE;
    expression THING, E;
    @@

    (
    vmalloc(
    - (sizeof(TYPE)) * E
    + sizeof(TYPE) * E
    , ...)
    |
    vmalloc(
    - (sizeof(THING)) * E
    + sizeof(THING) * E
    , ...)
    )

    // Drop single-byte sizes and redundant parens.
    @@
    expression COUNT;
    typedef u8;
    typedef __u8;
    @@

    (
    vmalloc(
    - sizeof(u8) * (COUNT)
    + COUNT
    , ...)
    |
    vmalloc(
    - sizeof(__u8) * (COUNT)
    + COUNT
    , ...)
    |
    vmalloc(
    - sizeof(char) * (COUNT)
    + COUNT
    , ...)
    |
    vmalloc(
    - sizeof(unsigned char) * (COUNT)
    + COUNT
    , ...)
    |
    vmalloc(
    - sizeof(u8) * COUNT
    + COUNT
    , ...)
    |
    vmalloc(
    - sizeof(__u8) * COUNT
    + COUNT
    , ...)
    |
    vmalloc(
    - sizeof(char) * COUNT
    + COUNT
    , ...)
    |
    vmalloc(
    - sizeof(unsigned char) * COUNT
    + COUNT
    , ...)
    )

    // 2-factor product with sizeof(type/expression) and identifier or constant.
    @@
    type TYPE;
    expression THING;
    identifier COUNT_ID;
    constant COUNT_CONST;
    @@

    (
    vmalloc(
    - sizeof(TYPE) * (COUNT_ID)
    + array_size(COUNT_ID, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE) * COUNT_ID
    + array_size(COUNT_ID, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE) * (COUNT_CONST)
    + array_size(COUNT_CONST, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE) * COUNT_CONST
    + array_size(COUNT_CONST, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * (COUNT_ID)
    + array_size(COUNT_ID, sizeof(THING))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * COUNT_ID
    + array_size(COUNT_ID, sizeof(THING))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * (COUNT_CONST)
    + array_size(COUNT_CONST, sizeof(THING))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * COUNT_CONST
    + array_size(COUNT_CONST, sizeof(THING))
    , ...)
    )

    // 2-factor product, only identifiers.
    @@
    identifier SIZE, COUNT;
    @@

    vmalloc(
    - SIZE * COUNT
    + array_size(COUNT, SIZE)
    , ...)

    // 3-factor product with 1 sizeof(type) or sizeof(expression), with
    // redundant parens removed.
    @@
    expression THING;
    identifier STRIDE, COUNT;
    type TYPE;
    @@

    (
    vmalloc(
    - sizeof(TYPE) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    vmalloc(
    - sizeof(THING) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    )

    // 3-factor product with 2 sizeof(variable), with redundant parens removed.
    @@
    expression THING1, THING2;
    identifier COUNT;
    type TYPE1, TYPE2;
    @@

    (
    vmalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    vmalloc(
    - sizeof(THING1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    vmalloc(
    - sizeof(THING1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    |
    vmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    )

    // 3-factor product, only identifiers, with redundant parens removed.
    @@
    identifier STRIDE, SIZE, COUNT;
    @@

    (
    vmalloc(
    - (COUNT) * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    vmalloc(
    - COUNT * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    vmalloc(
    - COUNT * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    vmalloc(
    - (COUNT) * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    vmalloc(
    - COUNT * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    vmalloc(
    - (COUNT) * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    vmalloc(
    - (COUNT) * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    vmalloc(
    - COUNT * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    )

    // Any remaining multi-factor products, first at least 3-factor products
    // when they're not all constants...
    @@
    expression E1, E2, E3;
    constant C1, C2, C3;
    @@

    (
    vmalloc(C1 * C2 * C3, ...)
    |
    vmalloc(
    - E1 * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    )

    // And then all remaining 2-factor products when they're not all constants.
    @@
    expression E1, E2;
    constant C1, C2;
    @@

    (
    vmalloc(C1 * C2, ...)
    |
    vmalloc(
    - E1 * E2
    + array_size(E1, E2)
    , ...)
    )

    Signed-off-by: Kees Cook

    Kees Cook
     
  • The kzalloc() function has a 2-factor argument form, kcalloc(). This
    patch replaces cases of:

    kzalloc(a * b, gfp)

    with:
    kcalloc(a, b, gfp)

    as well as handling cases of:

    kzalloc(a * b * c, gfp)

    with:

    kzalloc(array3_size(a, b, c), gfp)

    as it's slightly less ugly than:

    kcalloc(array_size(a, b), c, gfp)

    This does, however, attempt to ignore constant size factors like:

    kzalloc(4 * 1024, gfp)

    though any constants defined via macros get caught up in the conversion.

    Any factors with a sizeof() of "unsigned char", "char", and "u8" were
    dropped, since they're redundant.
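
    As a quick illustration of the 2-factor form (a sketch, not a hunk
    from this patch; 'buf' and 'n' are hypothetical):

    /* Before: the multiplication is unchecked. */
    buf = kzalloc(n * sizeof(*buf), GFP_KERNEL);

    /* After: kcalloc() zeroes the memory just like kzalloc(), but
     * returns NULL if n * sizeof(*buf) would overflow. */
    buf = kcalloc(n, sizeof(*buf), GFP_KERNEL);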

    The Coccinelle script used for this was:

    // Fix redundant parens around sizeof().
    @@
    type TYPE;
    expression THING, E;
    @@

    (
    kzalloc(
    - (sizeof(TYPE)) * E
    + sizeof(TYPE) * E
    , ...)
    |
    kzalloc(
    - (sizeof(THING)) * E
    + sizeof(THING) * E
    , ...)
    )

    // Drop single-byte sizes and redundant parens.
    @@
    expression COUNT;
    typedef u8;
    typedef __u8;
    @@

    (
    kzalloc(
    - sizeof(u8) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(__u8) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(char) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(unsigned char) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(u8) * COUNT
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(__u8) * COUNT
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(char) * COUNT
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(unsigned char) * COUNT
    + COUNT
    , ...)
    )

    // 2-factor product with sizeof(type/expression) and identifier or constant.
    @@
    type TYPE;
    expression THING;
    identifier COUNT_ID;
    constant COUNT_CONST;
    @@

    (
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * (COUNT_ID)
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * COUNT_ID
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * (COUNT_CONST)
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * COUNT_CONST
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * (COUNT_ID)
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * COUNT_ID
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * (COUNT_CONST)
    + COUNT_CONST, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * COUNT_CONST
    + COUNT_CONST, sizeof(THING)
    , ...)
    )

    // 2-factor product, only identifiers.
    @@
    identifier SIZE, COUNT;
    @@

    - kzalloc
    + kcalloc
    (
    - SIZE * COUNT
    + COUNT, SIZE
    , ...)

    // 3-factor product with 1 sizeof(type) or sizeof(expression), with
    // redundant parens removed.
    @@
    expression THING;
    identifier STRIDE, COUNT;
    type TYPE;
    @@

    (
    kzalloc(
    - sizeof(TYPE) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    )

    // 3-factor product with 2 sizeof(variable), with redundant parens removed.
    @@
    expression THING1, THING2;
    identifier COUNT;
    type TYPE1, TYPE2;
    @@

    (
    kzalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kzalloc(
    - sizeof(THING1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kzalloc(
    - sizeof(THING1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    )

    // 3-factor product, only identifiers, with redundant parens removed.
    @@
    identifier STRIDE, SIZE, COUNT;
    @@

    (
    kzalloc(
    - (COUNT) * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - (COUNT) * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - (COUNT) * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - (COUNT) * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    )

    // Any remaining multi-factor products, first at least 3-factor products,
    // when they're not all constants...
    @@
    expression E1, E2, E3;
    constant C1, C2, C3;
    @@

    (
    kzalloc(C1 * C2 * C3, ...)
    |
    kzalloc(
    - (E1) * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kzalloc(
    - (E1) * (E2) * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kzalloc(
    - (E1) * (E2) * (E3)
    + array3_size(E1, E2, E3)
    , ...)
    |
    kzalloc(
    - E1 * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    )

    // And then all remaining 2-factor products when they're not all constants,
    // keeping sizeof() as the second factor argument.
    @@
    expression THING, E1, E2;
    type TYPE;
    constant C1, C2, C3;
    @@

    (
    kzalloc(sizeof(THING) * C2, ...)
    |
    kzalloc(sizeof(TYPE) * C2, ...)
    |
    kzalloc(C1 * C2 * C3, ...)
    |
    kzalloc(C1 * C2, ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * (E2)
    + E2, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * E2
    + E2, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * (E2)
    + E2, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * E2
    + E2, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - (E1) * E2
    + E1, E2
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - (E1) * (E2)
    + E1, E2
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - E1 * E2
    + E1, E2
    , ...)
    )

    Signed-off-by: Kees Cook

    Kees Cook
     
  • Pull KVM updates from Paolo Bonzini:
    "Small update for KVM:

    ARM:
    - lazy context-switching of FPSIMD registers on arm64
    - "split" regions for vGIC redistributor

    s390:
    - cleanups for nested
    - clock handling
    - crypto
    - storage keys
    - control register bits

    x86:
    - many bugfixes
    - implement more Hyper-V super powers
    - implement lapic_timer_advance_ns even when the LAPIC timer is
    emulated using the processor's VMX preemption timer.
    - two security-related bugfixes at the top of the branch"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (79 commits)
    kvm: fix typo in flag name
    kvm: x86: use correct privilege level for sgdt/sidt/fxsave/fxrstor access
    KVM: x86: pass kvm_vcpu to kvm_read_guest_virt and kvm_write_guest_virt_system
    KVM: x86: introduce linear_{read,write}_system
    kvm: nVMX: Enforce cpl=0 for VMX instructions
    kvm: nVMX: Add support for "VMWRITE to any supported field"
    kvm: nVMX: Restrict VMX capability MSR changes
    KVM: VMX: Optimize tscdeadline timer latency
    KVM: docs: nVMX: Remove known limitations as they do not exist now
    KVM: docs: mmu: KVM support exposing SLAT to guests
    kvm: no need to check return value of debugfs_create functions
    kvm: Make VM ioctl do valloc for some archs
    kvm: Change return type to vm_fault_t
    KVM: docs: mmu: Fix link to NPT presentation from KVM Forum 2008
    kvm: x86: Amend the KVM_GET_SUPPORTED_CPUID API documentation
    KVM: x86: hyperv: declare KVM_CAP_HYPERV_TLBFLUSH capability
    KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE}_EX implementation
    KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation
    KVM: introduce kvm_make_vcpus_request_mask() API
    KVM: x86: hyperv: do rep check for each hypercall separately
    ...

    Linus Torvalds
     

09 Jun, 2018

1 commit

  • Pull arm64 updates from Catalin Marinas:
    "Apart from the core arm64 and perf changes, the Spectre v4 mitigation
    touches the arm KVM code and the ACPI PPTT support touches drivers/
    (acpi and cacheinfo). I should have the maintainers' acks in place.

    Summary:

    - Spectre v4 mitigation (Speculative Store Bypass Disable) support
    for arm64 using SMC firmware call to set a hardware chicken bit

    - ACPI PPTT (Processor Properties Topology Table) parsing support and
    enable the feature for arm64

    - Report signal frame size to user via auxv (AT_MINSIGSTKSZ). The
    primary motivation is the Scalable Vector Extension, which requires
    more space on the signal frame than the currently defined
    MINSIGSTKSZ

    - ARM perf patches: allow building arm-cci as module, demote
    dev_warn() to dev_dbg() in arm-ccn event_init(), miscellaneous
    cleanups

    - cmpwait() WFE optimisation to avoid some spurious wakeups

    - L1_CACHE_BYTES reverted back to 64 (for performance reasons that
    have to do with some network allocations) while keeping
    ARCH_DMA_MINALIGN to 128. cache_line_size() returns the actual
    hardware Cache Writeback Granule

    - Turn LSE atomics on by default in Kconfig

    - Kernel fault reporting tidying

    - Some #include and miscellaneous cleanups"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (53 commits)
    arm64: Fix syscall restarting around signal suppressed by tracer
    arm64: topology: Avoid checking numa mask for scheduler MC selection
    ACPI / PPTT: fix build when CONFIG_ACPI_PPTT is not enabled
    arm64: cpu_errata: include required headers
    arm64: KVM: Move VCPU_WORKAROUND_2_FLAG macros to the top of the file
    arm64: signal: Report signal frame size to userspace via auxv
    arm64/sve: Thin out initialisation sanity-checks for sve_max_vl
    arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID
    arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
    arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
    arm64: KVM: Add HYP per-cpu accessors
    arm64: ssbd: Add prctl interface for per-thread mitigation
    arm64: ssbd: Introduce thread flag to control userspace mitigation
    arm64: ssbd: Restore mitigation status on CPU resume
    arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
    arm64: ssbd: Add global mitigation state accessor
    arm64: Add 'ssbd' command-line option
    arm64: Add ARCH_WORKAROUND_2 probing
    arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
    arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
    ...

    Linus Torvalds
     

05 Jun, 2018

2 commits

  • …iederm/user-namespace

    Pull siginfo updates from Eric Biederman:
    "This set of changes close the known issues with setting si_code to an
    invalid value, and with not fully initializing struct siginfo. There
    remains work to do on nds32, arc, unicore32, powerpc, arm, arm64, ia64
    and x86 to get the code that generates siginfo into a simpler and more
    maintainable state. Most of that work involves refactoring the signal
    handling code and thus careful code review.

    Also not included is the work to shrink the in-kernel version of
    struct siginfo. That depends on getting the number of places that
    directly manipulate struct siginfo under control, as it requires the
    introduction of struct kernel_siginfo for the in-kernel things.

    Overall this set of changes looks like it is making good progress, and
    with a little luck I will be wrapping up the siginfo work next
    development cycle"

    * 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (46 commits)
    signal/sh: Stop gcc warning about an impossible case in do_divide_error
    signal/mips: Report FPE_FLTUNK for undiagnosed floating point exceptions
    signal/um: More carefully relay signals in relay_signal.
    signal: Extend siginfo_layout with SIL_FAULT_{MCEERR|BNDERR|PKUERR}
    signal: Remove unnecessary #ifdef SEGV_PKUERR in 32bit compat code
    signal/signalfd: Add support for SIGSYS
    signal/signalfd: Remove __put_user from signalfd_copyinfo
    signal/xtensa: Use force_sig_fault where appropriate
    signal/xtensa: Consistently use SIGBUS in do_unaligned_user
    signal/um: Use force_sig_fault where appropriate
    signal/sparc: Use force_sig_fault where appropriate
    signal/sparc: Use send_sig_fault where appropriate
    signal/sh: Use force_sig_fault where appropriate
    signal/s390: Use force_sig_fault where appropriate
    signal/riscv: Replace do_trap_siginfo with force_sig_fault
    signal/riscv: Use force_sig_fault where appropriate
    signal/parisc: Use force_sig_fault where appropriate
    signal/parisc: Use force_sig_mceerr where appropriate
    signal/openrisc: Use force_sig_fault where appropriate
    signal/nios2: Use force_sig_fault where appropriate
    ...

    Linus Torvalds
     
  • Pull aio updates from Al Viro:
    "Majority of AIO stuff this cycle. aio-fsync and aio-poll, mostly.

    The only thing I'm holding back for a day or so is Adam's aio ioprio -
    his last-minute fixup is trivial (missing stub in !CONFIG_BLOCK case),
    but let it sit in -next for decency's sake..."

    * 'work.aio-1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (46 commits)
    aio: sanitize the limit checking in io_submit(2)
    aio: fold do_io_submit() into callers
    aio: shift copyin of iocb into io_submit_one()
    aio_read_events_ring(): make a bit more readable
    aio: all callers of aio_{read,write,fsync,poll} treat 0 and -EIOCBQUEUED the same way
    aio: take list removal to (some) callers of aio_complete()
    aio: add missing break for the IOCB_CMD_FDSYNC case
    random: convert to ->poll_mask
    timerfd: convert to ->poll_mask
    eventfd: switch to ->poll_mask
    pipe: convert to ->poll_mask
    crypto: af_alg: convert to ->poll_mask
    net/rxrpc: convert to ->poll_mask
    net/iucv: convert to ->poll_mask
    net/phonet: convert to ->poll_mask
    net/nfc: convert to ->poll_mask
    net/caif: convert to ->poll_mask
    net/bluetooth: convert to ->poll_mask
    net/sctp: convert to ->poll_mask
    net/tipc: convert to ->poll_mask
    ...

    Linus Torvalds
     

02 Jun, 2018

4 commits

  • When calling debugfs functions, there is no need to ever check the
    return value. The function can work or not, but the code logic should
    never do something different based on this.

    This cleans up the error handling a lot, as this code will never get
    hit.
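
    The resulting pattern is simply this (an illustrative sketch; the
    "vcpu" directory name is hypothetical):

    /* Before: error handling for a failure that should not alter
     * the code's behaviour. */
    dentry = debugfs_create_dir("vcpu", parent);
    if (!dentry)
            return -ENOMEM;

    /* After: debugfs is best-effort, so just call it and move on. */
    debugfs_create_dir("vcpu", parent);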

    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: Michael Ellerman
    Cc: Christoffer Dall
    Cc: Marc Zyngier
    Cc: Paolo Bonzini
    Cc: "Radim KrÄmář"
    Cc: Arvind Yadav
    Cc: Eric Auger
    Cc: Andre Przywara
    Cc: kvm-ppc@vger.kernel.org
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: kvmarm@lists.cs.columbia.edu
    Cc: kvm@vger.kernel.org
    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Paolo Bonzini

    Greg Kroah-Hartman
     
    The kvm struct has been bloating. For example, it's tens of kilobytes
    for x86, which turns out to be a large amount of memory to allocate
    contiguously via kzalloc. Thus, this patch does the following:
    1. Uses architecture-specific routines to allocate the kvm struct via
    vzalloc for x86.
    2. Switches arm to __KVM_HAVE_ARCH_VM_ALLOC so that it can use vzalloc
    when has_vhe() is true.

    Other architectures continue to default to kzalloc, as they either
    have a dependency on kzalloc or have a small enough struct kvm.
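
    The x86 override ends up roughly like this (a sketch of the
    __KVM_HAVE_ARCH_VM_ALLOC mechanism; the generic default keeps using
    kzalloc()/kfree()):

    #define __KVM_HAVE_ARCH_VM_ALLOC
    static inline struct kvm *kvm_arch_alloc_vm(void)
    {
            return vzalloc(sizeof(struct kvm));
    }

    static inline void kvm_arch_free_vm(struct kvm *kvm)
    {
            vfree(kvm);
    }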

    Signed-off-by: Marc Orr
    Reviewed-by: Marc Zyngier
    Signed-off-by: Paolo Bonzini

    Marc Orr
     
  • Use new return type vm_fault_t for fault handler. For
    now, this is just documenting that the function returns
    a VM_FAULT value rather than an errno. Once all instances
    are converted, vm_fault_t will become a distinct type.

    commit 1c8f422059ae ("mm: change return type to vm_fault_t")
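
    The shape of the conversion (an illustrative sketch; kvm_vcpu_fault
    is one of the handlers touched here):

    /* Before: errno-typed return, even though VM_FAULT_* values are
     * what is actually returned. */
    static int kvm_vcpu_fault(struct vm_fault *vmf);

    /* After: the dedicated type documents the contract. */
    static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf);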

    Signed-off-by: Souptick Joarder
    Reviewed-by: Matthew Wilcox
    Signed-off-by: Paolo Bonzini

    Souptick Joarder
     
  • …marm/kvmarm into HEAD

    KVM/ARM updates for 4.18

    - Lazy context-switching of FPSIMD registers on arm64
    - Allow virtual redistributors to be part of two or more MMIO ranges

    Paolo Bonzini
     

01 Jun, 2018

2 commits

  • Now that all our infrastructure is in place, let's expose the
    availability of ARCH_WORKAROUND_2 to guests. We take this opportunity
    to tidy up a couple of SMCCC constants.

    Acked-by: Christoffer Dall
    Reviewed-by: Mark Rutland
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas

    Marc Zyngier
     
  • In order to offer ARCH_WORKAROUND_2 support to guests, we need
    a bit of infrastructure.

    Let's add a flag indicating whether or not the guest uses
    SSBD mitigation. Depending on the state of this flag, allow
    KVM to disable ARCH_WORKAROUND_2 before entering the guest,
    and enable it when exiting it.

    Reviewed-by: Christoffer Dall
    Reviewed-by: Mark Rutland
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas

    Marc Zyngier
     

26 May, 2018

2 commits


25 May, 2018

14 commits

    Now that all the internals are ready to handle multiple redistributor
    regions, let's allow userspace to register them.

    Signed-off-by: Eric Auger
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
    On first vcpu run, we finally know the actual number of vcpus.
    This is a synchronization point at which we check that all
    redistributors were assigned. In kvm_vgic_map_resources() we check
    that both the dist and redist base addresses were set, and check
    for potential base address inconsistencies.

    Signed-off-by: Eric Auger
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
    As we are going to register several redist regions,
    vgic_register_all_redist_iodevs() may be called several times. We need
    to register a redist_iodev for a given vcpu only once, so let's
    check whether the base address has already been set. Initialize the
    latter in kvm_vgic_vcpu_init().

    Signed-off-by: Eric Auger
    Acked-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
    kvm_vgic_vcpu_early_init() gets called after kvm_vgic_vcpu_init(),
    which is confusing. The call path is as follows:

    kvm_vm_ioctl_create_vcpu
    |_ kvm_arch_vcpu_create
       |_ kvm_vcpu_init
          |_ kvm_arch_vcpu_init
             |_ kvm_vgic_vcpu_init
    |_ kvm_arch_vcpu_postcreate
       |_ kvm_vgic_vcpu_early_init

    Static initialization currently done in kvm_vgic_vcpu_early_init()
    can be moved to kvm_vgic_vcpu_init(). So let's move the code and
    remove kvm_vgic_vcpu_early_init(), which leaves
    kvm_arch_vcpu_postcreate() doing nothing.

    Signed-off-by: Eric Auger
    Signed-off-by: Marc Zyngier

    Eric Auger
     
    We introduce a new helper that creates and inserts a new redistributor
    region into the rdist region list. This helper handles both the case
    where the redistributor region size is known at registration time
    and the legacy case where it is not (in the legacy case the size
    depends on the number of online vcpus). Depending on pfns, we perform
    all the checks that we can at this point:

    - end of memory crossing
    - incorrect alignment of the base address
    - collision with distributor region if already defined
    - collision with already registered rdist regions
    - check of the new index

    Rdist regions must be inserted in increasing order of indices, and
    indices must be contiguous.

    Signed-off-by: Eric Auger
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
  • vgic_v3_check_base() currently only handles the case of a unique
    legacy redistributor region whose size is not explicitly set but
    inferred, instead, from the number of online vcpus.

    We adapt it to handle the case of multiple redistributor regions
    with explicitly defined sizes. We rely on two new helpers:
    - vgic_v3_rdist_overlap() is used to detect overlap with the dist
    region, if defined
    - vgic_v3_rd_region_size() computes the size of the redist region,
    whether it is a legacy unique region or a new explicitly sized
    region.
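
    A sketch of the overlap helper (assuming the usual vgic_dist_base
    field and KVM_VGIC_V3_DIST_SIZE definition; not the verbatim patch):

    bool vgic_v3_rdist_overlap(struct kvm *kvm, gpa_t base, size_t size)
    {
            struct vgic_dist *d = &kvm->arch.vgic;

            /* Two ranges overlap iff each starts before the other ends. */
            return (base + size > d->vgic_dist_base) &&
                   (base < d->vgic_dist_base + KVM_VGIC_V3_DIST_SIZE);
    }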

    Signed-off-by: Eric Auger
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
    The TYPER of a redistributor reflects whether the rdist is
    the last one of the redistributor region. Let's compare the TYPER
    GPA against the address of the last occupied slot within the
    redistributor region.

    Signed-off-by: Eric Auger
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
  • We introduce vgic_v3_rdist_free_slot to help identifying
    where we can place a new 2x64KB redistributor.

    Signed-off-by: Eric Auger
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
  • At the moment KVM supports a single rdist region. We want to
    support several separate rdist regions, so let's introduce a list
    of them. This patch currently only cares about a single
    entry in this list as the functionality to register several redist
    regions is not yet there. So this only translates the existing code
    into something functionally similar using that new data struct.

    The redistributor region handle is stored in the vgic_cpu structure
    to allow later computation of the TYPER last bit.

    Signed-off-by: Eric Auger
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
    In case kvm_vgic_map_resources() fails, typically if the vgic
    distributor is not defined, __kvm_vgic_destroy will be called
    several times, since kvm_vgic_map_resources() is called on the
    first vcpu run. As a result dist->spis is freed more than once,
    and the second time this causes a "kernel BUG at mm/slub.c:3912!"

    Set dist->spis to NULL to avoid the crash.
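
    The fix itself is the classic two-line pattern (sketch):

    kfree(dist->spis);
    dist->spis = NULL;      /* kfree(NULL) is a no-op, so a repeated
                             * destroy is now harmless */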

    Fixes: ad275b8bb1e6 ("KVM: arm/arm64: vgic-new: vgic_init: implement vgic_init")

    Signed-off-by: Eric Auger
    Reviewed-by: Marc Zyngier
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Eric Auger
     
  • Now that the host SVE context can be saved on demand from Hyp,
    there is no longer any need to save this state in advance before
    entering the guest.

    This patch removes the relevant call to
    kvm_fpsimd_flush_cpu_state().

    Since the problem that function was intended to solve no longer
    exists, the function and its dependencies are also deleted.

    Signed-off-by: Dave Martin
    Reviewed-by: Alex Bennée
    Acked-by: Christoffer Dall
    Acked-by: Marc Zyngier
    Acked-by: Catalin Marinas
    Signed-off-by: Marc Zyngier

    Dave Martin
     
    This patch adds SVE context saving to the hyp FPSIMD context switch
    path. This means that it is no longer necessary to save the host
    SVE state in advance of entering the guest when SVE is in use.

    In order to avoid adding pointless complexity to the code, VHE is
    assumed if SVE is in use. VHE is an architectural prerequisite for
    SVE, so there is no good reason to turn CONFIG_ARM64_VHE off in
    kernels that support both SVE and KVM.

    Historically, software models exist that can expose the
    architecturally invalid configuration of SVE without VHE, so if
    this situation is detected at kvm_init() time then KVM will be
    disabled.

    Signed-off-by: Dave Martin
    Reviewed-by: Alex Bennée
    Acked-by: Catalin Marinas
    Signed-off-by: Marc Zyngier

    Dave Martin
     
  • This patch refactors KVM to align the host and guest FPSIMD
    save/restore logic with each other for arm64. This reduces the
    number of redundant save/restore operations that must occur, and
    reduces the common-case IRQ blackout time during guest exit storms
    by saving the host state lazily and optimising away the need to
    restore the host state before returning to the run loop.

    Four hooks are defined in order to enable this:

    * kvm_arch_vcpu_run_map_fp():
    Called on PID change to map necessary bits of current to Hyp.

    * kvm_arch_vcpu_load_fp():
    Set up FP/SIMD for entering the KVM run loop (parse as
    "vcpu_load fp").

    * kvm_arch_vcpu_ctxsync_fp():
    Get FP/SIMD into a safe state for re-enabling interrupts after a
    guest exit back to the run loop.

    For arm64 specifically, this involves updating the host kernel's
    FPSIMD context tracking metadata so that kernel-mode NEON use
    will cause the vcpu's FPSIMD state to be saved back correctly
    into the vcpu struct. This must be done before re-enabling
    interrupts because kernel-mode NEON may be used by softirqs.

    * kvm_arch_vcpu_put_fp():
    Save guest FP/SIMD state back to memory and dissociate from the
    CPU ("vcpu_put fp").

    Also, the arm64 FPSIMD context switch code is updated to enable it
    to save back FPSIMD state for a vcpu, not just current. A few
    helpers drive this:

    * fpsimd_bind_state_to_cpu(struct user_fpsimd_state *fp):
    mark this CPU as having context fp (which may belong to a vcpu)
    currently loaded in its registers. This is the non-task
    equivalent of the static function fpsimd_bind_to_cpu() in
    fpsimd.c.

    * task_fpsimd_save():
    exported to allow KVM to save the guest's FPSIMD state back to
    memory on exit from the run loop.

    * fpsimd_flush_state():
    invalidate any context's FPSIMD state that is currently loaded.
    Used to disassociate the vcpu from the CPU regs on run loop exit.

    These changes allow the run loop to enable interrupts (and thus
    softirqs that may use kernel-mode NEON) without having to save the
    guest's FPSIMD state eagerly.

    Some new vcpu_arch fields are added to make all this work. Because
    host FPSIMD state can now be saved back directly into current's
    thread_struct as appropriate, host_cpu_context is no longer used
    for preserving the FPSIMD state. However, it is still needed for
    preserving other things such as the host's system registers. To
    avoid ABI churn, the redundant storage space in host_cpu_context is
    not removed for now.

    arch/arm is not addressed by this patch and continues to use its
    current save/restore logic. It could provide implementations of
    the helpers later if desired.

    Signed-off-by: Dave Martin
    Reviewed-by: Marc Zyngier
    Reviewed-by: Christoffer Dall
    Reviewed-by: Alex Bennée
    Acked-by: Catalin Marinas
    Signed-off-by: Marc Zyngier

    Dave Martin
     
  • KVM/ARM differs from other architectures in having to maintain an
    additional virtual address space from that of the host and the
    guest, because we split the execution of KVM across both EL1 and
    EL2.

    This results in a need to explicitly map data structures into EL2
    (hyp) which are accessed from the hyp code. As we are about to be
    more clever with our FPSIMD handling on arm64, which stores data in
    the task struct and uses thread_info flags, we will have to map
    parts of the currently executing task struct into the EL2 virtual
    address space.

    However, we don't want to do this on every KVM_RUN, because it is a
    fairly expensive operation to walk the page tables, and the common
    execution mode is to map a single thread to a VCPU. By introducing
    a hook that architectures can select with
    HAVE_KVM_VCPU_RUN_PID_CHANGE, we do not introduce overhead for
    other architectures, but have a simple way to only map the data we
    need when required for arm64.

    This patch introduces the framework only, and wires it up in the
    arm/arm64 KVM common code.

    No functional change.

    Signed-off-by: Christoffer Dall
    Signed-off-by: Dave Martin
    Reviewed-by: Marc Zyngier
    Reviewed-by: Alex Bennée
    Signed-off-by: Marc Zyngier

    Christoffer Dall
     

15 May, 2018

4 commits

  • kvm_read_guest() will eventually look up in kvm_memslots(), which requires
    either to hold the kvm->slots_lock or to be inside a kvm->srcu critical
    section.
    In contrast to x86 and s390 we don't take the SRCU lock on every guest
    exit, so we have to do it individually for each kvm_read_guest() call.
    Use the newly introduced wrapper for that.

    Cc: Stable # 4.12+
    Reported-by: Jan Glauber
    Signed-off-by: Andre Przywara
    Acked-by: Christoffer Dall
    Signed-off-by: Paolo Bonzini

    Andre Przywara
     
  • kvm_read_guest() will eventually look up in kvm_memslots(), which requires
    either to hold the kvm->slots_lock or to be inside a kvm->srcu critical
    section.
    In contrast to x86 and s390 we don't take the SRCU lock on every guest
    exit, so we have to do it individually for each kvm_read_guest() call.

    Provide a wrapper which does that and use that everywhere.

    Note that ending the SRCU critical section before returning from the
    kvm_read_guest() wrapper is safe, because the data has been *copied*, so
    we don't need to rely on valid references to the memslot anymore.
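
    The wrapper is essentially this shape (a sketch of the
    kvm_read_guest_lock() helper this patch introduces):

    static inline int kvm_read_guest_lock(struct kvm *kvm, gpa_t gpa,
                                          void *data, unsigned long len)
    {
            int srcu_idx = srcu_read_lock(&kvm->srcu);
            int ret = kvm_read_guest(kvm, gpa, data, len);

            srcu_read_unlock(&kvm->srcu, srcu_idx);
            return ret;
    }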

    Cc: Stable # 4.8+
    Reported-by: Jan Glauber
    Signed-off-by: Andre Przywara
    Acked-by: Christoffer Dall
    Signed-off-by: Paolo Bonzini

    Andre Przywara
     
  • Apparently the development of update_affinity() overlapped with the
    promotion of irq_lock to be _irqsave, so the patch didn't convert this
    lock over. This will make lockdep complain.

    Fix this by disabling IRQs around the lock.
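
    The conversion follows the standard pattern (illustrative sketch):

    unsigned long flags;

    /* Before: unsafe if the lock can also be taken from IRQ context. */
    spin_lock(&irq->irq_lock);
    ...
    spin_unlock(&irq->irq_lock);

    /* After: local interrupts are disabled while the lock is held. */
    spin_lock_irqsave(&irq->irq_lock, flags);
    ...
    spin_unlock_irqrestore(&irq->irq_lock, flags);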

    Cc: stable@vger.kernel.org
    Fixes: 08c9fd042117 ("KVM: arm/arm64: vITS: Add a helper to update the affinity of an LPI")
    Reported-by: Jan Glauber
    Signed-off-by: Andre Przywara
    Acked-by: Christoffer Dall
    Signed-off-by: Paolo Bonzini

    Andre Przywara
     
  • As Jan reported [1], lockdep complains about the VGIC not being bullet
    proof. This seems to be due to two issues:
    - When commit 006df0f34930 ("KVM: arm/arm64: Support calling
    vgic_update_irq_pending from irq context") promoted irq_lock and
    ap_list_lock to _irqsave, we forgot two instances of irq_lock.
    lockdep seems to pick those up.
    - If a lock is _irqsave, any other locks we take inside them should be
    _irqsafe as well. So the lpi_list_lock needs to be promoted also.

    This fixes both issues by simply making the remaining instances of those
    locks _irqsave.
    One irq_lock is addressed in a separate patch, to simplify backporting.

    [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-May/575718.html

    Cc: stable@vger.kernel.org
    Fixes: 006df0f34930 ("KVM: arm/arm64: Support calling vgic_update_irq_pending from irq context")
    Reported-by: Jan Glauber
    Acked-by: Christoffer Dall
    Signed-off-by: Andre Przywara
    Signed-off-by: Paolo Bonzini

    Andre Przywara
     

06 May, 2018

1 commit


04 May, 2018

1 commit


28 Apr, 2018

1 commit

  • Pull KVM fixes from Radim Krčmář:
    "ARM:
    - PSCI selection API, a leftover from 4.16 (for stable)
    - Kick vcpu on active interrupt affinity change
    - Plug a VMID allocation race on oversubscribed systems
    - Silence debug messages
    - Update Christoffer's email address (linaro -> arm)

    x86:
    - Expose userspace-relevant bits of a newly added feature
    - Fix TLB flushing on VMX with VPID, but without EPT"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
    x86/headers/UAPI: Move DISABLE_EXITS KVM capability bits to the UAPI
    kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use
    arm/arm64: KVM: Add PSCI version selection API
    KVM: arm/arm64: vgic: Kick new VCPU on interrupt migration
    arm64: KVM: Demote SVE and LORegion warnings to debug only
    MAINTAINERS: Update e-mail address for Christoffer Dall
    KVM: arm/arm64: Close VMID generation race

    Linus Torvalds
     

27 Apr, 2018

3 commits

  • Now that we make sure we don't inject multiple instances of the
    same GICv2 SGI at the same time, we've made another bug more
    obvious:

    If we exit with an active SGI, we completely lose track of which
    vcpu it came from. On the next entry, we restore it with 0 as a
    source, and if that wasn't the right one, too bad. While this
    doesn't seem to trouble GIC-400, the architectural model gets
    offended and doesn't deactivate the interrupt on EOI.

    Another connected issue is that we will happilly make pending
    an interrupt from another vcpu, overriding the above zero with
    something that is just as inconsistent. Don't do that.

    The final issue is that we signal a maintenance interrupt when
    no pending interrupts are present in the LR. Assuming we've fixed
    the two issues above, we end up in a situation where we keep
    exiting as soon as we've reached the active state, and are not
    able to inject the following pending interrupt.

    The fix comes in 3 parts:
    - GICv2 SGIs have their source vcpu saved if they are active on
    exit, and restored on entry
    - Multi-SGIs cannot go via the Pending+Active state, as this would
    corrupt the source field
    - Multi-SGIs are converted to using MI on EOI instead of NPIE

    Fixes: 16ca6a607d84bef0 ("KVM: arm/arm64: vgic: Don't populate multiple LRs with the same vintid")
    Reported-by: Mark Rutland
    Tested-by: Mark Rutland
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier

    Marc Zyngier
     
  • It's possible for userspace to control n. Sanitize n when using it as an
    array index.

    Note that while it appears that n must be bound to the interval [0,3]
    due to the way it is extracted from addr, we cannot guarantee that
    compiler transformations (and/or future refactoring) will ensure this is
    the case, and given this is a slow path it's better to always perform
    the masking.
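
    The fix is essentially a one-liner plus an include (sketch; the
    bound of 4 corresponds to the [0,3] interval noted above):

    #include <linux/nospec.h>

    n = array_index_nospec(n, 4);   /* clamps n to [0, 4) even under
                                     * speculative execution */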

    Found by smatch.

    Signed-off-by: Mark Rutland
    Acked-by: Christoffer Dall
    Acked-by: Marc Zyngier
    Cc: kvmarm@lists.cs.columbia.edu
    Signed-off-by: Will Deacon

    Mark Rutland
     
  • It's possible for userspace to control intid. Sanitize intid when using
    it as an array index.

    At the same time, sort the includes when adding <linux/nospec.h>.

    Found by smatch.

    Signed-off-by: Mark Rutland
    Acked-by: Christoffer Dall
    Acked-by: Marc Zyngier
    Cc: kvmarm@lists.cs.columbia.edu
    Signed-off-by: Will Deacon

    Mark Rutland
     

25 Apr, 2018

1 commit

  • Call clear_siginfo to ensure every stack allocated siginfo is properly
    initialized before being passed to the signal sending functions.

    Note: It is not safe to depend on C initializers to initialize struct
    siginfo on the stack because C is allowed to skip holes when
    initializing a structure.

    The initialization of struct siginfo in tracehook_report_syscall_exit
    was moved from the helper user_single_step_siginfo into
    tracehook_report_syscall_exit itself, to make it clear that the local
    variable siginfo gets fully initialized.

    In a few cases the scope of struct siginfo has been reduced to make it
    clear that the local siginfo is not used on other paths in the
    function in which it is declared.

    Instances of using memset to initialize siginfo have been replaced
    with calls to clear_siginfo() for clarity.
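
    The resulting idiom for a stack-allocated siginfo looks like this
    (an illustrative sketch, not a specific hunk; 'addr' is
    hypothetical):

    struct siginfo info;

    clear_siginfo(&info);   /* zeroes everything, including padding holes */
    info.si_signo = SIGSEGV;
    info.si_errno = 0;
    info.si_code  = SEGV_MAPERR;
    info.si_addr  = (void __user *)addr;
    force_sig_info(SIGSEGV, &info, current);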

    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman