17 Oct, 2020

1 commit

  • Fix multiple occurrences of duplicated words in kernel/.

    Fix one typo/spello on the same line as a duplicate word. Change one
    instance of "the the" to "that the". Otherwise just drop one of the
    repeated words.

    Signed-off-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Link: https://lkml.kernel.org/r/98202fa6-8919-ef63-9efe-c0fad5ca7af1@infradead.org
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     

31 Aug, 2020

1 commit

  • In the core runtime, we must minimize any calls to external library
    functions to avoid any kind of recursion. Recursion can occur even
    when instrumentation is disabled for a called function but tracing is
    still enabled for it.

    Most recently, prandom_u32() added a tracepoint, which can cause
    problems for KCSAN even if the rcuidle variant is used. For example:
    kcsan -> prandom_u32() -> trace_prandom_u32_rcuidle ->
    srcu_read_lock_notrace -> __srcu_read_lock -> kcsan ...

    While we could disable KCSAN in kcsan_setup_watchpoint(), this does not
    solve other unexpected behaviour we may get due to recursing into
    functions that may not be tolerant to such recursion:
    __srcu_read_lock -> kcsan -> ... -> __srcu_read_lock

    Therefore, switch to using prandom_u32_state(), which is uninstrumented,
    and does not have a tracepoint.
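
    A minimal sketch of the replacement, assuming a per-CPU state seeded
    once at init (the helper name kcsan_prandom_u32_max is illustrative):

    static DEFINE_PER_CPU(struct rnd_state, kcsan_rand_state);

    static u32 kcsan_prandom_u32_max(u32 ep_ro)
    {
        struct rnd_state *state = &get_cpu_var(kcsan_rand_state);
        const u32 res = prandom_u32_state(state); /* no tracepoint */

        put_cpu_var(kcsan_rand_state);
        return (u32)(((u64)res * ep_ro) >> 32); /* scale into [0, ep_ro) */
    }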

    Link: https://lkml.kernel.org/r/20200821063043.1949509-1-elver@google.com
    Link: https://lkml.kernel.org/r/20200820172046.GA177701@elver.google.com
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

25 Aug, 2020

12 commits

  • Remove kcsan_counter_inc/dec() functions, as they perform no other
    logic, and are no longer needed.

    This avoids several calls in kcsan_setup_watchpoint() and
    kcsan_found_watchpoint(), and lets the compiler warn us about
    potential out-of-bounds accesses, since the array's size is known at
    all usage sites at compile time.
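
    For example, with the counters kept in a plain array indexed by the
    counter enum (names below are illustrative), call sites become direct
    array accesses whose bounds the compiler can see:

    static atomic_long_t kcsan_counters[KCSAN_COUNTER_COUNT];

    /* Instead of kcsan_counter_inc(KCSAN_COUNTER_SETUP_WATCHPOINTS): */
    atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_SETUP_WATCHPOINTS]);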

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Use the same pr_fmt throughout for consistency. [ The only exception is
    report.c, where the format must be kept precisely as-is. ]
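
    The usual kernel idiom, assuming the "kcsan: " prefix:

    #define pr_fmt(fmt) "kcsan: " fmt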

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Show a message in the kernel log if KCSAN was enabled early.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Remove the debugfs test command, as it is no longer needed now that we
    have the KUnit+Torture based kcsan-test module. This is to avoid
    confusion around how KCSAN should be tested, as only the kcsan-test
    module is maintained.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Simplify checking prefixes and length calculation of constant strings.
    For the former, the kernel provides str_has_prefix(), and the latter we
    should just use strlen("..") because GCC and Clang have optimizations
    that optimize these into constants.
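
    A hypothetical before/after of the pattern (not the exact call site):

    /* Before: manual prefix length computation. */
    if (!strncmp(arg, "atomic-ops", sizeof("atomic-ops") - 1))
        arg += sizeof("atomic-ops") - 1;

    /* After: str_has_prefix() returns the prefix length (0 on mismatch),
     * and strlen() of a string literal folds to a constant. */
    const size_t len = str_has_prefix(arg, "atomic-ops");
    if (len)
        arg += len;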

    No functional change intended.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Simplify counter ID to name mapping by using an array with designated
    inits. This way, we can turn a run-time BUG() into a compile-time static
    assertion failure if a counter name is missing.
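
    A sketch of the pattern, with illustrative counter names:

    static const char *const counter_names[] = {
        [KCSAN_COUNTER_USED_WATCHPOINTS]  = "used_watchpoints",
        [KCSAN_COUNTER_SETUP_WATCHPOINTS] = "setup_watchpoints",
    };
    static_assert(ARRAY_SIZE(counter_names) == KCSAN_COUNTER_COUNT);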

    No functional change intended.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Change the kcsan-test module to support checking reports that include
    compound instrumentation. Since we should not fail the test if this
    support is unavailable, we have to add a config variable that the test
    can use to decide what to check for.

    Acked-by: Peter Zijlstra (Intel)
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Add missing CONFIG_KCSAN_IGNORE_ATOMICS checks for the builtin atomics
    instrumentation.

    Acked-by: Peter Zijlstra (Intel)
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • For compound instrumentation and assert accesses, skew the watchpoint
    delay to be longer if randomized. This is useful to improve race
    detection for such accesses.

    For compound accesses we should increase the delay as we've aggregated
    both read and write instrumentation. By giving up 1 call into the
    runtime, we're less likely to set up a watchpoint and thus less likely
    to detect a race. We can balance this by increasing the watchpoint
    delay.

    For assert accesses, we know these are of increased interest, and we
    wish to increase our chances of detecting races for such checks.

    Note that kcsan_udelay_{task,interrupt} define the upper-bound delays.
    When randomized, delays are uniformly distributed in [0, delay].
    Skewing the delay does not break this promise as long as the defined
    upper bounds are still adhered to. The current skew results in delays
    uniformly distributed in [delay/2, delay].
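
    A sketch close to the actual change: halving only the randomized
    subtraction keeps the configured upper bound while skewing delays
    from [0, delay] into [delay/2, delay]:

    static void delay_access(int type)
    {
        unsigned int delay = in_task() ? kcsan_udelay_task : kcsan_udelay_interrupt;
        /* For compound and assert accesses, halve the random part. */
        unsigned int skew_delay_order =
            (type & (KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_ASSERT)) ? 1 : 0;

        delay -= IS_ENABLED(CONFIG_KCSAN_DELAY_RANDOMIZE) ?
                     prandom_u32_max(delay >> skew_delay_order) : 0;
        udelay(delay);
    }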

    Acked-by: Peter Zijlstra (Intel)
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Add support for compounded read-write instrumentation if supported by
    the compiler. This adds the necessary instrumentation functions and a
    new type, which is used to generate a more descriptive report.

    Furthermore, such compounded memory access instrumentation is excluded
    from the "assume aligned writes up to word size are atomic" rule,
    because we cannot assume that the compiler emits code that is atomic for
    compound ops.

    LLVM/Clang added support for the feature in:
    https://github.com/llvm/llvm-project/commit/785d41a261d136b64ab6c15c5d35f2adc5ad53e3

    The new instrumentation is emitted for sets of memory accesses in the
    same basic block to the same address with at least one read appearing
    before a write. These typically result from compound operations such as
    ++, --, +=, -=, |=, &=, etc. but also equivalent forms such as "var =
    var + 1". Where the compiler determines that it is equivalent to emit a
    call to a single __tsan_read_write instead of separate __tsan_read and
    __tsan_write, we can then benefit from improved performance and better
    reporting for such access patterns.

    The new reports now show that the ops are both reads and writes, for
    example:

    read-write to 0xffffffff90548a38 of 8 bytes by task 143 on cpu 3:
     test_kernel_rmw_array+0x45/0xa0
     access_thread+0x71/0xb0
     kthread+0x21e/0x240
     ret_from_fork+0x22/0x30

    read-write to 0xffffffff90548a38 of 8 bytes by task 144 on cpu 2:
     test_kernel_rmw_array+0x45/0xa0
     access_thread+0x71/0xb0
     kthread+0x21e/0x240
     ret_from_fork+0x22/0x30
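
    A sketch of a new runtime hook (one size variant shown; check_access()
    is KCSAN's internal entry point):

    void __tsan_read_write8(void *ptr)
    {
        check_access(ptr, 8, KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE);
    }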

    Acked-by: Peter Zijlstra (Intel)
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Add a test case to the kcsan-test module to verify that atomic builtin
    instrumentation works.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Some architectures (currently e.g. s390 partially) implement atomics
    using the compiler's atomic builtins (__atomic_*, __sync_*). To support
    enabling KCSAN on such architectures in future, or support experimental
    use of these builtins, implement support for them.

    We should also avoid breaking KCSAN kernels due to use (accidental or
    otherwise) of atomic builtins in drivers, as has happened in the past:
    https://lkml.kernel.org/r/5231d2c0-41d9-6721-e15f-a7eedf3ce69e@infradead.org

    The instrumentation is subtly different from regular reads/writes: TSAN
    instrumentation replaces the use of atomic builtins with a call into the
    runtime, and the runtime's job is to also execute the desired atomic
    operation. We rely on the __atomic_* compiler builtins, available with
    all KCSAN-supported compilers, to implement each TSAN atomic
    instrumentation function.
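
    A sketch of one such hook, simplified from the macro-generated
    definitions (this also shows where the CONFIG_KCSAN_IGNORE_ATOMICS
    check from the patch above fits):

    u32 __tsan_atomic32_load(const u32 *ptr, int memorder)
    {
        if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS))
            check_access(ptr, sizeof(*ptr), KCSAN_ACCESS_ATOMIC);
        return __atomic_load_n(ptr, memorder);
    }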

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

31 Jul, 2020

1 commit

  • To improve the general usefulness of the IRQ state trace events with
    KCSAN enabled, save and restore the trace information when entering and
    exiting the KCSAN runtime as well as when generating a KCSAN report.

    Without this, reporting the IRQ trace events (whether via a KCSAN report
    or outside of KCSAN via a lockdep report) is rather useless due to
    continuously being touched by KCSAN. This is because if KCSAN is
    enabled, every instrumented memory access causes changes to IRQ trace
    events (either by KCSAN disabling/enabling interrupts or taking
    report_lock when generating a report).

    Before "lockdep: Prepare for NMI IRQ state tracking", KCSAN avoided
    touching the IRQ trace events via raw_local_irq_save/restore() and
    lockdep_off/on().
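
    A sketch of the save/restore helpers, close to the change (the
    irqtrace fields exist on the task with CONFIG_TRACE_IRQFLAGS=y):

    static inline void kcsan_save_irqtrace(struct task_struct *task)
    {
    #ifdef CONFIG_TRACE_IRQFLAGS
        task->kcsan_save_irqtrace = task->irqtrace;
    #endif
    }

    static inline void kcsan_restore_irqtrace(struct task_struct *task)
    {
    #ifdef CONFIG_TRACE_IRQFLAGS
        task->irqtrace = task->kcsan_save_irqtrace;
    #endif
    }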

    Fixes: 248591f5d257 ("kcsan: Make KCSAN compatible with new IRQ state tracking")
    Signed-off-by: Marco Elver
    Signed-off-by: Ingo Molnar
    Link: https://lore.kernel.org/r/20200729110916.3920464-2-elver@google.com
    Signed-off-by: Ingo Molnar

    Marco Elver
     

10 Jul, 2020

1 commit

  • The new IRQ state tracking code does not honor lockdep_off(), and as
    such we should again permit tracing by using non-raw functions in
    core.c. Update the lockdep_off() comment in report.c, to reflect the
    fact there is still a potential risk of deadlock due to using printk()
    from scheduler code.

    Suggested-by: Peter Zijlstra (Intel)
    Signed-off-by: Marco Elver
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Ingo Molnar
    Link: https://lkml.kernel.org/r/20200624113246.GA170324@elver.google.com

    Marco Elver
     

30 Jun, 2020

8 commits

  • Disable branch tracing in the core KCSAN runtime if branches are being
    traced (TRACE_BRANCH_PROFILING). This is to avoid its performance
    impact, but also to avoid recursion in case KCSAN is enabled for the
    branch tracing runtime.

    The latter had already been a problem for KASAN:
    https://lore.kernel.org/lkml/CANpmjNOeXmD5E3O50Z3MjkiuCYaYOPyi+1rq=GZvEKwBvLR0Ug@mail.gmail.com/
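
    The usual idiom, defined before any includes so that likely() and
    unlikely() stay plain __builtin_expect() in the runtime's translation
    units:

    #define DISABLE_BRANCH_PROFILING

    #include <linux/compiler.h>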

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Simplify the set of compiler flags for the runtime by removing cc-option
    from -fno-stack-protector, because all supported compilers support it.
    This saves us one compiler invocation during build.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Add a test that neither KCSAN nor the compiler gets confused about
    accesses to jiffies on different architectures.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Remove existing special atomic rules from kcsan_is_atomic_special()
    because they are no longer needed. Since we rely on the compiler
    emitting instrumentation distinguishing volatile accesses, the rules
    have become redundant.

    Let's keep kcsan_is_atomic_special() around, so that we have an obvious
    place to add special rules should the need arise in future.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Rename 'test.c' to 'selftest.c' to better reflect its purpose (Kconfig
    variable and code inside already match this). This is to avoid confusion
    with the test suite module in 'kcsan-test.c'.

    No functional change.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • The functions here should not be forward declared for explicit use
    elsewhere in the kernel, as they should only be emitted by the compiler
    due to sanitizer instrumentation. Add forward declarations a line above
    their definition to shut up warnings in W=1 builds.
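
    The shape of the fix, for one of the compiler-emitted hooks:

    /* Declared directly above the definition only to silence
     * -Wmissing-prototypes; nothing else should call this. */
    void __tsan_func_entry(void *call_pc);
    void __tsan_func_entry(void *call_pc)
    {
    }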

    Link: https://lkml.kernel.org/r/202006060103.jSCpnV1g%lkp@intel.com
    Reported-by: kernel test robot
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Instead of __no_kcsan_or_inline, prefer '__no_kcsan inline' in the
    test -- this is in case we decide to remove __no_kcsan_or_inline.

    Suggested-by: Peter Zijlstra
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • This adds a KCSAN test suite focusing on the behaviour of the
    integrated runtime. It tests various race scenarios and verifies the
    reports generated to the console. It makes use of KUnit for test
    organization and the Torture framework for test thread control.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

12 Jun, 2020

1 commit

  • In the kernel, the "volatile" keyword is used in various concurrent
    contexts, whether in low-level synchronization primitives or for
    legacy reasons. If supported by the compiler, it will be assumed
    that aligned volatile accesses up to sizeof(long long) (matching
    compiletime_assert_rwonce_type()) are atomic.

    Recent versions of Clang [1] (GCC tentative [2]) can instrument
    volatile accesses differently. Add the option (required) to enable the
    instrumentation, and provide the necessary runtime functions. None of
    the updated compilers are widely available yet (Clang 11 will be the
    first release to support the feature).

    [1] https://github.com/llvm/llvm-project/commit/5a2c31116f412c3b6888be361137efd705e05814
    [2] https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544452.html

    This change allows removing any explicit checks in primitives such as
    READ_ONCE() and WRITE_ONCE().
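
    A sketch of one of the new runtime functions, simplified from the
    macro-generated definitions (which also verify alignment before
    treating the access as atomic):

    void __tsan_volatile_read8(void *ptr)
    {
        if (IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS))
            return;
        check_access(ptr, 8, KCSAN_ACCESS_ATOMIC);
    }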

    [ bp: Massage commit message a bit. ]

    Signed-off-by: Marco Elver
    Signed-off-by: Borislav Petkov
    Signed-off-by: Thomas Gleixner
    Acked-by: Will Deacon
    Acked-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200521142047.169334-4-elver@google.com

    Marco Elver
     

07 May, 2020

1 commit

  • The __kcsan_{enable,disable}_current() variants only call into KCSAN if
    KCSAN is enabled for the current compilation unit. Note: This is
    typically not what we want, as we usually want to ensure that even calls
    into other functions still have KCSAN disabled.

    These variants may safely be used in header files that are shared
    between regular kernel code and code that does not link the KCSAN
    runtime.
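
    A sketch of how the variants compile away, close to the kcsan-checks.h
    definitions:

    #ifdef __SANITIZE_THREAD__
    /* This compilation unit is instrumented: calls reach the runtime. */
    #define __kcsan_disable_current kcsan_disable_current
    #define __kcsan_enable_current  kcsan_enable_current_nowarn
    #else
    /* Not instrumented: the calls vanish entirely. */
    static inline void __kcsan_disable_current(void) { }
    static inline void __kcsan_enable_current(void) { }
    #endif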

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

14 Apr, 2020

6 commits

  • Reporting hides KCSAN runtime functions in the stack trace, with
    filtering done based on function names. Currently this includes all
    functions (or modules) whose names match "kcsan_". Make the filter
    aware of KCSAN tests, whose names contain "kcsan_test"; these are no
    longer skipped in the report.

    This is in preparation for adding a KCSAN test module.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Pass string length as returned by scnprintf() to strnstr(), since
    strnstr() searches exactly len bytes in haystack, even if it contains a
    NUL-terminator before haystack+len.
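
    The shape of the fix (buffer and variable names hypothetical):

    const int len = scnprintf(expect, sizeof(expect), "%pS", fn);

    /* Pass the written length, not sizeof(expect), so strnstr() does not
     * scan stale bytes past the NUL-terminator. */
    match = strnstr(expect, needle, len) != NULL;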

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Introduce ASSERT_EXCLUSIVE_*_SCOPED(), which provide an intuitive
    interface to use the scoped-access feature, without having to explicitly
    mark the start and end of the desired scope. Basing duration of the
    checks on scope avoids accidental misuse and resulting false positives,
    which may be hard to debug. See added comments for usage.

    The macros are implemented using __attribute__((__cleanup__(func))),
    which is supported by all compilers that currently support KCSAN.
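
    A simplified sketch of the mechanism (the in-tree macros add unique
    variable naming and cover the writer/access/bits variants):

    #define ASSERT_EXCLUSIVE_WRITER_SCOPED(var)                            \
        struct kcsan_scoped_access __kcsan_sa                              \
            __attribute__((__cleanup__(kcsan_end_scoped_access)));         \
        kcsan_begin_scoped_access(&(var), sizeof(var),                     \
                                  KCSAN_ACCESS_ASSERT, &__kcsan_sa)

    The check then lasts until __kcsan_sa goes out of scope, with no
    explicit end marker at the call site.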

    Suggested-by: Boqun Feng
    Suggested-by: Paul E. McKenney
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • This adds support for scoped accesses, where the memory range is checked
    for the duration of the scope. The feature is implemented by inserting
    the relevant access information into a list of scoped accesses for
    the current execution context, which are then checked (until removed)
    on every call (through instrumentation) into the KCSAN runtime.

    An alternative, more complex, implementation could set up a watchpoint for
    the scoped access, and keep the watchpoint set up. This, however, would
    require first exposing a handle to the watchpoint, as well as dealing
    with cases such as accesses by the same thread while the watchpoint is
    still set up (and several more cases). It is also doubtful if this would
    provide any benefit, since the majority of delay where the watchpoint
    is set up is likely due to the injected delays by KCSAN. Therefore,
    the implementation in this patch is simpler and avoids hurting KCSAN's
    main use-case (normal data race detection); it also implicitly increases
    scoped-access race-detection-ability due to increased probability of
    setting up watchpoints by repeatedly calling __kcsan_check_access()
    throughout the scope of the access.

    The implementation required adding an additional conditional branch to
    the fast-path. However, the microbenchmark showed a *speedup* of ~5%
    on the fast-path. This appears to be due to subtly improved codegen by
    GCC from moving get_ctx() and associated load of preempt_count earlier.
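
    A sketch of the slow-path check, close to the change (the list head is
    temporarily cleared so the check cannot recurse into itself):

    static noinline void kcsan_check_scoped_accesses(void)
    {
        struct kcsan_ctx *ctx = get_ctx();
        struct list_head *prev_save = ctx->scoped_accesses.prev;
        struct kcsan_scoped_access *sa;

        ctx->scoped_accesses.prev = NULL; /* Avoid recursion. */
        list_for_each_entry(sa, &ctx->scoped_accesses, list)
            __kcsan_check_access(sa->ptr, sa->size, sa->type);
        ctx->scoped_accesses.prev = prev_save;
    }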

    Suggested-by: Boqun Feng
    Suggested-by: Paul E. McKenney
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • To avoid deadlock in case watchers can be interrupted, we need to ensure
    that producers of the struct other_info can never be blocked by an
    unrelated consumer. (Likely to occur with KCSAN_INTERRUPT_WATCHER.)

    There are several cases that can lead to this scenario, for example:

    1. A watchpoint A was set up by task T1, but interrupted by
    interrupt I1. Some other thread (task or interrupt) finds
    watchpoint A, consumes it, and sets other_info. Then I1 also
    finds some unrelated watchpoint B, consumes it, but is blocked
    because other_info is in use. T1 cannot consume other_info
    because I1 never returns -> deadlock.

    2. A watchpoint A was set up by task T1, but interrupted by
    interrupt I1, which also sets up a watchpoint B. Some other
    thread finds watchpoint A, consumes it, and sets up other_info
    with its information. Similarly, some other thread finds
    watchpoint B and consumes it, but is then blocked because
    other_info is in use. When I1 continues it sees its watchpoint
    was consumed, and that it must wait for other_info, which
    currently contains information to be consumed by T1. However, T1
    cannot unblock other_info because I1 never returns -> deadlock.

    To avoid this, we need to ensure that producers of struct other_info
    always have a usable other_info entry. This is obviously not the case
    with only a single instance of struct other_info, as concurrent
    producers must wait for the entry to be released by some consumer (which
    may be locked up as illustrated above).

    While it would be nice if producers could simply call kmalloc() and
    append their instance of struct other_info to a list, we are very
    limited in this code path: since KCSAN can instrument the allocators
    themselves, calling kmalloc() could lead to deadlock or corrupted
    allocator state.

    Since producers of the struct other_info will always succeed at
    try_consume_watchpoint(), preceding the call into kcsan_report(), we
    know that the particular watchpoint slot cannot simply be reused or
    consumed by another potential other_info producer. If we move removal of
    a watchpoint after reporting (by the consumer of struct other_info), we
    can see a consumed watchpoint as a held lock on elements of other_info,
    if we create a one-to-one mapping of a watchpoint to an other_info
    element.

    Therefore, the simplest solution is to create an array of struct
    other_info that is as large as the watchpoints array in core.c, and pass
    the watchpoint index to kcsan_report() for producers and consumers, and
    change watchpoints to be removed after reporting is done.

    With a default config on a 64-bit system, the array other_infos consumes
    ~37KiB. For most systems today this is not a problem. On smaller memory
    constrained systems, the config value CONFIG_KCSAN_NUM_WATCHPOINTS can
    be reduced appropriately.

    Overall, this change is a simplification of the prepare_report() code,
    and makes some of the checks (such as checking if at least one access is
    a write) redundant.
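
    A sketch of the mapping, sized to mirror the watchpoints array (the
    accessor name is illustrative):

    static struct other_info other_infos[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS - 1];

    static struct other_info *get_other_info(int watchpoint_idx)
    {
        return &other_infos[watchpoint_idx];
    }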

    Tested:
    $ tools/testing/selftests/rcutorture/bin/kvm.sh \
    --cpus 12 --duration 10 --kconfig "CONFIG_DEBUG_INFO=y \
    CONFIG_KCSAN=y CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n \
    CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n \
    CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y \
    CONFIG_KCSAN_INTERRUPT_WATCHER=y CONFIG_PROVE_LOCKING=y" \
    --configs TREE03
    => No longer hangs and runs to completion as expected.

    Reported-by: Paul E. McKenney
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Improve readability by introducing access_info and other_info structs,
    and, in preparation for the following commit in this series, replace
    the single instance of other_info with an array of size 1.

    No functional change intended.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

26 Mar, 2020

4 commits

  • s/slots slots/slots/

    Signed-off-by: Qiujun Huang
    Reviewed-by: Nick Desaulniers
    [elver: commit message]
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Qiujun Huang
     
  • Add volatile current->state to the list of implicitly atomic accesses.
    This is in preparation to eventually enable KCSAN on kernel/sched
    (which currently still has KCSAN_SANITIZE := n).

    Since accesses that match the special check in atomic.h are rare, it
    makes more sense to move this check to the slow-path, avoiding the
    additional compare in the fast-path. With the microbenchmark, a speedup
    of ~6% is measured.
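
    A sketch of the resulting slow-path check, close to the in-tree
    helper at the time:

    static __always_inline bool kcsan_is_atomic_special(const volatile void *ptr)
    {
        /* Volatile globals that have been observed in data races. */
        return ptr == &jiffies || ptr == &current->state;
    }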

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Add CONFIG_KCSAN_VERBOSE to optionally enable more verbose reports.
    Currently, information about the reporting task's held locks and IRQ
    trace events is shown, if they are enabled.

    Signed-off-by: Marco Elver
    Suggested-by: Qian Cai
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Add option to allow interrupts while a watchpoint is set up. This can be
    enabled either via CONFIG_KCSAN_INTERRUPT_WATCHER or via the boot
    parameter 'kcsan.interrupt_watcher=1'.

    Note that currently not all safe per-CPU access primitives and patterns
    are accounted for, which could result in false positives. For example,
    asm-generic/percpu.h uses plain operations, which by default are
    instrumented. On interrupts and subsequent accesses to the same
    variable, KCSAN would currently report a data race with this option.

    Therefore, this option should currently remain disabled by default, but
    may be enabled for specific test scenarios.

    To avoid new warnings, change all uses of smp_processor_id() to the
    raw version (as already done in kcsan_found_watchpoint()). The exact
    SMP processor id is for informational purposes in the report, and
    correctness is not affected.

    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

21 Mar, 2020

2 commits

  • Previously the system would lock up if ftrace was enabled together with
    KCSAN. This is due to recursion on reporting if the tracer code is
    instrumented with KCSAN.

    To avoid this for all types of tracing, disable KCSAN instrumentation
    for all of kernel/trace.

    Furthermore, since KCSAN relies on udelay() to introduce delay, we have
    to disable ftrace for udelay() (currently done for x86) in case KCSAN is
    used together with lockdep and ftrace. The reason is that it may corrupt
    lockdep IRQ flags tracing state due to a peculiar case of recursion
    (details in Makefile comment).

    Reported-by: Qian Cai
    Tested-by: Qian Cai
    Acked-by: Steven Rostedt (VMware)
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Marco Elver
     
  • This introduces ASSERT_EXCLUSIVE_BITS(var, mask).
    ASSERT_EXCLUSIVE_BITS(var, mask) will cause KCSAN to assume that the
    following access is safe w.r.t. data races (however, please see the
    docbook comment for disclaimer here).

    For more context on why this was considered necessary, please see:

    http://lkml.kernel.org/r/1580995070-25139-1-git-send-email-cai@lca.pw

    In particular, before this patch, data races between reads (that use
    @mask bits of an access that should not be modified concurrently) and
    writes (that change ~@mask bits not used by the readers) would have been
    annotated with "data_race()" (or "READ_ONCE()"). However, doing so would
    then hide real problems: we would no longer be able to detect harmful
    races between reads to @mask bits and writes to @mask bits.

    Therefore, by using ASSERT_EXCLUSIVE_BITS(var, mask), we accomplish
    the following (see the usage sketch after this list):

    1. Avoid proliferation of specific macros at the call sites: by
    including a single mask in the argument list, we can use the same
    macro in a wide variety of call sites, regardless of how and which
    bits in a field each call site actually accesses.

    2. The existing code does not need to be modified (although READ_ONCE()
    may still be advisable if we cannot prove that the data race is
    always safe).

    3. We catch bugs where the exclusive bits are modified concurrently.

    4. We document properties of the current code.
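
    A hypothetical usage sketch (struct foo and FOO_MODE_MASK are
    illustrative, not from the patch):

    static int foo_get_mode(struct foo *f)
    {
        /* Only this path may modify the FOO_MODE_MASK bits of f->flags;
         * concurrent writers of the other bits are not a bug. */
        ASSERT_EXCLUSIVE_BITS(f->flags, FOO_MODE_MASK);
        return f->flags & FOO_MODE_MASK;
    }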

    Acked-by: John Hubbard
    Signed-off-by: Marco Elver
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar
    Cc: David Hildenbrand
    Cc: Jan Kara
    Cc: Qian Cai

    Marco Elver