23 Oct, 2019

1 commit

  • A recent commit removed the NULL pointer check from the clock_getres()
    implementation, causing a test case to fault.

    For obscure reasons, POSIX requires an explicit NULL pointer check in
    clock_getres() in addition to the validity check of the clock_id argument.

    Add it back for both 32bit and 64bit.

    Note: this is only a partial revert of the offending commit; it does not
    bring back the broken fallback invocation in the 32bit compat
    implementations of clock_getres() and clock_gettime().

    Fixes: a9446a906f52 ("lib/vdso/32: Remove inconsistent NULL pointer checks")
    Reported-by: Andreas Schwab
    Signed-off-by: Thomas Gleixner
    Tested-by: Christophe Leroy
    Cc: stable@vger.kernel.org
    Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1910211202260.1904@nanos.tec.linutronix.de

    Thomas Gleixner
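
    As a rough illustration of the restored check, here is a minimal,
    self-contained C sketch; struct ts64, do_getres() and the return values
    are simplified placeholders, not the actual lib/vdso code.

    /* Sketch only: models the restored NULL check, not the real lib/vdso code. */
    #include <stdint.h>
    #include <stdio.h>

    struct ts64 { int64_t sec; int64_t nsec; };     /* stand-in for __kernel_timespec */

    /* Placeholder for the real resolution lookup from the vdso data page. */
    static int do_getres(int clock, struct ts64 *ts)
    {
            (void)clock;
            ts->sec  = 0;
            ts->nsec = 1;                           /* pretend 1 ns resolution */
            return 0;
    }

    static int cvdso_clock_getres(int clock, struct ts64 *res)
    {
            struct ts64 ts;

            if (do_getres(clock, &ts))
                    return -1;                      /* caller falls back to the syscall */

            /*
             * POSIX allows clock_getres(clock, NULL): only the clock_id is
             * validated and the resolution is not stored, so the result must
             * only be written through a non-NULL pointer.
             */
            if (res) {
                    res->sec  = ts.sec;
                    res->nsec = ts.nsec;
            }
            return 0;
    }

    int main(void)
    {
            struct ts64 r;

            printf("NULL res: %d\n", cvdso_clock_getres(0, NULL));  /* no fault */
            printf("real res: %d (%lld ns)\n", cvdso_clock_getres(0, &r),
                   (long long)r.nsec);
            return 0;
    }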
     

07 Oct, 2019

1 commit

  • arm64 was the last architecture using the CROSS_COMPILE_COMPAT_VDSO
    config option. With this patch series, that dependency has been removed
    from the architecture.

    Remove CROSS_COMPILE_COMPAT_VDSO from the Unified vDSO library code.

    Cc: Thomas Gleixner
    Cc: Andy Lutomirski
    Signed-off-by: Vincenzo Frascino
    Signed-off-by: Will Deacon

    Vincenzo Frascino
     

31 Jul, 2019

3 commits

  • Seccomp denies applications access to the clock_gettime64() and
    clock_getres64() syscalls when they are not enabled in the existing
    filters, which causes a regression.

    That trips over the fact that 32bit VDSOs use the new clock_gettime64() and
    clock_getres64() syscalls in the fallback path.

    Add a conditional to invoke the 32bit legacy fallback syscalls instead of
    the new 64bit variants. The conditional can go away once all architectures
    are converted.

    Fixes: 00b26474c2f1 ("lib/vdso: Provide generic VDSO implementation")
    Signed-off-by: Thomas Gleixner
    Tested-by: Sean Christopherson
    Reviewed-by: Sean Christopherson
    Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1907301134470.1738@nanos.tec.linutronix.de

    Thomas Gleixner
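
    A rough sketch of that conditional, assuming an opt-in macro along the
    lines of VDSO_HAS_32BIT_FALLBACK; the macro, type and function names here
    are illustrative placeholders, not the exact kernel symbols.

    #include <stdint.h>

    struct old_ts32 { int32_t sec; int32_t nsec; };   /* legacy 32bit timespec   */
    struct ts64     { int64_t sec; int64_t nsec; };   /* 64bit __kernel_timespec */

    /* Stubs standing in for the real syscall trap code. */
    long sys_clock_gettime_old(int clock, struct old_ts32 *ts)
    {
            (void)clock; ts->sec = 0; ts->nsec = 0; return 0;
    }

    long sys_clock_gettime64(int clock, struct ts64 *ts)
    {
            (void)clock; ts->sec = 0; ts->nsec = 0; return 0;
    }

    /* Fallback used by the 32bit vDSO entry point when the vdso data is not
     * usable for the requested clock. */
    long vdso32_gettime_fallback(int clock, struct old_ts32 *ts)
    {
    #ifdef VDSO_HAS_32BIT_FALLBACK
            /*
             * The architecture provides a legacy fallback: issue the old
             * 32bit syscall, which existing seccomp filters already allow.
             */
            return sys_clock_gettime_old(clock, ts);
    #else
            /*
             * Transitional path until all architectures are converted: use
             * the time64 syscall and convert, which trips over seccomp
             * filters that do not know about clock_gettime64().
             */
            struct ts64 t;
            long ret = sys_clock_gettime64(clock, &t);

            if (ret == 0) {
                    ts->sec  = (int32_t)t.sec;
                    ts->nsec = (int32_t)t.nsec;
            }
            return ret;
    #endif
    }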
     
  • To allow syscall fallbacks using the legacy 32bit syscall for 32bit VDSO
    builds, move the fallback invocation out into the callers.

    Split the common code out of __cvdso_clock_gettime/getres() and invoke the
    syscall fallback in the 64bit and 32bit variants.

    Preparatory work for using the legacy syscalls in the 32bit VDSO. No
    functional change.

    Fixes: 00b26474c2f1 ("lib/vdso: Provide generic VDSO implementation")
    Signed-off-by: Thomas Gleixner
    Tested-by: Vincenzo Frascino
    Reviewed-by: Andy Lutomirski
    Reviewed-by: Vincenzo Frascino
    Link: https://lkml.kernel.org/r/20190728131648.695579736@linutronix.de

    Thomas Gleixner
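
    A compact sketch of that split; the helper and entry point names, as well
    as the stubbed syscalls, are illustrative placeholders rather than the
    actual lib/vdso symbols.

    #include <stdint.h>

    struct old_ts32 { int32_t sec; int32_t nsec; };
    struct ts64     { int64_t sec; int64_t nsec; };

    /* Stubs for the vdso data read and the two syscall fallbacks. */
    static int do_vdso_read(int clock, struct ts64 *ts)
    {
            (void)clock;
            ts->sec = 0;
            ts->nsec = 0;
            return 0;               /* non-zero would mean "use the syscall" */
    }

    static long gettime64_syscall(int clock, struct ts64 *ts)
    {
            (void)clock; (void)ts;
            return 0;
    }

    static long gettime32_syscall(int clock, struct old_ts32 *ts)
    {
            (void)clock; (void)ts;
            return 0;
    }

    /* Common code: no syscall fallback invocation in here any more. */
    static int cvdso_clock_gettime_common(int clock, struct ts64 *ts)
    {
            return do_vdso_read(clock, ts);
    }

    /* 64bit entry point: falls back to the time64 syscall. */
    long cvdso_clock_gettime(int clock, struct ts64 *ts)
    {
            if (cvdso_clock_gettime_common(clock, ts))
                    return gettime64_syscall(clock, ts);
            return 0;
    }

    /* 32bit entry point: free to pick the legacy syscall as fallback instead. */
    long cvdso_clock_gettime32(int clock, struct old_ts32 *res)
    {
            struct ts64 ts;

            if (cvdso_clock_gettime_common(clock, &ts))
                    return gettime32_syscall(clock, res);

            res->sec  = (int32_t)ts.sec;
            res->nsec = (int32_t)ts.nsec;
            return 0;
    }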
     
  • The 32bit variants of vdso_clock_gettime()/getres() have a NULL pointer
    check for the timespec pointer. That's inconsistent with the 64bit
    variants.

    But the vdso implementation can never be consistent with the syscall,
    because the only invalid pointer it can handle is NULL; any other invalid
    pointer causes a segfault. So special-casing NULL is not really useful.

    Remove it along with the superfluous syscall fallback invocation, which
    would return -EFAULT anyway. That also gets rid of the dubious typecast
    which only works because the pointer is NULL.

    Fixes: 00b26474c2f1 ("lib/vdso: Provide generic VDSO implementation")
    Signed-off-by: Thomas Gleixner
    Tested-by: Vincenzo Frascino
    Reviewed-by: Vincenzo Frascino
    Reviewed-by: Andy Lutomirski
    Link: https://lkml.kernel.org/r/20190728131648.587523358@linutronix.de

    Thomas Gleixner
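
    For illustration only, a sketch of the removed pattern and why the cast
    was only ever tolerable for a NULL pointer; the type and function names
    are simplified stand-ins for old_timespec32, __kernel_timespec and the
    fallback.

    #include <stddef.h>
    #include <stdint.h>

    struct old_ts32 { int32_t sec; int32_t nsec; };   /* 8 bytes  */
    struct ts64     { int64_t sec; int64_t nsec; };   /* 16 bytes */

    /* Stub standing in for the time64 syscall fallback. */
    static long getres_time64_syscall(int clock, struct ts64 *res)
    {
            (void)clock; (void)res;
            return 0;
    }

    long removed_getres32_pattern(int clock, struct old_ts32 *res)
    {
            /*
             * The removed special case forwarded NULL to the time64 fallback
             * with a cast. The cast is only tolerable because the pointer is
             * NULL: for a real struct old_ts32 the layouts differ and the
             * syscall would write 16 bytes into an 8 byte object.
             */
            if (res == NULL)
                    return getres_time64_syscall(clock, (struct ts64 *)res);

            return 0;               /* real handling elided in this sketch */
    }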
     

26 Jun, 2019

1 commit

  • The x86 vdso implementation, on which the generic vdso library is based,
    has subtle (and unfortunately undocumented) twists:

    1) The code assumes that the clocksource mask is U64_MAX, i.e. that no
    bits are masked, which is true for any valid x86 VDSO clocksource. It
    still performed the mask operation anyway, for no reason and in the wrong
    place, right after reading the clocksource.

    2) It contains a sanity check to catch the case where slightly
    unsynchronized TSC values can be observed, which would cause the delta
    calculation to make a huge jump. It therefore checks whether the
    current TSC value is larger than the value on which the current
    conversion is based. If it is not larger, the base value is used to
    prevent time jumps.

    #1 is not only pointless for the x86 case, where it does the masking for
    no reason; it is also completely wrong for clocksources with a smaller
    mask, which can legitimately wrap around during a conversion period. The
    core timekeeping code does it correctly by applying the mask after the
    delta calculation:

    (now - base) & mask

    #2 is equally broken for clocksources which have smaller masks and can
    wrap around during a conversion period: there the now > base check is
    simply wrong and causes stale time stamps and time going backwards.

    Unbreak it by:

    1) Removing the mask operation from the clocksource read, which makes
    the fallback detection work for all clocksources

    2) Replacing the conditional delta calculation with an overridable inline
    function.

    #2 could reuse clocksource_delta() from the timekeeping code, but that
    results in a significant performance hit for the x86 VDSO. The timekeeping
    core code must use the non-optimized version as it has to operate
    correctly with clocksources which have smaller masks as well, to handle
    the case where the TSC is discarded as timekeeper clocksource and replaced
    by HPET or pmtimer. For the VDSO there is no replacement clocksource: if
    the TSC is unusable, the syscall is enforced, which does the right thing.

    To accommodate the needs of various architectures, provide an overridable
    inline function which defaults to the regular delta calculation with
    masking:

    (now - base) & mask

    Override it for x86 with the non-masking and checking version.

    This unbreaks the ARM64 syscall fallback operation, allows the use of
    clocksources of arbitrary width, and preserves the performance
    optimization for x86.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Vincenzo Frascino
    Cc: linux-arch@vger.kernel.org
    Cc: LAK
    Cc: linux-mips@vger.kernel.org
    Cc: linux-kselftest@vger.kernel.org
    Cc: catalin.marinas@arm.com
    Cc: Will Deacon
    Cc: Arnd Bergmann
    Cc: linux@armlinux.org.uk
    Cc: Ralf Baechle
    Cc: paul.burton@mips.com
    Cc: Daniel Lezcano
    Cc: salyzyn@android.com
    Cc: pcc@google.com
    Cc: shuah@kernel.org
    Cc: 0x7f454c46@gmail.com
    Cc: linux@rasmusvillemoes.dk
    Cc: huw@codeweavers.com
    Cc: sthotton@marvell.com
    Cc: andre.przywara@arm.com
    Cc: Andy Lutomirski
    Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1906261159230.32342@nanos.tec.linutronix.de

    Thomas Gleixner
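
    A small, self-contained sketch of the two delta calculations; the function
    names are illustrative, and main() merely demonstrates the wrap-around
    case for a 32bit-wide clocksource.

    #include <stdint.h>
    #include <stdio.h>

    /* Generic default: mask after the subtraction, so a narrow clocksource
     * that wrapped around between 'last' and 'cycles' still yields the
     * correct positive delta. */
    static uint64_t vdso_calc_delta_generic(uint64_t cycles, uint64_t last,
                                            uint64_t mask, uint32_t mult)
    {
            return ((cycles - last) & mask) * mult;
    }

    /* x86-style override: the TSC mask is all ones, so masking buys nothing.
     * Instead, clamp slightly unsynchronized TSC reads that are behind the
     * base value, to avoid a huge bogus delta. */
    static uint64_t vdso_calc_delta_x86(uint64_t cycles, uint64_t last,
                                        uint64_t mask, uint32_t mult)
    {
            (void)mask;                 /* U64_MAX for the TSC */
            if (cycles > last)
                    return (cycles - last) * mult;
            return 0;                   /* behind the base: stick to base time */
    }

    int main(void)
    {
            /* A 32bit-wide clocksource that wrapped: last near the top, now small. */
            uint64_t mask = 0xffffffffULL, last = 0xfffffff0ULL, now = 0x10ULL;

            printf("generic delta: %llu\n",
                   (unsigned long long)vdso_calc_delta_generic(now, last, mask, 1));
            printf("clamp-only delta: %llu\n",
                   (unsigned long long)vdso_calc_delta_x86(now, last, mask, 1));
            return 0;
    }

    The generic version yields the correct delta of 32 cycles, while the
    clamp-only version returns 0 for the wrapped clocksource and would hand
    out stale time.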
     

23 Jun, 2019

2 commits

  • Some 64 bit architectures have support for 32 bit applications that
    require a separate version of the vDSOs.

    Add support to the generic code for compat fallback functions.

    Signed-off-by: Vincenzo Frascino
    Signed-off-by: Thomas Gleixner
    Tested-by: Shijith Thotton
    Tested-by: Andre Przywara
    Cc: linux-arch@vger.kernel.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: linux-mips@vger.kernel.org
    Cc: linux-kselftest@vger.kernel.org
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Arnd Bergmann
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: Paul Burton
    Cc: Daniel Lezcano
    Cc: Mark Salyzyn
    Cc: Peter Collingbourne
    Cc: Shuah Khan
    Cc: Dmitry Safonov
    Cc: Rasmus Villemoes
    Cc: Huw Davies
    Link: https://lkml.kernel.org/r/20190621095252.32307-10-vincenzo.frascino@arm.com

    Vincenzo Frascino
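
    A sketch, under stated assumptions, of how an architecture's vdso header
    might route the generic fallback to the right syscall ABI;
    BUILD_COMPAT_VDSO, the trap stubs and the zero syscall numbers are
    placeholders, not real kernel symbols.

    #include <stdint.h>

    struct ts64 { int64_t sec; int64_t nsec; };

    /* Stubs standing in for the architecture's native and compat trap code. */
    long native_trap(long nr, long clock, void *ts)
    {
            (void)nr; (void)clock; (void)ts;
            return 0;
    }

    long compat_trap(long nr, long clock, void *ts)
    {
            (void)nr; (void)clock; (void)ts;
            return 0;
    }

    /*
     * The generic library only ever calls this; the architecture decides at
     * build time which syscall ABI the fallback has to use.
     */
    long clock_gettime_fallback(int clock, struct ts64 *ts)
    {
    #ifdef BUILD_COMPAT_VDSO
            /* 32bit vDSO image for 32bit tasks running on a 64bit kernel. */
            return compat_trap(0 /* compat clock_gettime64 nr */, clock, ts);
    #else
            return native_trap(0 /* native clock_gettime nr */, clock, ts);
    #endif
    }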
     
  • In the last few years the kernel has gained quite a few
    architecture-specific vdso implementations which contain very similar
    code.

    Introduce a generic VDSO implementation of gettimeofday() which will be
    shareable between architectures once they are converted over.

    The implementation is based on the current x86 VDSO code.

    [ tglx: Massaged changelog and made the kernel doc tabular ]

    Signed-off-by: Vincenzo Frascino
    Signed-off-by: Thomas Gleixner
    Tested-by: Shijith Thotton
    Tested-by: Andre Przywara
    Cc: linux-arch@vger.kernel.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: linux-mips@vger.kernel.org
    Cc: linux-kselftest@vger.kernel.org
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Arnd Bergmann
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: Paul Burton
    Cc: Daniel Lezcano
    Cc: Mark Salyzyn
    Cc: Peter Collingbourne
    Cc: Shuah Khan
    Cc: Dmitry Safonov
    Cc: Rasmus Villemoes
    Cc: Huw Davies
    Link: https://lkml.kernel.org/r/20190621095252.32307-3-vincenzo.frascino@arm.com

    Vincenzo Frascino
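
    To illustrate the overall shape of such a generic implementation, here is
    a simplified, self-contained sketch of the high-resolution read path; the
    data layout, helper names and memory barriers are simplified and do not
    match the kernel sources exactly.

    #include <stdint.h>
    #include <stdio.h>

    #define NSEC_PER_SEC 1000000000ULL

    struct vdso_data {
            uint32_t seq;           /* even: consistent, odd: update in progress */
            uint64_t cycle_last;    /* counter value at the last update */
            uint64_t mask;          /* clocksource mask */
            uint32_t mult, shift;   /* cycles -> shifted nanoseconds conversion */
            uint64_t sec;           /* seconds at the last update */
            uint64_t nsec;          /* shifted nanoseconds at the last update */
    };

    struct ts64 { int64_t sec; int64_t nsec; };

    static uint64_t read_hw_counter(void)
    {
            return 1000;            /* placeholder for TSC/CNTVCT/... */
    }

    static int do_hres(const struct vdso_data *vd, struct ts64 *ts)
    {
            uint64_t cycles, ns, sec;
            uint32_t seq;

            do {
                    /* Real code uses proper smp barriers around these loads. */
                    while ((seq = __atomic_load_n(&vd->seq, __ATOMIC_ACQUIRE)) & 1)
                            ;       /* updater active, spin */

                    cycles = read_hw_counter();
                    ns  = vd->nsec + ((cycles - vd->cycle_last) & vd->mask) * vd->mult;
                    sec = vd->sec;
            } while (__atomic_load_n(&vd->seq, __ATOMIC_ACQUIRE) != seq);

            ns >>= vd->shift;
            ts->sec  = sec + ns / NSEC_PER_SEC;
            ts->nsec = ns % NSEC_PER_SEC;
            return 0;
    }

    int main(void)
    {
            struct vdso_data vd = { .seq = 2, .cycle_last = 0, .mask = ~0ULL,
                                    .mult = 1, .shift = 0, .sec = 100, .nsec = 0 };
            struct ts64 ts;

            do_hres(&vd, &ts);
            printf("%lld.%09lld\n", (long long)ts.sec, (long long)ts.nsec);
            return 0;
    }

    The seq counter is bumped to an odd value while the kernel updates the
    data page and back to an even value afterwards, so a reader that races
    with an update simply retries; the actual library wraps this in helpers
    along the lines of vdso_read_begin()/vdso_read_retry() with proper
    barriers.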