15 May, 2019

1 commit

  • Convert to use vm_map_pages_zero() to map a range of kernel memory to
    a user vma.

    This driver has ignored vm_pgoff. We could later "fix" such drivers to
    behave according to the normal vm_pgoff offsetting simply by removing
    the _zero suffix on the function name; if that causes regressions, it
    gives us an easy way to revert.

    Link: http://lkml.kernel.org/r/acf678e81d554d01a9b590716ac0ccbdcdf71c25.1552921225.git.jrdr.linux@gmail.com
    Signed-off-by: Souptick Joarder
    Reviewed-by: Boris Ostrovsky
    Cc: David Airlie
    Cc: Heiko Stuebner
    Cc: Joerg Roedel
    Cc: Joonsoo Kim
    Cc: Juergen Gross
    Cc: Kees Cook
    Cc: "Kirill A. Shutemov"
    Cc: Kyungmin Park
    Cc: Marek Szyprowski
    Cc: Matthew Wilcox
    Cc: Mauro Carvalho Chehab
    Cc: Michal Hocko
    Cc: Mike Rapoport
    Cc: Oleksandr Andrushchenko
    Cc: Pawel Osciak
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: Robin Murphy
    Cc: Russell King
    Cc: Sandy Huang
    Cc: Stefan Richter
    Cc: Stephen Rothwell
    Cc: Thierry Reding
    Cc: Vlastimil Babka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
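
    A minimal sketch of what such a conversion looks like in a driver's
    mmap handler (driver and buffer names here are hypothetical, for
    illustration only; the actual patched code is not quoted):

```c
/* Hypothetical driver mmap handler -- a sketch, not the patched code. */
static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydrv_buf *buf = file->private_data;

	/*
	 * vm_map_pages_zero() maps buf->pages into the VMA starting
	 * from the first page and ignores vma->vm_pgoff -- the
	 * historical behaviour of this driver.  Dropping the _zero
	 * suffix (i.e. calling vm_map_pages()) would instead honour
	 * vm_pgoff as an offset into the page array.
	 */
	return vm_map_pages_zero(vma, buf->pages, buf->n_pages);
}
```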


05 Apr, 2019

1 commit

  • struct privcmd_buf_vma_private has a zero-sized array at the end
    (pages); use the new struct_size() helper to determine the proper
    allocation size and avoid potential type mistakes.

    Signed-off-by: Andrea Righi
    Reviewed-by: Juergen Gross
    Signed-off-by: Juergen Gross


09 Nov, 2018

1 commit

  • Currently the size of hypercall buffers allocated via
    /dev/xen/hypercall is limited to a default of 64 memory pages. For live
    migration of guests this might be too small, as the page dirty bitmask
    needs to be sized according to the size of the guest. This means that
    migrating an 8GB guest already exhausts the default buffer size for the
    dirty bitmap.

    There is no sensible way to set a sane limit, so just remove it
    completely. The device node's usage is limited to root anyway, so
    allowing unlimited buffers adds no additional DoS scenario.

    While at it, make the error path for the -ENOMEM case a little
    cleaner by setting n_pages to the number of successfully allocated
    pages instead of the target size.

    Fixes: c51b3c639e01f2 ("xen: add new hypercall buffer mapping device")
    Cc: #4.18
    Signed-off-by: Juergen Gross
    Reviewed-by: Boris Ostrovsky
    Signed-off-by: Juergen Gross


22 Jun, 2018

1 commit

  • For passing arbitrary data from user space to the Xen hypervisor, the
    Xen tools today use mlock()ed buffers. Unfortunately the kernel might
    change the access rights of such buffers for brief periods of time,
    e.g. for page migration or compaction, leading to access faults in the
    hypervisor, as the hypervisor can't use the kernel's locks.

    In order to solve this problem, add a new device node to the Xen
    privcmd driver for easily allocating hypercall buffers via mmap(). The
    memory is allocated in the kernel and just mapped into user space.
    Marked as VM_IO, the user mapping will not be subject to page migration
    and the like.

    Signed-off-by: Juergen Gross
    Reviewed-by: Boris Ostrovsky
    Signed-off-by: Juergen Gross
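
    The approach can be sketched as a kernel mmap handler (a simplified
    illustration with hypothetical names, not the actual privcmd-buf
    code): the kernel-allocated pages are inserted into the caller's VMA
    one by one, and the VMA is flagged VM_IO so page migration and
    compaction leave the mapping alone.

```c
/* Hypothetical handler, for illustration only. */
static int hcall_buf_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct hcall_buf *buf = file->private_data;
	unsigned long addr = vma->vm_start;
	unsigned int i;
	int rc;

	/* VM_IO keeps page migration/compaction away from this mapping. */
	vma->vm_flags |= VM_IO;

	/* Map the kernel-allocated pages into the user VMA. */
	for (i = 0; i < buf->n_pages; i++, addr += PAGE_SIZE) {
		rc = vm_insert_page(vma, addr, buf->pages[i]);
		if (rc)
			return rc;
	}
	return 0;
}
```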
