04 Sep, 2019
1 commit
-
This reverts commit 7f466032dc ("vhost: access vq metadata through
kernel virtual address"). The commit caused a bunch of issues, and
while commit 73f628ec9e ("vhost: disable metadata prefetch
optimization") disabled the optimization it's not nice to keep lots of
dead code around.Signed-off-by: Michael S. Tsirkin
26 Jul, 2019
1 commit
-
This seems to cause guest and host memory corruption.
Disable for now until we get a better handle on that.
Signed-off-by: Michael S. Tsirkin
07 Jun, 2019
1 commit
-
Clang warns:
drivers/vhost/vhost.c:2085:5: warning: macro expansion producing
'defined' has undefined behavior [-Wexpansion-to-defined]
#if VHOST_ARCH_CAN_ACCEL_UACCESS
^
drivers/vhost/vhost.h:98:38: note: expanded from macro
'VHOST_ARCH_CAN_ACCEL_UACCESS'
#define VHOST_ARCH_CAN_ACCEL_UACCESS defined(CONFIG_MMU_NOTIFIER) && \
                                     ^
It's being pedantic for the sake of portability, but the fix is easy
enough.
Rework the definition of VHOST_ARCH_CAN_ACCEL_UACCESS to expand to a constant.
Fixes: 7f466032dc9e ("vhost: access vq metadata through kernel virtual address")
Link: https://github.com/ClangBuiltLinux/linux/issues/508
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Nathan Chancellor
Tested-by: Nathan Chancellor
06 Jun, 2019
2 commits
-
It was noticed that the copy_to/from_user() friends that were used to
access virtqueue metadata tend to be very expensive for dataplane
implementations like vhost, since they involve lots of software checks,
speculation barriers, and hardware feature toggling (e.g. SMAP). The
extra cost is more obvious when transferring small packets, since
the time spent on metadata access becomes more significant.
This patch tries to eliminate those overheads by accessing them
through a direct mapping of those pages. Invalidation callbacks are
implemented for cooperation with general VM management (swap, KSM,
THP or NUMA balancing). We try to get the direct mapping of the vq
metadata before each round of packet processing if it doesn't
exist. If that fails, we simply fall back to the copy_to/from_user()
friends.
The invalidation and the direct-map accesses are synchronized through
a spinlock and RCU. All metadata accesses through the direct map are
protected by RCU, and setup and invalidation are done under the
spinlock.
This method does not work for highmem pages, which require a
temporary mapping, so we just fall back to the normal
copy_to/from_user() there. It also may not work for architectures with
virtually tagged caches, since extra cache flushing would be needed to
eliminate aliases, which would result in complex logic and bad
performance. For those architectures, this patch simply goes with the
copy_to/from_user() friends; this is done by ruling out the kernel
mapping code through ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE.
Note that this is only done when the device IOTLB is not enabled. We
could use a similar method to optimize the IOTLB in the future.
Tests show at most about a 23% improvement in TX PPS when using
virtio-user + vhost_net + xdp1 + TAP on a 2.6GHz Broadwell:

         SMAP on | SMAP off
Before:  5.2Mpps | 7.1Mpps
After:   6.4Mpps | 8.2Mpps

Cc: Andrea Arcangeli
Cc: James Bottomley
Cc: Christoph Hellwig
Cc: David Miller
Cc: Jerome Glisse
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-parisc@vger.kernel.org
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
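As a rough kernel-style sketch of the access pattern described above (this is
not the actual patch; the vq->meta pointer, meta_lock and helper names are
hypothetical):

struct vq_meta {
        void *avail;    /* kernel VA of the mapped avail ring */
};

static int read_avail_idx(struct vhost_virtqueue *vq, u16 *idx)
{
        struct vq_meta *m;

        rcu_read_lock();
        m = rcu_dereference(vq->meta);          /* hypothetical field */
        if (m) {
                /* fast path: direct kernel access, no uaccess checks */
                *idx = *(u16 *)(m->avail + 2);  /* idx follows the u16 flags */
                rcu_read_unlock();
                return 0;
        }
        rcu_read_unlock();
        /* slow path: fall back to the checked userspace copy */
        return copy_from_user(idx, &vq->avail->idx, sizeof(*idx)) ?
               -EFAULT : 0;
}

static void invalidate_meta(struct vhost_virtqueue *vq)
{
        struct vq_meta *old;

        spin_lock(&vq->meta_lock);              /* hypothetical lock */
        old = rcu_dereference_protected(vq->meta, 1);
        rcu_assign_pointer(vq->meta, NULL);
        spin_unlock(&vq->meta_lock);
        synchronize_rcu();                      /* wait out direct-map readers */
        kfree(old);
}

-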
Rename the function to be more accurate, since it actually tries to
prefetch vq metadata addresses in the IOTLB. It will also be used by a
following patch to prefetch metadata virtual addresses.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
27 May, 2019
1 commit
-
We used to have vhost_exceeds_weight() for vhost-net to:
- prevent vhost kthread from hogging the cpu
- balance the time spent between TX and RX
This function could be useful for vsock and scsi as well, so move it
to vhost.c. A device must specify a weight which counts the number of
requests, and it can also specify a byte_weight which counts the
number of bytes that have been processed.
Signed-off-by: Jason Wang
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Michael S. Tsirkin
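A minimal sketch of the check this describes, in the spirit of
vhost_exceeds_weight(); the names and exact accounting here are assumptions,
not the verbatim kernel code:

static bool exceeds_weight(int pkts, size_t bytes,
                           int weight, size_t byte_weight)
{
        /* end the current round once either budget is spent, so the
         * vhost kthread yields the cpu and TX/RX stay balanced */
        return pkts >= weight || bytes >= byte_weight;
}

A device would pick its weights at init time and call this after each
processed request.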
29 Jan, 2019
1 commit
-
After batched used ring updating was introduced in commit e2b3b35eb989
("vhost_net: batch used ring update in rx"), we tend to batch heads in
vq->heads for more than one packet. But the quota passed to
get_rx_bufs() was not correctly limited, which can result in an OOB write
to vq->heads:

    headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
                            vhost_len, &in, vq_log, &log,
                            likely(mergeable) ? UIO_MAXIOV : 1);

UIO_MAXIOV was still used, which is wrong since we could have batched
used entries in vq->heads; this will cause an OOB write if the next buffer needs more
than 960 (1024 (UIO_MAXIOV) - 64 (VHOST_NET_BATCH)) heads after we've
batched 64 (VHOST_NET_BATCH) heads:
=============================================================================
BUG kmalloc-8k (Tainted: G B ): Redzone overwritten
-----------------------------------------------------------------------------
INFO: 0x00000000fd93b7a2-0x00000000f0713384. First byte 0xa9 instead of 0xcc
INFO: Allocated in alloc_pd+0x22/0x60 age=3933677 cpu=2 pid=2674
kmem_cache_alloc_trace+0xbb/0x140
alloc_pd+0x22/0x60
gen8_ppgtt_create+0x11d/0x5f0
i915_ppgtt_create+0x16/0x80
i915_gem_create_context+0x248/0x390
i915_gem_context_create_ioctl+0x4b/0xe0
drm_ioctl_kernel+0xa5/0xf0
drm_ioctl+0x2ed/0x3a0
do_vfs_ioctl+0x9f/0x620
ksys_ioctl+0x6b/0x80
__x64_sys_ioctl+0x11/0x20
do_syscall_64+0x43/0xf0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
INFO: Slab 0x00000000d13e87af objects=3 used=3 fp=0x (null) flags=0x200000000010201
INFO: Object 0x0000000003278802 @offset=17064 fp=0x00000000e2e6652b

Fix this by allocating UIO_MAXIOV + VHOST_NET_BATCH iovs for
vhost-net. This is done by passing the limit through
vhost_dev_init(), so that set_owner can allocate the number of iovs in a
per-device manner.
This fixes CVE-2018-16880.
Fixes: e2b3b35eb989 ("vhost_net: batch used ring update in rx")
Acked-by: Stefan Hajnoczi
Signed-off-by: Jason Wang
Signed-off-by: David S. Miller
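The arithmetic behind the overflow, as a small standalone C illustration
(the numbers match the report above):

#include <assert.h>

#define UIO_MAXIOV      1024
#define VHOST_NET_BATCH 64

int main(void)
{
        int batched = VHOST_NET_BATCH; /* heads already queued in vq->heads */
        int quota   = UIO_MAXIOV;      /* quota passed to get_rx_bufs() */

        /* old array of UIO_MAXIOV entries: worst case overflows */
        assert(batched + quota > UIO_MAXIOV);
        /* fixed array of UIO_MAXIOV + VHOST_NET_BATCH entries: always fits */
        assert(batched + quota <= UIO_MAXIOV + VHOST_NET_BATCH);
        return 0;
}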
18 Jan, 2019
1 commit
-
The vhost dirty page logging API is designed to sync through GPA, but we
try to log GIOVA when the device IOTLB is enabled. This is wrong and may
lead to missing data after migration.
To solve this issue, when logging with the device IOTLB enabled, we will:
1) reuse the device IOTLB translation result of the GIOVA->HVA mapping
   to get the HVA: for a writable descriptor, get the HVA through the
   iovec; for the used ring update, translate its GIOVA to an HVA
2) traverse the GPA->HVA mapping to get the possible GPAs and log
   through GPA. Note that this reverse mapping is not guaranteed
   to be unique, so we should log each possible GPA in this case.
This fixes the failure of scp to the guest during migration. In -next, we
will probably support passing GIOVA->GPA instead of GIOVA->HVA.
Fixes: 6b1e6cc7855b ("vhost: new device IOTLB API")
Reported-by: Jintack Lim
Cc: Jintack Lim
Signed-off-by: Jason Wang
Acked-by: Michael S. Tsirkin
Signed-off-by: David S. Miller
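A simplified, compilable sketch of step 2 (the types, names and flat region
array here are illustrative; vhost keeps its memory table in an interval
tree):

#include <stdint.h>

struct mem_region {
        uint64_t gpa, hva, size;
};

/* log every GPA that may map to the written HVA range */
static void log_write_hva(const struct mem_region *regions, int n,
                          uint64_t hva, uint64_t len,
                          void (*log_gpa)(uint64_t gpa, uint64_t len))
{
        for (int i = 0; i < n; i++) {
                const struct mem_region *r = &regions[i];
                uint64_t start, end;

                if (hva >= r->hva + r->size || hva + len <= r->hva)
                        continue; /* no overlap with this region */
                /* clamp the write to the region, then log its GPA range;
                 * the HVA->GPA mapping may match several regions, so we
                 * keep scanning instead of stopping at the first hit */
                start = hva > r->hva ? hva : r->hva;
                end = hva + len < r->hva + r->size ? hva + len
                                                   : r->hva + r->size;
                log_gpa(r->gpa + (start - r->hva), end - start);
        }
}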
07 Aug, 2018
1 commit
-
We used to have a message format like:
struct vhost_msg {
int type;
union {
struct vhost_iotlb_msg iotlb;
__u8 padding[64];
};
};

Unfortunately, there will be a 32-bit hole on 64-bit machines because
of the alignment. This leads to different formats between the 32-bit API and
the 64-bit API. What's more, it will break 32-bit programs running on a
64-bit machine.
Fix this by introducing a new message type with an explicit
32-bit reserved field after the type, like:

struct vhost_msg_v2 {
__u32 type;
__u32 reserved;
union {
struct vhost_iotlb_msg iotlb;
__u8 padding[64];
};
};

We will have a consistent ABI after switching to this. To enable
this capability, introduce a new ioctl (VHOST_SET_BACKEND_FEATURES) for
userspace to enable this feature (VHOST_BACKEND_F_IOTLB_MSG_V2).
Fixes: 6b1e6cc7855b ("vhost: new device IOTLB API")
Signed-off-by: Jason Wang
Signed-off-by: David S. Miller
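The layout difference can be checked with a small standalone program (the
iotlb stub stands in for the real struct vhost_iotlb_msg, which also starts
with a __u64):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct iotlb_stub { uint64_t iova; };

struct vhost_msg {
        int type;
        union {
                struct iotlb_stub iotlb;
                uint8_t padding[64];
        };
};

struct vhost_msg_v2 {
        uint32_t type;
        uint32_t reserved;
        union {
                struct iotlb_stub iotlb;
                uint8_t padding[64];
        };
};

int main(void)
{
        /* 8 on LP64, but 4 on i386 where __u64 is only 4-byte aligned */
        printf("v1 union offset: %zu\n", offsetof(struct vhost_msg, iotlb));
        /* 8 on every ABI: the hole is now an explicit reserved field */
        printf("v2 union offset: %zu\n", offsetof(struct vhost_msg_v2, iotlb));
        return 0;
}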
11 Apr, 2018
1 commit
-
Currently vhost *_access_ok() functions return int. This is error-prone
because there are two popular conventions:
1. 0 means failure, 1 means success
2. -errno means failure, 0 means success
Although vhost mostly uses #1, it does not do so consistently;
umem_access_ok() uses #2.
This patch changes the return type from int to bool so that false means
failure and true means success. This eliminates a potential source of
errors.
Suggested-by: Linus Torvalds
Signed-off-by: Stefan Hajnoczi
Acked-by: Michael S. Tsirkin
Signed-off-by: David S. Miller
20 Mar, 2018
1 commit
-
Clang is particularly anal about signed vs unsigned comparisons and
doesn't like the fact that some ioctl numbers set the MSB, so we get
this error when trying to build vhost on aarch64:

drivers/vhost/vhost.c:1400:7: error: overflow converting case value to
switch condition type (3221794578 to 18446744072636378898)
[-Werror, -Wswitch]
        case VHOST_GET_VRING_BASE:

3221794578 is 0xC008AF12 in hex
18446744072636378898 is 0xFFFFFFFFC008AF12 in hex
Fix this by using unsigned ints in the function signature for
vhost_vring_ioctl().
Signed-off-by: Sonny Rao
Reviewed-by: Darren Kenny
Signed-off-by: Michael S. Tsirkin
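A compilable illustration of the problem (the ioctl number is the one from
the error above; the helper is invented for the example):

#include <stdio.h>

#define FAKE_VRING_IOCTL 0xC008AF12u /* MSB set, like VHOST_GET_VRING_BASE */

/* declaring the parameter as plain int instead would sign-extend the
 * case value out of the switch condition's range, which is what clang
 * (rightly) complains about */
static const char *classify(unsigned int ioctl)
{
        switch (ioctl) {
        case FAKE_VRING_IOCTL:
                return "vring ioctl";
        default:
                return "other";
        }
}

int main(void)
{
        printf("%s\n", classify(0xC008AF12u));
        return 0;
}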
09 Feb, 2018
1 commit
-
Pull virtio/vhost updates from Michael Tsirkin:
"virtio, vhost: fixes, cleanups, featuresThis includes the disk/cache memory stats for for the virtio balloon,
as well as multiple fixes and cleanups"* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
vhost: don't hold onto file pointer for VHOST_SET_LOG_FD
vhost: don't hold onto file pointer for VHOST_SET_VRING_ERR
vhost: don't hold onto file pointer for VHOST_SET_VRING_CALL
ringtest: ring.c malloc & memset to calloc
virtio_vop: don't kfree device on register failure
virtio_pci: don't kfree device on register failure
virtio: split device_register into device_initialize and device_add
vhost: remove unused lock check flag in vhost_dev_cleanup()
vhost: Remove the unused variable.
virtio_blk: print capacity at probe time
virtio: make VIRTIO a menuconfig to ease disabling it all
virtio/ringtest: virtio_ring: fix up need_event math
virtio/ringtest: fix up need_event math
virtio: virtio_mmio: make of_device_ids const.
firmware: Use PTR_ERR_OR_ZERO()
virtio-mmio: Use PTR_ERR_OR_ZERO()
vhost/scsi: Improve a size determination in four functions
virtio_balloon: include disk/file caches memory statistics
01 Feb, 2018
5 commits
-
We already hold a reference to the eventfd_ctx, which is sufficient;
there's no need to hold a reference to the struct file as well. So get
rid of vhost_dev->log_file.
Signed-off-by: Eric Biggers
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Jason Wang
-
We already hold a reference to the eventfd_ctx, which is sufficient;
there's no need to hold a reference to the struct file as well. So get
rid of vhost_virtqueue->error.
Signed-off-by: Eric Biggers
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Jason Wang
-
We already hold a reference to the eventfd_ctx, which is sufficient;
there's no need to hold a reference to the struct file as well. So get
rid of vhost_virtqueue->call.
Signed-off-by: Eric Biggers
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Jason Wang
-
In commit ea5d404655ba ("vhost: fix release path lockdep checks"),
Michael added a flag to check whether we should hold a lock in
vhost_dev_cleanup(), however, in commit 47283bef7ed3 ("vhost: move
memory pointer to VQs"), RCU operations have been replaced by
mutex, we can remove the no-longer-used `locked' parameter now.Signed-off-by: Caspar Zhang
Signed-off-by: Michael S. Tsirkin
-
Commit 7235acdb1 ("vhost: simplify work flushing") changed the way
work is flushed, after which the queued seq, done seq, and the
flushing flag are no longer used. Remove them now.
Fixes: 7235acdb1 ("vhost: simplify work flushing")
Cc: Jason Wang
Signed-off-by: Tonghao Zhang
Acked-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
29 Nov, 2017
1 commit
-
Signed-off-by: Al Viro
28 Nov, 2017
1 commit
-
its ->mask is POLL... bitmap
Signed-off-by: Al Viro
02 Nov, 2017
1 commit
-
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side by side results from of the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging was:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5 lines).
Reviewed-by: Philippe Ombredanne
Reviewed-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman
09 Sep, 2017
1 commit
-
Allow interval trees to quickly check for overlaps to avoid unnecessary
tree lookups in interval_tree_iter_first().
As of this patch, all interval tree flavors will require using a
'rb_root_cached' such that we can have the leftmost node easily
available. While most users will make use of this feature, those with
special functions (in addition to the generic insert, delete, search
calls) will avoid using the cached option as they can do funky things
with insertions -- for example, vma_interval_tree_insert_after().
[jglisse@redhat.com: fix deadlock from typo vm_lock_anon_vma()]
Link: http://lkml.kernel.org/r/20170808225719.20723-1-jglisse@redhat.com
Link: http://lkml.kernel.org/r/20170719014603.19029-12-dave@stgolabs.net
Signed-off-by: Davidlohr Bueso
Signed-off-by: Jérôme Glisse
Acked-by: Christian König
Acked-by: Peter Zijlstra (Intel)
Acked-by: Doug Ledford
Acked-by: Michael S. Tsirkin
Cc: David Airlie
Cc: Jason Wang
Cc: Christian Benvenuti
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
30 Jul, 2017
1 commit
-
This reverts commit 809ecb9bca6a9424ccd392d67e368160f8b76c92, since it
was reported to break vhost_net. The idea was to cache the used event and
use it to check for notification. The assumption was that the guest won't
move the event idx back, but this can in fact happen when the 16-bit index
wraps around after 64K entries.
Signed-off-by: Jason Wang
Acked-by: Michael S. Tsirkin
Signed-off-by: David S. Miller
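The wrap-around hazard in a few lines of standalone C (values picked for
illustration):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint16_t cached = 0xFFF0; /* event idx cached before the wrap */
        uint16_t fresh  = 0x0010; /* same guest, 0x20 entries later */

        /* a naive comparison concludes the guest moved the index back */
        printf("fresh < cached: %d\n", fresh < cached);                /* 1 */
        /* modulo-2^16 arithmetic shows it actually advanced by 0x20 */
        printf("advance: %u\n", (unsigned)(uint16_t)(fresh - cached));
        return 0;
}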
20 Jun, 2017
1 commit
-
Rename:
wait_queue_t => wait_queue_entry_t
'wait_queue_t' was always a slight misnomer: its name implies that it's a "queue",
but in reality it's a queue *entry*. The 'real' queue is the wait queue head,
which had to carry the name.
Start sorting this out by renaming it to 'wait_queue_entry_t'.
This also allows the real structure name 'struct __wait_queue' to
lose its double underscore and become 'struct wait_queue_entry',
which is the more canonical nomenclature for such data types.
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar
02 Mar, 2017
1 commit
-
When the device IOTLB is enabled, all address translations are stored in
an interval tree. The O(log n) search time can be slow for virtqueue
metadata (avail, used and descriptors), since it is accessed much
more often than other addresses. So this patch introduces an O(1) array
which points to the interval tree nodes that store the translations of
the vq metadata. The array is updated during vq IOTLB prefetching and
reset on each invalidation and TLB update. Each time we want
to access vq metadata, this small array is queried before the interval
tree. This is sufficient for static mappings but not dynamic
mappings; we could do optimizations on top.
Tests were done with l2fwd in the guest (2M hugepages):

       noiommu    | before        | after
   tx  1.32Mpps   | 1.06Mpps(82%) | 1.30Mpps(98%)
   rx  2.33Mpps   | 1.46Mpps(63%) | 2.29Mpps(98%)

We can almost reach the same performance as noiommu mode.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
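A sketch of the lookup order this describes (enum values, field names and
the search callback are stand-ins for the vhost internals):

enum vq_meta { META_DESC, META_AVAIL, META_USED, META_NUM };

struct iotlb_node; /* interval tree node holding one translation */

struct meta_cache {
        /* filled during vq IOTLB prefetch, reset to NULL on every
         * invalidation and tlb update */
        struct iotlb_node *node[META_NUM];
};

static struct iotlb_node *
meta_lookup(struct meta_cache *c, enum vq_meta what,
            struct iotlb_node *(*tree_search)(void *ctx), void *ctx)
{
        if (c->node[what])       /* O(1) fast path */
                return c->node[what];
        return tree_search(ctx); /* O(log n) interval tree fallback */
}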
16 Dec, 2016
1 commit
-
When the event index is enabled, we need to fetch the used event from
userspace memory each time. This userspace fetch (with its memory
barrier) can sometimes be saved by 1) caching the used event and 2)
checking whether the used event is ahead of both the new and old indices
and the old-to-new update does not cross it, in which case we're sure
there's no need to notify the guest.
This is useful for heavy tx load; e.g. a guest pktgen test with the Linux
driver shows a ~3.5% improvement.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
02 Aug, 2016
3 commits
-
This patch tries to implement a device IOTLB for vhost. This could be
used with a userspace (qemu) implementation of DMA remapping
to emulate an IOMMU for the guest.
The idea is simple: cache the translation in a software device IOTLB
(which is implemented as an interval tree) in vhost and use vhost_net
file descriptor for reporting IOTLB miss and IOTLB
update/invalidation. When vhost meets an IOTLB miss, the fault
address, size and access can be read from the file. After userspace
finishes the translation, it writes the translated address to the
vhost_net file to update the device IOTLB.
When the device IOTLB is enabled by setting VIRTIO_F_IOMMU_PLATFORM, all vq
addresses set by ioctl are treated as iova instead of virtual addresses, and
accesses can only be done through the IOTLB instead of direct userspace
memory access. Before each round of vq processing, all vq metadata is
prefetched into the device IOTLB to make sure no translation fault happens
during vq processing.
In most cases, virtqueues are contiguous even in virtual address space.
The IOTLB translation for virtqueue itself may make it a little
slower. We might add a fast path cache on top of this patch.
Signed-off-by: Jason Wang
[mst: use virtio feature bit: VHOST_F_DEVICE_IOTLB -> VIRTIO_F_IOMMU_PLATFORM ]
[mst: fix build warnings ]
Signed-off-by: Michael S. Tsirkin
[ weiyj.lk: missing unlock on error ]
Signed-off-by: Wei Yongjun
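The userspace side of the protocol, as a minimal sketch; it assumes the
struct vhost_msg layout and VHOST_IOTLB_* constants from linux/vhost.h and
elides error handling and invalidation:

#include <linux/vhost.h>
#include <stdint.h>
#include <unistd.h>

static void serve_one_iotlb_miss(int vhost_net_fd,
                                 uint64_t (*translate)(uint64_t iova,
                                                       uint64_t size))
{
        struct vhost_msg msg;

        /* the kernel reports a fault: iova, size and perm are filled in */
        if (read(vhost_net_fd, &msg, sizeof(msg)) != sizeof(msg))
                return;
        if (msg.type != VHOST_IOTLB_MSG || msg.iotlb.type != VHOST_IOTLB_MISS)
                return;

        /* resolve the translation and push it back as an update;
         * the perm reported with the miss is reused for brevity */
        msg.iotlb.uaddr = translate(msg.iotlb.iova, msg.iotlb.size);
        msg.iotlb.type = VHOST_IOTLB_UPDATE;
        write(vhost_net_fd, &msg, sizeof(msg));
}

-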
Current pre-sorted memory region array has some limitations for future
device IOTLB conversion:
1) need extra work for adding and removing a single region, and it's
expected to be slow because of sorting or memory re-allocation.
2) need extra work of removing a large range which may intersect
several regions with different size.
3) need tricks for a replacement policy like LRU
To overcome the above shortcomings, this patch converts it to an interval
tree, which can easily address the above issues with almost no extra
work.
The patch could be used to:
- Extend the current API and only let userspace send diffs of the
  memory table.
- Simplify the device IOTLB implementation.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
-
We currently use a spinlock to synchronize the work list, which may cause
unnecessary contention. So this patch switches to an llist to remove
this contention. Pktgen tests show about a 5% improvement:

Before:
~1300000 pps
After:
~1370000 pps

Signed-off-by: Jason Wang
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin
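A kernel-style sketch of the switch (struct and function names are
illustrative, not the vhost ones):

#include <linux/llist.h>

struct work_item {
        struct llist_node node;
        void (*fn)(struct work_item *w);
};

static LLIST_HEAD(work_list);

static void queue_work_item(struct work_item *w)
{
        llist_add(&w->node, &work_list); /* lock-free push, no spinlock */
}

static void worker_run(void)
{
        /* detach the whole list in one atomic exchange */
        struct llist_node *n = llist_del_all(&work_list);
        struct work_item *w, *t;

        /* note: entries come back in reverse order of insertion */
        llist_for_each_entry_safe(w, t, n, node)
                w->fn(w);
}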
11 Mar, 2016
3 commits
-
This patch tries to poll for newly added tx buffers or the socket receive
queue for a while at the end of tx/rx processing. The maximum time
spent on polling is specified through a new kind of vring ioctl.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
-
This patch introduces a helper which will return true if we're sure
that the available ring is empty for a specific vq. When we're not
sure, e.g. on a vq access failure, it returns false instead. This could be
used by busy polling code to exit the busy loop.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
-
This patch introduces a helper which can give a hint about whether or not
there's work queued in the work list. This could be used by busy
polling code to exit the busy loop.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
02 Mar, 2016
1 commit
-
Looking at how callers use this, maybe we should just rename init_used
to vhost_vq_init_access. The _used suffix was a hint that we
access the vq used ring. But maybe what callers care about is
that it must be called after access_ok.
Also, this function manipulates the vq->is_le field, which isn't related
to the vq used ring.
This patch simply renames vhost_init_used() to vhost_vq_init_access() as
suggested by Michael.
No behaviour change.
Signed-off-by: Greg Kurz
Signed-off-by: Michael S. Tsirkin
28 Oct, 2015
1 commit
-
commit 2751c9882b947292fcfb084c4f604e01724af804 ("vhost: cross-endian
support for legacy devices") introduced a minor regression: even with
cross-endian disabled, and even on an LE host, vhost_is_little_endian is
checking the is_le flag, so there's always a branch.
To fix, simply check virtio_legacy_is_little_endian() first.
Cc: Greg Kurz
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Greg Kurz
Signed-off-by: David S. Miller
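A simplified stand-in for the fixed helper (the real one lives in
drivers/vhost/vhost.h):

static inline bool vq_is_little_endian(const struct vhost_virtqueue *vq)
{
        /* virtio_legacy_is_little_endian() is a compile-time constant,
         * so on an LE host without cross-endian support this whole
         * expression constant-folds to true and the branch disappears */
        return virtio_legacy_is_little_endian() || vq->is_le;
}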
16 Sep, 2015
1 commit
-
virtio 1 and any layout are core features, move them
there. This fixes the vhost test.
Signed-off-by: Michael S. Tsirkin
01 Jun, 2015
3 commits
-
This patch brings cross-endian support to vhost when used to implement
legacy virtio devices. Since it is a relatively rare situation, the
feature availability is controlled by a kernel config option (not set
by default).
The vq->is_le boolean field is added to cache the endianness to be
used for ring accesses. It defaults to native endian, as expected
by legacy virtio devices. When the ring gets active, we force little
endian if the device is modern. When the ring is deactivated, we
revert to the native endian default.
If cross-endian was compiled in, a vq->user_be boolean field is added
so that userspace may request a specific endianness. This field is
used to override the default when activating the ring of a legacy
device. It has no effect on modern devices.
Signed-off-by: Greg Kurz
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Cornelia Huck
Reviewed-by: David Gibson
-
The current memory accessors logic is:
- little endian if little_endian
- native endian (i.e. no byteswap) if !little_endian
If we want to fully support cross-endian vhost, we also need to be
able to convert to big endian.
Instead of changing the little_endian argument to some 3-value enum, this
patch changes the logic to:
- little endian if little_endian
- big endian if !little_endian
The native endian case is handled by all users with a trivial helper. This
patch doesn't change any functionality, nor does it add overhead.
Signed-off-by: Greg Kurz
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Cornelia Huck
Reviewed-by: David Gibson
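A compilable model of the two-way accessor logic (simplified stand-ins for
the kernel's __virtio16_to_cpu() family):

#include <endian.h>
#include <stdbool.h>
#include <stdint.h>

static inline uint16_t virtio16_to_cpu(bool little_endian, uint16_t val)
{
        /* little endian if little_endian, big endian otherwise; callers
         * wanting native endian pass a constant helper's result, which
         * lets the compiler fold the branch away */
        return little_endian ? le16toh(val) : be16toh(val);
}

-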
Signed-off-by: Greg Kurz
Signed-off-by: Michael S. Tsirkin
Acked-by: Cornelia Huck
Reviewed-by: David Gibson
09 Dec, 2014
3 commits
-
Signed-off-by: Jason Wang
Acked-by: Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin
-
Add guest memory access wrappers to handle virtio endianness
conversions.
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Jason Wang
Reviewed-by: Cornelia Huck
-
We need to use bit 32 for virtio 1.0.
Make vhost_has_feature bool to avoid discarding high bits.
Cc: Sergei Shtylyov
Cc: Ben Hutchings
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Jason Wang
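Why bool matters here, as a standalone illustration (the helper names are
invented; VIRTIO_F_VERSION_1 really is bit 32):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_F_VERSION_1 32

static int has_feature_int(uint64_t features, int bit)
{
        return features & (1ULL << bit); /* truncated to 32 bits: bit 32 lost */
}

static bool has_feature_bool(uint64_t features, int bit)
{
        return features & (1ULL << bit); /* bool keeps any set bit */
}

int main(void)
{
        uint64_t features = 1ULL << VIRTIO_F_VERSION_1;

        printf("int:  %d\n", has_feature_int(features, VIRTIO_F_VERSION_1));  /* 0 */
        printf("bool: %d\n", has_feature_bool(features, VIRTIO_F_VERSION_1)); /* 1 */
        return 0;
}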