02 Mar, 2016

1 commit

  • This leaves vring_new_virtqueue alone for compatibility, but it
    adds two new, improved APIs:

    vring_create_virtqueue: Creates a virtqueue backed by automatically
    allocated coherent memory. (Some day this could be extended to
    support non-coherent memory, too, if there ends up being a platform
    on which it's worthwhile.)

    __vring_new_virtqueue: Creates a virtqueue with a manually-specified
    layout. This should allow mic_virtio to work much more cleanly.

    Signed-off-by: Andy Lutomirski
    Signed-off-by: Michael S. Tsirkin

    Andy Lutomirski
     

13 Jan, 2016

3 commits

  • We need a full barrier after writing out the event index; using
    virt_store_mb there seems better than open-coding it. As usual, we need
    a wrapper to account for strong barriers.

    It's tempting to use this in vhost as well; for that, we'll
    need a variant of smp_store_mb that works on __user pointers.

    Signed-off-by: Michael S. Tsirkin
    Acked-by: Peter Zijlstra (Intel)

    Michael S. Tsirkin
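The pattern described above can be sketched in a userspace model (names are illustrative, not the kernel's; the fence stands in for what virt_store_mb provides): write out the event index, then issue a full barrier, wrapped in one helper instead of open-coded at each call site.

```c
#include <stdint.h>

static uint16_t avail_event;

/* Store a value and follow it with a full memory barrier, modeling the
 * store-then-full-barrier contract of virt_store_mb in one wrapper. */
static void store_mb_u16(uint16_t *p, uint16_t v)
{
        *p = v;                                   /* write out the event index */
        __atomic_thread_fence(__ATOMIC_SEQ_CST);  /* full barrier after the store */
}

static void publish_event_idx(uint16_t idx)
{
        store_mb_u16(&avail_event, idx);
}
```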
     
  • The virtio ring uses smp_wmb on SMP and wmb on !SMP, the reason
    for the latter being that it might be talking to another kernel
    on the same SMP machine.

    This is exactly what virt_xxx barriers do,
    so switch to these instead of homegrown ifdef hacks.

    Cc: Peter Zijlstra
    Cc: Alexander Duyck
    Signed-off-by: Michael S. Tsirkin
    Acked-by: Peter Zijlstra (Intel)

    Michael S. Tsirkin
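A minimal userspace model of the ordering contract (illustrative names; the real kernel helpers are virt_wmb/virt_rmb): the producer's write barrier orders the payload write before the ready flag, and the consumer's read barrier mirrors it, regardless of whether the build is SMP.

```c
static int payload;
static int ready;

/* Stand-ins for virt_wmb/virt_rmb: must order accesses even on a !SMP
 * build, because the other side may be another kernel or a hypervisor. */
static inline void my_virt_wmb(void) { __atomic_thread_fence(__ATOMIC_RELEASE); }
static inline void my_virt_rmb(void) { __atomic_thread_fence(__ATOMIC_ACQUIRE); }

static void producer(void)
{
        payload = 42;
        my_virt_wmb();          /* order payload write before the flag */
        ready = 1;
}

static int consumer(void)
{
        if (!ready)
                return -1;      /* nothing published yet */
        my_virt_rmb();          /* order flag read before payload read */
        return payload;
}
```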
     
  • This reverts commit 9e1a27ea42691429e31f158cce6fc61bc79bb2e9.

    While that commit optimizes !CONFIG_SMP, it mixes
    up DMA and SMP concepts, making the code hard
    to figure out.

    A better way to optimize this is with the new __smp_XXX
    barriers.

    As a first step, go back to full rmb/wmb barriers
    for !SMP.
    We switch to __smp_XXX barriers in the next patch.

    Cc: Peter Zijlstra
    Cc: Alexander Duyck
    Signed-off-by: Michael S. Tsirkin
    Acked-by: Peter Zijlstra (Intel)

    Michael S. Tsirkin
     

13 Apr, 2015

1 commit

  • This change makes it so that, instead of using smp_wmb/rmb, which vary
    depending on the kernel configuration, we can use dma_wmb/rmb, which for
    most architectures should be equal to or slightly stricter than
    smp_wmb/rmb.

    The advantage to this is that these barriers are available to uniprocessor
    builds as well so the performance should improve under such a
    configuration.

    Signed-off-by: Alexander Duyck
    Signed-off-by: Rusty Russell

    Alexander Duyck
     

29 Oct, 2013

1 commit

  • Currently a host kick error is silently ignored and not reflected in
    the virtqueue of a particular virtio device.

    Changing the notify API for guest->host notification seems to be a
    prerequisite for handling such errors in the context where the kick
    is triggered.

    This patch changes the notify API: the notify function now returns a
    bool, and returns false if the host notification failed.

    Signed-off-by: Heinz Graalfs
    Signed-off-by: Rusty Russell

    Heinz Graalfs
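The changed contract can be sketched like this (hypothetical userspace model; the struct and helpers only loosely mirror the kernel's): the transport's notify hook reports failure, and the kick path propagates it to the caller.

```c
#include <stdbool.h>

struct vq {
        bool (*notify)(struct vq *vq);  /* transport hook: false = notify failed */
};

/* Kick the host after publishing buffers; the result is no longer
 * silently dropped but returned to the caller. */
static bool vq_kick(struct vq *vq)
{
        /* ... publish new buffers to the ring first ... */
        return vq->notify(vq);
}

/* Stub transports for illustration. */
static bool ok_notify(struct vq *vq)      { (void)vq; return true; }
static bool failing_notify(struct vq *vq) { (void)vq; return false; }
```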
     

09 Jul, 2013

1 commit


20 Mar, 2013

1 commit


13 Oct, 2012

1 commit


28 Sep, 2012

1 commit

  • Instead of storing the queue index in transport-specific virtio structs,
    this patch moves it to vring_virtqueue and introduces a helper to get
    the value. This lets drivers simplify their management and tracing of
    virtqueues.

    Signed-off-by: Jason Wang
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Rusty Russell

    Jason Wang
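The idea can be sketched as follows (illustrative names, not the exact kernel code): the index lives in the core virtqueue structure, and a getter replaces per-transport bookkeeping.

```c
struct my_virtqueue {
        unsigned int index;     /* queue index, now stored in the core struct */
        /* ... ring state ... */
};

/* The helper each transport and driver can share instead of tracking
 * the index itself. */
static unsigned int my_virtqueue_get_queue_index(const struct my_virtqueue *vq)
{
        return vq->index;
}
```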
     

12 Jan, 2012

1 commit

  • We were cheating with our barriers, using the smp ones rather than the
    real device ones. That was fine, until rpmsg came along, which is
    used to talk to a real device (a non-SMP CPU).

    Unfortunately, just putting back the real barriers (reverting
    d57ed95d) causes a performance regression on virtio-pci. In
    particular, Amos reports netbench's TCP_RR over virtio_net CPU
    utilization increased up to 35% while throughput went down by up to
    14%.

    By comparison, this branch is in the noise.

    Reference: https://lkml.org/lkml/2011/12/11/22

    Signed-off-by: Rusty Russell

    Rusty Russell
     

02 Nov, 2011

1 commit


30 May, 2011

3 commits

  • With the new used_event and avail_event features, both
    host and guest need similar logic to check whether events are
    enabled, so it helps to put the common code in the header.

    Note that Xen has similar logic for notification hold-off
    in include/xen/interface/io/ring.h with req_event and req_prod
    corresponding to event_idx + 1 and new_idx respectively.
    +1 comes from the fact that req_event and req_prod in Xen start at 1,
    while event index in virtio starts at 0.

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: Rusty Russell

    Michael S. Tsirkin
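The common logic is essentially the kernel's vring_need_event() helper: with free-running 16-bit indices, "did the publisher cross event_idx since old" reduces to a single unsigned comparison mod 2^16, which works across index wrap-around.

```c
#include <stdint.h>

/* Shared event-suppression check for both host and guest: returns nonzero
 * if an event should be raised because new_idx has passed event_idx since
 * the last check at old. All arithmetic is mod 2^16. */
static inline int vring_need_event(uint16_t event_idx, uint16_t new_idx,
                                   uint16_t old)
{
        return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
}
```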
     
  • Define a new feature bit for the guest and host to utilize
    an event index (like Xen) instead of a flag bit to enable/disable
    interrupts and kicks.

    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: Rusty Russell

    Michael S. Tsirkin
     
  • It's unclear to me if it's important, but it's obviously causing my
    technical colleagues some headaches and I'd hate such imprecision to
    slow virtio adoption.

    I've emailed this to all non-trivial contributors for approval, too.

    Signed-off-by: Rusty Russell
    Acked-by: Grant Likely
    Acked-by: Ryan Harper
    Acked-by: Anthony Liguori
    Acked-by: Eric Van Hensbergen
    Acked-by: john cooper
    Acked-by: Aneesh Kumar K.V
    Acked-by: Christian Borntraeger
    Acked-by: Fernando Luis Vazquez Cao

    Rusty Russell
     

30 Jul, 2009

1 commit


12 Jun, 2009

2 commits

  • Add a new feature flag for indirect ring entries. These are ring
    entries which point to a table of buffer descriptors.

    The idea here is to increase the ring capacity by allowing a larger
    effective ring size whereby the ring size dictates the number of
    requests that may be outstanding, rather than the size of those
    requests.

    This should be most effective in the case of block I/O where we can
    potentially benefit by concurrently dispatching a large number of
    large requests. Even in the simple case of single segment block
    requests, this results in a threefold increase in ring capacity.

    Signed-off-by: Mark McLoughlin
    Signed-off-by: Rusty Russell

    Mark McLoughlin
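A sketch of the layout (simplified userspace model; the struct fields mirror vring_desc, but the helper and flag names are illustrative): an N-segment request is written into a side table of descriptors, and the main ring holds a single descriptor pointing at that table, so ring capacity is spent per request rather than per segment.

```c
#include <stdint.h>

#define MY_DESC_F_NEXT     1   /* descriptor continues via 'next' */
#define MY_DESC_F_INDIRECT 4   /* the new flag: 'addr' points at a table */

struct my_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
};

/* Lay out an n-segment request as an indirect table, consuming exactly
 * one slot in the main ring regardless of segment count. */
static unsigned int add_indirect(struct my_desc *ring_slot,
                                 struct my_desc *table,
                                 const uint64_t *addrs,
                                 const uint32_t *lens,
                                 unsigned int n)
{
        unsigned int i;

        for (i = 0; i < n; i++) {
                table[i].addr  = addrs[i];
                table[i].len   = lens[i];
                table[i].flags = (i + 1 < n) ? MY_DESC_F_NEXT : 0;
                table[i].next  = (uint16_t)(i + 1);  /* unused on last entry */
        }
        ring_slot->addr  = (uint64_t)(uintptr_t)table;
        ring_slot->len   = n * (uint32_t)sizeof(struct my_desc);
        ring_slot->flags = MY_DESC_F_INDIRECT;
        return 1;       /* main-ring descriptors used */
}
```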
     
  • Add a linked list of all virtqueues for a virtio device: this helps for
    debugging and is also needed for an upcoming interface change.

    Also, add a "name" field for clearer debug messages.

    Signed-off-by: Rusty Russell

    Rusty Russell
     

30 Dec, 2008

2 commits


25 Jul, 2008

1 commit


04 Feb, 2008

3 commits

  • The other side (host) can set the NO_NOTIFY flag as an optimization,
    to say "no need to kick me when you add things". Make it clear that
    this is advisory only; especially that we should always notify when
    the ring is full.

    Signed-off-by: Rusty Russell

    Rusty Russell
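The advisory rule might be sketched as follows (hypothetical helper; the flag name is illustrative): honor the host's hint in the common case, but always notify when the ring is full.

```c
#include <stdbool.h>
#include <stdint.h>

#define MY_RING_F_NO_NOTIFY 1  /* host's "no need to kick me" hint */

/* Decide whether to kick the host: the NO_NOTIFY flag is advisory only,
 * so a full ring overrides it. */
static bool should_notify(uint16_t used_flags, bool ring_full)
{
        if (ring_full)
                return true;                        /* always kick on a full ring */
        return !(used_flags & MY_RING_F_NO_NOTIFY); /* otherwise honor the hint */
}
```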
     
  • Using unsigned int resulted in silent truncation of the upper 32 bits
    on x86_64, and hence an OOPS, since the ring was being initialized
    wrongly.

    Please reconsider my previous patch to just use PAGE_ALIGN(). Open
    coding this sort of stuff, no matter how simple it seems, is just
    asking for this sort of trouble.

    Signed-off-by: Anthony Liguori
    Signed-off-by: Rusty Russell

    Anthony Liguori
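The bug class can be reproduced in a userspace sketch (MY_PAGE_ALIGN is a stand-in for the kernel's PAGE_ALIGN with 4 KiB pages): aligning through an unsigned int silently drops the upper 32 bits of a 64-bit size, while the unsigned long macro is safe.

```c
#include <stdint.h>

#define MY_PAGE_SIZE 4096UL
#define MY_PAGE_ALIGN(x) (((x) + MY_PAGE_SIZE - 1) & ~(MY_PAGE_SIZE - 1))

/* Open-coded alignment in unsigned int: truncates any size >= 4 GiB. */
static unsigned int align_broken(uint64_t size)
{
        return ((unsigned int)size + 4095) & ~4095u;
}

/* Alignment done in the full 64-bit width, as PAGE_ALIGN would do. */
static uint64_t align_ok(uint64_t size)
{
        return MY_PAGE_ALIGN(size);
}
```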
     
  • It seems that virtio_net wants to disable callbacks (interrupts) before
    calling netif_rx_schedule(), so we can't use the return value to do so.

    Rename "restart" to "cb_enable" and introduce "cb_disable" hook: callback
    now returns void, rather than a boolean.

    Signed-off-by: Rusty Russell

    Rusty Russell
     

12 Nov, 2007

2 commits

  • The virtio descriptor rings of size N-1 were nicely set up to be
    aligned to an N-byte boundary. But as Anthony Liguori points out, the
    free-running indices used by virtio require that the sizes be a power
    of 2, otherwise we get problems on wrap (demonstrated with lguest).

    So we replace the clever "2^n-1" scheme with a simple "align to page
    boundary" scheme: this means that all virtio rings take at least two
    pages, but it's safer than guessing cache alignment.

    Signed-off-by: Rusty Russell

    Rusty Russell
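Why the power-of-2 requirement matters can be shown directly (illustrative sketch): with a free-running 16-bit index, the slot is idx % size, and that mapping only stays consecutive across the 2^16 wrap when 2^16 is a multiple of the ring size.

```c
#include <stdint.h>

/* Map a free-running 16-bit index to a ring slot. */
static unsigned int slot(uint16_t free_running_idx, unsigned int size)
{
        return free_running_idx % size;
}
```

With size 256 the slots stay consecutive across the wrap (65535 -> slot 255, then 0 -> slot 0), but with size 255 both index 65535 and index 0 land on slot 0, exactly the wrap problem described above.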
     
  • This patch fixes a typo in vring_init(). This happens to work today in
    lguest because sizeof(struct vring_desc) is 16 and struct vring contains
    3 pointers and an unsigned int, so on 32-bit
    sizeof(struct vring_desc) == sizeof(struct vring). However, this is no
    longer true on 64-bit, where the bug is exposed.

    Signed-off-by: Anthony Liguori
    Signed-off-by: Rusty Russell

    Anthony Liguori
     

23 Oct, 2007

1 commit

  • These helper routines supply most of the virtqueue_ops for hypervisors
    which want to use a ring for virtio. Unlike the previous lguest
    implementation:

    1) The rings are variable sized (2^n-1 elements).
    2) They have an unfortunate limit of 65535 bytes per sg element.
    3) The page numbers are always 64 bit (PAE anyone?)
    4) They no longer place used[] on a separate page, just a separate
    cacheline.
    5) We do a modulo on a variable. We could be tricky if we cared.
    6) Interrupts and notifies are suppressed using flags within the rings.

    Users need only get the ring pages and provide a notify hook (KVM
    wants the guest to allocate the rings, lguest does it sanely).

    Signed-off-by: Rusty Russell
    Cc: Dor Laor

    Rusty Russell