 Documentation/virt/hyperv/vmbus.rst                 | 28
 Documentation/virt/kvm/api.rst                      |  5
 Documentation/virt/uml/user_mode_linux_howto_v2.rst | 47
 3 files changed, 36 insertions(+), 44 deletions(-)
diff --git a/Documentation/virt/hyperv/vmbus.rst b/Documentation/virt/hyperv/vmbus.rst
index 1dcef6a7fda3..654bb4849972 100644
--- a/Documentation/virt/hyperv/vmbus.rst
+++ b/Documentation/virt/hyperv/vmbus.rst
@@ -250,10 +250,18 @@ interrupts are not Linux IRQs, there are no entries in /proc/interrupts
or /proc/irq corresponding to individual VMBus channel interrupts.
An online CPU in a Linux guest may not be taken offline if it has
-VMBus channel interrupts assigned to it. Any such channel
-interrupts must first be manually reassigned to another CPU as
-described above. When no channel interrupts are assigned to the
-CPU, it can be taken offline.
+VMBus channel interrupts assigned to it. Starting in kernel v6.15,
+any such interrupts are automatically reassigned to some other CPU
+at the time of offlining. The "other" CPU is chosen by the
+implementation and is not load balanced or otherwise intelligently
+determined. If the CPU is onlined again, channel interrupts previously
+assigned to it are not moved back. As a result, after multiple CPUs
+have been offlined, and perhaps onlined again, the interrupt-to-CPU
+mapping may be scrambled and non-optimal. In such a case, optimal
+assignments must be re-established manually. For kernels v6.14 and
+earlier, any conflicting channel interrupts must first be manually
+reassigned to another CPU as described above. Then when no channel
+interrupts are assigned to the CPU, it can be taken offline.
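+
+For example, on a v6.14 kernel, the interrupt for the channel with
+relid 15 can be moved to CPU 2, after which CPU 3 can be taken
+offline (a sketch; the device GUID, relid, and CPU numbers are
+illustrative)::
+
+  echo 2 > /sys/bus/vmbus/devices/<device GUID>/channels/15/cpu
+  echo 0 > /sys/devices/system/cpu/cpu3/online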
The VMBus channel interrupt handling code is designed to work
correctly even if an interrupt is received on a CPU other than the
@@ -324,3 +332,15 @@ rescinded, neither Hyper-V nor Linux retains any state about
its previous existence. Such a device might be re-added later,
in which case it is treated as an entirely new device. See
vmbus_onoffer_rescind().
+
+For some devices, such as the KVP device, Hyper-V automatically
+sends a rescind message when the primary channel is closed,
+likely as a result of unbinding the device from its driver.
+The rescind causes Linux to remove the device. But then Hyper-V
+immediately reoffers the device to the guest, causing a new
+instance of the device to be created in Linux. For other
+devices, such as the synthetic SCSI and NIC devices, closing the
+primary channel does *not* result in Hyper-V sending a rescind
+message. The device continues to exist in Linux on the VMBus,
+but with no driver bound to it. The same driver or a new driver
+can subsequently be bound to the existing instance of the device.
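+
+For example, closing the primary channel by unbinding a device from
+its driver can be done through the standard sysfs interface (a
+sketch; substitute the actual device GUID and driver name)::
+
+  echo <device GUID> > /sys/bus/vmbus/drivers/<driver name>/unbind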
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 6fb1870f0999..1bd2d42e6424 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8001,6 +8001,11 @@ apply some other policy-based mitigation. When exiting to userspace, KVM sets
KVM_RUN_X86_BUS_LOCK in vcpu->run->flags, and conditionally sets the exit_reason
to KVM_EXIT_X86_BUS_LOCK.
+Due to differences in the underlying hardware implementation, the vCPU's RIP at
+the time of exit diverges between Intel and AMD. On Intel hosts, RIP points at
+the next instruction, i.e. the exit is trap-like. On AMD hosts, RIP points at
+the offending instruction, i.e. the exit is fault-like.
+
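+For example, userspace might check the flag after every KVM_RUN (a
+sketch; ``run`` is the mmap()ed kvm_run structure for the vCPU and
+handle_bus_lock() is a hypothetical policy handler)::
+
+	ioctl(vcpu_fd, KVM_RUN, 0);
+	if (run->flags & KVM_RUN_X86_BUS_LOCK)
+		handle_bus_lock(run);	/* e.g. throttle or log the vCPU */
+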
Note! Detected bus locks may be coincident with other exits to userspace, i.e.
KVM_RUN_X86_BUS_LOCK should be checked regardless of the primary exit reason if
userspace wants to take action on all detected bus locks.
diff --git a/Documentation/virt/uml/user_mode_linux_howto_v2.rst b/Documentation/virt/uml/user_mode_linux_howto_v2.rst
index 584000b743f3..c37e8e594d12 100644
--- a/Documentation/virt/uml/user_mode_linux_howto_v2.rst
+++ b/Documentation/virt/uml/user_mode_linux_howto_v2.rst
@@ -147,18 +147,12 @@ The image hostname will be set to the same as the host on which you
are creating its image. It is a good idea to change that to avoid
"Oh, bummer, I rebooted the wrong machine".
-UML supports two classes of network devices - the older uml_net ones
-which are scheduled for obsoletion. These are called ethX. It also
-supports the newer vector IO devices which are significantly faster
-and have support for some standard virtual network encapsulations like
-Ethernet over GRE and Ethernet over L2TPv3. These are called vec0.
+UML supports high-performance vector I/O network devices, which
+support standard virtual network encapsulations such as Ethernet
+over GRE and Ethernet over L2TPv3. These are called vecX.
-Depending on which one is in use, ``/etc/network/interfaces`` will
-need entries like::
-
- # legacy UML network devices
- auto eth0
- iface eth0 inet dhcp
+When vector network devices are in use, ``/etc/network/interfaces``
+will need entries like::
# vector UML network devices
auto vec0
@@ -219,16 +213,6 @@ remote UML and other VM instances.
+-----------+--------+------------------------------------+------------+
| vde | vector | dep. on VDE VPN: Virt.Net Locator | varies |
+-----------+--------+------------------------------------+------------+
-| tuntap | legacy | none | ~ 500Mbit |
-+-----------+--------+------------------------------------+------------+
-| daemon | legacy | none | ~ 450Mbit |
-+-----------+--------+------------------------------------+------------+
-| socket | legacy | none | ~ 450Mbit |
-+-----------+--------+------------------------------------+------------+
-| ethertap | legacy | obsolete | ~ 500Mbit |
-+-----------+--------+------------------------------------+------------+
-| vde | legacy | obsolete | ~ 500Mbit |
-+-----------+--------+------------------------------------+------------+
* All transports which have tso and checksum offloads can deliver speeds
approaching 10G on TCP streams.
@@ -236,27 +220,16 @@ remote UML and other VM instances.
* All transports which have multi-packet rx and/or tx can deliver
packet rates of 1Mpps or more.
-* All legacy transports are generally limited to ~600-700MBit and 0.05Mps.
-
* GRE and L2TPv3 allow connections to all of: local machine, remote
machines, remote network devices and remote UML instances.
-* Socket allows connections only between UML instances.
-
-* Daemon and bess require running a local switch. This switch may be
- connected to the host as well.
-
Network configuration privileges
================================
The majority of the supported networking modes need ``root`` privileges.
-For example, in the legacy tuntap networking mode, users were required
-to be part of the group associated with the tunnel device.
-
-For newer network drivers like the vector transports, ``root`` privilege
-is required to fire an ioctl to setup the tun interface and/or use
-raw sockets where needed.
+For example, for vector transports, ``root`` privilege is required to
+issue the ioctl that sets up the tun interface and/or to use raw
+sockets where needed.
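+
+For example, the following invocation attaches a vector device to a
+tap interface and therefore needs the privileges described above (a
+sketch; the image names and parameters are illustrative)::
+
+  linux mem=1G ubd0=root.img vec0:transport=tap,ifname=tap0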
This can be achieved by granting the user a particular capability instead
of running UML as root. In the case of the vector transport, a user can add the
@@ -610,12 +583,6 @@ connect to a local area cloud (all the UML nodes using the same
multicast address running on hosts in the same multicast domain (LAN)
will be automagically connected together into a virtual LAN).
-Configuring Legacy transports
-=============================
-
-Legacy transports are now considered obsolete. Please use the vector
-versions.
-
***********
Running UML
***********