Closed: haosanzi closed this issue 2 years ago
I haven't looked closely at the vhost-vsock driver. It needs to use the DMA APIs to make memory available to be shared with the hypervisor. If it is merely pointing at allocated memory and sharing that with the hypervisor, then the hypervisor will end up reading encrypted data and things won't function properly. Many of the other virtio drivers support usage of the DMA APIs, I'm just not sure about the vhost-vsock driver.
@tlendacky Thank you for your reply. I still have three questions; could you help me?
1. Which virtio drivers does an SEV VM definitely support?
2. About an SEV VM using the vhost-vsock driver, I have found some information in this link. However, when I add the `-device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3,disable-legacy=on,iommu_platform=on` args to start the AMD SEV VM, the connection between host and guest still fails. Is there any relevant information you can provide about this problem?
3. As far as I know, Intel TDX supports guest/host communication via vsock. Does AMD have plans to support communication between an SEV guest and the host via vsock? If not, are there other solutions?
Hi,
(I'm the creator of the patch you linked)
vsock had problems at that time (kernel 5.10) because there was no feature support for VIRTIO_F_ACCESS_PLATFORM. That's why I modified the guest kernel and created that patch. If I understand it correctly, this should be fixed with this patch and the options `-device amd-iommu,intremap=on,device-iotlb=on -device vhost-vsock-pci,disable-legacy=on,guest-cid=1,iommu_platform=on,ats=on`, but I don't have access to an EPYC server anymore to test this.
If that's not the problem, it could be nearly anything, because your information is a bit vague. Then I would ask you to boot the guest with `console=ttyS0 earlyprintk=serial` on the kernel command line so you can capture the boot output.
For reference: At that time I used something like this (with additional options for kernel path, initrd and such things):
```
qemu-system-x86_64 -enable-kvm -cpu host \
    -machine q35,memory-encryption=sev0 \
    -no-reboot -nographic -nodefaults -serial stdio \
    -global virtio-mmio.force-legacy=off \
    -device vhost-vsock-pci,disable-legacy=on,guest-cid=1 \
    -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1,policy=0x3
```
Hope that helps.
Thank you for your reply. Following your advice, I have successfully transmitted messages between guest and host using vsock!
The following is my environment:
-device vhost-vsock-pci,disable-legacy=on,guest-cid=3
Thanks a lot!
I want to start a VM with SEV enabled on SEV-capable hardware, and then run the following Python scripts to transmit messages between guest and host using vsock.
The command to start the SEV VM:
The client code in the guest VM:
The server code on the host:
However, the connection between host and guest fails.
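The client and server scripts mentioned above were not preserved in this thread. For illustration, a minimal vsock echo pair in Python might look like the sketch below. This is not the original poster's code: the port number 9999 is an arbitrary assumption, CID 3 matches the `guest-cid=3` used elsewhere in this thread, and CID 2 is the well-known host CID defined by the vsock address family.

```python
# Minimal vsock echo pair (sketch). Run "server" on the host and
# "client" inside the guest. Requires AF_VSOCK support (Linux).
import socket
import sys

PORT = 9999  # arbitrary port; any unused value works on both sides

def run_server():
    # Host side: accept one connection from any guest CID and echo back.
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as srv:
        srv.bind((socket.VMADDR_CID_ANY, PORT))
        srv.listen(1)
        conn, (cid, port) = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)  # echo the message back to the guest

def run_client():
    # Guest side: the host is always reachable at the well-known CID 2.
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as cli:
        cli.connect((socket.VMADDR_CID_HOST, PORT))
        cli.sendall(b"hello from SEV guest")
        print(cli.recv(1024).decode())

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "server":
        run_server()
    elif len(sys.argv) > 1 and sys.argv[1] == "client":
        run_client()
    else:
        print("usage: vsock_echo.py server|client")
```

If the server on the host accepts the connection but `recv` never returns data in an SEV guest, that points at the unencrypted-bounce-buffer issue discussed above rather than at the scripts themselves.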
In addition, if I start a normal VM in the same environment, it can establish a connection between guest and host through vsock successfully! (PS: I removed the
-object sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=1 -machine memory-encryption=sev0
args to start the normal VM.) If there is any problem, please let me know. Thank you very much!