Closed by nyh 1 month ago
What would it take to get these drivers in place?
FreeBSD has been ported; here are their notes: http://www.daemonology.net/blog/2017-11-17-FreeBSD-EC2-C5-instances.html
-greg
The ENA driver is harder, since you have to test it on AWS. If someone is curious about what Amazon did in these new instance types, and why, Anthony Liguori has a very good explanation (38-minute video) here:
https://www.youtube.com/watch?time_continue=2&v=LabltEXk0VQ
He explains why they have these NVMe and ENA devices with a hardware backend (created by a startup they bought, Annapurna Labs) instead of software in Xen, and that they have already done this incrementally for several years as an additional option, but now they took one final step - dropping the old Xen device support (and Xen itself). They also replaced Xen with KVM, but did not use QEMU, so none of QEMU's virtio code is available. By not supporting the older Xen paravirtual protocols and by using hardware accelerators, more CPU cores (and more CPU time per core) are available to users. There is no real reason why they cannot provide slower virtio emulation, but also no real reason for them to do it...
Dear Friends,
I don't know if it's still relevant, but maybe we could consider the NVMe/ENA drivers from FreeBSD: https://github.com/amzn/amzn-drivers/tree/master/kernel/fbsd/ena https://github.com/freebsd/freebsd/tree/master/sys/dev/nvme
Kind Regards, Geraldo Netto
Almost 7 years after creating this issue, I can gladly report that we can now deploy and run OSv on the KVM-based Nitro instances with both NVMe and ENA drivers working:
2 CPUs detected
Firmware vendor: Amazon EC2
bsd: initializing - done
VFS: mounting ramfs at /
VFS: mounting devfs at /dev
net: initializing - done
vga: Add VGA device instance
[I/22 nvme]: Identified namespace with nsid=1, blockcount=2097152, blocksize=512
nvme: Created I/O queue pair for qid:1 with size:32
nvme: Created I/O queue pair for qid:2 with size:32
[I/22 nvme]: Enabled interrupt coalescing
devfs: created device vblk0.1 for a partition at offset:6291456 with size:127926272
nvme: Add device instances 0 as vblk0, devsize=1073741824, serial number:vol0fa7f8d44e69f3a4fAmazon Elastic Block Store 1.0 ??
eth0: ethernet address: 16:ff:ed:ba:ae:5f
random: intel drng, rdrand registered as a source.
random: <Software, Yarrow> initialized
VFS: unmounting /dev
zfs: driver has been initialized!
VFS: mounting zfs at /zfs
zfs: mounting osv/zfs from device /dev/vblk0.1
random: device unblocked.
VFS: mounting devfs at /dev
VFS: mounting procfs at /proc
VFS: mounting sysfs at /sys
BSD shrinker: event handler list found: 0x6000011e6a00
BSD shrinker found: 1
BSD shrinker: unlocked, running
[I/22 dhcp]: Broadcasting DHCPDISCOVER message with xid: [1891216235]
[I/22 dhcp]: Waiting for IP...
[I/206 dhcp]: DHCP received hostname: ip-172-31-85-219
[I/206 dhcp]: Received DHCPOFFER message from DHCP server: 172.31.80.1 regarding offerred IP address: 172.31.85.219
[I/206 dhcp]: Broadcasting DHCPREQUEST message with xid: [1891216235] to SELECT offered IP: 172.31.85.219
[I/206 dhcp]: DHCP received hostname: ip-172-31-85-219
[I/206 dhcp]: Received DHCPACK message from DHCP server: 172.31.80.1 regarding offerred IP address: 172.31.85.219
[I/206 dhcp]: Server acknowledged IP 172.31.85.219 for interface eth0 with time to lease in seconds: 3600
[I/206 dhcp]: Configuring eth0: ip 172.31.85.219 subnet mask 255.255.240.0 gateway 172.31.80.1 MTU 9001
[I/206 dhcp]: Set hostname to: ip-172-31-85-219
Running from /init/30-auto-00: /libhttpserver-api.so --access-allow=true &!
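As a sanity check, the geometry the nvme driver reports in the log above is self-consistent; here is a quick arithmetic check using only the numbers printed in the log (not any OSv API):

```python
# Values reported by the OSv nvme driver in the boot log above.
block_count = 2097152      # blockcount for namespace nsid=1
block_size = 512           # blocksize in bytes
dev_size = 1073741824      # devsize reported for vblk0 (1 GiB)

# Namespace capacity should equal blockcount * blocksize.
assert block_count * block_size == dev_size

# The ZFS partition (vblk0.1) must fit inside the device.
part_offset = 6291456      # partition offset in bytes (6 MiB)
part_size = 127926272      # partition size in bytes
assert part_offset + part_size <= dev_size

print(dev_size // (1024 ** 2), "MiB device;",
      "partition ends at", (part_offset + part_size) // (1024 ** 2), "MiB")
```

So the 1 GiB EBS volume holds a 128 MiB image (boot area plus the ZFS partition), and the DHCP exchange that follows in the log is the standard DISCOVER/OFFER/REQUEST/ACK sequence against the VPC's DHCP server at 172.31.80.1.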
Amazon recently switched their new instances to using KVM instead of Xen - see for example https://www.theregister.co.uk/2017/11/07/aws_writes_new_kvm_based_hypervisor_to_make_its_cloud_go_faster/
We want OSv to be able to run on these new instances. @avikivity says that these instances will not support virtio-net or virtio-blk, and OSv will need NVMe and ENA drivers to support the disk and network, respectively, on these VMs :-(
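For context on what "needing NVMe and ENA drivers" means in practice: both devices are discovered over PCI, NVMe by its standard class code and ENA by Amazon's vendor/device IDs. The sketch below is illustrative only, not OSv's actual driver interface; the class code is from the PCI/NVMe specs and the ENA IDs are from Amazon's public amzn-drivers sources:

```python
# Hypothetical PCI probe table showing how NVMe and ENA devices are
# identified on a Nitro instance; not OSv's real driver registration API.

# NVMe controllers match PCI class code 0x010802
# (mass storage / non-volatile memory / NVMe I/O interface).
NVME_CLASS_CODE = 0x010802

# Amazon ENA adapters match by vendor/device ID
# (IDs published in the amzn-drivers repository).
ENA_VENDOR_ID = 0x1D0F
ENA_DEVICE_IDS = {0x0EC2, 0x1EC2, 0xEC20, 0xEC21}

def match_driver(vendor_id, device_id, class_code):
    """Return the driver name that would claim this PCI function, if any."""
    if class_code == NVME_CLASS_CODE:
        return "nvme"
    if vendor_id == ENA_VENDOR_ID and device_id in ENA_DEVICE_IDS:
        return "ena"
    return None

print(match_driver(0x1D0F, 0xEC20, 0x020000))  # ena  (network class device)
print(match_driver(0x8086, 0x0953, 0x010802))  # nvme (matched by class code)
```

Everything past identification is the hard part, of course: NVMe needs admin/I/O queue pairs (as the boot log above shows), and ENA has its own admin queue and descriptor-ring protocol.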