a89h4ya opened 4 days ago
@a89h4ya does "Red Hat Virtualization System" mean Red Hat OpenShift or Podman? Could you provide more details about the virtualization system you're using? Thx!
Hi @zcobol. It's literally "Red Hat Virtualization" (https://access.redhat.com/products/red-hat-virtualization):
"Red Hat Virtualization is an enterprise virtualization platform that supports key virtualization workloads including resource-intensive and critical applications, built on Red Hat Enterprise Linux and KVM and fully supported by Red Hat."
So that's kernel-based virtualization. Amazon provides KVM images at https://cdn.amazonlinux.com/al2023/os-images/2023.5.20240624.0/kvm/ for on-premise use. You'll need a seed.iso to boot those images. See details at https://docs.aws.amazon.com/linux/al2023/ug/outside-ec2.html
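For reference, booting one of those KVM images locally with a seed.iso looks roughly like this. This is a hedged sketch following the outside-EC2 docs linked above; the image filename, memory size, and flags are illustrative values, not exact, so the snippet only prints the command for review:

```shell
# Illustrative only: print the qemu invocation for booting an AL2023 KVM
# image with a cloud-init seed.iso. Filenames and sizes are example values.
al2023_boot_cmd() {
  cat <<'EOF'
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -drive file=al2023-kvm-image.qcow2,if=virtio \
  -cdrom seed.iso \
  -nographic
EOF
}
al2023_boot_cmd
```

The `-cdrom seed.iso` part is what delivers the cloud-init NoCloud configuration on first boot.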
Hey @zcobol, I am aware of https://cdn.amazonlinux.com/al2023/os-images/2023.5.20240624.0/kvm/. However, this would be a blank image, missing all the customizations and hardening that we made to our template/golden-image AMI. We don't have any issue booting it; I also already use a seed.iso to configure it with cloud-init, which works perfectly fine when using it with QEMU. It's just that "Red Hat Virtualization" does not accept any console input once it's completely booted.
I can see the hostname, I can see the prompt for user login, I can see the cursor blink. It's not frozen. It just does not accept any input.
For the delta between what's on an AMI (which you would export), and what's on the on-prem images at that URL, check out https://docs.aws.amazon.com/linux/al2023/ug/al2023-ami-kvm-image.html
Notable packages:
- `amazon-linux-repo-cdn` (this directs the repositories to the CDN rather than a per-region S3 bucket)
- `cloud-init-cfg-onprem` (we have some differences in how `cloud-init` is configured for outside-of-EC2 usage)
- `dracut-config-generic` rather than `dracut-config-ec2`, as the EC2 variant strips out a lot that isn't needed (it makes the AMI boot quicker)
- `kernel-livepatch-repo-cdn` rather than `kernel-livepatch-repo-s3` (see other repo package)
- `kernel-modules-extra` and `kernel-modules-extra-common` - these will be where a bunch of needed kernel modules will be. We package these separately for a couple of reasons: 1) reducing disk space in the AMIs not available for customer workloads, 2) a hardening tactic.

So I'd look at those packages and the docs there.
Also have a look at https://docs.aws.amazon.com/linux/al2023/ug/kvm-supported-configurations.html and check your VM configuration against our list of supported hardware.
IIRC the VM import/export may try to make some modifications, but may not get all of them. There's a TODO item somewhere on our joint list to try and standardize a package/script/command for Linux distributions to ensure a smooth conversion between on-prem and cloud environments, if anything is required.
Does this help?
Hey @stewartsmith. Thanks for your answer. I think I will try the KVM-optimized image. If we don't have any issues there, then I will try to find out which package could make the difference.
RHV is using oVirt. There are several options for console setup, and it defaults to SPICE. More info at https://www.ovirt.org/documentation/virtual_machine_management_guide/index.html#sect-Configuring_Console_Options
@a89h4ya which option are you using?
Describe the bug I am working in an enterprise environment and we just recently moved all our EC2 instances to AL2023. However, we are running a CI/CD system with agents/runners, and some of them are on-premise to reach certain network zones. To have a homogeneous environment, we wanted to run them with AL2023 as well. We followed the documentation at https://docs.aws.amazon.com/vm-import/latest/userguide/vmexport_image.html to export them.
We had some issues properly booting the disk image which were solved by adding virtio modules into the initial ramdisk:
dracut -f initramfs-6.1.82-99.168.amzn2023.x86_64.img 6.1.82-99.168.amzn2023.x86_64 --add-drivers "virtio virtio_pci virtio_blk virtio_net"
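A quick sanity check (hedged sketch; `lsinitrd` ships with dracut, and the image name is the one from this thread) to confirm the rebuilt initramfs actually contains the virtio drivers:

```shell
# List virtio driver modules inside an initramfs image.
# Falls back to a notice when lsinitrd or the image is unavailable.
check_initramfs_virtio() {
  img="$1"
  if command -v lsinitrd >/dev/null 2>&1 && [ -f "$img" ]; then
    lsinitrd "$img" | grep -E 'virtio[a-z_]*\.ko' \
      || echo "no virtio drivers found in $img"
  else
    echo "lsinitrd or $img not available on this host"
  fi
}
check_initramfs_virtio initramfs-6.1.82-99.168.amzn2023.x86_64.img
```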
This worked well with QEMU. However, the destination is a Red Hat Virtualization system. There, the VM also boots, but no console input is possible. This could also be an issue with Red Hat Virtualization, so we opened a case with Red Hat. However, during boot we are able to navigate the EFI menu with the keyboard.
My current assumption is that the kernel of AL2023 might be missing some modules needed to properly identify the keyboard. In contrast, the EFI environment contains the proper drivers and thus allows keyboard usage.
I investigated a bit into virtio and found that, besides "virtio virtio_pci virtio_blk virtio_net", there are also "virtio_input" and "virtio_console". But I can't add them via dracut; they seem to be missing. (And I would need to add them to the kernel instead of the initrd anyway, right?)
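One way to narrow this down (a hedged sketch; run it on the AL2023 guest) is to classify each driver as built into the kernel, shipped as a loadable module, or absent. If a driver is built in, neither `modprobe` nor a dracut rebuild is needed; if it's absent, it may live in the `kernel-modules-extra` package mentioned elsewhere in this thread:

```shell
# Classify a kernel module as built-in, loadable, or absent for a given
# kernel version (defaults to the running kernel).
classify_module() {
  m="$1"; kver="${2:-$(uname -r)}"
  builtin_list="/lib/modules/$kver/modules.builtin"
  if [ -f "$builtin_list" ] && grep -q "/$m\.ko" "$builtin_list"; then
    echo "$m: built into kernel $kver"
  elif modinfo -k "$kver" "$m" >/dev/null 2>&1; then
    echo "$m: loadable module for $kver"
  else
    echo "$m: not present for $kver (maybe in kernel-modules-extra?)"
  fi
}
classify_module virtio_input
classify_module virtio_console
```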
This topic is leaving my field of expertise, so I would appreciate any confirmation of my thoughts, different ideas, or troubleshooting suggestions.
To Reproduce Steps to reproduce the behavior:
Expected behavior Keyboard can be used to log in to the console after boot