Closed itforxp closed 1 year ago
Hi, do you have mkisofs? What is the result of `which -a mkisofs`?
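If mkisofs is missing, a small probe like the one below shows what to check for. Note that many distros ship genisoimage (a fork of mkisofs) instead; the package names suggested here are assumptions and vary by distro.

```shell
# Probe for the ISO-building tool the libvirt provider shells out to.
# genisoimage is the fork many distros ship instead of mkisofs; the
# suggested install commands below are assumptions and vary by distro.
if command -v mkisofs >/dev/null 2>&1; then
  msg="mkisofs found at $(command -v mkisofs)"
elif command -v genisoimage >/dev/null 2>&1; then
  msg="genisoimage found; symlink or alias it as mkisofs for the provider"
else
  msg="neither found; try 'dnf install genisoimage' (RHEL/CentOS) or 'apt install genisoimage' (Debian/Ubuntu)"
fi
echo "$msg"
```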
Hi, thanks for your reply. I have already installed mkisofs.
I added the pool on a separate host where I run Terraform:

```hcl
resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  user_data = data.template_file.user_data.rendered
  pool      = "images"
}
```

but I can't get the VMs to come up with the IP addresses I set:

```hcl
resource "libvirt_domain" "remote_host1-domain" {
  provider = libvirt.remote_host1
  name     = "node1"
  memory   = "3072"
  vcpu     = 2

  disk {
    volume_id = libvirt_volume.remote_host1-qcow2.id
  }

  network_interface {
    network_name = "net1"
    hostname     = "node1"
    addresses    = ["10.0.0.1"]
  }
  ...
```
Still working on getting the IP & cloud-init settings applied to the VMs.
Do you have a full orchestration? I've found navigating libvirt networks locally to be a bit finicky at times.
I've found macvtap interfaces easier to navigate (provided you are willing to plug your vms' networking specs in cloudinit), but the guest/host communication restrictions can make it impractical to run on one machine locally.
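For reference, a macvtap interface in this provider is declared per host interface rather than per libvirt network; a minimal sketch (the host interface name, VM name, and sizes below are assumptions, not from this thread) looks like:

```hcl
# Minimal sketch of a macvtap-attached domain. Assumes the KVM host has an
# interface named eth0. Since macvtap bypasses libvirt's own DHCP, the
# guest's IP must be configured inside the guest (e.g. via cloud-init).
resource "libvirt_domain" "macvtap_example" {
  name   = "macvtap-vm"
  memory = "2048"
  vcpu   = 1

  network_interface {
    macvtap = "eth0"
  }
}
```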
We've had some success creating a steady setup locally with a libvirt network by...
See the following comment for my observation on the matter: https://github.com/Ferlab-Ste-Justine/kvm-dev-orchestrations/tree/main#caveats
Thanks for the links!
I looked at the links but didn't get how to apply them. I have an existing KVM infrastructure, but as you wrote, those orchestrations create a new network.
If you haven't done so, you need to change the git submodule ssh links to use https instead (this: https://github.com/Ferlab-Ste-Justine/kvm-dev-orchestrations/blob/main/.gitmodules).
We use the ssh links internally to validate/work on the various terraform modules locally (read/write workflow), but that will not work if you don't have write access to the sub-repos (which you won't). You need to use the https links instead.
Alternatively, I guess you could just replace the relative module paths (https://github.com/Ferlab-Ste-Justine/kvm-dev-orchestrations/blob/main/nfs/nfs.tf#L11) with their global https github equivalents: git::https://github.com/Ferlab-Ste-Justine/kvm-nfs-server
Unfortunately, there are parts of Terraform where interpolation is not supported (backend files, the provider argument in resources/modules, lifecycle arguments and, if memory serves, source arguments in modules too, among other things), so it is more challenging to make it customizable from the convenience of a centralized configuration file.
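To illustrate the limitation, here is a sketch of the kind of thing Terraform rejects (the variable names and backend type are made up for the example):

```hcl
# These arguments must be literal values; Terraform rejects variable
# references in them, which is why they can't be driven from a single
# centralized configuration file.

terraform {
  backend "s3" {
    bucket = var.state_bucket # invalid: no interpolation in backend blocks
  }
}

module "nfs" {
  source = var.module_source # invalid: module source must be a literal
}
```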
I don't understand why static IPs are such an odd thing; with DHCP it works like a charm. I'm now seriously considering pre-reserved IPs via the DHCP server, but to me that's a dirty trick. :) Thanks for your help, I guess someday I'll get into this magic.
Automatically managed static addresses are better for our use cases, which is why we went with that (more stable and easier to pass around, which we needed for things like distributed/HA databases or Kubernetes clusters).
If you have something that works for your purposes, that is what matters ultimately.
Found solution
Here is some working Terraform & cloud-init code for static IPs on remote KVM hosts:

main.tf

```hcl
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

provider "libvirt" {
  alias = "HOST1"
  uri   = "qemu+ssh://terraform@HOST1/system"
}

provider "libvirt" {
  alias = "HOST2"
  uri   = "qemu+ssh://terraform@HOST2/system"
}

resource "libvirt_volume" "local" {
  name   = "local-qcow2"
  pool   = "myimages"
  format = "qcow2"
}

resource "libvirt_volume" "HOST1-qcow2" {
  provider = libvirt.HOST1
  name     = "vm1.qcow2"
  pool     = "myimages"
  format   = "qcow2"
  source   = "cloud_rhel_based_os_iso_from_internet.qcow2"
}

resource "libvirt_volume" "HOST2-qcow2" {
  provider = libvirt.HOST2
  name     = "vm2.qcow2"
  pool     = "myimages"
  format   = "qcow2"
  source   = "cloud_rhel_based_os_iso_from_internet.qcow2"
}

resource "libvirt_domain" "HOST1-domain" {
  provider = libvirt.HOST1
  name     = "vm1"
  memory   = "3072"
  vcpu     = 2

  disk {
    volume_id = libvirt_volume.HOST1-qcow2.id
  }

  network_interface {
    network_name = "local1" # List networks with virsh net-list
    hostname     = "vm1"
    addresses    = ["1.1.1.1"]
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  cloudinit = libvirt_cloudinit_disk.vm1_cloudinit.id

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }

  autostart  = true
  qemu_agent = true
}

resource "libvirt_domain" "HOST2-domain" {
  provider = libvirt.HOST2
  name     = "vm2"
  memory   = "3072"
  vcpu     = 2

  disk {
    volume_id = libvirt_volume.HOST2-qcow2.id
  }

  network_interface {
    network_name = "local1"
    hostname     = "vm2"
    addresses    = ["1.1.1.2"]
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  cloudinit = libvirt_cloudinit_disk.vm2_cloudinit.id

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }

  autostart  = true
  qemu_agent = true
}

resource "libvirt_cloudinit_disk" "vm1_cloudinit" {
  name      = "vm1_cloudinit.iso"
  user_data = data.template_file.vm1_cloudinit.rendered
  pool      = "myimages"
  provider  = libvirt.HOST1
}

data "template_file" "vm1_cloudinit" {
  template = file("${path.module}/vm1_userdata.yaml")
}

resource "libvirt_cloudinit_disk" "vm2_cloudinit" {
  name      = "vm2_cloudinit.iso"
  user_data = data.template_file.vm2_cloudinit.rendered
  pool      = "myimages"
  provider  = libvirt.HOST2
}

data "template_file" "vm2_cloudinit" {
  template = file("${path.module}/vm2_userdata.yaml")
}
```
vm1_userdata.yaml

```yaml
hostname: vm1
manage_etc_hosts: true
fqdn: vm1.localdomain
bootcmd:
```
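The userdata above is truncated at `bootcmd:`. For reference, a typical RHEL-family userdata that pins a static address at first boot might look like the sketch below; the connection name, interface address, and gateway are illustrative assumptions, not values from this thread.

```yaml
#cloud-config
hostname: vm1
manage_etc_hosts: true
fqdn: vm1.localdomain
bootcmd:
  # Illustrative only: pin a static address with nmcli on boot. The
  # connection name "System eth0" and the addresses are assumptions.
  - nmcli con mod "System eth0" ipv4.addresses 1.1.1.1/24 ipv4.gateway 1.1.1.254 ipv4.method manual
  - nmcli con up "System eth0"
```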
@itforxp Nice. I find that there is a cleaner way to edit network configurations earlier in the cloud-init workflow like so: https://github.com/Ferlab-Ste-Justine/terraform-cloudinit-templates/tree/main/network https://github.com/Ferlab-Ste-Justine/kvm-etcd-server/blob/main/main.tf#L26 https://github.com/Ferlab-Ste-Justine/kvm-etcd-server/blob/main/main.tf#L168
I've used it more with macvtap than with libvirt networks (my coarse understanding from what I observed so far is that libvirt networks use a DHCP server with fixated replies to support static IPs; I haven't bothered overriding that behavior), but it works well.
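For context on the "fixated replies" behavior: a libvirt network implements static addresses as fixed DHCP host reservations in its XML, which you can inspect with `virsh net-dumpxml <network>`. A sketch of what that looks like (the MAC, names, and addresses below are illustrative, not from this thread):

```xml
<!-- Illustrative output of: virsh net-dumpxml local1
     The provider's "addresses" argument ends up as a <host> reservation
     inside the network's dnsmasq-backed <dhcp> block. -->
<network>
  <name>local1</name>
  <ip address="10.0.0.254" netmask="255.255.255.0">
    <dhcp>
      <range start="10.0.0.2" end="10.0.0.200"/>
      <host mac="52:54:00:aa:bb:cc" name="vm1" ip="10.0.0.10"/>
    </dhcp>
  </ip>
</network>
```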
Anyways, fyi.
System Information
Linux distribution
CentOS (remote hosts run libvirt 4.5)

Terraform version

```
terraform -v
Terraform v1.4.4
on linux_amd64
```
Checklist
[ ] Is your issue/contribution related with enabling some setting/option exposed by libvirt that the plugin does not yet support, or requires changing/extending the provider terraform schema?
[x] Is it a bug or something that does not work as expected? Please make sure you fill the version information below:
Description of Issue/Question
Setup
(Please provide the full main.tf file for reproducing the issue. Be sure to remove sensitive information.)
Steps to Reproduce Issue
(Include debug logs if possible and relevant).
Additional information:
Do you have SELinux or Apparmor/Firewall enabled? Some special configuration? Have you tried to reproduce the issue without them enabled?
SELinux is disabled on the Terraform host and enabled on the KVM hosts. Without cloud-init, the VMs come up and run like a charm.
Can you please give an example of multiple providers with cloud-init initialization? Is this the right way to do custom initialization with multiple providers?
Now I am getting:

```
libvirt_domain.remote_host1: Creation complete after 0s [id=a5]
╷
│ Error: error while starting the creation of CloudInit's ISO image: exec: "mkisofs": executable file not found in $PATH
│
│   with libvirt_cloudinit_disk.commoninit,
│   on main.tf line 157, in resource "libvirt_cloudinit_disk" "commoninit":
│  157: resource "libvirt_cloudinit_disk" "commoninit" {
```
For the cloud-init part I have to install libvirt locally.
tf code:

```hcl
resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  user_data = data.template_file.user_data.rendered
}

data "template_file" "user_data" {
  template = file("cloud_init.cfg")
}
```
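As an aside, the template provider behind `data "template_file"` is deprecated; on Terraform 1.x the built-in `templatefile()` function does the same job without an extra provider. A sketch of the equivalent (reusing the `cloud_init.cfg` file name from the snippet above, with no template variables assumed):

```hcl
# Equivalent of the template_file data source using the built-in
# templatefile() function; the second argument is the (empty) map of
# template variables.
resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  user_data = templatefile("${path.module}/cloud_init.cfg", {})
}
```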