Open maksimsamt opened 1 month ago
@maksimsamt could you provide me with a link to the iso/ova image you used to provision the template vm?
I've always done the testing with the latest Debian genericcloud-amd64 image: https://cloud.debian.org/images/cloud/
I'm using a custom AlmaLinux Generic Cloud 9.4 image (built by Packer), but underneath is this image:
When cloning manually and configuring cloud-init in PVE, there are no such problems.
Tried now with Fedora 40 Cloud as well: https://fedoraproject.org/cloud/download. Same result: the VM must be restarted for the cloud-init config to be applied.
@maksimsamt , are you able to make it work with Ubuntu 22? (I'm stuck with this... using Packer custom images also)
Nope, my project currently has only RHEL-family machines. As you can see above, @Tinyblargon does the testing with Debian genericcloud-amd64 images, which are almost the same as Ubuntu; the differences are only in the autoinstall (Ubuntu) vs. preseed (Debian) files.
I found the root cause for my case.
As I mentioned above, I use a Packer build to create PVE templates, adding an empty Cloud-Init CD-ROM drive after the virtual machine has been converted to a template. By default, Packer adds the Cloud-Init drive as ide0.
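For reference, the behavior described above would come from the Packer template settings; a hedged sketch, assuming the packer-plugin-proxmox `proxmox-iso` builder (builder label and storage pool name are placeholders, and option availability depends on the plugin version):

```hcl
source "proxmox-iso" "almalinux" {
  # ...ISO, node, and SSH settings elided...

  # Ask Packer to attach an empty cloud-init CD-ROM to the template
  # after conversion; per this thread, it lands on ide0 by default.
  cloud_init              = true
  cloud_init_storage_pool = "local-lvm" # assumed pool name
}
```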
In `main.tf` I have configured the following related cloudinit block/config:

```hcl
...
disks {
  ide {
    ide0 {
      cloudinit {
        storage = "local-lvm"
      }
    }
  }
}
...
```
So, as you can see, the cloudinit drive in the template and in the new VM is the same: ide0. This caused the conflict. It turns out the cloudinit drives in the template and in the new VM cannot be the same (e.g. ide0 vs ide0), or the template should have no cloudinit drive at all.
Is this normal and expected behavior, or is it still a bug?
Moved the cloudinit block/config in `main.tf` into a SCSI drive, for instance scsi10:
```hcl
disks {
  scsi {
    scsi10 {
      cloudinit {
        storage = "local-lvm"
      }
    }
  }
}
```
And now it works as expected: after cloning the template, the cloud-init config is applied at first boot without rebooting the VM.
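Putting the workaround in context, a sketch of how the cloudinit block might sit inside the clone resource (resource name, node, template name, and storage are placeholder assumptions, not taken from the thread):

```hcl
resource "proxmox_vm_qemu" "example" {
  name        = "vm-from-template"    # placeholder values
  target_node = "pve"
  clone       = "almalinux-9-template"

  # Put the cloud-init drive on a slot the template does not already
  # use, so the clone's drive cannot collide with the one baked into
  # the Packer-built template (ide0).
  disks {
    scsi {
      scsi10 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }
  }
}
```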
I am having the same issue as you: I must restart for cloud-init to work, and I am using Ubuntu noble-server-cloudimg-amd64.img. I added a reboot command to the image, and that seems to work for Ubuntu, but not for Debian, which doesn't even seem to be able to see the cloud-init drive. This seems to be a known issue on UEFI setups, and the recommendation is to move the drive to SCSI instead of IDE. But on v3.0.1-rc1, attempting to add the config above gives me: `Blocks of type "cloudinit" are not expected here`.
Sorry, I see now that it's rc2 that changed many things; rc2 allows me to use SCSI for cloud-init, and the previous method was removed.
In my case, for Ubuntu 22:

```shell
apt purge cloud-init
rm -rf /etc/cloud/
rm -rf /var/lib/cloud/
```

There was also a conflicting network configuration file created during OS installation:

```shell
rm -f /etc/netplan/00-installer-config.yaml
```

Then:

```shell
apt install cloud-init
```

After that, recreate the template.
System details:
- Proxmox VE 8.2.2
- Terraform v1.8.4
- terraform-provider-proxmox v3.0.1-rc2
At first boot the VM doesn't have the IP, hostname, or user/password from the cloud-init config. For the cloud-init config to be applied, the VM must be manually restarted. This issue seems to intersect with https://github.com/Telmate/terraform-provider-proxmox/issues/603 .
As per https://github.com/Telmate/terraform-provider-proxmox/issues/603#issuecomment-1627953944, I tried changing `ci_wait` to 60, 120, and so on, but without success. It seems the `ci_wait` parameter is ignored entirely, and this may be the root cause. For example, after `terraform apply` this parameter does not appear in the actions:
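For context, `ci_wait` would normally be set directly on the VM resource; a minimal sketch, with all other arguments elided and the value chosen arbitrarily:

```hcl
resource "proxmox_vm_qemu" "vm" {
  # ...other arguments elided...

  # ci_wait: how long (in seconds) to wait for cloud-init
  # provisioning to finish. Per this thread, the value appears to be
  # ignored: it never shows up among the apply actions.
  ci_wait = 120
}
```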