Ikariusrb opened this issue 5 months ago
@Ikariusrb,
`proxmox_cloud_init_disk` only supports `cdrom` and not `cloudinit` in `disks` blocks. To get `proxmox_cloud_init_disk` to work, you have these options:

- Use only `proxmox_cloud_init_disk` functionality, without a `cloudinit` disk block: don't define `ciuser`, `cipassword`, `cicustom`, etc. Define all these parameters in `proxmox_cloud_init_disk` instead, for example in the `user_data` field. `proxmox_cloud_init_disk` uses the NoCloud datasource, configured as Method 1:

  > Method 1: Labeled filesystem. A labeled `vfat` or `iso9660` filesystem may be used. The filesystem volume must be labelled `CIDATA`.

  This means that no additional modifications or settings are required for it to work.

- Combine the `proxmox_cloud_init_disk` (NoCloud) datasource with a `cloudinit` (Config drive) disk block via the `datasource_list` parameter:
  https://cloudinit.readthedocs.io/en/latest/reference/base_config_reference.html#base-config-datasource-list

Also, I think this is the wrong approach for `user_data`:
```hcl
...
user_data = yamlencode({
  hostname            = local.vm_name
  fqdn                = "${local.vm_name}.${local.local_domain}"
  manage_etc_hosts    = true
  users               = ["default"]
  user                = local.cloudinit_user
  ssh_authorized_keys = [local.ssh_key]
  package_upgrade     = true
  chpasswd = {
    expire = false
  }
})
...
```
You should use heredoc strings here, like:

```hcl
...
user_data = <<-EOT
  #cloud-config
  users:
    - default
  ssh_authorized_keys:
    - ssh-rsa AAAAB3N......
EOT
...
```

Because `user_data` must begin with `#cloud-config`:
https://cloudinit.readthedocs.io/en/latest/explanation/format.html
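If you prefer to keep `yamlencode`, one option (a sketch, not the only way) is to prepend the `#cloud-config` header yourself with the built-in `format()` function; the keys shown are just the ones from the example above:

```hcl
user_data = format("#cloud-config\n%s", yamlencode({
  hostname        = local.vm_name
  users           = ["default"]
  package_upgrade = true
}))
```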
Wasn't able to achieve this combo either using rc3.
@ironicbadger, please share your Terraform files and error log file.
Hello, I have the same problem on rc3.
The cloud-init disk is correctly generated, but the VM is not taking it into account and boots without executing the cloud-init config. Here are my files if they help.

`proxmox_cloud_init_disk`:
```hcl
resource "proxmox_cloud_init_disk" "ci" {
  name     = var.vm-name
  pve_node = var.pm_node
  storage  = "local"

  meta_data = yamlencode({
    instance_id    = sha1(var.vm-name)
    local-hostname = var.vm-name
  })

  user_data = <<EOT
#cloud-config
package_upgrade: true
groups:
  - docker
users:
  - name: dev
    groups: docker, sudo
    lock_passwd: false
final_message: "The system is finally up, after $UPTIME seconds"
EOT
}
```
`proxmox_vm_qemu`:

```hcl
resource "proxmox_vm_qemu" "android_commander" {
  name        = "terraform-test-vm"
  desc        = "A test for using terraform and cloudinit"
  target_node = "zgBois"
  agent       = 1
  clone       = "debian12-cloudinit-template"
  full_clone  = true
  os_type     = "cloud-init"
  skip_ipv4   = false
  cores       = 2
  sockets     = 2
  memory      = 4096

  disks {
    scsi {
      scsi0 {
        disk {
          backup  = true
          size    = 20
          storage = var.storage_pool
        }
      }
      scsi1 {
        cdrom {
          iso = "local:${proxmox_cloud_init_disk.ci.id}"
        }
      }
    }
  }
}
```
@Qlebrun,
Try changing these lines:

```hcl
// proxmox_cloud_init_disk
...
user_data = <<EOT
...
EOT
...
// =>
user_data = <<-EOT
...
EOT
...

// proxmox_vm_qemu
scsi1 {
  cdrom {
    iso = "local:${proxmox_cloud_init_disk.ci.id}"
  }
}
// =>
scsi1 {
  cdrom {
    iso = proxmox_cloud_init_disk.ci.id
  }
}
```
Hi @maksimsamt, thanks for your quick reply!
Sadly, we've only made a little progress: the VM starts with my cloud-init disk attached, but only the user config is taken into account; all my packages, locales, and runcmd settings are completely ignored.
Here is my complete cloud-config if it helps:
```yaml
#cloud-config
package_upgrade: true
groups:
  - docker
users:
  - name: dev
    groups: docker, sudo
    lock_passwd: false
chpasswd:
  users:
    - name: dev
      password: *****
      type: hash
ssh_pwauth: True
locale: fr_FR.utf8
timezone: Europe/Paris
keyboard:
  layout: fr
apt:
  sources:
    docker.list:
      source: deb [arch=amd64] https://download.docker.com/linux/debian $RELEASE stable
      keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
  - unattended-upgrades
  - docker-ce
  - docker-ce-cli
  - containerd.io
  - qemu-guest-agent
runcmd:
  - ['systemctl', 'start', 'qemu-guest-agent']
  # One-command install, from https://tailscale.com/download/
  - ['curl -fsSL https://tailscale.com/install.sh | sh']
  # Generate an auth key from your Admin console
  # https://login.tailscale.com/admin/settings/keys
  # and replace the placeholder below
  - ['tailscale', 'up', '--authkey=', '--accept-routes', '--ssh', '--auto-update']
final_message: "The system is finally up, after $UPTIME seconds"
```
Is this really your cloud-init config (the final result on the server)?

```yaml
    #cloud-config
    package_upgrade: true
    groups:
      - docker
    ...
```

If so, do you see where the problem is? Spaces or tabs... The correct config should look like this (without leading spaces):

```yaml
#cloud-config
package_upgrade: true
groups:
  - docker
...
```

Therefore, as I wrote before, you should use `<<-EOT` (not `<<EOT`) in Terraform:

```hcl
user_data = <<-EOT
  ...
EOT
...
```
No, sorry, that was bad formatting on my part in the GitHub comment; it is well formed on the server. And yes, I have been using the right `<<-EOT` like you said, thank you for that.

My Terraform resource:
```hcl
resource "proxmox_cloud_init_disk" "ci" {
  name     = var.vm-name
  pve_node = var.pm_node
  storage  = "local"

  meta_data = yamlencode({
    instance_id    = sha1(var.vm-name)
    local-hostname = var.vm-name
  })

  user_data = <<-EOT
    #cloud-config
    package_upgrade: true
    groups:
      - docker
    users:
      - name: dev
        groups: docker, sudo
        lock_passwd: false
    chpasswd:
      users:
        - name: dev
          password: *****
          type: hash
    ssh_pwauth: True
    locale: fr_FR.utf8
    timezone: Europe/Paris
    keyboard:
      layout: fr
    apt:
      sources:
        docker.list:
          source: deb [arch=amd64] https://download.docker.com/linux/debian $RELEASE stable
          keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
    packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
      - unattended-upgrades
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - qemu-guest-agent
    runcmd:
      - ['systemctl', 'start', 'qemu-guest-agent']
      # One-command install, from https://tailscale.com/download/
      - ['curl -fsSL https://tailscale.com/install.sh | sh']
      # Generate an auth key from your Admin console
      # https://login.tailscale.com/admin/settings/keys
      # and replace the placeholder below
      # - ['tailscale', 'up', '--authkey=', '--accept-routes', '--ssh', '--auto-update']
    final_message: "The system is finally up, after $UPTIME seconds"
  EOT
}
```
I will try the other method, without the `proxmox_cloud_init_disk` resource: using the `cicustom` parameter and putting the cloud-init config file in the `snippets` folder of the host.
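For reference, the `cicustom` route might look roughly like this (a sketch: the storage name and snippet file name are assumptions, and the user-data file must already be uploaded to a snippets-enabled storage on the node, e.g. `/var/lib/vz/snippets/` for `local`):

```hcl
resource "proxmox_vm_qemu" "vm" {
  # ... other VM settings ...
  os_type  = "cloud-init"
  # use the snippet as the cloud-init user-data instead of ciuser/cipassword
  cicustom = "user=local:snippets/user-data.yml"
}
```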
Can you share the user-data cloud-init config from your server after Terraform provisioning is done: `/var/lib/cloud/instance/user-data.txt`?
Okay, the deployment is working very well! I still have errors on VM startup, but those are on my side now (Debian config in the cloud-init config file); the cloud-init disk is properly created and run at boot. Thank you very much for your help. 👍

One last question: do you think I can wrap the cloud-init config in another file and pass it with a Terraform `local_file` data source or a simple `file()` function?
Instead of this:

```hcl
user_data = <<-EOT
  ...
EOT
...
```

do this:

```hcl
user_data = file("user-data.txt")
```

I think for this purpose it is better to use `local_file`. For example:

```hcl
data "local_file" "user_data" {
  filename = "${path.module}/user-data.yml"
}

resource "proxmox_cloud_init_disk" "ci" {
  name     = var.vm-name
  pve_node = var.pm_node
  storage  = "local"

  meta_data = yamlencode({
    instance_id    = sha1(var.vm-name)
    local-hostname = var.vm-name
  })

  user_data = data.local_file.user_data.content
  ...
}
```
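If you also need to inject Terraform values into the file, another possibility is the built-in `templatefile()` function (the template file name and variable here are illustrative, not from the thread):

```hcl
# user-data.yml.tpl would contain placeholders like ${hostname}
user_data = templatefile("${path.module}/user-data.yml.tpl", {
  hostname = var.vm-name
})
```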
> Because `user_data` must begin with `#cloud-config`. https://cloudinit.readthedocs.io/en/latest/explanation/format.html

That is what bothered me today. I expected that the provider module would add this prefix (it does not), so I added it myself via:

```hcl
user_data = join("\n", [
  "#cloud-config",
  yamlencode({
```
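For completeness, a closed-up version of that pattern might look like the following (the keys inside `yamlencode` are illustrative):

```hcl
user_data = join("\n", [
  "#cloud-config",
  yamlencode({
    package_upgrade = true
    users           = ["default"]
  }),
])
```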
@zapotocnylubos, I agree that this should be a feature: the provider should add the prefix when it is not supplied by the user.
I'm currently fighting with 3.0.1-rc2 and trying to convince it to use a custom-generated `proxmox_cloud_init_disk`. I'm able to get it to generate a cloud-init ISO image on the Proxmox node without too much difficulty: I can verify it's successfully generating an ISO on the Proxmox node with this resource, and I've temp-mounted that `.iso` and validated that its contents look correct. I've got a couple of templates: one with a cloudinit device as a cdrom on ide2, and one without.

The trouble comes in when I attempt to actually use that ISO for cloud-init. I can get ide2 as a cdrom with the ISO attached, but I can't seem to get it recognized as a cloud-init source. I suspect I'm missing something in my disks config. Here's where I'm at currently for the VM disk config:

I haven't found a combination of config that lets me specify a cloudinit drive AND the ISO image. The only parameter the `cloudinit` block seems to support is `storage`, which only tells it which storage pool to use.

I've followed as much as I could in the docs and read a number of the bug reports, but most of the examples are for using the Proxmox built-in cloud-init config, and I want to add some things which aren't supported through Proxmox's cloud-init config (`packages`, `runcmd`).
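As noted in the first reply, if a Proxmox `cloudinit` drive and the NoCloud ISO are to coexist, cloud-init's `datasource_list` base-config setting controls which datasources are considered. A sketch of a drop-in baked into the guest image (the file name and ordering are assumptions to adapt):

```yaml
# /etc/cloud/cloud.cfg.d/90-datasources.cfg
datasource_list: [ NoCloud, ConfigDrive ]
```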