Closed: web-differently12 closed this issue 6 days ago
You can disable VLAN creation in clusters.tf:

create_vlan = false

This should disable all Unifi connections. However, you can still set the VLANs for your VMs using assign_vlan = true, so you can use your pfSense VLANs.

Let me know if that fixes your issue and I'll update the docs to be clearer.
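For reference, the two settings together might look like this (a minimal sketch; the variable names come from this thread, so double-check them against the current clusters.tf in the repo — in practice you would edit the existing cluster block rather than append):

```shell
# Sketch: add the pfSense-friendly settings to clusters.tf.
cat >> clusters.tf <<'EOF'
create_vlan = false   # skip Unifi VLAN provisioning entirely
assign_vlan = true    # still tag VM NICs so pfSense VLANs apply
EOF
```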
Yes, for networking I think the docs need an example. Does every node need three VLANs? For example:

vlan_id = 600
ipv4 = {
  subnet_prefix = "10.0.3"    # e.g. node IP 10.0.3.16
  pod_cidr      = "10.16.0.0/16"
  svc_cidr      = "10.17.0.0/16"
  dns1          = "10.0.3.3"
  dns2          = "10.0.3.4"
}

My existing VLAN is tag 200 with 10.0.20.1/24, and pfSense (DNS and DHCP) is at 10.0.20.254. Should I create three networks for every node (10.0.3.1/24, 10.16.0.0/16, and 10.17.0.0/16), and the same again for nodes alpha, beta, and gamma?
The pod_cidr and svc_cidr are internal to k8s and don't need to be provisioned by Unifi or pfSense. As long as your LAN doesn't actually use the CIDRs assigned to pod_cidr and svc_cidr, it should work.
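If you want to double-check that non-overlap condition, here is a minimal shell sketch (the subnets are the ones from this thread; two IPv4 CIDRs overlap exactly when they agree on the shorter of the two prefixes):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# overlaps NET1/PFX1 NET2/PFX2 -> prints "yes" or "no".
# Two CIDRs overlap iff their network bits match on the shorter prefix.
overlaps() {
  n1=$(ip_to_int "${1%/*}"); p1=${1#*/}
  n2=$(ip_to_int "${2%/*}"); p2=${2#*/}
  min_p=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( (0xFFFFFFFF << (32 - min_p)) & 0xFFFFFFFF ))
  [ $(( n1 & mask )) -eq $(( n2 & mask )) ] && echo yes || echo no
}

overlaps 10.0.20.0/24 10.16.0.0/16   # LAN vs pod_cidr -> no
overlaps 10.0.20.0/24 10.17.0.0/16   # LAN vs svc_cidr -> no
overlaps 10.16.5.0/24 10.16.0.0/16   # inside pod_cidr -> yes
```

So the thread's LAN (10.0.20.0/24) is safely disjoint from both k8s-internal ranges.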
Yes, thank you. I'm now running Unifi in a small VM so I can experiment without touching my main network, but I'm having a problem with this Unifi VM guide: https://community.ui.com/questions/UniFi-Installation-Scripts-or-UniFi-Easy-Update-Script-or-UniFi-Lets-Encrypt-or-UniFi-Easy-Encrypt-/ccbc7530-dd61-40a7-82ec-22b17f027776
tofu apply

local_file.cluster_config_json: Refreshing state... [id=f8c71889c8227cc5d93b4095657253d146fb066e]
unifi_network.vlan["gamma"]: Refreshing state... [id=673127d3ee7b0b3f0e0d45b8]
proxmox_virtual_environment_pool.operations_pool["gamma"]: Refreshing state... [id=GAMMA]
Planning failed. OpenTofu encountered an error while generating this plan.
Error: error getting pool: received an HTTP 403 response - Reason: Permission check failed (/pool/GAMMA, Pool.Audit)
  with proxmox_virtual_environment_pool.operations_pool["gamma"],
  on main.tf line 121, in resource "proxmox_virtual_environment_pool" "operations_pool":
  121: resource "proxmox_virtual_environment_pool" "operations_pool" {

Privilege separation is deactivated.
You'll need to add the Pool.Audit permission to your terraform user in Proxmox's console.
We're working on fixing the documentation regarding this so other users won't run into the same issue.
Hello, I resolved the Pool.Audit permission issue. I used this command:

pveum role add terraformProv -privs "Datastore.Allocate,Datastore.AllocateSpace,Datastore.AllocateTemplate,Datastore.Audit,Group.Allocate,Pool.Allocate,Pool.Audit,Sys.AccessNetwork,Sys.Audit,Sys.Console,Sys.Modify,VM.Allocate,VM.Audit,VM.Backup,VM.Clone,VM.Config.CDROM,VM.Config.CPU,VM.Config.Cloudinit,VM.Config.Disk,VM.Config.HWType,VM.Config.Memory,VM.Config.Network,VM.Config.Options,VM.Migrate,VM.Monitor,VM.PowerMgmt,VM.Snapshot,SDN.Use"

and this:

pveum aclmod / -user terraform-prov@pve -role Administrator
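(For reference, one way to inspect which privileges actually apply on the failing path is pveum's permission dump; a sketch run on the Proxmox host, with the user name and pool path taken from this thread:)

```shell
# Show the effective privileges of the terraform user on the BETA pool.
pveum user permissions terraform-prov@pve --path /pool/BETA
```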
Now I have this error, but I don't understand what happened:

Error: error updating VM: the requested resource does not exist
  with proxmox_virtual_environment_vm.node["beta-general-1"],
  on main.tf line 139, in resource "proxmox_virtual_environment_vm" "node":
  139: resource "proxmox_virtual_environment_vm" "node" {

The same error is reported for beta-apiserver-0 and beta-general-0.
That looks like the Proxmox terraform provider is confused. It happens sometimes when tofu apply fails. The easiest way to fix it (if you don't have data on those VMs) is to run tofu destroy and tofu apply again.

Let me know if that works.
Also, the CLI commands you posted suggest that your terraform user terraform-prov is using the role Administrator instead of the terraformProv role you defined.

See this section of the updated readme.
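Concretely, rebinding the ACL to the custom role would look like this (a sketch reusing the thread's own user and role names; run on the Proxmox host and adjust the names to your setup):

```shell
# Replace the Administrator binding with the purpose-built role.
pveum aclmod / -user terraform-prov@pve -role terraformProv
```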
local_file.cluster_config_json: Destroying... [id=8fdfb30656aba6fa608f53e0aff6ccaf3a50349c]
local_file.cluster_config_json: Destruction complete after 0s
proxmox_virtual_environment_vm.node["beta-general-0"]: Destroying... [id=2130]
proxmox_virtual_environment_vm.node["beta-apiserver-0"]: Destroying... [id=2110]
proxmox_virtual_environment_vm.node["beta-general-1"]: Destroying... [id=2131]
proxmox_virtual_environment_vm.node["beta-apiserver-0"]: Destruction complete after 1s
proxmox_virtual_environment_vm.node["beta-general-1"]: Destruction complete after 1s
proxmox_virtual_environment_vm.node["beta-general-0"]: Destruction complete after 1s
proxmox_virtual_environment_pool.operations_pool["beta"]: Destroying... [id=BETA]
proxmox_virtual_environment_pool.operations_pool["beta"]: Destruction complete after 0s
unifi_network.vlan["beta"]: Destroying... [id=673226176e9bd271ecdda185]
unifi_network.vlan["beta"]: Destruction complete after 0s
Destroy complete! Resources: 6 destroyed.

name@name-MacBook-Pro ClusterCreator-main % tofu apply
OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
OpenTofu will perform the following actions:
resource "local_file" "cluster_config_json" {
resource "proxmox_virtual_environment_pool" "operations_pool" {
resource "proxmox_virtual_environment_vm" "node" {
acpi = true
bios = "seabios"
description = "Managed by Terraform"
id = (known after apply)
ipv4_addresses = (known after apply)
ipv6_addresses = (known after apply)
keyboard_layout = "en-us"
mac_addresses = (known after apply)
machine = "q35"
migrate = true
name = "beta-apiserver-0"
network_interface_names = (known after apply)
node_name = "pve"
on_boot = false
pool_id = "BETA"
protection = false
reboot = false
scsi_hardware = "virtio-scsi-pci"
started = true
stop_on_destroy = false
tablet_device = true
tags = [
template = false
timeout_clone = 1800
timeout_create = 1800
timeout_migrate = 1800
timeout_move_disk = 1800
timeout_reboot = 1800
timeout_shutdown_vm = 1800
timeout_start_vm = 1800
timeout_stop_vm = 300
vm_id = 2110
agent {
clone {
cpu {
disk {
initialization {
datastore_id = "nvmes"
interface = "ide2"
upgrade = (known after apply)
dns {
ip_config {
user_account {
memory {
network_device {
vga {
resource "proxmox_virtual_environment_vm" "node" {
acpi = true
bios = "seabios"
description = "Managed by Terraform"
id = (known after apply)
ipv4_addresses = (known after apply)
ipv6_addresses = (known after apply)
keyboard_layout = "en-us"
mac_addresses = (known after apply)
machine = "q35"
migrate = true
name = "beta-general-0"
network_interface_names = (known after apply)
node_name = "pve"
on_boot = false
pool_id = "BETA"
protection = false
reboot = false
scsi_hardware = "virtio-scsi-pci"
started = true
stop_on_destroy = false
tablet_device = true
tags = [
template = false
timeout_clone = 1800
timeout_create = 1800
timeout_migrate = 1800
timeout_move_disk = 1800
timeout_reboot = 1800
timeout_shutdown_vm = 1800
timeout_start_vm = 1800
timeout_stop_vm = 300
vm_id = 2130
agent {
clone {
cpu {
disk {
initialization {
datastore_id = "nvmes"
interface = "ide2"
upgrade = (known after apply)
dns {
ip_config {
user_account {
memory {
network_device {
vga {
resource "proxmox_virtual_environment_vm" "node" {
acpi = true
bios = "seabios"
description = "Managed by Terraform"
id = (known after apply)
ipv4_addresses = (known after apply)
ipv6_addresses = (known after apply)
keyboard_layout = "en-us"
mac_addresses = (known after apply)
machine = "q35"
migrate = true
name = "beta-general-1"
network_interface_names = (known after apply)
node_name = "pve"
on_boot = false
pool_id = "BETA"
protection = false
reboot = false
scsi_hardware = "virtio-scsi-pci"
started = true
stop_on_destroy = false
tablet_device = true
tags = [
template = false
timeout_clone = 1800
timeout_create = 1800
timeout_migrate = 1800
timeout_move_disk = 1800
timeout_reboot = 1800
timeout_shutdown_vm = 1800
timeout_start_vm = 1800
timeout_stop_vm = 300
vm_id = 2131
agent {
clone {
cpu {
disk {
initialization {
datastore_id = "nvmes"
interface = "ide2"
upgrade = (known after apply)
dns {
ip_config {
user_account {
memory {
network_device {
vga {
Plan: 6 to add, 0 to change, 0 to destroy.
Changes to Outputs:
Do you want to perform these actions in workspace "beta"? OpenTofu will perform the actions described above. Only 'yes' will be accepted to approve.
Enter a value: yes
unifi_network.vlan["beta"]: Creating...
local_file.cluster_config_json: Creating...
local_file.cluster_config_json: Creation complete after 0s [id=8fdfb30656aba6fa608f53e0aff6ccaf3a50349c]
unifi_network.vlan["beta"]: Creation complete after 0s [id=673226976e9bd271ecdda18c]
proxmox_virtual_environment_pool.operations_pool["beta"]: Creating...
proxmox_virtual_environment_pool.operations_pool["beta"]: Creation complete after 0s [id=BETA]
proxmox_virtual_environment_vm.node["beta-general-1"]: Creating...
proxmox_virtual_environment_vm.node["beta-general-0"]: Creating...
proxmox_virtual_environment_vm.node["beta-apiserver-0"]: Creating...

Error: error updating VM: the requested resource does not exist
  with proxmox_virtual_environment_vm.node["beta-general-1"],
  on main.tf line 143, in resource "proxmox_virtual_environment_vm" "node":
  143: resource "proxmox_virtual_environment_vm" "node" {

The same error is reported for beta-apiserver-0 and beta-general-0.
I'd revisit creating the terraform user and token. Don't forget to generate the token with privilege separation disabled:

sudo pveum user token add terraform@pve provider --privsep=0
Yes, I created a new user, but now I have this problem: the VMs are not started.
unifi_network.vlan["beta"]: Creation complete after 0s [id=673247ca791dee5107c9ced4]
proxmox_virtual_environment_pool.operations_pool["beta"]: Creating...
proxmox_virtual_environment_pool.operations_pool["beta"]: Creation complete after 0s [id=BETA]
proxmox_virtual_environment_vm.node["beta-general-0"]: Creating...
proxmox_virtual_environment_vm.node["beta-apiserver-0"]: Creating...
proxmox_virtual_environment_vm.node["beta-general-1"]: Creating...

Error: error updating VM: the requested resource does not exist
  with proxmox_virtual_environment_vm.node["beta-general-1"],
  on main.tf line 143, in resource "proxmox_virtual_environment_vm" "node":
  143: resource "proxmox_virtual_environment_vm" "node" {
Finally, it works without Unifi, with manual IPs in the beta cluster.
Awesome! Let me know if you need anything else.
Hello, I use pfSense for my network, so I don't have a Unifi API:

variable "unifi_api_url" { default = "https://10.0.20.254/" }

My network: vmbr0 is 192.168.1.1 (my internet box), and vmbr1 carries tag 200 with 10.0.20.1/24 behind pfSense. I tried adding this to the lab:

auto lanserver
iface lanserver inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=200

Error: unable to determine API URL style: Get "https://10.0.20.254/": dial tcp 10.0.20.254:443: i/o timeout
  with unifi_network.vlan["beta"],
  on main.tf line 83, in resource "unifi_network" "vlan":
  83: resource "unifi_network" "vlan" {

How can I configure it without Unifi? Thank you for your help.