Closed turb0steve closed 1 year ago
Can confirm that setting the VMs to "depend_on" the security group elicits the expected behavior, and destroys and re-creates the VMs.
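For reference, the explicit-dependency workaround confirmed above looks roughly like this in HCL (a sketch; the resource names match the examples later in this thread and are otherwise illustrative):

```hcl
resource "opennebula_security_group" "mysecgroup" {
  # ...
}

resource "opennebula_virtual_machine" "test" {
  # Explicit dependency: per this thread, tainting or replacing the
  # security group then destroys and re-creates this VM as expected.
  depends_on = [opennebula_security_group.mysecgroup]
  # ...
}
```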
Hi,
This is the OpenNebula behavior and Terraform cannot change it. The provider should not behave differently from what One currently does. I hope you understand these reasons.
This is a safety measure to prevent deleting a resource that is currently in use.
If depends_on does the job, can we consider this issue solved?
Regards
Hi again, and Thanks :)
Both Azure and AWS prevent you from removing resources if they are a dependency of another resource.
However, on these platforms Terraform will always walk the dependency graph and taint those upstream dependencies. Terraform must behave this way to protect idempotency.
[The only exception seems to be storage, where destroying a storage component will usually provision a replacement but leave the old one in place, when "deletion protection" is configured within the cloud provider.]
So, effectively, there is a conflict between the "spirit" of the GUI (protect the user) and Terraform (ALWAYS be idempotent), which, I think, can only be resolved by treating Terraform as 'omnipotent'. It's what makes it so very powerful (and so dangerous!).
I can certainly continue to use "depends_on" to add this dependency explicitly - but it seems to be the opposite of normal TF behaviour. VMs on subnets are a hard dependency, so I feel like it should be implicit within the TF graph. Apologies if you have already done so, but is it worth seeking consensus with the upstream TF project?
Apologies also for the negativity - I cannot overstate how grateful I am for all the hard work that has gone into this provider. Given how huge Terraform is becoming, and how useful OpenNebula is as a cross-cloud platform orchestrator, this all seems really important.
Hi
Thank you :) No worries. Your comments and this issue are totally correct. I am very glad to see people use this provider :).
In a first step this provider was created to ease OpenNebula resource management. This is the very beginning of this project. It started independently from OpenNebula, and then OpenNebula invited us to make it official :).
So I think we need to think deeply about Terraform behavior and OpenNebula behavior. For the moment this provider is not really "intelligent": its behavior is based on what OpenNebula answers to each request.
So thank you again for this issue; I will try to think about some behavior enhancements in a future release. I don't want to give an ETA, as we are developing this provider depending on other priorities we have in our company and also personally.
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 5 days
The way we describe resources shows a dependency of the VM NIC on the security group:
```hcl
resource "opennebula_security_group" "mysecgroup" {
  # ...
}

resource "opennebula_virtual_machine" "test" {
  # ...
  nic {
    # ...
    security_groups = [opennebula_security_group.mysecgroup.id]
  }
}
```
But Terraform doesn't know that the security group's deletion depends on the VM NIC; it's an OpenNebula rule, not "described" in the provider. The NIC should be updated first to remove the security group reference, then the security group could be deleted.
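That ordering can also be followed manually today. A two-step sketch in HCL, assuming the resource names above and that the provider supports in-place NIC updates:

```hcl
resource "opennebula_virtual_machine" "test" {
  # ...
  nic {
    # Step 1: drop the reference and run `terraform apply`, so the
    # NIC no longer points at the security group.
    security_groups = []
  }
}

# Step 2: remove (or taint) opennebula_security_group.mysecgroup and
# apply again; the delete no longer hits the "VMs using it" error.
```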
At first glance I don't have a perfect solution to propose.
We could add some VM NIC management in the security group resource, but I'm not sure this is a good idea.
For instance, we could deprecate the security_groups NIC field and move this to the security group side via a new field that collects pairs of VM ID and NIC ID.
Anyway, it would probably be better to work with externalized NIC resources if we modify NICs.
If someone has something to propose, I'm open to discussion.
By the way, I removed this from the 0.5.1 scope; it's not a bug fix for me.
So I guess we can go for something like this:
```hcl
resource "opennebula_security_group" "mysecgroup" {
  # ...
}

resource "opennebula_virtual_machine" "test" {
  # ...
  # nic is deprecated
}

resource "opennebula_nic" "test" {
  # ...
  security_groups = [
    opennebula_security_group.mysecgroup.id
  ]
  vm_id = opennebula_virtual_machine.test.id
}
```
As seen, we can use ForceNew (doc) on opennebula_nic.test.security_groups and opennebula_nic.test.vm_id.
Introducing opennebula_nic is not in the scope anymore; I remove this one from the 1.1.0 release.
Hello @turb0steve,
Sorry for the late reply.
Do you still experience this issue?
I am not quite sure I understand it. Could you provide some HCL example, please?
This issue is stale because it has been open for 30 days with no activity and it has not the 'status: confirmed' label or it is not in a milestone. Remove the 'status: stale' label or comment, or this will be closed in 5 days.
Tainting a network security group with VMs attached causes `terraform apply` to fail with:
```
Error: OpenNebula error [ACTION]: [one.secgroup.delete] Cannot delete security group. The security group has VMs using it
```
Surely the expected behavior would be that Terraform understands the dependency graph, and taints the VMs, causing them to be redeployed with the security group?