Closed — PrafullaKGit closed this issue 4 months ago
Which components did you export and run import commands for? Plan: 54 to add and 54 to destroy --> it seems the resources are being recreated. Can you please share the terraform plan too? Thanks
Hi Suruchi, we are exporting the following resources:

```
-rwxr--r--. 1 cd3user cd3user  2019 Jun 21 06:39 tf_import_commands_compartments_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  1448 Jun 21 06:40 tf_import_commands_groups_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  1801 Jun 21 06:40 tf_import_commands_policies_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  2216 Jun 21 06:40 tf_import_commands_users_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  2050 Jun 21 06:40 tf_import_commands_tags_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  5353 Jun 21 06:40 tf_import_commands_network_major-objects_nonGF.sh
-rwxr--r--. 1 cd3user cd3user   566 Jun 21 06:40 tf_import_commands_network_dhcp_nonGF.sh
-rwxr--r--. 1 cd3user cd3user    91 Jun 21 06:40 tf_import_commands_network_vlans_nonGF.sh
-rwxr--r--. 1 cd3user cd3user   801 Jun 21 06:40 tf_import_commands_network_subnets_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  1422 Jun 21 06:40 tf_import_commands_network_secrules_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  1368 Jun 21 06:41 tf_import_commands_network_routerules_nonGF.sh
-rwxr--r--. 1 cd3user cd3user   533 Jun 21 06:41 tf_import_commands_network_drg_routerules_nonGF.sh
-rwxr--r--. 1 cd3user cd3user    89 Jun 21 06:41 tf_import_commands_network_nsg_nonGF.sh
-rwxr--r--. 1 cd3user cd3user 30630 Jun 21 06:41 tf_import_commands_dns-views-zones-records_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  1380 Jun 21 06:42 tf_import_commands_dns-resolvers_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  6329 Jun 21 06:42 tf_import_commands_instances_nonGF.sh
-rwxr--r--. 1 cd3user cd3user  9489 Jun 21 06:44 tf_import_commands_blockvolumes_nonGF.sh
-rwxr--r--. 1 cd3user cd3user   976 Jun 21 06:44 tf_import_commands_fss_nonGF.sh
-rwxr--r--. 1 cd3user cd3user   391 Jun 21 06:44 tf_import_commands_buckets_nonGF.sh
-rwxr--r--. 1 cd3user cd3user   307 Jun 21 06:45 tf_import_commands_dbsystems-vm-bm_nonGF.sh
```
Though I can't share the Terraform plan output as it contains customer data, the following resources are getting replaced:

- Instances
- Block Volume attachments
- DB system
- Backup policies attached to each BV
- DNS resolvers and FWD
You will have to see what is causing the replacement. For instances, for example, it can be metadata; in that case you can put an ignore lifecycle changes for metadata in main.tf. If the instance replacement is fixed, then the corresponding BV and BV attachment replacements will also be fixed.
The plan shows that the instance is being force-replaced due to the subnet OCID mentioned in the create_vnic_details section of the instance. But I don't understand why, because that subnet OCID is correct and matches the current value from the console.
Have you exported network and run the import commands for network too? I see that you are using a single outdir for all components; any specific reason for not using multi outdir?
And I hope you don't have duplicate subnet names in the console. The toolkit doesn't support duplicate resource names.
Yes, I have exported network and many other resources as mentioned before and ran the import scripts for all of them. I'm using a single outdir for simplicity.
No, we don't have duplicates for any resources in OCI.
We are getting the below type of errors after clearing the TF state and running all TF import scripts again:

```
│ Error: Resource already managed by Terraform
│
│ Terraform is already managing a remote object for
│ module.drg-route-distribution-statements["drg-hub-lhr_drg-rt-dis-lhr_statement3"].oci_core_drg_route_distribution_statement.drg_route_distribution_statement.
│ To import to this address you must first remove the existing object from the state.
```
Is this an issue?
Also, shall we try putting ignore lifecycle changes for metadata in main.tf?
You will get "resource already managed" errors if you try to import the same resource into the tfstate again. This is not an issue and can be ignored. And if you are getting force replacements because of metadata, then yes, you can put it into ignore lifecycle changes in main.tf (at /<outdir>/terraform_files/<region>/modules/compute/instance/main.tf):
```hcl
lifecycle {
  ignore_changes = [
    create_vnic_details[0].defined_tags["Oracle-Tags.CreatedOn"],
    create_vnic_details[0].defined_tags["Oracle-Tags.CreatedBy"],
    metadata
  ]
}
```
If you are getting force replacements because of subnet_ocid, can you try commenting out 'depends_on' at line 9 of instance.tf (at /<outdir>/terraform_files/<region>/instance.tf)?
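For illustration, the suggested change would look roughly like this in instance.tf. This is a sketch only: the module name, source path, and argument names are assumptions based on the CD3-generated layout, and your generated file may differ.

```hcl
module "instances" {
  source = "./modules/compute/instance"

  # Commented out so the dependency on the subnet module no longer
  # forces a replacement of the already-imported instances:
  # depends_on = [module.subnets]

  # ... remaining generated arguments left unchanged ...
}
```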
After making this change at lines 9 and 17 in instance.tf, terraform plan produces this output: Plan: 9 to add, 26 to change, 9 to destroy.
All instances are now getting updated in-place, with changes shown in red (-) for agent_config and defined_tags, e.g. `~ agent_config {`
Also, other resources like oci_file_storage_mount_target, oci_dns_resolver_endpoint, and oci_database_db_system are still getting replaced due to subnet_id. oci_file_storage_export is getting updated due to export_set_id.
Also, the terraform workvm (which hosts the container) is getting replaced due to metadata.
Another strange issue I observed: if I run `terraform plan -target=oci_core_subnet.subnet` or `terraform plan -target=oci_core_instance.instance`, it shows: "No changes. Your infrastructure matches the configuration. Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed."
Then why does terraform plan without a specific target show discrepancies for these resources?
@xs2suruchi , can you please take a look at my above comment ?
So, the agent plugin changes: you can ignore them; terraform apply will not disable any plugins. The VNIC defined tags: yes, it will replace them with null. To avoid that, you can add a column called 'VNIC Defined Tags' in the Instances sheet of the Excel and re-export instances. It will export the VNIC tags as well and put them in the tfvars, which will remove this change from the terraform plan.
Thanks @xs2suruchi ,
Can you also respond to the other queries I posted above? I have added them below again: Also, other resources like oci_file_storage_mount_target, oci_dns_resolver_endpoint, and oci_database_db_system are still getting replaced due to subnet_id.
oci_file_storage_export is getting updated due to export_set_id
Also, the terraform workvm instance (which hosts the container) is getting replaced due to metadata.
Another strange issue I observed: if I run `terraform plan -target=oci_core_subnet.subnet` or `terraform plan -target=oci_core_instance.instance`, it shows: "No changes. Your infrastructure matches the configuration. Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed."
Then why does terraform plan without a specific target show discrepancies for these resources?
For replacements happening because of subnet_id, you will need to comment out depends_on = [module.subnets] in all root module files like fss.tf, dns.tf, etc., like you did for instance.tf.
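As a sketch of the same pattern in another root module file, e.g. fss.tf (module name and source path here are assumptions; check your generated file for the actual depends_on line):

```hcl
module "fss" {
  source = "./modules/storage/fss"

  # Commented out so the subnet module dependency does not force
  # replacement of the imported mount targets and exports:
  # depends_on = [module.subnets]

  # ... remaining generated arguments left unchanged ...
}
```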
The workVM replacement issue because of metadata should have been resolved by putting the ignore lifecycle in main.tf, as I showed in one of my previous comments.
Because of the module dependencies across all resources combined, you are getting the replacements in terraform plan. Doing it via multi outdir is generally easier, I guess. Thanks
Are you able to proceed, @PrafullaKGit?
Hi @xs2suruchi , adding the comments for subnet_id and ignoring metadata brought the add/destroy changes down to 0. There were still 24 updates for different resources, mainly due to tag values. Not sure what the solution for that is.
But then I added the column called 'VNIC Defined Tags' in the Instances sheet of the Excel and re-exported the entire tenancy to a multi outdir output destination. I will try running the imports after some time and let you know the outcome.
@xs2suruchi , I performed these steps -
Outcome: the terraform plan for most of the resources is fine, except the below:

- changes/updates are shown for network, database and dns due to defined_tags. Please advise on the solution.
- changes/updates are shown for compute due to plugins_config (you said we can ignore this, right?)
Good to know that you were able to proceed. Regarding "changes/updates are shown for compute due to plugins_config", you can choose any of the solutions below as per complexity:
Regarding "changes/updates are shown for network, database and dns due to defined_tags. Please advise on the solution": generally CD3 assigns the same tags to the dependent components for a particular sheet, e.g. the 'Defined Tags' column in the VCNs sheet will apply those tags to the VCN objects as well as to dependent objects like gateways. So if you are seeing such changes and you don't want those tags to be applied/removed, you might need to make some manual adjustments in the tfvars. I can suggest better based on the actual data.
Hi @xs2suruchi , please advise how to add ignore_changes for plugins_config in the instance main.tf.
We were expecting that the tool would read the currently assigned parameter values from each resource in OCI and sync the state accordingly. But if the tool is assigning values based on the values of parent objects, then that's not correct. Let us know what data you need in order to provide the solution.
Also, we now want to test the tool by deleting an instance from OCI and trying to recreate it using the cd3 tool. Can you tell me the process to do that?
The same way you added metadata:

```hcl
lifecycle {
  ignore_changes = [
    create_vnic_details[0].defined_tags["Oracle-Tags.CreatedOn"],
    create_vnic_details[0].defined_tags["Oracle-Tags.CreatedBy"],
    metadata,
    agent_config
  ]
}
```
Can you share the terraform plan for Defined tags changes?
If your Excel and terraform state are in sync with OCI, you just need to add/modify/remove the row in the Instances sheet of the CD3 Excel sheet and re-run setUpOCI. To remove a row you can also put it after
@xs2suruchi , below is the plan output for database, with IDs masked:
```
Terraform will perform the following actions:

  ~ resource "oci_database_db_system" "database_db_system" {
        id = "ocid1.dbsystem.oc1.uk-london-1.xyz"

      ~ db_home {
            id = "ocid1.dbhome.oc1.uk-london-1.xyz"
            # (8 unchanged attributes hidden)

          ~ database {
              ~ defined_tags = {
                  + "tag-oci.tag-key-cat" = "mvp"
                  ~ "tag-oci.tag-key-env" = "non-prod" -> "acct"
                  + "tag-oci.tag-key-reg" = "lhr"
                    # (3 unchanged elements hidden)
                }
                id = "ocid1.database.oc1.uk-london-1.xyz"
                # (10 unchanged attributes hidden)
                # (1 unchanged block hidden)
            }
        }
        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```
Does your dbsystem have these tags: "tag-oci.tag-key-cat" = "mvp", "tag-oci.tag-key-env" = "acct", "tag-oci.tag-key-reg" = "lhr"? If so, then yes, it looks like it is applying the same tags as the dbsystem to the db_home also. Generally the requirement is to have all resources properly tagged, and applying the same tags to child resources has been a usual requirement. In case you don't want that, you can modify the respective main.tf for that resource (e.g. modules/database/dbsystem-vm-bm/main.tf) and remove defined_tags and freeform_tags.
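For illustration only, removing the tag propagation might look roughly like this inside the module's main.tf. The exact resource layout and variable names are assumptions based on typical CD3-generated modules; check your generated file before editing.

```hcl
resource "oci_database_db_system" "database_db_system" {
  # ... other generated arguments left unchanged ...

  db_home {
    database {
      # Removed/commented so the dbsystem's tags are no longer
      # pushed down onto the database inside db_home
      # (variable names below are assumed):
      # defined_tags  = var.defined_tags
      # freeform_tags = var.freeform_tags
    }
  }
}
```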
Hi @xs2suruchi , the DB system has these tags:

- tag-oci.tag-key-cat: mvp
- tag-oci.tag-key-stack: db
- tag-oci.tag-key-reg: lhr
- tag-oci.tag-key-env: acct
Yeah, correct (similar to what I mentioned above).
```
Note: Objects have changed outside of Terraform
...
Terraform will perform the following actions:
...
```
We expected that running tf_import_commands_instances_nonGF.sh would sync the TF state with whatever config is currently present in OCI, and would not show that instance getting created. What went wrong?
@xs2suruchi , can you please look into the above issue and respond quickly? We are unable to progress.
Could you join this public [slack](https://oracle-devrel.github.io/cd3-automation-toolkit/latest/queries/) so it's easier to troubleshoot? It seems like you are importing the new instance into the old state file, which has details about the old instance.
Thanks @xs2suruchi , All issues are solved as of now. I could also create a new VM using the toolkit.
We are using the Oracle-delivered cd3toolkit to export the OCI configuration to an Excel file and Terraform scripts. We ran all the import shell scripts. But terraform plan shows many discrepancies, like: Plan: 54 to add, 10 to change, 54 to destroy.
Since we did not change any configuration in OCI outside Terraform before the export, we expected that the import shell scripts would sync up the state files, so terraform plan was not supposed to show any mismatch.