Open patpicos opened 4 years ago
Importing existing resources is not covered yet. That may explain the issue you are reporting. I would re-qualify this as an improvement to add to rover.
Exposing the state commands would also be useful for determining the list of resources in the state, and for generating the proper resource address when importing a resource.
It would be nice if rover actually exposed all terraform commands. I would certainly like to be able to import and export the state file for backup/restore purposes using rover, as I used to with the terraform CLI. This has been an issue in many cases where I wanted to back up state files before upgrading from one release of the launchpad to the next.
We have run into many occasions where having access to the taint command would have saved us a full destroy and apply when something went wrong on Azure. It often happens that a DSC deployment on a VM fails, and with rover it is impossible to issue a terraform taint.
@patpicos I ran into a strange feature of rover today. It might or might not help you, but it did help me with my taint issue. If you run the rover command as you usually would, e.g.:
rover -lz /tf/caf/landingzones/someblueprint -a apply
then go into the folder containing the code for that landing zone and run:
terraform state list
you will get the values from the state file cached for that LZ. Interestingly enough, if you then taint one of the resources, the change is actually written back to the Azure Storage backend. This must be possible thanks to some local caching in the devcontainer that properly points terraform at the backend storage. Unexpected but useful: I was able to taint my resource that way, and on the subsequent rover apply it was re-created as I needed.
For example:
[vscode@78c5751e69cd code]$ terraform taint time_offset.tomorrow
Acquiring state lock. This may take a few moments...
Resource instance time_offset.tomorrow has been marked as tainted.
Releasing state lock. This may take a few moments...
See how it is acquiring a lock on the state file in the cloud? I then confirmed the taint was written to the correct Azure state file by inspecting the state file in the storage account, and found it there.
This appears to be thanks to the env var TF_DATA_DIR.
So technically I could point TF_DATA_DIR at a different folder for each landing zone before executing the rover command, keeping an active local cache (including a locally cached state file) per landing zone rather than a shared one.
I have now implemented the local cache feature using a custom script that sets TF_DATA_DIR at runtime. The nice side effect is that I can go into any LZ and easily issue commands like terraform state pull > backupstate.file
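For reference, a minimal sketch of that idea. The lz_data_dir helper and the cache location under the home directory are my own illustration, not rover's actual script or defaults:

```shell
#!/usr/bin/env bash
# Sketch: one local terraform cache per landing zone via TF_DATA_DIR.
# Helper name and cache path are illustrative, not rover defaults.
lz_data_dir() {
  # key the cache on the landing-zone folder name
  echo "${HOME}/.terraform.cache/$(basename "$1")"
}

TF_DATA_DIR=$(lz_data_dir /tf/caf/landingzones/someblueprint)
export TF_DATA_DIR

# Then run rover as usual; terraform in that LZ reuses the same cache:
#   rover -lz /tf/caf/landingzones/someblueprint -a apply
#   cd /tf/caf/landingzones/someblueprint && terraform state pull > backupstate.file
echo "TF_DATA_DIR=${TF_DATA_DIR}"
```

Each landing zone gets its own .terraform data directory, so the cached backend configuration and state never collide between LZs.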
I have also implemented an automatic backup of the remote state file to the LZ cache before doing an apply. I have lost my state file many times when using rover and running into timeouts, lost connectivity, etc., and not having a backup of the state file has been a big issue. No more, with this.
Here is the short bash code to do this:
# Take a backup of the state file before applying, if the cache already exists
if [[ ${command} == "apply" ]]; then
  if [[ -d "${TF_DATA_DIR}" ]]; then
    date=$(date +%Y%m%d%H%M%S)
    current="${PWD}"
    cd code
    echo "Taking backup of state file"
    terraform state pull > "${TF_DATA_DIR}/terraform.state.${date}"
    cd "${current}"
  else
    echo "Cache does not yet exist, can't take a backup."
  fi
fi
It might be wise to update the launchpad to enable blob versioning (in preview).
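If the launchpad storage account is managed in terraform, versioning can be switched on via the azurerm provider's blob_properties block (requires a reasonably recent azurerm 2.x provider); a minimal sketch with placeholder names:

```hcl
resource "azurerm_storage_account" "launchpad" {
  name                     = "examplelaunchpadsa" # placeholder
  resource_group_name      = "example-rg"         # placeholder
  location                 = "eastus2"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  blob_properties {
    versioning_enabled = true
  }
}
```

Outside terraform, the same setting can be flipped with the Azure CLI: az storage account blob-service-properties update --enable-versioning true on the launchpad account.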
@patpicos Do you have a link for that preview? I found this: https://medium.com/@ripon.banik/terrform-state-and-versioning-in-azure-72cb92aa4f19
but I think you are referring to something else, perhaps?
EDIT:
Found it: https://docs.microsoft.com/en-us/azure/storage/blobs/versioning-overview?tabs=powershell
I tried using the versioning feature of the storage account and it does indeed track state file changes... but man, it is chatty. This is the result of simply running an apply on an already-deployed plan:
It literally created 4 interim versions. It is nice, but I am worried we will drown in versions given how often terraform/rover appear to touch the state file.
Yikes. Storage is cheap, so I would not be super worried. It would be nice if a lifecycle policy could be used to retain only the last X versions of a blob.
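As far as I know, Azure lifecycle management can only trim previous versions by age, not keep exactly X of them; a sketch of an age-based rule (the rule name and 30-day threshold are placeholders):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "trim-state-versions",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "version": {
            "delete": { "daysAfterCreationGreaterThan": 30 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ]
        }
      }
    }
  ]
}
```

Applied with something like az storage account management-policy create --account-name <sa> --resource-group <rg> --policy @policy.json. That would at least cap the version sprawl even if it cannot enforce a fixed count.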
I'm running into this problem using terraform via the Azure CLI. Is there a way to resume a deployment after the console has booted you due to inactivity?
Describe the bug I have been experimenting with deploying the CAF foundations and modifying some of the tfvars. I enabled the Security Center option and ran apply. The apply timed out. When I re-apply, it says the resource already exists and needs to be imported.
The rover command does not expose the import command. Also, the path is so deeply embedded that it becomes difficult to determine how to import the resource into the state. Please advise.