Closed · irvingpop closed this 5 months ago
So, I believe the issue we're going to have here is: if you save state and run the starter pack a second time, the columns created on the first run will now already exist, so they'll be dropped from the set of columns to create. They'll be gone from the config but still in state, and therefore the second apply will delete the previously created columns. I'm not sure how to resolve that.
Yup @tdarwin - we're 99% counting on people to never run TF a second time. Jason suggested that there's an undocumented `inmem` backend for TF that will throw away all state between runs.
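For the record, a throwaway-state run with that backend would presumably look something like this (a sketch only -- `inmem` isn't in the public backend docs, so the empty config block is an assumption):

```hcl
# Sketch: keep state only in memory so nothing persists between runs.
# The "inmem" backend is undocumented; an empty config is assumed here.
terraform {
  backend "inmem" {}
}
```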
I'd actually like to use the new data source in a different way than creating columns that don't exist, since we can't make that stateful. I believe I have a way to use it that will be stateful: rather than creating columns with 0 data, we'll modify/ignore queries that use missing columns, so that once the columns do have data, running the TF again will update the queries and boards to use those new columns as they arrive.
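A rough sketch of what I mean, assuming the pending data source exposes a `names` attribute (that attribute name is a guess): intersect a query's desired columns with the ones that actually exist, so queries only reference real columns and a later run picks up new ones as they appear.

```hcl
# Sketch (assumed names throughout): keep only the breakdown columns
# that already exist in the dataset; missing ones are dropped from the
# query until a later run finds them present.
locals {
  desired_breakdowns = ["status_code", "duration_ms", "not_yet_present"]

  usable_breakdowns = setintersection(
    toset(local.desired_breakdowns),
    toset(data.honeycombio_columns.existing.names),
  )
}
```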
We can chat more during/after standup or later if you like.
Okay. My plan didn't work. Reopening this PR.
@irvingpop Can we use the `prevent_destroy` lifecycle argument? And then we can tell people to do a `terraform refresh` if they see that error? Though I don't know whether `terraform refresh` will remove it from state... needs to be tested.
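For reference, the lifecycle block in question would look like this (a sketch; the resource address is illustrative, not taken from this repo):

```hcl
resource "honeycombio_column" "required" {
  # ... column arguments ...

  # Makes `terraform apply` error out instead of deleting the column
  # when it drops out of the plan.
  lifecycle {
    prevent_destroy = true
  }
}
```

(Worth noting while testing: `terraform state rm <address>` is the documented way to drop a resource from state without destroying it.)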
Short description of the changes
This PR utilizes the new `honeycombio_columns` data source (note: its PR is still pending) to determine the list of already-existing columns, and create only the ones that are needed. A little bit of "Terraform programming" with `setsubtract()` to remove the list of existing columns from the list of required columns, and badabing! As a tiny debugging helper, I added a
`columns_created` output, which helps you see which columns were created.

How to verify that this has the expected result
Create a dataset that has some or all of the required_columns already created. Run `terraform plan` on this PR, and you should see no plan to create any already-existing columns.
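For reference, the `setsubtract()` pattern described above can be sketched roughly like this (the attributes of the still-pending `honeycombio_columns` data source, and the `honeycombio_column` resource arguments, are assumptions):

```hcl
variable "dataset" {
  type = string
}

variable "required_columns" {
  type    = set(string)
  default = ["duration_ms", "status_code", "error"]
}

# Pending data source: columns that already exist (assumed `names` attribute).
data "honeycombio_columns" "existing" {
  dataset = var.dataset
}

locals {
  # required minus existing = the columns we still need to create
  missing_columns = setsubtract(
    var.required_columns,
    toset(data.honeycombio_columns.existing.names),
  )
}

resource "honeycombio_column" "required" {
  for_each = local.missing_columns

  dataset  = var.dataset
  key_name = each.key
}

# Tiny debugging helper: which columns this run will create
output "columns_created" {
  value = local.missing_columns
}
```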