LouisIV opened 2 years ago
Came here looking for similar issues, but it's not a bug. I think the solution is to set the instance count of both the service and the job to zero, then re-apply with 1.
Edit: Never mind, the instance count can never be zero. Chicken-and-egg problem...
Part of me thought I might be able to do something with a local-exec provisioner. If I had the app ID, I could send a GraphQL request to the API along the lines of:
{
  "operationName": "DBaaSSetFirewallRules",
  "variables": {
    "payload": {
      "cluster_name": "mysql",
      "cluster_uuid": "--------",
      "rules": [
        {
          "name": "agnew",
          "value": "-----------",
          "type": "APP"
        }
      ]
    }
  },
  "query": "mutation DBaaSSetFirewallRules($payload: FirewallRulesRequest) {\n DBaaSSetFirewallRules(FirewallRulesRequest: $payload)\n}\n"
}
But I can't get the ID until the app deploys. I'm not sure this would even work from a local-exec provisioner, though.
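For what it's worth, a minimal sketch of that idea wired into a local-exec provisioner might look like the following. The endpoint and auth header are assumptions, since this GraphQL API isn't publicly documented, and it still doesn't solve the ordering problem: digitalocean_app.foo.id is only known after a successful deployment.

resource "null_resource" "set_db_firewall" {
  # Re-runs whenever the app ID changes; the ID only exists after the
  # app's first deployment succeeds, which is the chicken-and-egg again
  triggers = {
    app_id = digitalocean_app.foo.id
  }

  provisioner "local-exec" {
    # GRAPHQL_ENDPOINT and DO_API_TOKEN are placeholders; the payload
    # file would contain the request body shown above with the real
    # cluster UUID and app ID filled in
    command = <<-EOT
      curl -sS -X POST "$GRAPHQL_ENDPOINT" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $DO_API_TOKEN" \
        --data @firewall-rules.json
    EOT
  }
}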
I really don't like this, but I was able to get something working (still running) with dynamic blocks. Basically, I deployed the app with nothing that would need access to the database (you could use an empty nginx container for this, for example):
resource "digitalocean_app" "foo" {
...
dynamic "job" {
for_each = toset(digitalocean_database_cluster.mysql != null ? ["fake"] : [])
content {
...
}
...
}
resource "digitalocean_database_firewall" "foo-fw" {
cluster_id = digitalocean_database_cluster.mysql.id
rule {
type = "app"
value = digitalocean_app.foo.id
}
}
In theory this deploys the database, then the app, and then sets up the firewall. You could then come back with a second apply and set up the real workload, since the database will exist by then. This is not an acceptable workaround, though, since it breaks if anything fails after the database has been created.
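If an apply does fail partway through, resource targeting may help recover without tearing the database down; a sketch, assuming the app got created but the firewall rule didn't:

# Create just the firewall rule against the already-created app
terraform apply -target=digitalocean_database_firewall.foo-fw

# Then reconcile everything else with a full apply
terraform apply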
Not requiring an active deployment for an app import would go a long way here.
I'd go further: the functionality is useless if one cannot import an app without a deployment... As long as the app spec exists, so should the tfstate corresponding to the app resource.
Thanks for flagging this problem for us. I think it brings up some larger questions about how/if Terraform should be handling the state of deployments. The PR at https://github.com/digitalocean/terraform-provider-digitalocean/pull/843 doesn't completely address this issue, but it will allow importing existing apps without an active deployment.
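Once that lands, bringing an existing app under management should look like the usual import flow; a sketch, assuming the app's UUID is looked up with doctl:

# Find the app's UUID
doctl apps list

# Import it without requiring an active deployment
terraform import digitalocean_app.foo <app-uuid>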
Coming from other providers, it does not make sense to require a successful deployment before the app resource counts as created. Once the app is created, the digitalocean provider can return the app ID, and further configuration can take place that will allow the deploy to pass. Terraform is not for deploying code; it is for setting up infrastructure.
Changing this would also make terraform apply much quicker, as it would not need to wait for the entire deploy to pass.
This has not only cost me a lot of time, it can also take the application offline: once the deployment fails, the app is tainted and must be destroyed and recreated, and the application is unavailable in between. The whole process is much worse than with other providers, such as AWS ECS.
Can it be changed to ignore the deployment state?
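In the meantime, one way to avoid the destroy/recreate cycle after a failed deployment may be to clear the taint manually (a sketch; it assumes the underlying app is still usable):

# Tell Terraform not to replace the app on the next apply;
# it will attempt an in-place update instead
terraform untaint digitalocean_app.foo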
Bug Report
Currently I have a configuration like the sketch below. I have a worker that connects to the database and runs migrations before the app deploys. The issue is that, as far as I can tell, the app deployment needs to pass before I can get the app ID. The migration container can't pass unless it can connect to the database, which it can't do because it isn't trusted by the database firewall.
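A minimal sketch of that shape (resource names, the engine, and the migration job's image are illustrative placeholders, not the real config):

resource "digitalocean_database_cluster" "mysql" {
  name       = "mysql"
  engine     = "mysql"
  version    = "8"
  size       = "db-s-1vcpu-1gb"
  region     = "nyc1"
  node_count = 1
}

resource "digitalocean_app" "foo" {
  spec {
    name   = "foo"
    region = "nyc"

    # Migration worker: runs before each deployment and has to be able
    # to reach the database cluster
    job {
      name = "migrate"
      kind = "PRE_DEPLOY"

      image {
        registry_type = "DOCKER_HUB"
        registry      = "myorg"      # placeholder image
        repository    = "migrations"
        tag           = "latest"
      }
    }
  }
}

# The cycle: this rule needs digitalocean_app.foo.id, which only exists
# after a successful deployment, but the deployment can't succeed until
# this rule lets the migration job reach the database
resource "digitalocean_database_firewall" "foo-fw" {
  cluster_id = digitalocean_database_cluster.mysql.id

  rule {
    type  = "app"
    value = digitalocean_app.foo.id
  }
}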
Is there any workaround for this?