bleything closed this issue 2 years ago
(Moving this conversation out of an unrelated thread.) @subfuzion said:
I realize this is a WIP, but ping as soon as you feel you have a working baseline. Given that list of things you enumerated in the PR description, I'd much prefer merging incremental PRs instead of one big monolithic one. Up to you, though.
I think in this case it makes sense to keep everything together. Let's keep it all here.
See #10 and the external PR for a single-container deployment version of mean-stack-example.
Running demo of single container deployment running on Cloud Run: https://mean-stack-example-tagz5nlvpa-uc.a.run.app/
This requires the source code changes in my pull request: https://github.com/mongodb-developer/mean-stack-example/pull/2
I've built @subfuzion's container and published it at `us-central1-docker.pkg.dev/next22-mean-stack-demo/demo-app/server`. The PR has also been updated to deploy the single-container version using that image. I still need to make some readme updates, but the actual code is ready for testing.
Just ran through this.
Couple of thoughts:
Is there a reason not to have a stubbed-out tfvars file in the repo (vs. having them create it)? That would reduce the risk of someone mistyping the keys on those key-value pairs.
Also, we might want to tell folks how to find their Google billing ID (https://console.cloud.google.com/billing).
Finally, could we have them put the billing ID in the vars file rather than being prompted for it (since they are already editing the file)?
@mikegcoleman
Is there a reason not to have a stubbed-out tfvars file in the repo (vs. having them create it)? That would reduce the risk of someone mistyping the keys on those key-value pairs.
The main reason is that having that file in place in the repo dramatically increases the risk of accidentally committing your creds. We could do something like a `terraform.tfvars.example` file they can copy and edit, but at some point it's not that different from copy/pasting from the readme.
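For reference, a stubbed-out example file along those lines might look like the sketch below. The variable names here are illustrative only; the real ones would need to match the repo's `variables.tf`.

```hcl
# terraform.tfvars.example -- copy to terraform.tfvars and fill in.
# NOTE: variable names are hypothetical; check variables.tf for the real ones.

# MongoDB Atlas programmatic API key and organization ID.
atlas_pub_key  = "your-atlas-public-key"
atlas_priv_key = "your-atlas-private-key"
atlas_org_id   = "your-atlas-org-id"

# Google Cloud billing account ID, in the form XXXXXX-XXXXXX-XXXXXX.
# Find it at https://console.cloud.google.com/billing
# or with: gcloud billing accounts list
billing_account = "XXXXXX-XXXXXX-XXXXXX"
```

Keeping the `.example` suffix (and gitignoring `terraform.tfvars` itself) addresses the accidental-commit concern while still giving people something to copy.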
Also, Might want to tell folks how to find their google billing ID.
and
Finally, could we have them put the billing ID in the vars file rather than being prompted for it (since they are already editing the file)?
yep, I mentioned this in comments above; the readme hasn't been updated since I added the billing account stuff. I'll address both of these points.
After successfully running `terraform destroy`, I attempted to re-run `terraform apply` in the same directory with the same `terraform.tfvars`.
```
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/google versions matching "~> 4.36"...
- Finding mongodb/mongodbatlas versions matching "~> 1.4.5"...
- Finding latest version of hashicorp/random...
- Installing hashicorp/google v4.37.0...
- Installed hashicorp/google v4.37.0 (signed by HashiCorp)
- Installing mongodb/mongodbatlas v1.4.6...
- Installed mongodb/mongodbatlas v1.4.6 (signed by a HashiCorp partner, key ID 2A32ED1F3AD25ABF)
- Installing hashicorp/random v3.4.3...
- Installed hashicorp/random v3.4.3 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

tony:~/projects/google/GoogleCloudPlatform/terraform-mean-cloudrun-mongodb.git (mvp)
$ terraform apply

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_cloud_run_service.app will be created
  + resource "google_cloud_run_service" "app" {
      + autogenerate_revision_name = false
      + id                         = (known after apply)
      + location                   = "us-central1"
      + name                       = "demo"
      + project                    = (known after apply)
      + status                     = (known after apply)

      + metadata {
          + annotations      = (known after apply)
          + generation       = (known after apply)
          + labels           = (known after apply)
          + namespace        = (known after apply)
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }

      + template {
          + metadata {
              + annotations      = (known after apply)
              + generation       = (known after apply)
              + labels           = (known after apply)
              + name             = (known after apply)
              + namespace        = (known after apply)
              + resource_version = (known after apply)
              + self_link        = (known after apply)
              + uid              = (known after apply)
            }

          + spec {
              + container_concurrency = (known after apply)
              + service_account_name  = (known after apply)
              + serving_state         = (known after apply)
              + timeout_seconds       = (known after apply)

              + containers {
                  + image = "us-central1-docker.pkg.dev/next22-mean-stack-demo/demo-app/server:latest"

                  + env {
                      + name  = "ATLAS_URI"
                      + value = (known after apply)
                    }

                  + ports {
                      + container_port = (known after apply)
                      + name           = (known after apply)
                      + protocol       = (known after apply)
                    }

                  + resources {
                      + limits   = (known after apply)
                      + requests = (known after apply)
                    }
                }
            }
        }

      + traffic {
          + latest_revision = (known after apply)
          + percent         = (known after apply)
          + revision_name   = (known after apply)
          + tag             = (known after apply)
          + url             = (known after apply)
        }
    }

  # google_cloud_run_service_iam_binding.app will be created
  + resource "google_cloud_run_service_iam_binding" "app" {
      + etag     = (known after apply)
      + id       = (known after apply)
      + location = "us-central1"
      + members  = [
          + "allUsers",
        ]
      + project  = (known after apply)
      + role     = "roles/run.invoker"
      + service  = "demo"
    }

  # google_project.prj will be created
  + resource "google_project" "prj" {
      + auto_create_network = true
      + billing_account     = "017FC3-D561B3-68DD84"
      + id                  = (known after apply)
      + name                = (known after apply)
      + number              = (known after apply)
      + project_id          = (known after apply)
      + skip_delete         = (known after apply)
    }

  # google_project_service.svc["run"] will be created
  + resource "google_project_service" "svc" {
      + disable_on_destroy = true
      + id                 = (known after apply)
      + project            = (known after apply)
      + service            = "run.googleapis.com"
    }

  # mongodbatlas_cluster.cluster will be created
  + resource "mongodbatlas_cluster" "cluster" {
      + auto_scaling_compute_enabled                    = (known after apply)
      + auto_scaling_compute_scale_down_enabled         = (known after apply)
      + auto_scaling_disk_gb_enabled                    = true
      + backing_provider_name                           = "GCP"
      + backup_enabled                                  = false
      + cloud_backup                                    = false
      + cluster_id                                      = (known after apply)
      + cluster_type                                    = (known after apply)
      + connection_strings                              = (known after apply)
      + container_id                                    = (known after apply)
      + disk_size_gb                                    = (known after apply)
      + encryption_at_rest_provider                     = (known after apply)
      + id                                              = (known after apply)
      + mongo_db_major_version                          = (known after apply)
      + mongo_db_version                                = (known after apply)
      + mongo_uri                                       = (known after apply)
      + mongo_uri_updated                               = (known after apply)
      + mongo_uri_with_options                          = (known after apply)
      + name                                            = (known after apply)
      + num_shards                                      = 1
      + paused                                          = false
      + pit_enabled                                     = (known after apply)
      + project_id                                      = (known after apply)
      + provider_auto_scaling_compute_max_instance_size = (known after apply)
      + provider_auto_scaling_compute_min_instance_size = (known after apply)
      + provider_backup_enabled                         = false
      + provider_disk_iops                              = (known after apply)
      + provider_disk_type_name                         = (known after apply)
      + provider_encrypt_ebs_volume                     = (known after apply)
      + provider_encrypt_ebs_volume_flag                = (known after apply)
      + provider_instance_size_name                     = "M0"
      + provider_name                                   = "TENANT"
      + provider_region_name                            = "CENTRAL_US"
      + provider_volume_type                            = (known after apply)
      + replication_factor                              = (known after apply)
      + snapshot_backup_policy                          = (known after apply)
      + srv_address                                     = (known after apply)
      + state_name                                      = (known after apply)
      + version_release_system                          = "LTS"

      + advanced_configuration {
          + default_read_concern                 = (known after apply)
          + default_write_concern                = (known after apply)
          + fail_index_key_too_long              = (known after apply)
          + javascript_enabled                   = (known after apply)
          + minimum_enabled_tls_protocol         = (known after apply)
          + no_table_scan                        = (known after apply)
          + oplog_size_mb                        = (known after apply)
          + sample_refresh_interval_bi_connector = (known after apply)
          + sample_size_bi_connector             = (known after apply)
        }

      + bi_connector_config {
          + enabled         = (known after apply)
          + read_preference = (known after apply)
        }

      + labels {
          + key   = (known after apply)
          + value = (known after apply)
        }

      + replication_specs {
          + id         = (known after apply)
          + num_shards = (known after apply)
          + zone_name  = (known after apply)

          + regions_config {
              + analytics_nodes = (known after apply)
              + electable_nodes = (known after apply)
              + priority        = (known after apply)
              + read_only_nodes = (known after apply)
              + region_name     = (known after apply)
            }
        }
    }

  # mongodbatlas_database_user.user will be created
  + resource "mongodbatlas_database_user" "user" {
      + auth_database_name = "admin"
      + aws_iam_type       = "NONE"
      + id                 = (known after apply)
      + ldap_auth_type     = "NONE"
      + password           = (sensitive value)
      + project_id         = (known after apply)
      + username           = "mongo"
      + x509_type          = "NONE"

      + labels {
          + key   = (known after apply)
          + value = (known after apply)
        }

      + roles {
          + collection_name = (known after apply)
          + database_name   = "meanStackExample"
          + role_name       = "readWrite"
        }
    }

  # mongodbatlas_project.demo will be created
  + resource "mongodbatlas_project" "demo" {
      + cluster_count                                    = (known after apply)
      + created                                          = (known after apply)
      + id                                               = (known after apply)
      + is_collect_database_specifics_statistics_enabled = (known after apply)
      + is_data_explorer_enabled                         = (known after apply)
      + is_performance_advisor_enabled                   = (known after apply)
      + is_realtime_performance_panel_enabled            = (known after apply)
      + is_schema_advisor_enabled                        = (known after apply)
      + name                                             = (known after apply)
      + org_id                                           = "62757186a4a88a598211c228"
      + with_default_alerts_settings                     = true

      + api_keys {
          + api_key_id = (known after apply)
          + role_names = (known after apply)
        }
    }

  # mongodbatlas_project_ip_access_list.acl will be created
  + resource "mongodbatlas_project_ip_access_list" "acl" {
      + aws_security_group = (known after apply)
      + cidr_block         = "0.0.0.0/0"
      + comment            = (known after apply)
      + id                 = (known after apply)
      + ip_address         = (known after apply)
      + project_id         = (known after apply)
    }

  # random_string.mongodb_password will be created
  + resource "random_string" "mongodb_password" {
      + id          = (known after apply)
      + length      = 32
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = true
    }

  # random_string.suffix will be created
  + resource "random_string" "suffix" {
      + id          = (known after apply)
      + length      = 8
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

Plan: 10 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + app_url = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

random_string.mongodb_password: Creating...
random_string.suffix: Creating...
random_string.suffix: Creation complete after 0s [id=xfdfumo4]
random_string.mongodb_password: Creation complete after 0s [id=BE6JrN4rmDAy6gUoUltOZSLPYJg5zKw1]
mongodbatlas_project.demo: Creating...
google_project_service.svc["run"]: Creating...
google_project.prj: Creating...
mongodbatlas_project.demo: Creation complete after 3s [id=632b7e775c44a4100f1fe97c]
mongodbatlas_project_ip_access_list.acl: Creating...
mongodbatlas_database_user.user: Creating...
mongodbatlas_cluster.cluster: Creating...
mongodbatlas_database_user.user: Creation complete after 1s [id=YXV0aF9kYXRhYmFzZV9uYW1l:YWRtaW4=-cHJvamVjdF9pZA==:NjMyYjdlNzc1YzQ0YTQxMDBmMWZlOTdj-dXNlcm5hbWU=:bW9uZ28=]
mongodbatlas_project_ip_access_list.acl: Creation complete after 5s [id=ZW50cnk=:MC4wLjAuMC8w-cHJvamVjdF9pZA==:NjMyYjdlNzc1YzQ0YTQxMDBmMWZlOTdj]
google_project.prj: Still creating... [10s elapsed]
mongodbatlas_cluster.cluster: Still creating... [10s elapsed]
google_project.prj: Still creating... [20s elapsed]
mongodbatlas_cluster.cluster: Still creating... [20s elapsed]
google_project.prj: Creation complete after 25s [id=projects/gcp-meanstack-demo-xfdfumo4]
mongodbatlas_cluster.cluster: Still creating... [30s elapsed]
mongodbatlas_cluster.cluster: Still creating... [40s elapsed]
mongodbatlas_cluster.cluster: Still creating... [50s elapsed]
mongodbatlas_cluster.cluster: Still creating... [1m0s elapsed]
mongodbatlas_cluster.cluster: Still creating... [1m10s elapsed]
mongodbatlas_cluster.cluster: Still creating... [1m20s elapsed]
mongodbatlas_cluster.cluster: Still creating... [1m30s elapsed]
mongodbatlas_cluster.cluster: Still creating... [1m40s elapsed]
mongodbatlas_cluster.cluster: Still creating... [1m50s elapsed]
mongodbatlas_cluster.cluster: Still creating... [2m0s elapsed]
mongodbatlas_cluster.cluster: Still creating... [2m10s elapsed]
mongodbatlas_cluster.cluster: Still creating... [2m20s elapsed]
mongodbatlas_cluster.cluster: Still creating... [2m30s elapsed]
mongodbatlas_cluster.cluster: Still creating... [2m40s elapsed]
mongodbatlas_cluster.cluster: Still creating... [2m50s elapsed]
mongodbatlas_cluster.cluster: Still creating... [3m0s elapsed]
mongodbatlas_cluster.cluster: Creation complete after 3m2s [id=Y2x1c3Rlcl9pZA==:NjMyYjdlN2E1NmY1YTE0MTFhZjE0MjIz-Y2x1c3Rlcl9uYW1l:Z2NwLW1lYW5zdGFjay1kZW1vLXhmZGZ1bW80-cHJvamVjdF9pZA==:NjMyYjdlNzc1YzQ0YTQxMDBmMWZlOTdj-cHJvdmlkZXJfbmFtZQ==:VEVOQU5U]
google_cloud_run_service.app: Creating...
╷
│ Error: Error when reading or editing Project Service : Request `List Project Services gcp-meanstack-demo-xfdfumo4` returned error: Failed to list enabled services for project gcp-meanstack-demo-xfdfumo4: googleapi: Error 403: Project 'gcp-meanstack-demo-xfdfumo4' not found or permission denied.
│ Help Token: AWzfkCMw8HQjroLfRvVgpveaESNvZeAkgtAVT6XIJmESi8mYctCIKd7VuiME6gpdo1Ql-X_Rw94bpK290MhDPP1N4fKr_iZjxBGlMd5FAZ6hENGh
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.PreconditionFailure",
│     "violations": [
│       {
│         "subject": "?error_code=210002\u0026type=Project\u0026resource_id=gcp-meanstack-demo-xfdfumo4",
│         "type": "googleapis.com"
│       }
│     ]
│   },
│   {
│     "@type": "type.googleapis.com/google.rpc.ErrorInfo",
│     "domain": "serviceusage.googleapis.com",
│     "metadata": {
│       "resource_id": "gcp-meanstack-demo-xfdfumo4",
│       "type": "Project"
│     },
│     "reason": "RESOURCES_NOT_FOUND"
│   }
│ ]
│ , forbidden
│
│   with google_project_service.svc["run"],
│   on google.tf line 31, in resource "google_project_service" "svc":
│   31: resource "google_project_service" "svc" {
│
╵
╷
│ Error: Error creating Service: googleapi: Error 403: Cloud Run Admin API has not been used in project 59153604986 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/run.googleapis.com/overview?project=59153604986 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.Help",
│     "links": [
│       {
│         "description": "Google developers console API activation",
│         "url": "https://console.developers.google.com/apis/api/run.googleapis.com/overview?project=59153604986"
│       }
│     ]
│   },
│   {
│     "@type": "type.googleapis.com/google.rpc.ErrorInfo",
│     "domain": "googleapis.com",
│     "metadata": {
│       "consumer": "projects/59153604986",
│       "service": "run.googleapis.com"
│     },
│     "reason": "SERVICE_DISABLED"
│   }
│ ]
│
│   with google_cloud_run_service.app,
│   on google.tf line 40, in resource "google_cloud_run_service" "app":
│   40: resource "google_cloud_run_service" "app" {
│
╵
```
Is this expected? Should anything be cleaned up from the previous state, or should we tell users it's best to start from a fresh clone?
Using the same clone after `terraform destroy`, I tried the following and it still failed (same error about the Cloud Run Admin API):

```
git clean -fX
terraform init
terraform apply
```
Will try again from a fresh clone.
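One aside that may or may not be the cause: per the git-clean docs, `-X` removes only ignored files, and it appears that `-d` is also needed to remove ignored *directories* such as `.terraform/`, so `git clean -fX` alone may leave local provider/state files behind. A quick sketch in a throwaway repo (all paths here are hypothetical, not this repo's actual .gitignore):

```shell
# Demonstrate git clean's -d/-X interaction in a scratch repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# Pretend .gitignore covers the usual Terraform local files.
printf '.terraform/\nterraform.tfstate*\n' > .gitignore
mkdir .terraform
touch .terraform/providers.lock terraform.tfstate main.tf

# -f force, -d include directories, -X remove only ignored paths.
git clean -fdX

ls -la   # .terraform/ and terraform.tfstate are gone; main.tf remains
```

If `terraform.tfstate` isn't gitignored in the first place, `git clean -X` won't touch the old state either way, which would be another reason a reused clone behaves differently from a fresh one.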
That's very strange. Did it successfully create a project before failing? Does the project ID from the error output match the ID of the newly created project? Does that project have a billing account assigned if you check in the console or with gcloud?
Using the same `terraform.tfvars` succeeded in a fresh clone.
Huh, that's interesting. It's possible it was transient, or it's another graph-traversal thing, but it's hard for me to imagine what could have gone wrong unless the destroy was interrupted. Give it another shot, and if it fails again, ping me so we can debug.
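If it does turn out to be a graph-traversal thing, the usual suspect is the Cloud Run service being created before the `run.googleapis.com` enablement has finished propagating. A hedged sketch of the explicit ordering that would rule that out (resource names taken from the plan output above; the `for_each`/`project` wiring is assumed, and the real config lives in `google.tf`):

```hcl
resource "google_project_service" "svc" {
  for_each           = toset(["run"])
  project            = google_project.prj.project_id
  service            = "${each.key}.googleapis.com"
  disable_on_destroy = true
}

resource "google_cloud_run_service" "app" {
  name     = "demo"
  location = "us-central1"
  # ...container spec elided...

  # Make the ordering explicit so Terraform won't try to create the
  # service before the Cloud Run Admin API has been enabled.
  depends_on = [google_project_service.svc]
}
```

Even with `depends_on`, newly enabled APIs can take a little while to propagate, which matches the "wait a few minutes ... and retry" wording in the 403 above.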
That's very strange. Did it successfully create a project before failing? Does the project ID from the error output match the ID of the newly created project? Does that project have a billing account assigned if you check in the console or with gcloud?
I updated the detail in my comment to show the full output before the error.
Yeah, wow, I don't understand what happened there. Maybe @villasenor or @mikegcoleman might have an idea?
If it happens again, let's hop on a call so we can look at it together.
Just a weird anomaly, I guess? I just successfully ran `terraform destroy` followed by another `terraform apply` twice out of the same clone, and it ran fine.
After running `terraform destroy` and seeing the following:
Even after waiting for a while, I can see all the "destroyed" projects still listed in the console and can navigate to them.
I know that projects are scheduled for deletion after 30 days, but after shutting one down, access to the project should be lost immediately. Is this not the case?
I think that's an artifact of how the console works. Several of my old projects are listed in there as well, but I can also see them on the pending deletion page. Check to see if they're listed there, and if so I think everything is working as expected.
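For what it's worth, the same check works from the CLI: a project that `terraform destroy` shut down should report a `DELETE_REQUESTED` lifecycle state even while the console still lists it (project ID below is the one from the run above; requires an authenticated gcloud):

```
# List projects pending deletion.
gcloud projects list --filter='lifecycleState:DELETE_REQUESTED'

# Or inspect a single project's lifecycle state directly.
gcloud projects describe gcp-meanstack-demo-xfdfumo4 --format='value(lifecycleState)'
```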
Yes, you're right. I should have checked that. This is what threw me off, however, because I still had access to the project (guess losing access is not quite immediate). All good though, thanks for confirming!
It's super confusing. I'll add a note to the readme mentioning that you might see the project stick around for a while, and giving a link to the pending deletion page for confirmation. Good find!
This PR will contain the extremely limited MVP we discussed using as a starting point. Opening the PR now so we have a place to discuss while I continue work. We can consider this exercise successful when a user can:
- `terraform apply`
- `terraform destroy` to clean up

Things it'll do:

Things it won't do: