AlbertMoser opened this issue 2 years ago
It doesn't look like the Cloud Functions API directly exposes this. There might be an extra step needed here according to https://cloud.google.com/sql/docs/mysql/connect-functions#public-ip-default. It might be simpler with a private connection: https://cloud.google.com/sql/docs/mysql/connect-functions#private-ip
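For reference, the private-IP route that second link describes goes through a Serverless VPC Access connector, which the provider does expose on gen2 functions. A minimal sketch; all names and the CIDR are placeholders, and a real function resource would also need `build_config`:

```hcl
# Sketch of the private-IP route: send the function's egress through a
# Serverless VPC Access connector so it can reach the Cloud SQL instance's
# private IP. Names and CIDR are hypothetical placeholders.
resource "google_vpc_access_connector" "connector" {
  name          = "sql-connector"
  region        = "us-central1"
  network       = "default"
  ip_cidr_range = "10.8.0.0/28"
}

resource "google_cloudfunctions2_function" "fn" {
  name     = "my-function"
  location = "us-central1"

  service_config {
    vpc_connector                 = google_vpc_access_connector.connector.id
    vpc_connector_egress_settings = "PRIVATE_RANGES_ONLY"
  }
}
```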
@shuyama1 to investigate the required changes
AFAICT this page is the structure Terraform is sending to GCP to create the resource, and it doesn't have any references to Cloud SQL metadata, etc. I think this might just be a (very unfortunate) shortcoming of GCP's API.
I was able to get this to work with the following workaround:
```hcl
locals {
  cloud_function_service      = google_cloudfunctions2_function.default.service_config[0].service
  cloud_function_service_name = replace(local.cloud_function_service, "/projects\\/[-a-z0-9]+\\/locations\\/[-a-z0-9]+\\/services\\//", "")
}
```
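The `replace()` call strips the `projects/…/locations/…/services/` prefix from the fully-qualified service name, leaving just the bare Cloud Run service name. The same extraction, sketched in shell with a hypothetical service name:

```shell
# Fully-qualified Cloud Run service name, as returned by
# service_config[0].service (hypothetical project/region/service values)
full="projects/my-project/locations/us-central1/services/my-function-svc"

# Drop everything through the last "/" to keep only the service name
name="${full##*/}"
echo "$name"  # -> my-function-svc
```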
```hcl
resource "google_cloud_run_service" "default" {
  name     = local.cloud_function_service_name
  project  = var.project
  location = var.region

  template {
    metadata {
      annotations = {
        "run.googleapis.com/cloudsql-instances" = var.cloudsql_connection
      }
    }
  }

  lifecycle {
    ignore_changes = [
      id,
      status,
      timeouts,
      template[0].metadata[0].annotations["run.googleapis.com/client-name"],
      template[0].metadata[0].annotations["gcf-v2-enable-pubsub-retries"],
      template[0].metadata[0].annotations["run.googleapis.com/startup-cpu-boost"],
      template[0].metadata[0].annotations["autoscaling.knative.dev/minScale"],
      template[0].metadata[0].annotations["client.knative.dev/user-image"],
      template[0].metadata[0].annotations["cloudfunctions.googleapis.com/trigger-type"],
      template[0].metadata[0].annotations["run.googleapis.com/cpu-throttling"],
      template[0].metadata[0].annotations["run.googleapis.com/vpc-access-connector"],
      template[0].metadata[0].annotations["run.googleapis.com/vpc-access-egress"],
    ]
  }
}
```
Then you need to import the Cloud Run service that powers the function and bind it to the `google_cloud_run_service` resource:

```shell
terraform import google_cloud_run_service.default region/name-of-newly-created-function-cloud-run-service
```
@matsko Would this work for the first run? We cannot import the state of a Cloud Run instance that hasn't been created yet. Any workarounds for this?
@vyacheslav31 this worked for me:
```hcl
resource "null_resource" "bootstrap_cloudsql" {
  depends_on = [google_cloudfunctions2_function.placeholder]

  provisioner "local-exec" {
    command = <<-EOT
      PROJECT_ID="${var.project_id}"
      REGION="${google_cloudfunctions2_function.placeholder.location}"
      SERVICE_NAME="${google_cloudfunctions2_function.placeholder.name}"
      CLOUDSQL_CONNECTION_NAME="${data.google_sql_database_instance.db-shared.connection_name}"

      check_cloud_run() {
        gcloud run services list --platform managed --region "$1" --project "$2" --filter "metadata.name=$3" --format "value(metadata.name)" || true
      }

      check_cloud_sql() {
        gcloud sql instances list --project "$1" --filter "connectionName:$2" --format "value(connectionName)" || true
      }

      check_cloud_sql_added() {
        gcloud run services describe "$1" --platform managed --region "$2" --project "$3" --format "value(metadata.annotations['run.googleapis.com/cloudsql-instances'])" | grep -w "$4" || true
      }

      update_cloud_run() {
        gcloud run services update "$1" --platform managed --region "$2" --add-cloudsql-instances "$3" --project "$4"
      }

      SERVICE_EXISTS=$(check_cloud_run "$REGION" "$PROJECT_ID" "$SERVICE_NAME")
      if [ -z "$SERVICE_EXISTS" ]; then
        echo "Cloud Run service not found."
        exit 0
      fi

      CLOUDSQL_EXISTS=$(check_cloud_sql "$PROJECT_ID" "$CLOUDSQL_CONNECTION_NAME")
      if [ -z "$CLOUDSQL_EXISTS" ]; then
        echo "Cloud SQL instance not found."
        exit 0
      fi

      ALREADY_ADDED=$(check_cloud_sql_added "$SERVICE_NAME" "$REGION" "$PROJECT_ID" "$CLOUDSQL_CONNECTION_NAME")
      if [ -n "$ALREADY_ADDED" ]; then
        echo "Cloud SQL instance already added to the Cloud Run service."
        exit 0
      fi

      update_cloud_run "$SERVICE_NAME" "$REGION" "$CLOUDSQL_CONNECTION_NAME" "$PROJECT_ID"
    EOT
  }
}
```
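One detail worth noting in the script above: `check_cloud_sql_added` uses `grep -w` so that a connection name that is a prefix of another one does not count as already attached. A minimal illustration with hypothetical connection names:

```shell
# Hypothetical connection names where "db-a" is a prefix of "db-ab"
existing="my-proj:us-central1:db-ab"
wanted="my-proj:us-central1:db-a"

# Plain grep would wrongly report db-a as already attached:
echo "$existing" | grep "$wanted" >/dev/null && echo "plain grep: false match"

# grep -w requires whole-word boundaries, so the prefix does not count:
echo "$existing" | grep -w "$wanted" >/dev/null || echo "grep -w: no match"
```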
Source: https://www.reddit.com/r/Terraform/comments/12hszd5/comment/jg4e867
> @vyacheslav31 this worked for me:
@manudawber That is my solution, I am the same person from reddit lol. Thanks for posting. 👍
Community Note
Description
When deploying gen2 cloudfunctions which have to access a SQL database, the database connection needs to be manually added after the deployment. This makes the whole process rather tedious. It seems like the underlying functionality already exists in cloudrun according to this example. However, looking at this issue it currently doesn't work for cloudfunctions2. For our setup, this is crucial since we sometimes deploy all of our services in a different region and adding the SQL connections manually would make it error prone.
New or Affected Resource(s)
Potential Terraform Configuration
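What would make the workarounds above unnecessary is first-class support on the function resource itself. A hypothetical shape — the `cloudsql_instances` argument below does not exist in the provider today and only illustrates the ask:

```hcl
# Hypothetical syntax: the provider does not currently expose this on
# google_cloudfunctions2_function; it mirrors what Cloud Run already allows.
resource "google_cloudfunctions2_function" "example" {
  name     = "example-function"
  location = "us-central1"

  service_config {
    # Desired: attach a Cloud SQL instance directly at deploy time
    cloudsql_instances = ["my-project:us-central1:my-instance"]
  }
}
```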
References