Problem
Currently, the Helm chart exposes various customization options in values.yaml for the deployment resource, such as communicating with the MySQL database via a Cloud SQL proxy sidecar container.
However, no equivalent options exist for the migration job resource, even though the deployment and the job need to reach the database through the same mechanisms.
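For context, the deployment-side options look roughly like this; the exact key names below are assumptions about the chart's values.yaml and may differ slightly:

```yaml
# Illustrative values.yaml excerpt -- key paths and the instance name are
# assumptions, shown only to indicate the kind of options that exist today.
gke:
  cloudSQL:
    enableProxy: true                           # injects the proxy sidecar (deployment only, today)
    imageRepository: gcr.io/cloudsql-docker/gce-proxy
    imageTag: "1.17"
    instanceName: my-project:us-central1:fleet  # hypothetical instance connection name
```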
What have you tried?
We are deploying Fleet on GCP. We pull in the Helm chart and deploy it with customizations suitable for our environment. The release ultimately fails because it gets stuck on the migration job: the job times out, and the failed pod's log shows:

```
Failed to start: creating db connection: dial tcp 127.0.0.1:3306: connect: connection refused
```
Inspecting the job's pod manifest shows that there is no proxy sidecar container, even though `enableProxy` in values.yaml is true and the sidecar container is defined in the deployment's manifest.
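That explains the connection refusal: the migration expects the proxy to be listening on 127.0.0.1:3306, but the job's pod never gets the sidecar. Simplified renderings of the two pod specs (container names are illustrative, not the chart's exact names):

```yaml
# Deployment pod spec (simplified): two containers, proxy present
spec:
  containers:
    - name: fleet
    - name: cloudsql-proxy   # listens on 127.0.0.1:3306
---
# Migration job pod spec (simplified): no sidecar, nothing listens on 3306
spec:
  containers:
    - name: fleet-migration
```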
Potential solutions
Either of the following would address this:
- Add a section to values.yaml for customizing the job resource, and update the job template to honor it.
- Update the migration job template to reference the existing configuration options already used by the deployment resource (see the template sketch below).
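As a rough illustration of the second option, the job template could gate the same sidecar on the existing flag. This is a minimal sketch, not the chart's actual template: the value paths (`.Values.gke.cloudSQL.*`, `.Values.imageTag`) and container names are assumptions carried over from the excerpt above.

```yaml
# templates/migration-job.yaml (sketch -- value paths are assumptions)
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migration
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: fleet-migration
          image: "fleetdm/fleet:{{ .Values.imageTag }}"
          command: ["fleet", "prepare", "db"]   # Fleet's schema migration command
        {{- if .Values.gke.cloudSQL.enableProxy }}
        # Reuse the same Cloud SQL proxy sidecar the deployment template defines
        - name: cloudsql-proxy
          image: "{{ .Values.gke.cloudSQL.imageRepository }}:{{ .Values.gke.cloudSQL.imageTag }}"
          command:
            - /cloud_sql_proxy
            - -instances={{ .Values.gke.cloudSQL.instanceName }}=tcp:3306
        {{- end }}
```

One caveat: a long-running sidecar keeps a Job pod from ever completing, so a real implementation would also need a way to shut the proxy down once the migration finishes.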
What is the expected workflow as a result of your proposal?
I should be able to use the Helm chart and have it deploy successfully by modifying only the customization options exposed in values.yaml.
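In other words, enabling the proxy once should be the entire change on the user's side, and both the deployment and the migration job should pick it up (key path assumed, as above):

```yaml
# values-override.yaml -- single flag covers deployment and migration job
gke:
  cloudSQL:
    enableProxy: true
```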