Describe the bug
When deploying Kustomize to scratch, the celery pods are somehow interfering with staging, generating invalid signature errors there and causing intermittent smoke test failures. It seems as though the celery pods in scratch are connecting to the staging queues.
To Reproduce
In a scratch context, run kubectl apply -k ./env/scratch
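For reference, the full sequence looks roughly like this (the context name scratch is an assumption; substitute whatever your scratch cluster's kubectl context is called):

```sh
# Point kubectl at the scratch cluster (context name is an assumption)
kubectl config use-context scratch

# Apply the scratch Kustomize overlay
kubectl apply -k ./env/scratch
```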
Expected behavior
There should be no impact or connection between scratch and staging.
Impact
This is preventing full development of Notify in scratch environments. It also raises the concern that this kind of cross-environment connection could eventually impact production.
Additional context
I have been able to narrow it down to the celery pods, since deleting the celery deployment in scratch resolves the issue in staging.
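One way to confirm this from inside a pod (a rough sketch; the namespace, label, file path, and variable names below are assumptions, not taken from the real manifests):

```sh
# Grab one of the scratch celery pods (namespace/label are assumptions)
POD=$(kubectl get pods -n notify -l app=celery -o jsonpath='{.items[0].metadata.name}')

# Check whether a stray .env file was baked into the image (path is a guess)
kubectl exec -n notify "$POD" -- ls -la /app/.env

# List the env vars K8s actually injected, to compare against the .env contents
kubectl exec -n notify "$POD" -- printenv | grep -i -E 'queue|secret|key'
```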
Determined that a custom-built Docker image I had used was copying in a local .env file that contained staging credentials. This .env file was overriding the values set by K8s.
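A hypothetical reconstruction of how this happens (this is not the actual Dockerfile): a broad COPY pulls everything in the build context, including a developer's local .env, into the image.

```dockerfile
# Illustrative only -- not the real Dockerfile for this image
FROM python:3.10-slim
WORKDIR /app
COPY . .    # a broad COPY silently includes ./.env if it exists locally
```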
Created a PR to avoid this situation in the future, and also configured the API for the scratch environment.
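The usual guard against this class of problem (the actual PR may differ in its details) is to keep env files out of the Docker build context entirely via .dockerignore:

```
# .dockerignore -- local env files can then never be baked into an image
.env
.env.*
```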