apex-omontgomery opened this issue 5 years ago
Front-end has its own staging environment, separate from our infra. It'd be smarter to be within the same infra system everybody else uses, but in terms of solving the problem, we're good for now :)
Happy the immediate issue is resolved. Tags are the way to go IMO and I would avoid using `latest`.
FWIW, images are automatically tagged `latest` if no tag is specified, and then you don't know which version you're using. I've got a bash script for versioning releases if needed.
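For reference, a minimal sketch of what that kind of versioning script might look like; the image name and the `BUILD_NUMBER` variable are placeholders, not our actual setup:

```bash
#!/usr/bin/env bash
# Hypothetical release-tagging sketch: build and push an explicitly
# versioned image instead of relying on the implicit :latest tag.
set -euo pipefail

IMAGE="operationcode/back-end"   # placeholder image name
VERSION="${BUILD_NUMBER:?set BUILD_NUMBER to the CI build number}"

docker build -t "${IMAGE}:${VERSION}" .
docker push "${IMAGE}:${VERSION}"
```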
pybot/pyback:
- `staging` branch: targets a test Slack but shows what happens when deployed in ECS.
- `master` branch: targets prod.

@OperationCode/front-end and current frontend:
The backend does the same as the front-end; for both front-end and backend you can go into kubectl and manually target whatever container tag you'd like. When I request the current images for all pods, we see some oddities:
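(For anyone following along, this is the standard way to list every container image across all namespaces; I'm not claiming it's the exact command I ran:)

```bash
# List all container images running in all namespaces, with a count per image.
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[[:space:]]' '\n' | sort | uniq -c
```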
In the above case, we see that frontend is using both `2334` and `latest`, while backend is using `1269`, `1245`, and `latest`.
I don't know the correct command to list the namespace for each of these, but I remember finding out that we set the staging kube as `latest` and prod as a numerical value. I don't know why we have 3 different backend image versions.
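A guess at a command that would include the namespace, using kubectl's custom-columns output (untested against our cluster):

```bash
# Show each pod's namespace, name, and image(s) side by side.
kubectl get pods --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,POD:.metadata.name,IMAGE:.spec.containers[*].image'
```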
After talking with @kylemh (IIRC), he doesn't need a way to promote a deployed code instance to staging; for their purposes it's fine to have staging only running locally. But I think it would be more ideal to move to a strategy where we promote to staging on merge to master, by tags, and then promote to production manually by adding a production tag, or automatically based on some criteria.
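Roughly what I'm imagining for the manual promotion step, sketched with a placeholder image name (the `1269` tag is just borrowed from the example above):

```bash
# Hypothetical promotion: re-tag an already-built image for production
# instead of rebuilding it, so prod runs exactly what was tested.
IMAGE="operationcode/back-end"   # placeholder repository name
BUILD="1269"                     # the build tag to promote

docker pull "${IMAGE}:${BUILD}"
docker tag  "${IMAGE}:${BUILD}" "${IMAGE}:production"
docker push "${IMAGE}:production"
```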
Our initial concern was having *k* instances of staging for *k* developers, but I don't see that as a current problem.

Lastly, we need a way for the data sources (the backend and pyback RDS instances) to propagate production data down, so that when we update those instances we can easily carry it over. I think for the time being a nightly copy, plus the ability to trigger one manually, would be fine, with the knowledge that if you're working on something that requires database migrations you'll have to perform the migration again against the updated data.
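To make the nightly copy concrete, here's one possible shape for it, assuming the RDS instances are Postgres; every hostname and database name below is a placeholder:

```bash
#!/usr/bin/env bash
# Hypothetical nightly prod -> staging data copy (cron or manually triggered).
set -euo pipefail

PROD_URL="postgres://user:pass@prod-rds.example.com:5432/backend"       # placeholder
STAGING_URL="postgres://user:pass@staging-rds.example.com:5432/backend" # placeholder

# Dump prod in custom format, then restore into staging,
# dropping existing objects first so the copy is clean.
pg_dump --format=custom "${PROD_URL}" > /tmp/prod.dump
pg_restore --clean --if-exists --no-owner --dbname="${STAGING_URL}" /tmp/prod.dump
```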