Closed: bushwallyta271828 closed this issue 7 months ago
People seem to like HashiCorp Nomad. It seems relatively simple, and I think I like it. I'm just trying to figure out which cloud providers will let me deploy it.
EDIT: Apparently the license on Nomad is not great? (HashiCorp moved its products from the MPL to the Business Source License in 2023.)
At this point I feel like Kubernetes is the only software I have faith in. Everything else seems to be deprecated or on the way out. I want to set up my container orchestration once and then be done with it. If AWS changes their product lineup in three years, I don't want to be going back and changing my code. With Kubernetes, I feel that I will always be able to find some cloud provider that I can turn to.
Okay, it's been a while since I last thought about this issue. I've got a Django project set up now and I think I want to tackle this problem next, rather than doing a bunch of local development first. It's annoying and that's why my instinct is to shy away from it.
Okay, what are my requirements?

1. My container orchestrator should be supported by some cloud provider into the future. I have time to work on this project now, but I may not in the years to come. If a cloud provider deprecates a tool and I need to rewrite my code base in the future, after I've forgotten all of the details, that wouldn't be funny.
2. My container orchestrator should be cloud-provider-agnostic. This is basically for the same reason as requirement (1): I've read about services coming and going, and if one company decides to stop their offering, I need to be able to easily switch to another.
3. My container orchestrator should be simple. I'd rather not learn Kubernetes, and I don't envision needing to scale this application in the immediate future. Docker Compose (the new version, which I'm using for local development and testing) would be ideal if it were still supported by big cloud providers, but alas, it isn't.
4. I'd like my deployment to be relatively cheap. I know Kubernetes deployments can quickly get expensive.
WAIT... does Docker Compose live??????????? https://github.com/compose-x/ecs_composex
Here's a blog post by the person responsible for that repository: https://aws.amazon.com/blogs/opensource/automating-your-ecs-container-architecture-deployments-with-ecs-composex/. I'm strongly favoring this approach. I'm just trying to figure out if it will work and how much an AWS deployment would cost.
Some further reasoning: I have to create Docker Compose files anyway to test locally. This approach would basically involve zero overhead. Also, Docker Compose files presumably aren't going anywhere, so hopefully this repository or some equivalent will continue to exist into the future.
Okay, I'm going to build at least two Docker images: one for the app and one for the proxy (I might end up building multiple for the app? One for frontend and another for backend?). The database uses an official postgres image. Therefore, I'd like my CI/CD pipeline to pull the (at least) two freshly-generated images from the GitHub Container Registry and use them according to the compose-deploy.yaml file I provide.
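A minimal GitHub Actions sketch of that build-and-push step might look like the following. Everything here is a placeholder guess at this repo's layout, not settled config: the workflow path, triggers, build context, and image name are all assumptions.

```yaml
# .github/workflows/build.yml -- hypothetical sketch; paths and names are placeholders
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write  # needed to push to the GitHub Container Registry
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .          # assumed Dockerfile location
          push: true
          tags: ghcr.io/${{ github.repository }}/app:${{ github.sha }}
```

A second, near-identical job (or a build matrix) would cover the proxy image.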
Hmmmm, I might make the order a little different. Rather than (build Docker images) --> (push to registry) --> (pull from registry) --> (orchestrate), I might do (build Docker images) --> (orchestrate) --> (push to registry for posterity). Docker Compose has an `image:` field that names the built image, and `docker compose push` will send it to a registry. On the other hand, I might not reorder things. It seems nicer to build the image on the GitHub runner than on whatever environment AWS has.
Either way, the difference shouldn't be that big. Either the Compose file looks like

```yaml
image: logic_graph_scalable_app:sha_hash_goes_here
...
```

or it looks like

```yaml
build:
  context: .
image: somewhere_to_send_the_image
```

(In the second case, `docker compose build` produces the image locally and `docker compose push` sends it to the registry location named by `image:`.) Getting AWS to talk to a Docker Compose file at all will be the hard part.
Update as of the next morning: right now I'm thinking of doing a manual deployment. That will require adding a section to the README on how to set everything up, like it used to be in the separate-application-database-deployments branch. More importantly, I have to figure out if I can possibly avoid using shared volumes between different containers like I do now to pass information about static files between the proxy and the application.
No, it looks like that shared volume is really important. It's kind of the whole point of the proxy.
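For reference, the shared-volume arrangement described above looks something like this in Compose terms. The service names, image names, and mount paths are stand-ins for whatever the real files use; the point is just the named volume that both containers mount:

```yaml
# Hypothetical sketch of the current static-files handoff
services:
  app:
    image: ghcr.io/someone/logic_graph_scalable_app:latest  # placeholder
    volumes:
      - static_files:/app/static  # Django collectstatic writes here
  proxy:
    image: ghcr.io/someone/logic_graph_proxy:latest  # placeholder
    ports:
      - "80:80"
    volumes:
      - static_files:/usr/share/nginx/static:ro  # nginx serves them directly
volumes:
  static_files:
```

A plain instance running Docker can host this as-is, which is part of the appeal of the approach below.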
New thought: instead of using AWS Lightsail containers, maybe I could create an AWS instance (blank-slate, no default nginx or anything) and run the containers inside it. Every time I want to deploy, I just pull the latest Docker images (`docker compose -f compose-deploy.yaml pull`) and bring everything up again (`docker compose -f compose-deploy.yaml up -d`).
EDIT: Could save me the cost of running two separate containers as well! Though I haven't looked extensively at the AWS pricing comparison.
Right now, I'm definitely favoring this approach. My only problem is that I need to connect a database to my instance. I could theoretically have a postgresql database sitting inside my AWS Lightsail instance, but that seems dangerous from the perspective of persistence. If the server goes down, reinstalling the code and running it again won't recover the lost data. AWS presumably does a good job of managing dedicated databases.
In the past I've been able to establish a connection between an AWS Lightsail database and an AWS Lightsail container. The question I'm facing now is how to establish a connection between an AWS Lightsail database and an AWS Lightsail instance. That task feels doable.
Current idea: in the deployment Compose file on the AWS instance, simply don't define a db service. Instead, pass the database connection details into the app container as environment variables, but don't create a corresponding database container. AWS will already have the database set up, so the app container can just talk to it through the usual psycopg2 calls. Since it's one user in the same AWS region, they should be on the same network and should be able to communicate naturally.
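Sketching that out, the deployment Compose file might look roughly like this, with the Lightsail database's endpoint injected via environment variables instead of a db service. Every name and variable here is a guess at the real config, supplied on the instance (e.g. via an `.env` file) rather than committed:

```yaml
# compose-deploy.yaml -- hypothetical sketch: note there is no db service at all
services:
  app:
    image: ghcr.io/someone/logic_graph_scalable_app:latest  # placeholder
    environment:
      DB_HOST: ${DB_HOST}          # Lightsail database endpoint
      DB_PORT: ${DB_PORT:-5432}    # default PostgreSQL port if unset
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}  # kept out of version control
```

The Django settings would then read these variables and hand them to psycopg2, exactly as they would for a containerized database.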
I'm closing this issue since I think I've found my way forward. If I wanted to have a really slick CI/CD pipeline then I'd need to set up authentication between GitHub Actions and AWS with some sort of IAM thing, and I really don't want to deal with that right now. The current steps in the README should be sufficient for manual deployments. Finally, after weeks of trying to figure out DevOps, I think I've found a solution that works!
If only Docker Compose were still supported by AWS and Azure, that would be perfect.