gamedevsam opened this issue 1 year ago
I've been thinking about this for a little while, as there are a few moving parts and some general guidance needed around how I position Dokku Pro against Dokku OSS.
Dokku can - with some manual work - handle a "promotion" style workflow wherein the build artifact (a docker image) from one app is deployed to another app.
The registry plugin can push images on build to a remote registry.
The caveat here is that some applications will build with pre-production values - in the case of herokuish-based images, those are actually vendored in. Dockerfile and pack-based deploys aren't immune, but are typically a bit better about this. If you can avoid writing secrets into the image, then in theory that artifact can be used in prod and pre-prod environments.
We could in theory write a pipelines plugin - like the one for Heroku - that handled a lot of this automatically for users. It would be a ton of work and not something I think would benefit the majority of OSS users, but it would hit the sweet spot of "useful for teams and organizations", so it's definitely something I will ticket for myself to think about adding to Dokku Pro.
That said, image promotion can be done today with a bit of duct tape :)
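To make that concrete, here's a rough sketch of that duct tape using the registry plugin mentioned above. The app names, registry host, credentials, and image tag are all placeholders, and the exact subcommands may vary by Dokku version:

```shell
# Sketch only: promote a staging artifact to a production app via a remote registry.
# Assumes the core registry plugin (Dokku 0.25+); all names below are placeholders.

# 1. Have the staging app push each successful build to a remote registry
echo "$REGISTRY_PASSWORD" | dokku registry:login --password-stdin registry.example.com deploy-user
dokku registry:set myapp-staging server registry.example.com
dokku registry:set myapp-staging push-on-release true

# 2. Deploy the staging app as usual (git push), then note the image tag that was pushed

# 3. "Promote" that exact image by deploying it to the production app, which keeps
#    its own config (env vars, domains, etc.)
dokku git:from-image myapp-production registry.example.com/dokku/myapp-staging:3
```

As noted in the caveat above, this only works cleanly if the image doesn't have pre-production values baked into it.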
The default proxy implementation in Dokku is nginx, and it's not very programmer friendly. I deal with a lot of issues around ensuring that new containers get picked up appropriately - something that Caddy, Haproxy, and Traefik all handle automatically. In fact, I've been haphazardly working on a replacement that follows the pattern of our Caddy/Haproxy integrations (it'll become an option, though not the default, once I figure out letsencrypt support).
That said, no proxy implementation is good at what you're asking for without a lot of extra code. Envoy-based proxies are decent with this sort of signaling, but are usually enterprise-grade offerings. I think adding this functionality is well beyond the scope of both Dokku OSS and Dokku Pro as it's fairly complex and somewhat of a distraction from the core offering.
In our case, I'd rather integrate with a product that already provides this than poorly "reinvent the wheel".
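For anyone who wants to try one of those integrations today, switching an app over looks roughly like the following. This assumes a Dokku version that ships proxy:set and the caddy-vhosts plugin; treat the exact commands as a sketch:

```shell
# Sketch: move an app from the default nginx proxy to the Caddy integration.
# Assumes a recent Dokku with proxy:set and the caddy-vhosts plugin available.
dokku proxy:set my-app caddy   # render Caddy config for this app instead of nginx
dokku caddy:start              # start the shared Caddy container, if not already running
dokku ps:rebuild my-app        # redeploy so the new proxy picks up the containers
```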
What you're asking for re: staggering traffic during deploys is actually akin to canary/blue green deployments.
Dokku sort of supports canary deploys today via our Caddy/Haproxy/Traefik integrations: new containers are picked up automatically, and Dokku will either stop a failing deployment or continue on as things succeed. We also allow for (simple) healthchecks via a CHECKS file, which underpins that de facto canary support.
Note that a future version will begin to have more robust healthchecking that is similar to what is offered by Kubernetes and Nomad.
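For reference, the simple CHECKS-based healthchecks look something like this; the paths and expected strings below are just examples:

```shell
# Sketch of a CHECKS file, committed at the root of the app repository.
# WAIT / TIMEOUT / ATTEMPTS tune how long Dokku waits before checking, how long each
# request may take, and how many retries happen before the deploy is marked failed.
cat > CHECKS <<'EOF'
WAIT=5
TIMEOUT=30
ATTEMPTS=6

# <relative path>    <content the response body must contain>
/healthz             ok
/                    Welcome
EOF
```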
Blue/Green deployments, more advanced canary configurations, and programmatic access aren't possible today in Dokku. I think this could be implemented, though - maybe a build:promote command or something that would take a BUILD_ID and mark it as good to continue with a deploy.

At the end of the day, stuff like this requires a fair amount of work at the scheduler level, and there isn't a great docker-local scheduler (other than the one we built for Dokku), so I'd rather punt on this and build one correctly in the future, perhaps based on the docker-compose spec.
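Purely to illustrate the shape of that idea, the flow might look something like this - to be clear, neither command exists in Dokku today:

```shell
# Hypothetical interface only - these commands do NOT exist in Dokku today.
dokku build:create my-app          # imagine: build a release and print a BUILD_ID without deploying it
dokku build:promote my-app abc123  # imagine: mark BUILD_ID abc123 as good and continue the deploy
```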
This part would/should be part of the OSS offering, as the divergence would be hard to manage separate from Dokku Pro.
Generally speaking, when I consider features for Dokku OSS vs Dokku Pro, I weigh whether a feature is something a single engineer needs from an http(s) server for deploying code, or something that mainly benefits teams and organizations.

tl;dr: A UI is something that provides added value, but a single engineer can get by well enough with a CLI.
Thanks for the thorough response, I kinda need to digest it and maybe read over it again and collect my thoughts. But here are some initial impressions:
> A UI is something that provides added value, but a single engineer can get by well enough with a CLI.
This may sound strange, but the only reason I paid for dokku-pro was so I didn't have to deal with the CLI as much. The main reason I like UIs is that they are discoverable - you can sort of piece together what you can do with an app just by exploring the UI. It reduces the time I have to spend parsing docs or remembering common commands, and it's also just a personal thing. I'm not averse to using a CLI, but given the option of a CLI or a nicely crafted UI, I'll go for the UI (unless it's majorly broken or something like that). I'm the only person on my team at work who insists on using a Git GUI, for example (and I paid for it with my own money).
Don't underestimate the value of having a nice UI on top of the CLI. I think you may be missing out on some potential revenue by not having more screenshots of the UI and visually demonstrating more of its capabilities (for example managing env vars in the UI is a very nice feature). I think there's more work to be done on the UI to make it more capable and cover more of the common CLI commands, but eye candy definitely sells.
Description of feature
I'm looking for ideas on how to achieve the following environment:
Set up an app with 3 different instances: a staging/QA environment plus two identical production environments, one active and one inactive.
The intent is that the main branch is always deployed to staging, then promoted to the inactive production environment (an identical copy of the active production environment), where additional checks can be done. Once everything looks good, we would promote the inactive environment to active (perhaps with some sort of stagger strategy, e.g. send 10% of the traffic for 5 mins, then 50% for 5 mins, then 100%).
And finally we'd catch up the previously active environment (now inactive) with the same artifact deployed to QA.
Is this possible with the current dokku architecture? What features would dokku-pro need to enable this type of setup?
Is this all overkill?
The main goal is to achieve zero downtime releases and instant rollbacks.