Brandutchmen opened this issue 4 years ago
Was thinking about this the other day. The last successful build will still be served fine whilst a new deploy is being built in Docker. Then you will see the 502 for the time it takes to complete the startup tasks in entrypoint.sh. Just checking that this is happening for you?
In that case you could try moving some of these tasks to RUN statements towards the end of the Dockerfile, then make the ENTRYPOINT be the supervisor command. I imagine this would reduce the potential for 502s to almost nothing.
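As a rough sketch of that idea (the artisan tasks and config paths here are assumptions, not taken from this repo — which tasks are safe to run at build time depends on your app, e.g. migrations usually are not):

```dockerfile
# Hypothetical sketch: do one-off setup at build time instead of in
# entrypoint.sh, so the container can serve traffic the moment it starts.
RUN php artisan config:cache \
 && php artisan route:cache \
 && php artisan view:cache

# Start the process supervisor directly; no startup script to wait on.
ENTRYPOINT ["supervisord", "-c", "/etc/supervisor/supervisord.conf", "-n"]
```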
I mostly use Laravel as an API for native apps or sites hosted statically elsewhere, so I find this downtime palatable because it can be handled gracefully & treated like a php artisan down.
As an aside, you can always customise the 502 page to something nicer in the 'HTTP Settings' of your app.
Hope this helps, would be interested to hear how you get on with it!
Yeah. It seems like the entrypoint is what is delaying the 502s. I will try moving everything besides the supervisor and WebSockets over, and will update with the results.
I now only have the entrypoint starting the supervisor; however, that still leaves around 10 seconds or so of 502 error time. :/
Found a related issue thread with no real solution, because it's very much dependent on what your app relies on to run.
That said, a potential solution could be to use a separate 'app' container as an HTTP proxy, responsible for proxying to the 'latest' running build of your app. It seems like it could be automated with a mixture of:
I may look into building this in the future, because it would be nice to say you can have 'zero-downtime deployments' with this setup.
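A minimal sketch of what that proxy container might serve — the upstream/container names here are hypothetical, and swapping the upstream on deploy (then reloading nginx) would be the automated part:

```nginx
# Hypothetical sketch: a long-lived proxy container forwarding to whichever
# app build is currently "live". Re-rendering this file to point at the new
# build and reloading nginx switches traffic without dropping requests.
upstream laravel_app {
    server app-latest:80;  # assumed name of the current app container
}

server {
    listen 80;
    location / {
        proxy_pass http://laravel_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```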
Hmm. I'm not sure where to go with that. I am trying to set up a reliable production environment, so zero-downtime is a priority of mine.
Using an HTTP proxy as the app's entry could possibly solve it. Another thing mentioned in some of those threads is Docker rolling updates. This would probably require a cluster of app instances running.
Thoughts?
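For the rolling-update route, Docker Swarm can start a new task before stopping the old one, so with more than one replica something is always serving. The service name and image below are placeholders; this is a sketch, not a tested setup:

```shell
# Hypothetical sketch: roll out a new image, starting each replacement task
# before its predecessor is stopped (requires swarm mode and, ideally, a
# healthcheck so Swarm knows when the new task is ready).
docker service update \
  --image myapp:latest \
  --update-order start-first \
  --update-delay 5s \
  myapp
```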
I've run a few tests with a bare-bones Laravel install, and am getting less than a second of downtime:

- `composer dump-autoload --no-interaction --no-dev --optimize` towards the end of the Dockerfile
- `entrypoint.sh` only starts supervisor

If you have 'Persistent Data' enabled, this may be contributing to the 10 second downtime you've mentioned. Even if you remove your app's reliance on the local filesystem, there is still no guarantee of 'zero-downtime'.
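For reference, an entrypoint that does nothing but hand off to the supervisor can be as small as the following (the supervisord config path is an assumption):

```shell
#!/bin/sh
# Minimal entrypoint: no setup work at start-up, just replace this shell
# with supervisord running in the foreground.
exec supervisord -c /etc/supervisor/supervisord.conf -n
```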
Can't see a solution for you right now, unless you are willing to create the one I mentioned! https://github.com/jackbrycesmith/laravel-caprover-template/issues/1#issuecomment-635907246
Okay, so I spent some more time testing this, and I tried using a stateless app and wow… it really cuts down on that time. It became challenging to hit a 502 on a redeploy with one stateless container running.
I will have to set my apps up to use Redis as a cache system and figure out something like S3 for out-of-container storage support, but then I should be able to replicate those performance gains in working apps instead of the bare-bones Laravel app. As an added bonus, that would make it really easy to run many instances of the app at once.
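As a sketch of the stateless setup being described (env key names as of Laravel 7/8 — newer versions use e.g. `FILESYSTEM_DISK` — and the hosts/bucket are placeholders):

```ini
; Hypothetical .env fragment: push all state out of the container.
CACHE_DRIVER=redis
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis
REDIS_HOST=redis

; User uploads etc. go to S3-compatible storage (e.g. MinIO) instead of
; the local filesystem, so no 'Persistent Data' directory is needed.
FILESYSTEM_DRIVER=s3
AWS_BUCKET=my-app-uploads
```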
Nice one, that's great to hear! Will leave this open & report back if I get around to making a 'zero-downtime' solution.
After further testing, the persistent directory seemed to be the issue. Any stateless app has zero-downtime deployments with env changes or code changes.
I'm not sure how I feel about that overall, because it adds the overhead of a Redis cache server and a MinIO data server (for user uploads and such) to Laravel apps. I am still probably going to continue poking around to see if I can make a persistent-data app work with zero downtime.
TL;DR: if you want zero-downtime Laravel apps, use stateless Laravel applications.
Why not use Docker's HEALTHCHECK? The container is checked until it reports healthy before being put into service. No more 502 errors either.
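A sketch of what that could look like in the Dockerfile (the probed URL and timings are assumptions, and whether the platform actually waits on health status before routing traffic depends on the orchestrator):

```dockerfile
# Hypothetical sketch: only report healthy once the app answers over HTTP,
# so traffic can be held back from a container that is still starting up.
HEALTHCHECK --interval=5s --timeout=3s --start-period=30s --retries=3 \
  CMD curl -fsS http://localhost/ || exit 1
```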
Is there a way to fix the Nginx 502 during deploys?