zodern / meteor-up

Production Quality Meteor Deployment to Anywhere
http://meteor-up.com/
MIT License

How to deploy new version with no downtime? #173

Closed · artpolikarpov closed this 4 years ago

artpolikarpov commented 7 years ago

Subj 👆

I imagine a scenario in which the new version of the app is deployed in parallel with the old one, and users are directed to the new deployment only once it is completely ready.

Is it possible? If not, is there any workaround?

madushan1000 commented 7 years ago

No, Meteor Up doesn't support this right now. But you might be able to get something like this working using manual deployments and proxy config.

artpolikarpov commented 7 years ago

This could be a nice feature in the future, couldn't it?

artpolikarpov commented 7 years ago

Meteor Galaxy updates in a similar way.

madushan1000 commented 7 years ago

Yeah, this is actually in the plan, but no one is working on it right now. Implementing this would probably require changes in meteorhacks/meteord, meteorhacks/mup-frontend-server, and mup itself.

zeroasterisk commented 7 years ago

I was thinking about implementing this with pm2, instead of running via node directly... it supports rolling updates and spawning multiple nodes. Any interest in collaborating?

rwatts3 commented 7 years ago

I wanted to share how I'm doing zero-downtime deployment right now.

Currently I am using a framework called mechanic from the good folks at PunkAve.

It's a worker that controls nginx and handles a plethora of things, such as load balancing.

I typically have two mup.js files and run two instances of the application. I have mechanic serve the two ports, one per Docker container, for the same host, which implements load balancing automagically.

The cool thing is that while instance A is updating, mechanic automatically shifts traffic to instance B, and while instance B is updating it shifts traffic to instance A. I have a script that basically runs mup deploy for each mup.js file, and only starts the second deploy after the first one finishes.

This is a quick and easy way for me to maintain zero downtime for my clients, and it's relatively cheap.

https://github.com/punkave/mechanic
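
A minimal sketch of such a deploy script, written here as a Node script and assuming mup's --config flag; mup-a.js and mup-b.js are hypothetical file names (adjust to your setup):

// deploy.js — run the two deploys strictly one after the other, so
// mechanic always has a healthy instance left to route traffic to
const { execSync } = require('child_process');

for (const config of ['mup-a.js', 'mup-b.js']) {
  // execSync blocks until the deploy finishes (and throws on failure),
  // so instance B is only touched after instance A is back up
  execSync(`mup deploy --config ${config}`, { stdio: 'inherit' });
}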

ilan-schemoul commented 7 years ago

Yeah the way Galaxy does zero downtime is by completely starting the new container before switching and shutting down the old one. It also manages directing new connections and reconnections to the right places during this process, to avoid a stampeding herd problem where every client needs to reconnect at once.

So says someone on the Meteor team. Just leaving this here in case it's useful...

kalepail commented 6 years ago

Anyone actively working on this? It seems like such a basic, essential feature. Downtime, especially for realtime apps or APIs, is just a no-go.

rwatts3 commented 6 years ago

I would suggest deploying with two workers set up, and using an nginx service, maybe mechanic from the Apostrophe team, to serve as a load balancer between the two container endpoints. Then when you deploy, one container will be brought offline and the load balancer will switch all traffic to the second container. Once both are back online, it will continue to serve traffic to each container.

You can also set up a blue/green approach by making one container in your mup config file dependent upon the successful completion or termination of another container.

Either way, the best way to do this is with a load balancer; a config sketch follows below.
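
To make that concrete, here is a hypothetical sketch of one of the two mup configs (names, host, and paths are placeholders; it assumes app.env.PORT controls the host port each container is exposed on, matching mechanic's --backends list, and omits sections like docker):

// mup-a.js — instance A on port 4000; mup-b.js would be identical
// except for app.name (e.g. 'myapp-b') and PORT (4001)
module.exports = {
  servers: { one: { host: '1.2.3.4', username: 'root' } },
  app: {
    name: 'myapp-a',
    path: '../app',
    servers: { one: {} },
    env: {
      ROOT_URL: 'https://foo.com',
      PORT: 4000, // mechanic load-balances between 4000 and 4001
      MONGO_URL: 'mongodb://...'
    }
  }
};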

kalepail commented 6 years ago

@rwatts3 This worked great for the load balancing / zero downtime but it has completely borked the Websockets.

[screenshot: Websocket errors]

Any ideas?

rwatts3 commented 6 years ago

Hmm, I think you can disable websockets in mup.js if I'm not mistaken. Can you test that and see what you get?
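
For reference, a sketch of where that setting would live, assuming the standard mup config layout (DISABLE_WEBSOCKETS is a Meteor server env var that makes clients fall back to long polling instead of attempting a wss:// connection):

// mup.js (excerpt) — only the env section matters here
module.exports = {
  app: {
    // ... name, path, servers, etc.
    env: {
      ROOT_URL: 'https://foo.com',
      DISABLE_WEBSOCKETS: 1 // clients use XHR long polling instead of websockets
    }
  }
};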

rwatts3 commented 6 years ago

Also, when you set up mechanic, did you pass multiple addresses?

mechanic add --host=foo.com --backends=4000,4001

kalepail commented 6 years ago

@rwatts3 Yes I did, and passing DISABLE_WEBSOCKETS: 1 does fix the problem, but I want to use websockets. When I run my own single port with my custom .conf file, the wss address works fine. It seems to be some issue with mechanic's .conf files.

kalepail commented 6 years ago

For reference, this is the conf file I use for the single-port reverse proxy.

server {
  # 'listen 443 ssl' replaces the deprecated 'ssl on' directive
  listen 443 ssl;
  server_name www.foo.com foo.com;

  ssl_certificate     /opt/foo/config/bundle.crt;
  ssl_certificate_key /opt/foo/config/private.key;

  ssl_session_cache builtin:1000 shared:SSL:10m;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
  ssl_prefer_server_ciphers on;

  access_log /var/log/nginx/app.dev.access.log;
  error_log  /var/log/nginx/app.dev.error.log;

  location / {
    # Forward the WebSocket upgrade handshake so wss:// connections work
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;

    proxy_pass http://localhost:3000/;
    proxy_redirect off;
  }
}

My guess is that it's the proxy and SSL settings above that mechanic's generated .conf doesn't set up, so it can't handle wss addresses.

kalepail commented 6 years ago

Hmm... actually, it would appear that once the sites on both ports are up, the site still errors out even with DISABLE_WEBSOCKETS: 1. :/

rwatts3 commented 6 years ago

Ah, I see. OK, check the docs on mechanic: you can set a custom conf file; in fact they have a template in the repo. Now that I recall, I too set a custom conf, and they recommend that you do so. You'll also want to configure things such as max upload size (set it to something like 500 MB).

Then you can run a command to tell mechanic that conf file is your default, and restart nginx.

kalepail commented 6 years ago

Have you tried this setup with SSL and websockets? Note I'm using CloudFlare.

kalepail commented 6 years ago

@rwatts3 Would you be able to share your custom conf file for reference? I'm curious what your settings are to get all this to work. I'm not very familiar with nginx conf and reverse proxies.

kalepail commented 6 years ago

Got it!

For everyone else, here's a gist of my custom conf template. https://gist.github.com/tyvdh/e360a2e67a4c207af551ded2e635947e

rwatts3 commented 6 years ago

@tyvdh sorry I didn't see your message until just now, thank you for sharing.

rwatts3 commented 6 years ago

Does mup respect Docker's depends_on setting?

On Fri, Jul 28, 2017 at 5:18 AM Leon Machens notifications@github.com wrote:

I think it would be possible to avoid downtime with multiple servers if the task lists had a different order. Right now it looks like this (5 servers with a load balancer): [screenshot: https://user-images.githubusercontent.com/10058950/28716626-8e761e96-739e-11e7-981e-934341d2a56a.png]

The problem is that all servers are stopped first. The first server to come back online gets all the traffic and connections through the load balancer. In my app, the first server has to handle around 400 sessions on mup restart or deploy.

It would be better if the servers were stopped and started one after the other. @zodern Is this possible?


lmachens commented 6 years ago

@rwatts3 I deleted my message because the behaviour is different for mup deploy. My example was for mup restart (where all servers are stopped first). So it seems to be fine.

rwatts3 commented 6 years ago

awesome

kalepail commented 6 years ago

For those who care: https://medium.com/@tyvdh/deploying-production-meteor-in-2017-f2983277e872

skarborg commented 6 years ago

I have multiple "app servers" (Linodes) that I deploy to, and then use CloudFlare to update DNS and fail over between them. I host my DBs with mLab/Compose. Not HA, but I'm investigating CF's Load Balancing feature. Really simple to understand too. Check out: https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html

maka-io commented 5 years ago

I've been using pm2 in production for a while now on AWS instances; maybe pm2 is worth looking into as the main process manager within the Meteor docker container. It supports zero downtime, and can be configured as an internal load balancer given the core count. https://pm2.io/doc/en/runtime/overview/?utm_source=pm2&utm_medium=website&utm_campaign=rebranding
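
As a sketch of the pm2 primitives involved (not mup-specific; the file and app names are hypothetical), cluster mode plus a rolling reload is what gives zero downtime:

// ecosystem.config.js — pm2 process file for a built Meteor server bundle
module.exports = {
  apps: [{
    name: 'meteor-app',
    script: 'main.js',    // entry point of the built bundle
    instances: 'max',     // one worker per CPU core (internal load balancing)
    exec_mode: 'cluster'  // cluster mode is what makes reloads zero-downtime
  }]
};

With that, pm2 start ecosystem.config.js launches the workers, and pm2 reload meteor-app restarts them one at a time without dropping connections.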

zodern commented 4 years ago

This should work in the latest 1.5 betas. To try the feature:

  1. Install the beta with npm i -g mup@next
  2. Set the proxy.loadBalancing option to true in your mup config
  3. Run mup setup and mup reconfig.

The docs are at https://github.com/zodern/meteor-up/blob/c8251213b05dd166bb7222eec0f3f6a66835edf3/docs/docs.md#load-balancing.
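
For reference, a sketch of where that option goes in the config, based on the steps above (the domain is a placeholder; see the linked docs for the full set of options):

// mup.js (excerpt)
module.exports = {
  // ... servers and app sections as before
  proxy: {
    domains: 'myapp.example.com',
    loadBalancing: true // enables zero-downtime load balancing (mup 1.5 beta)
  }
};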