strongloop / strong-pm

deployer for node applications
http://strong-pm.io

Delete old deployment files in strong-pm #263

Closed yagobski closed 4 years ago

yagobski commented 9 years ago

Every time I make a new deploy, the old version stays stored on my server. Is there a configuration option to keep only the last 10 deployments, for example? Every time, I have to clean it up manually.

Is there also a way to switch between deployments? For example, to go back to an old version?

sam-github commented 9 years ago

I'm sorry, we don't delete the old deployments at the moment, but we should.

Switching between versions is best done, ATM, by using slc build and tracking what you have deployed:
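
A minimal sketch of that flow (the host, port, and file names are assumptions; slc build and slc deploy are the actual commands):

# Build a deployable tarball and keep it under a versioned name:
slc build --install --pack
mv my-app-1.2.3.tgz releases/
# Push it to the PM; rolling back is just re-deploying an older tarball:
slc deploy http://prod-host:8701 releases/my-app-1.2.3.tgz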

We will have better management and staging of deployments in the future, but we don't at the moment.

erikverheij commented 9 years ago

My server ran short on available inodes. It appears that my strong-pm deployments are consuming 80% of the available inodes.

Removing old deployments should be part of the default deployment flow.

For the time being, I use the following commands to remove old deployments:

# List all deployments except the two newest entries (typically the
# `current` symlink and the deployment it points to):
ls -ldt /var/lib/strong-pm/svc/1/work/* | tail -n +3
# Remove them (plain `ls -dt` here, so xargs receives only the paths):
ls -dt /var/lib/strong-pm/svc/1/work/* | tail -n +3 | sudo xargs rm -rf

Replace the "1" with the id of your app.

sam-github commented 9 years ago

I'm sorry for the delay; dealing with this is near the top of our backlog.

Just an FYI: the reason this isn't just a trivial "deploy the new, rm -rf the old" in strong-pm is that it does zero-downtime restarts. The app is restarted by starting workers one after another in the new deployment directory, and until all the new workers have restarted and run without crashing for some minimal time, there are still old workers running from the previous deployment, so deleting immediately would delete the working directory of running workers. We need to wait until we know that all the old workers have moved to the new deployment.

Also, we're thinking of keeping the deployments around, and exposing them, to allow rolling forward and back to arbitrary deployments, though that could require explicit removal on your part, and/or a --gc flag to the deploy command to remove any unused deployments.

sam-github commented 9 years ago

Btw, the above behaviour is observable: if you deploy an app that requires files not in its package dependencies (or has some trivial syntax error), you'll see how pm keeps most of the old workers running until the new code stops crashing.
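
One way to watch it happen (a sketch; the pgrep pattern is an assumption, adjust it to your app's name, and run as root if the workers belong to another user):

# Print each worker's current working directory; mid-rollout you should
# see a mix of old and new deployment directories:
for pid in $(pgrep -f "node .*my-app"); do
  echo "$pid -> $(readlink "/proc/$pid/cwd")"
done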

cgole commented 9 years ago

backlogged.

cgole commented 8 years ago

https://github.com/strongloop-internal/scrum-nodeops/issues/913

lkrnac commented 8 years ago

+1

ebarault commented 8 years ago

I ran out of disk space at some point because of this, and strong-pm started to mess up totally: the node.js process at 100% CPU, connection refused when trying to connect remotely. Rebooting, removing, reloading the conf, and restarting were of no help. My only solution was to remove strong-pm completely and reinstall it from scratch after freeing some space.

Could someone provide a way to totally reset strong-pm's state without removing/reinstalling it? Or any other means to sort my issue out and take back control over strong-pm? Where should I look for useful strong-pm logs?

despairblue commented 8 years ago

You could stop the strong-pm service via upstart/systemd and delete the strong-pm data dir. Somewhere in /usr/share I think. I'm on my phone so I can't check. When you start strong-pm after that it's completely pristine again.
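
A sketch of that reset, assuming the data dir is /var/lib/strong-pm (the path the other commands in this thread use):

# Stop the service (upstart shown; use systemctl on systemd hosts):
sudo service strong-pm stop
# Wipe the PM state, including all stored deployments:
sudo rm -rf /var/lib/strong-pm/*
# Start again with pristine state:
sudo service strong-pm start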

Hope that helps.

ebarault commented 8 years ago

@despairblue Are you referring to the /var/lib/strong-pm/svc dir? I tried that a couple of times in the past, but it didn't help, and I had no choice but to remove everything and start over.

ebarault commented 8 years ago

tail -f /var/log/upstart/strong-pm.log gives this, forever. Why is this suddenly happening? Everything went fine for days with this build. It seems linked to this issue, but none of the suggested fixes worked:

module.js:341
    throw err;

Error: Cannot find module '/usr/lib/node_modules/strong-pm/node_modules/minkelite/node_modules/sqlite3/lib/binding/node-v47-linux-x64/node_sqlite3.node'
    at Function.Module._resolveFilename (module.js:339:15)
    at Function.Module._load (module.js:290:25)
    at Module.require (module.js:367:17)
    at require (internal/module.js:16:19)
    at Object.<anonymous> (/usr/lib/node_modules/strong-pm/node_modules/minkelite/node_modules/sqlite3/lib/sqlite3.js:4:15)
    at Module._compile (module.js:413:34)
    at Object.Module._extensions..js (module.js:422:10)
    at Module.load (module.js:357:32)
    at Function.Module._load (module.js:314:12)
    at Module.require (module.js:367:17)

It turns out I have /usr/lib/node_modules/strong-pm/node_modules/minkelite/node_modules/sqlite3/lib/binding/node-v14-linux-x64/node_sqlite3.node installed (for node v14 instead of v47).

I mitigated this the following way: inside the /usr/lib/node_modules/strong-pm dir, I installed the latest version of minkelite. The strong-pm logs then gave the following error: loopback-connector-sqlite3 must be installed to use the sql backend. So I installed loopback-connector-sqlite3, and now the strongloop process starts peacefully...
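
In shell terms, roughly (a sketch; the paths come from the error above, and the exact package versions are unverified):

cd /usr/lib/node_modules/strong-pm
# Reinstall minkelite so sqlite3's native binding is rebuilt against
# the node version actually running:
sudo npm install minkelite
# Then add the connector the log asked for:
sudo npm install loopback-connector-sqlite3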

What a mess! Why is it suddenly requiring SQLite3 and the StrongLoop ad hoc connector? Is this somehow linked to the StrongLoop logging system? There's a serious bug here.

I'm currently running node v5.10.1 on that machine; strong-pm was installed earlier, under an older version of node.

sam-github commented 8 years ago

@ebarault can you report this as a different bug? It seems completely unrelated to this one (and please clarify what you mean by node v14 instead of v47; there are no node releases with those versions). FYI, strong-pm needs to store data on disk. The memory connector we used before was intended only as a test tool; it was not a robust DB and cannot be made into one, so we switched to a real DB (sqlite3).
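
For reference, the vNN strings in such binding paths are node ABI versions rather than node releases; a quick way to check what the running node expects (a sketch using node's built-in process.versions):

# Prints the native-module ABI version: 46 on node v4.x, 47 on node v5.x
node -p process.versions.modules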

paulJordaan commented 8 years ago

+1

ebarault commented 8 years ago

For what it's worth: given the various issues we've encountered with strong-pm, we are currently moving away from it in favor of docker images operated with docker-compose, with gitlab-ci for automated deployments.

dycode commented 8 years ago

We registered a cron job to remove old releases every day on our Debian server, with this command:

$ find /var/lib/strong-pm/svc/1/work/ -mindepth 1 -maxdepth 1 ! -name "$(basename "$(readlink -f /var/lib/strong-pm/svc/1/work/current)")" -type d -exec rm -rf {} +
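
For example, as a system crontab entry (a sketch; the 03:00 schedule and the service id "1" are assumptions):

# /etc/cron.d/strong-pm-gc: keep only the current deployment, daily at 03:00
0 3 * * * root find /var/lib/strong-pm/svc/1/work/ -mindepth 1 -maxdepth 1 ! -name "$(basename "$(readlink -f /var/lib/strong-pm/svc/1/work/current)")" -type d -exec rm -rf {} +
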
stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] commented 4 years ago

This issue has been closed due to continued inactivity. Thank you for your understanding.