007revad / Synology_docker_cleanup

Remove orphan docker btrfs subvolumes in Synology DSM 7
MIT License

Unable to remove orphan images, un-updated images not able to be restarted #4

Open talz13 opened 5 months ago

talz13 commented 5 months ago

Carrying on the discussion on the new repo!

I reviewed and ran the new syno_docker_cleanup.sh, and it was successful in removing the orphan docker btrfs subvolumes, all 594 of them on my NAS.

However, I'm getting an error on deleting the orphan images:

Deleting 6 orphan images...
Error: No such image: b4d108121738
2f48543fad4f
6fd099c65bce
c29b2a13b349
d874c386dd44
16cb8800d474

I ran the nested command on its own, and it produced the desired output.
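For context, the command pair in question (quoted by 007revad below) is the inner listing of dangling image IDs and the outer rmi that consumes it:

# Inner ("nested") command: list the IDs of dangling (untagged) images
docker images -f "dangling=true" -q

# Outer command used by the script: delete whatever the inner command lists
docker rmi "$(docker images -f "dangling=true" -q)"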

Also, after running the cleanup, some of my containers can't be restarted (but some still work! might have to do with which images were updated since the migration already?). It looks like their associated subvolumes were removed by the script; I'm not sure why they didn't come up as active.

Anyway, I'm trying to get those images back up and running, but not sure how to get past that.

For example, unifi-controller was one container that was failing to start/restart, so I tried:

Here's my Container Manager logs from the affected time:

[screenshot of Container Manager logs]

Any advice on this state?

007revad commented 5 months ago

The only reason I can think of why docker rmi "$(docker images -f "dangling=true" -q)" failed in the script, but worked when you ran it via SSH, is that docker may have been busy scanning the subvolumes right after the script deleted the orphan subvolumes.
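If the daemon really was still busy, one workaround (not part of the script, just a sketch) would be to retry a few times, or to let docker prune the dangling images itself:

# Sketch only: retry the dangling-image cleanup a few times in case the
# docker daemon is still busy re-scanning layers after the subvolume deletion.
for attempt in 1 2 3; do
    if docker image prune -f; then
        break
    fi
    sleep 30    # give the daemon time to settle before retrying
done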

> Also, after running the cleanup, some of my containers can't be restarted (but some still work! might have to do with which images were updated since the migration already?). It looks like their associated subvolumes were removed by the script; I'm not sure why they didn't come up as active.

Were the affected containers running when the script was run?

007revad commented 5 months ago

This script was supposed to solve issues from running the syno_app_mover script, and not create more issues.

007revad commented 5 months ago

It looks like Container Manager's .json exports are useless when the image and/or subvolume no longer exist.

talz13 commented 5 months ago

Edit: I just manually made a "@docker.bak" folder and moved the whole contents of the @docker folder into it after uninstalling Container Manager. I reinstalled it, reinstalled Portainer, and it still had the stacks that I previously set up. A couple of quick stack updates later, everything was back up and running. I was able to deploy the remaining containers without issue!
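For anyone following along, that recovery boils down to roughly the sequence below. This is only a sketch of what was described above; the package name ContainerManager and the /volume2 path are assumptions, so adjust them for your NAS, and uninstalling via Package Center is the supported route.

# Rough outline of the backup-and-reinstall approach described in the edit.
# "ContainerManager" and /volume2 are assumptions; adjust for your system.
sudo synopkg stop ContainerManager          # stop the package (or uninstall it via Package Center)
sudo mkdir -m 710 /volume2/@docker.bak
sudo cp -prf /volume2/@docker/. /volume2/@docker.bak    # or mv the contents once nothing holds them open
# reinstall Container Manager and Portainer, then redeploy the stacks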

Original post: So I've started moving all my images to Portainer stacks, to be able to re-create them much more easily, but I'm still having issues with the couple of remaining images. It seems like as long as I'm using the same image as before (in this case tonesto7/echo-speaks-server, which hasn't been updated in a couple of years), it cannot "refresh" it, or it is still referencing those deleted IDs.

All my containers use external volumes, so I have no concerns about clearing out Container Manager / Docker and starting over. I'd just like to get rid of everything aside from my docker shared folder and start fresh, hopefully getting rid of the issues.

Since I'm not sure removing the /volume2/@docker folder is the best course of action, I tried moving it to @docker.bak instead, but got a busy error:

$ sudo mv \@docker \@docker.bak
Password: 
mv: cannot move '@docker' to '@docker.bak': Device or resource busy

Any ideas on that?

007revad commented 5 months ago

Try creating the @docker.bak folder, then copying the contents of @docker to @docker.bak instead of moving @docker.

# Create the backup folder with restrictive permissions, then copy the
# contents of @docker across, preserving ownership, permissions and timestamps.
if mkdir -m 710 "/volume2/@docker.bak"; then
    cp -prf "/volume2/@docker/." "/volume2/@docker.bak"
fi
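If you still want to do the mv later, it may help to first check what is holding @docker busy. A couple of read-only checks (a sketch only; lsof may not be installed on DSM):

# Optional check: see what is keeping @docker busy before trying mv again.
grep @docker /proc/mounts                    # is the subvolume (or a bind mount under it) still mounted?
sudo lsof +D /volume2/@docker 2>/dev/null    # processes with files open under @docker (if lsof is available)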

Edit: I just saw your edit where you did something similar.