tommyguuuun opened 1 month ago
Okay, update: I'm not even able to delete or modify any container, and I can't move it back to volume 1 using the script.
I found myself in the same situation. Were you able to solve the problem?
The script moves Container Manager and @docker to volume 2, and then edits the docker symlink in /var/packages/ContainerManager/var to point to @docker on volume2.
If /var/packages/ContainerManager/var/docker still points to @docker on volume1, then something went wrong. Deleting and recreating the symlink (while Container Manager is stopped) should fix it. I prefer to edit symlinks in WinSCP's UI.
A while ago someone ended up with a few orphan btrfs container volumes and images and could not delete or modify any containers, so I wrote a script to clean up the orphans: https://github.com/007revad/Synology_docker_cleanup IIRC this allowed them to delete containers, but they had to delete all of them and restore from their Portainer stacks or docker compose files.
See https://github.com/007revad/Synology_app_mover/issues/37 and https://github.com/007revad/Synology_docker_cleanup/issues/4
What do the following commands return:

```shell
sudo du -sh /volume1/@docker
sudo du -sh /volume2/@docker
sudo du -sh /volume1/@docker_backup
sudo du -sh /volume2/@docker_backup
```
And these commands:

```shell
readlink /var/packages/ContainerManager/etc
readlink /var/packages/ContainerManager/home
readlink /var/packages/ContainerManager/share
readlink /var/packages/ContainerManager/target
readlink /var/packages/ContainerManager/tmp
readlink /var/packages/ContainerManager/var
readlink /var/packages/ContainerManager/var/docker
```
```
root@DS923:/volume2/docker/pihole# du -sh /volume2/@docker
22G     /volume2/@docker
root@DS923:/volume2/docker/pihole# du -sh /volume1/@docker_backup
du: cannot access '/volume1/@docker_backup': No such file or directory
root@DS923:/volume2/docker/pihole# sudo du -sh /volume2/@docker_backup
du: cannot access '/volume2/@docker_backup': No such file or directory
root@DS923:/volume2/docker/pihole# readlink /var/packages/ContainerManager/etc
/volume2/@appconf/ContainerManager
root@DS923:/volume2/docker/pihole# readlink /var/packages/ContainerManager/home
/volume2/@apphome/ContainerManager
root@DS923:/volume2/docker/pihole# readlink /var/packages/ContainerManager/share
/volume2/@appshare/ContainerManager
root@DS923:/volume2/docker/pihole# readlink /var/packages/ContainerManager/target
/volume2/@appstore/ContainerManager
root@DS923:/volume2/docker/pihole# readlink /var/packages/ContainerManager/tmp
/volume2/@apptemp/ContainerManager
root@DS923:/volume2/docker/pihole# readlink /var/packages/ContainerManager/var
/volume2/@appdata/ContainerManager
root@DS923:/volume2/docker/pihole# readlink /var/packages/ContainerManager/var/docker
/volume2/@docker
```
I'm now trying to delete all btrfs subvolumes to get Docker to a working state again so that I can recreate all containers.
So this is what worked in the end:

1.) uninstall Container Manager
2.) reboot (without this, the next step would fail with "device busy")
3.) move aside /volume2/@docker
4.) install Container Manager
5.) recreate all containers
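The steps above could be sketched roughly as follows. Hedged: the /volume2 paths and the pihole stack directory are assumptions taken from earlier in this thread, and the reinstall itself happens through Package Center rather than the shell.

```shell
# Rough sketch of the recovery steps; paths are assumptions from this thread.
sudo synopkg uninstall ContainerManager
sudo reboot
# ...after the reboot:
sudo mv /volume2/@docker /volume2/@docker_old   # move aside rather than delete
# reinstall Container Manager via Package Center, then recreate containers
# from your compose files, e.g.:
cd /volume2/docker/pihole
sudo docker compose up -d
```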
I am also having this issue, but with only one container. The remaining 15 containers run fine after a successful transfer of Container Manager from /volume1 to /volume2; the exception is netdata, which fails to start with:

```
Start container netdata failed: {"message":"error evaluating symlinks from mount source \"/volume1/@docker/volumes/netdata_netdatalib/_data\": lstat /volume1/@docker/volumes: no such file or directory"}
```
The output of sudo du -sh /volume1/@docker is 0, and the fact that everything else is running fine suggests there is just the one symlink issue with netdata. Have you got any suggestions on how to fix this?
@ross090 How did you originally install netdata?
Via a docker-compose in Container Manager. I don't think I mounted the volumes correctly, but it still seemed to work. It's no big deal to delete the container and start again, but I can't seem to delete it. Can I use your orphaned containers script to delete just this one failed container? The folders I created in my docker data folder for netdata were empty, so I suspect I misconfigured something on this one.
I just installed netdata via docker compose, then ran my script to move it to a different volume and it still runs okay. And my folders in /volumeX/docker/netdata/ all contain data.
The orphaned containers script (https://github.com/007revad/Synology_docker_cleanup) should remove the broken netdata container and image. But make sure you have a backup of all your docker compose files, just in case something that is currently working gets removed. If I remember correctly, you should have all the other containers running so their images don't get deleted.
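Backing up the compose files before running the cleanup script could be as simple as the sketch below. Hedged: the /volume2/docker layout and the backup destination are assumptions; point the paths at wherever your stacks actually live.

```shell
# Sketch: copy every compose file under /volume2/docker into a dated backup
# folder, preserving each stack's directory structure.
# The source and destination paths are assumptions; adjust to your setup.
backup="/volume2/docker-compose-backup-$(date +%F)"
mkdir -p "$backup"
cd /volume2/docker
find . -maxdepth 3 \( -name 'docker-compose.yml' -o -name 'compose.yaml' \) \
  -exec cp --parents {} "$backup" \;
```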
Thanks Dave, I'll try that this evening. If the script removes working containers, can I just rebuild with my backed-up docker-compose files?
I had a look at my docker-compose for netdata and I passed incorrect volumes (despite it still running successfully): "/volume/" instead of "/volume1/". I suspect this may have caused an issue with your script, which is why it is now showing the '"btrfs" failed to remove root file system:' error. Happy to provide any files/info that you think may help prevent the issue recurring for others.
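In case it helps others spot the same mistake, a quick way to flag bind-mount host paths in a compose file that don't start with /volumeN/ might look like this sketch. Hedged: the compose file path is an assumption; point it at your own stack.

```shell
# Sketch: list volume lines whose host path is not under /volume<N>/,
# which would catch a typo like "/volume/..." vs "/volume1/...".
# The compose file path is an assumption.
grep -nE '^[[:space:]]*-[[:space:]]*/' /volume2/docker/netdata/docker-compose.yml \
  | grep -vE '^[0-9]+:[[:space:]]*-[[:space:]]*/volume[0-9]+/' \
  || echo "no suspicious host paths found"
```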
> If the script removes working containers can I just rebuild with my backed up docker-compose files?
Yes.
Thanks. I've run into further issues with synology_app_mover; I've sent you a Reddit PM with further details.
Hey guys,
I've got two problems/questions:

1.) I moved my Container Manager today to volume2 (NVMe pool) and all containers work fine, except for my paperless-ngx stack. I get the following error message, and I cannot even stop the stack:

What can I do about this?

2.) As I understand it, the script creates a symlink @docker on volume2, pointing to volume1. I just see @docker on the command line. But can I move my "docker" folder with all the configuration files from /volume1/docker to /volume2/docker, or would that cause a problem with the symlink?
Thanks in advance!