007revad / Synology_app_mover

Easily move Synology packages from 1 volume to another volume
MIT License

Method for moving CM from ext4 to btrfs #131

Closed: jradwan closed this issue 1 month ago

jradwan commented 1 month ago

I'm converting my ext4 volume to btrfs and going through the process of moving all shared folders and packages.

When I tried to move Container Manager to the new volume, I got this warning and the script stopped:

admin@DiskStation:/volume2/jcr_scripts/Synology_app_mover-4.0.73$ sudo -s ./syno_app_mover.sh
Password:
Synology_app_mover v4.0.73
DS720+ DSM 7.2.2-72806

Running from: /volume2/jcr_scripts/Synology_app_mover-4.0.73/syno_app_mover.sh

1) Move
2) Backup
3) Restore
Select the mode: 1
You selected Move

[Installed package list]
1) /volume1  Cloud Sync           6) /volume2  IDrive
2) /volume1  Container Manager    7) /volume2  Plex Media Server
3) /volume1  Hyper Backup         8) /volume2  Storage Analyzer
4) /volume1  PHP 7.4              9) /volume2  Text Editor
5) /volume1  PHP 8.0             10) /volume2  Web Station
Select the package to move: 2
You selected Container Manager in /volume1

Destination volume is /volume2

WARNING Do not move ContainerManager from ext4 volume to btrfs volume!

What's the right way to move CM to the btrfs volume? I couldn't find anything here about what the risks are of moving across file systems like that. Do I have to actually re-install it from Synology and point it to /volume2? If so, can I then restore a backup made with this script into that new install to get everything back?

007revad commented 1 month ago

The script used to allow moving Container Manager from a btrfs volume to an ext4 volume, or vice versa. Occasionally people would report that their containers needed to be migrated.

After moving Container Manager from a btrfs volume to an ext4 volume, when you open Container Manager you get a message saying all your containers are incompatible:

[screenshot: after_moving_containermanager]

When you click Manage you get an option to Migrate your containers:

[screenshot: after_moving_containermanager_warning2]

The part that says "The containers' contents will be deleted after the Migration" was a concern, so I thought it was safer to prevent people moving Container Manager between file systems until I've had time to work out what exactly that warning means.

I was able to migrate grafana and nginx, but I had no data or settings. I suspect that if the user had configured each container to save its settings to the docker shared folder, it would be okay.
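
Something like this in the compose file is what I mean by saving settings to the shared folder (the service and paths here are just an example, not from my setup):

# Hypothetical example: keep grafana's data on the docker shared folder
# so it survives a Container Manager migration. Adjust paths to suit.
mkdir -p /volume2/docker/grafana/data
cat > /volume2/docker/grafana/compose.yaml <<'EOF'
services:
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      # bind mount: data lives on the shared folder, not inside @docker
      - /volume2/docker/grafana/data:/var/lib/grafana
EOF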

Restoring a container manager backup to a different file system would have the same issue.

jradwan commented 1 month ago

Ok, so it sounds like maybe I should just export all my dockerfiles (or confirm the ones exported by your script), then remove Container Manager, re-install clean, and re-create my containers. Although I'll need to make sure I also document my other CM settings, like networks.
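
To capture the network settings, something like this should be enough (the network name is just an example):

sudo docker network ls   # list networks; the custom ones are what need documenting
# dump a custom network's config to the shared folder (hypothetical name)
sudo docker network inspect my_custom_net > /volume2/docker/my_custom_net.json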

I also haven't moved the docker shared folder to the new btrfs volume.

jradwan commented 1 month ago

I moved my shared Docker folder to the new volume and had issues with the containers because all of the mounted volumes were referencing /volume1 instead of /volume2, so I had to update each container (and YAML file) with the new volume before they would start.

[screenshot]
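
In case it helps anyone else, most of the path updates can be scripted. This sketch assumes the YAML files live under /volume2/docker and that every /volume1/ occurrence really is a path you want changed, so check the grep output first:

# find every YAML file that still references the old volume
grep -rl '/volume1/' /volume2/docker --include='*.yaml' --include='*.yml'
# then rewrite the paths in place
sudo find /volume2/docker \( -name '*.yaml' -o -name '*.yml' \) \
  -exec sed -i 's|/volume1/|/volume2/|g' {} +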

So now the last package I have on /volume1 is Container Manager.

007revad commented 1 month ago

I forgot to mention that when CM said I had to migrate my containers, I first tried importing the JSON exports and CM gave an error, which is why I went ahead with the migration.

If you want to use the script to move CM to the btrfs volume, edit the script and change line 2226 from:

exit 3

to this:

#exit 3
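
Or, if you'd rather not edit it by hand, something like this should do it (assuming the exit 3 is still on line 2226 in your copy of the script, so check first):

sed -n '2226p' syno_app_mover.sh             # confirm this line really is "exit 3"
sudo sed -i '2226s/^/#/' syno_app_mover.sh   # prefix it with # to comment it out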

And let me know if the migration worked for all your containers, or if you needed to (and were able to) import the JSON exports, or had to recreate the containers from your YAML files.

jradwan commented 1 month ago

Ok, I commented out the exit line and then ran the script to move CM from /volume1 (ext4) to /volume2 (btrfs). I had previously moved my shared docker folder to the new volume. As you indicated, after the move was complete and CM started up, all my containers were marked "incompatible":

[screenshot]

I "migrated" some smaller, less important ones first. It looks like what CM did was re-download the images and then re-build the container using the migrated settings:

[screenshot]

The migrated containers seemed fine, so I migrated a few more that had more "data" (like the Unifi Controller or Pi-Hole) and they also were fine after they started up, no data lost. I had one container (qbittorrent) that would not migrate due to "conflicting options":

[screenshot]

I had to "delete" the container from the migration window and then used my project/YAML file to run a build which downloaded the latest image and re-built everything properly.

Seems like I'm back up and running!

I'm guessing Synology's phrase "The containers' contents will be deleted after the migration" is referring to the downloaded image and container? So if everything user-related is saved out on the shared folder, it should be fine.
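
One way to check that before migrating a container is to look at where its mounts actually point (the container name is just an example):

# bind mounts whose Source is on the shared folder survive the migration;
# anything kept only in the container's writable layer does not
sudo docker inspect --format '{{json .Mounts}}' unifi-controller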

jradwan commented 1 month ago

I think this can be closed, thanks for your help.

I ended up re-migrating my new btrfs volume (/volume2) back to a new btrfs volume (/volume1) in an attempt to fix my IDrive issue (it didn't). So I had to move all my Container Manager stuff back to /volume1. That was a btrfs-to-btrfs move (as opposed to my original ext4-to-btrfs move), so I thought it would be smooth, but I ended up with a corrupted container that could not be removed (like issue #104).

Without doing enough research, I tried the cleanup script, which found and removed 148 (!!) btrfs subvolumes, which completely messed up all my containers, and CM was throwing API errors when I tried to create new ones (like this). I had to sudo rm -rf /volume1/@docker/image/btrfs and then I was able to use my YAML compose files to re-build everything. Since all of my Docker volumes are on a shared folder, I didn't lose any data (so I probably should have just re-installed clean to begin with).
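
Sketched out, the recovery looked roughly like this. It's destructive, the synopkg stop/start around the rm is an extra precaution I'd suggest, and it's only safe if all your data is on a shared folder:

sudo synopkg stop ContainerManager                     # don't delete the image store while CM is running
sudo btrfs subvolume list /volume1 | grep '@docker'    # review what a cleanup would touch first
sudo rm -rf /volume1/@docker/image/btrfs               # wipe CM's btrfs image store
sudo synopkg start ContainerManager                    # then rebuild containers from the compose files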

Whew.

007revad commented 1 month ago

I'm not 100% sure what Container Manager does when it migrates containers.

When I originally moved CM from btrfs to ext4 and migrated the 2 containers, I ended up with 3 containers:

  1. grafana-2 (stopped)
  2. nginx (running)
  3. grafana (running)

When I moved CM back to the btrfs volume, CM showed all 3 containers as needing migration, and the same 3 were still present:

  1. grafana-2 (stopped)
  2. nginx (running)
  3. grafana (running)

If I try to migrate, it fails because containers with the same names already exist. The migrate window has a button to delete the selected containers, so I just deleted them... and the 1 stopped and 2 running containers were still okay.

If I move CM back to the ext4 volume again, the same thing happens.

It's not a problem, but it is strange behaviour.