sharkykh opened 6 years ago
I'm thinking along these lines:
- It should not be necessary to upgrade the Docker image to receive a Medusa upgrade.
- The Medusa Docker container should perform an upgrade only when it is started. This update could be gated behind a settable env var (AUTOUPDATE=true, ...), in case users do not want to update automatically.
- A freshly created container always installs or updates Medusa. Medusa should not be part of the image but should be installed if it's not there.
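For illustration, the proposed opt-in switch could be checked like this at startup. This is only a sketch of the idea, not Medusa's actual code; the helper name `should_auto_update` is made up, and only the `AUTOUPDATE` variable name comes from the proposal above:

```python
import os

def should_auto_update(environ=None):
    """Return True only when the user opted in via AUTOUPDATE.

    Accepts a few truthy spellings; anything else (or an unset
    variable) means "do not update automatically".
    """
    env = os.environ if environ is None else environ
    return env.get("AUTOUPDATE", "").strip().lower() in ("1", "true", "yes")
```

An entrypoint could call this once on container start and only then trigger the update step.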
@udochrist I agree with most of that, apart from this. Docker containers should be stateless and, more importantly, immutable. We should be shipping with the software fully installed. To use Medusa at any version, you should be able to just update the container, and the needed migrations would run on start.
My reasoning here is:
If the container is immutable, then there is no need for any version management or info to the user.
Being a new-version junkie, I want to update ASAP and not wait an additional day until a third-party maintainer updates the image to the new Medusa version.
On 14.09.2018 at 12:40, Alexis Tyler notifications@github.com wrote:
> a fresh created container always installs or updates medusa. medusa should not be part of the image but installed if its not there.
> @udochrist I agree on most of that apart from this. Docker container should be stateless and more importantly immutable. We should be shipping with the software fully installed. To use medusa on any version you should be able to just update the container and the migrations needed would run on start.
@udochrist we provide an official container that's built for master and develop branches on merge. It usually takes about 5 minutes for the new images to be pushed.
Edit: Here you are.
It shouldn't be any more difficult than having a proper 'universal' method of checking whether Medusa is running in Docker, and disabling the auto-updating when that's detected. Updating the Docker image would then upgrade the DB on the next start.
Or am I missing something?
root@34add48022da:/$ ls -la /.dockerenv
-rwxr-xr-x 1 root root 0 Sep 8 05:16 /.dockerenv
root@34add48022da:/$
That should be the indicator to check for running inside docker currently.
Yeah, but I don't know if the medusa user will always have access to that. Isn't there something like an env variable?
@udochrist that file doesn't exist 100% of the time. It shouldn't be relied on.
@OmgImAlexis what do you mean by "... not 100% of the time"?
docker-ce/components/engine/daemon/initlayer/setup_unix.go creates it and the / dir with perms 755
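A detection helper along the lines discussed could combine the `/.dockerenv` marker with the control-group info of PID 1, since, as noted above, neither signal alone is reliable. This is a hypothetical sketch, not Medusa's actual code:

```python
import os

def running_in_docker():
    """Best-effort Docker detection; neither signal is guaranteed."""
    # Marker file created by the Docker daemon (when present).
    if os.path.exists("/.dockerenv"):
        return True
    # Fall back to PID 1's cgroup info, which often names the runtime.
    try:
        with open("/proc/1/cgroup", "rt") as cgroup:
            return any("docker" in line for line in cgroup)
    except OSError:  # /proc unavailable, e.g. on non-Linux hosts
        return False
```

Medusa could then refuse to self-update when this returns True and leave updates to the image.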
For reference, a log of pymedusa/medusa:develop updating.
2018-11-13 07:56:57 INFO CHECKVERSION :: [] Checking for updates using SOURCE
2018-11-13 07:56:59 DEBUG CHECKVERSION :: [] cur_commit = , newest_commit = 0c0a73583452f988c518adeafa75ee1f17987bdf, num_commits_behind = 0
2018-11-13 07:56:59 DEBUG CHECKVERSION :: [] Unknown current version number, don't know if we should update or not
2018-11-13 07:56:59 DEBUG CHECKVERSION :: [] Checking GitHub for latest news.
2018-11-13 07:56:59 DEBUG CHECKVERSION :: [] GET URL: https://cdn.pymedusa.com/news/news.md [Status: 200]
2018-11-13 07:56:59 DEBUG CHECKVERSION :: [] User-Agent: Medusa/0.2.11 (Linux; 4.4.0-47-generic; ac99bc26-e719-11e8-96bd-0242ac110009)
2018-11-13 07:57:17 INFO Thread_0 :: [] Config backup in progress...
2018-11-13 07:57:17 INFO Thread_0 :: [] Config backup successful, updating...
2018-11-13 07:57:19 DEBUG Thread_0 :: [] cur_commit = , newest_commit = 0c0a73583452f988c518adeafa75ee1f17987bdf, num_commits_behind = 0
2018-11-13 07:57:19 INFO Thread_0 :: [] Clearing update folder u'/app/medusa/sr-update' before extracting
2018-11-13 07:57:19 INFO Thread_0 :: [] Downloading update from u'http://github.com/pymedusa/Medusa/tarball/master'
2018-11-13 07:57:19 DEBUG Thread_0 :: [] GET URL: http://github.com/pymedusa/Medusa/tarball/master [Status: 301]
2018-11-13 07:57:19 DEBUG Thread_0 :: [] User-Agent: Medusa/0.2.11 (Linux; 4.4.0-47-generic; ac99bc26-e719-11e8-96bd-0242ac110009)
2018-11-13 07:57:19 DEBUG Thread_0 :: [] GET URL: https://github.com/pymedusa/Medusa/tarball/master [Status: 302]
2018-11-13 07:57:19 DEBUG Thread_0 :: [] User-Agent: Medusa/0.2.11 (Linux; 4.4.0-47-generic; ac99bc26-e719-11e8-96bd-0242ac110009)
2018-11-13 07:57:20 DEBUG Thread_0 :: [] GET URL: https://codeload.github.com/pymedusa/Medusa/legacy.tar.gz/master [Status: 200]
2018-11-13 07:57:20 DEBUG Thread_0 :: [] User-Agent: Medusa/0.2.11 (Linux; 4.4.0-47-generic; ac99bc26-e719-11e8-96bd-0242ac110009)
2018-11-13 07:57:35 INFO Thread_0 :: [] Extracting file /app/medusa/sr-update/sr-update.tar
2018-11-13 07:57:40 INFO Thread_0 :: [] Deleting file /app/medusa/sr-update/sr-update.tar
2018-11-13 07:57:40 INFO Thread_0 :: [] Moving files from /app/medusa/sr-update/pymedusa-Medusa-0c0a735 to /app/medusa
2018-11-13 07:57:41 DEBUG Thread_0 :: [] Special handling for unrar.dll
2018-11-13 07:57:41 DEBUG Thread_0 :: [] Special handling for unrar64.dll
2018-11-13 07:57:41 DEBUG Thread_0 :: [] Special handling for MediaInfo.dll
2018-11-13 07:57:41 DEBUG Thread_0 :: [] Special handling for MediaInfo.dll
2018-11-13 07:57:42 DEBUG Thread_0 :: [0c0a735] Notification for Boxcar2 not enabled, skipping this notification
2018-11-13 07:57:42 INFO EVENT-QUEUE :: [0c0a735] Aborting all threads
2018-11-13 07:57:42 INFO EVENT-QUEUE :: [0c0a735] Waiting for the DAILYSEARCHER thread to exit
2018-11-13 07:57:42 INFO EVENT-QUEUE :: [0c0a735] Waiting for the BACKLOG thread to exit
2018-11-13 07:57:42 INFO EVENT-QUEUE :: [0c0a735] Waiting for the SHOWUPDATER thread to exit
2018-11-13 07:57:42 INFO EVENT-QUEUE :: [0c0a735] Waiting for the CHECKVERSION thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the SHOWQUEUE thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the SEARCHQUEUE thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the FORCEDSEARCHQUEUE thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the MANUALSNATCHQUEUE thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the POSTPROCESSOR thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the TRAKTCHECKER thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the FINDPROPERS thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the FINDSUBTITLES thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the TORRENTCHECKER thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Waiting for the EVENT-QUEUE thread to exit
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Saving all shows to the database
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Saving config file to disk
2018-11-13 07:57:43 DEBUG EVENT-QUEUE :: [0c0a735] Broken providers already updated in the last hour
2018-11-13 07:57:43 INFO EVENT-QUEUE :: [0c0a735] Shutting down Tornado
2018-11-13 07:57:48 INFO EVENT-QUEUE :: [0c0a735] Restarting Medusa with ['/usr/bin/python', u'/app/medusa/start.py', '--nolaunch', '--datadir', '/config']
The bug

During the past 2 database migrations, the databases of some of the users got updated past the "max" database version, and some got incorrect values inserted into them. For example, between `v0.2.4` and `v0.2.5` we bumped the main DB version from `44.9` to `44.11`. However, during the migration to shift the quality values, some of the users got incorrect values inserted into their databases. In addition, some users' database versions actually got bumped past `44.11`, to `44.12`, which caused an issue for those users when the next database migration landed in `v0.2.9`.

What happens when Medusa updates?
Currently, after Medusa updates its code, either using `git pull` or by downloading the archive from GitHub, it stops the web server and other threads, closes open files, etc. After that's done, and unless the user has enabled the `No Restart` option, it spawns a new Medusa process with the same arguments as the one that was running. (source code)

How does the restart process affect the database migrations?
Due to the use of process monitors or service restarters, which automatically restart a process if it dies, in conjunction with Medusa's restart process (with `No Restart` off), Medusa gets loaded twice, at the same time, by two separate processes. The database migrations (try to) execute on both processes at the same time, causing unexpected changes to database entries.

The current Docker configuration (ours, @linuxserver's, and possibly others) uses `S6 overlay` to automatically restart the server inside the container when it dies (source code). This means that as soon as the original Medusa process dies, `S6` spawns a new one, but Medusa does too, which causes the scenario explained above. This can be seen clearly in the Docker logs: if you restart Medusa from the web UI, you can see duplicate log messages, and the server tries to bind to the port twice. One of the processes succeeds, and the other reports that it can't bind to the port because it's in use, and exits.

It's very possible that any other platform using a service restarter has this problem as well. The Windows installation could also be affected.
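One possible guard against two processes running the migrations concurrently is an exclusive file lock taken before the schema upgrade. This is a hypothetical mitigation sketch, not Medusa's actual code; the lock path and helper names are made up for illustration:

```python
import fcntl

def run_migrations_once(db_upgrade, lock_path="/tmp/medusa-db-upgrade.lock"):
    """Run `db_upgrade` under an exclusive, non-blocking file lock.

    If two Medusa processes start at the same time (one spawned by S6,
    one by Medusa's own restart logic), only the first one acquires the
    lock and runs the migrations; the second returns False and skips.
    The lock is released when the file is closed on exit of the block.
    """
    with open(lock_path, "w") as lock_file:
        try:
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False  # another process is already migrating
        db_upgrade()
        return True
```

This only narrows the race; the underlying double-spawn would still need fixing.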
What can be done (incomplete)

- Use `watchtower` to monitor the Docker container's status and update it to a newer version when one is available.

Additional information