Open wbern opened 4 years ago
I agree. Even just for the sake of debugging a setup, on any machine, it's slow.
@cpoppema what are your thoughts on migrating `flexget`'s requirements management from `pip` to `conda`? It would probably make the image size larger, but `conda` is actually a package manager, in the sense that it finds the dependency versions that play nicely with each other. I think it also does a better job at updating than `pip` does.
I'd be willing to look into this further, if need be.
First I want to apologize to @wbern for overlooking this issue as long as I did! That wasn't my intention.
I am not against trying anything other than plain pip. I think the main issue with using `pip install -r requirements.txt` is that flexget has certain flags in the source code checking for specific versions that might or might not be specified in their `setup.py`. It might be faster to use conda, pip-env, or whatever. If you want to look into that, please indulge yourself :stuck_out_tongue:
It shouldn't be too much trouble adding a flag `AUTO_UPDATE=0` which skips the pip installs after a first run.
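A minimal sketch of what such a gate could look like in the container's entrypoint (the `AUTO_UPDATE` variable name is from this thread, but the script itself, including the commented-out update step, is an assumption, not the actual entrypoint):

```shell
#!/bin/sh
# Hypothetical entrypoint fragment: skip the update step when AUTO_UPDATE=0.
# Defaults to updating, to preserve the container's current behavior.
if [ "${AUTO_UPDATE:-1}" = "0" ]; then
    echo "AUTO_UPDATE=0 set, skipping pip install"
else
    echo "updating flexget via pip"
    # pip install --upgrade flexget   # the real update command would go here
fi
```

Run with `-e AUTO_UPDATE=0` to skip the install on subsequent starts; omit the variable to keep today's auto-updating behavior.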
Hmm... honestly, giving it a little more thought, I think a flag similar to how lsio/plex works might be the easiest? Like you said, having something like `-e AUTO_UPDATE=0`, or following lsio/plex's convention.
Migrating to `conda` and the like could be useful, but my experience with `conda` in docker has typically bordered on nightmarish.
I'd definitely be willing to give that a crack if you need someone else to do it.
I would recommend not updating at all in the container, as docker best practice (and half the advantage of docker) calls for users running just what the developer has built and tested. Updating anything user-side breaks this relationship.
Real world example: pip has broken this container twice for me in the past, and because of the way it is built I can't just roll back to a known good build.
Better would be having recurring dockerhub build updates, and if you need in-container updates, relegating them to a development branch.
I know this is a pretty radical change but it really is the right way to go.
@ionlights
Back when I started this container lsio didn't have any specific -e VERSION=
flags. As far as I can tell, most still don't. Perhaps they only use it when the software that is being installed provides specific version downloads. I don't have any personal preference in what flag I'd add. But it would be a simple on/off since there are no version-pinned packages (with dependencies included) for flexget.
@anoma
I would recommend not updating at all in the container as docker best practice calls (and half the advantage of docker) for users running just what the developer has built and tested.
Also read this if you're interested: https://github.com/cpoppema/docker-flexget/issues/39#issuecomment-484231549.
Better would be having dockerhub recurring build updates and if you need in container updates relegate them to a development branch.
Last time I checked, dockerhub still didn't support multi-arch automated builds :disappointed:
@cpoppema
Back when I started this container lsio didn't have any specific `-e VERSION=` flags. As far as I can tell, most still don't. Perhaps they only use it when the software that is being installed provides specific version downloads. ...
Ahh, didn't pay much mind to when dev on this started. :sweat_smile:
Mostly proposed based on how it works for the lsio/plex container, though I don't think many of their containers update in the container.
... I don't have any personal preference in what flag I'd add. But it would be a simple on/off since there are no version-pinned packages (with dependencies included) for flexget.
I'm not too familiar with the `flexget` code-base. Do you mean that they don't specify the versions in their `requirements.txt` but check within their code? :eyes:
Better would be having dockerhub recurring build updates and if you need in container updates relegate them to a development branch.
Last time I checked, dockerhub still didn't support multi-arch automated builds :disappointed:
Would taking advantage of GitHub Actions' Docker Registry allow for multi-arch support? Based on my limited understanding here, I believe this is something I could also actively work on as I've worked with GitHub's registry before.
Ahh, didn't pay much mind to when dev on this started. :sweat_smile: Mostly proposed based on how it works for the lsio/plex container, though I don't think many of their containers update in the container.
Wow, much has changed over the years :joy:. 4-5 years ago I started using linuxserver's containers like sonarr, nzbget, transmission. They all used to auto-update. That's why I started doing it for Flexget too. There are still some forks around that people made that still show this actually. Easily recognized by "Upgrade to the latest version of X simply docker restart X.", now there is a paragraph in most READMEs saying:
Most of our images are static, versioned, and require an image update and container recreation to update the app inside. With some exceptions (ie. nextcloud, plex), we do not recommend or support updating apps inside the container.
As far as I can tell, they've streamlined building their images to a level I will never reach: they host their own alpine repositories for some of their software running in containers. They keep track of any version bumps and automatically do multi-arch builds in their own pipeline to now push out version-tagged images automatically on software updates.
I'm not too familiar with the flexget code-base. Do you mean that they don't specify the versions in their requirements.txt but check within their code? :eyes:
Apparently, also outdated information. Flexget now uses pip-tools to generate a `requirements.txt` (exact versioning) based on a `requirements.in` (possibly fuzzy versioning). Because of this strict version schema, pip believes there is no more leeway when it comes to versions of dependencies when e.g. installing plugins (which are optional, so not defined in a requirements.in/txt). Creating a requirements.txt with plugins + flexget and doing a `pip install -r requirements.txt` resulted in errors such as:
```
# ERROR: flexget 3.0.8 has requirement beautifulsoup4==4.6.0, but you'll have beautifulsoup4 4.8.1 which is incompatible.
# ERROR: flexget 3.0.8 has requirement certifi==2017.4.17, but you'll have certifi 2019.9.11 which is incompatible.
# ERROR: flexget 3.0.8 has requirement chardet==3.0.3, but you'll have chardet 3.0.4 which is incompatible.
# ERROR: flexget 3.0.8 has requirement click==6.7, but you'll have click 7.0 which is incompatible.
# ERROR: flexget 3.0.8 has requirement pytz==2017.2, but you'll have pytz 2019.3 which is incompatible.
# ERROR: flexget 3.0.8 has requirement requests==2.21.0, but you'll have requests 2.22.0 which is incompatible.
# ERROR: flexget 3.0.8 has requirement urllib3==1.24.2, but you'll have urllib3 1.25.7 which is incompatible.
```
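The pattern behind those errors can be illustrated with the `packaging` library. The `==4.6.0` pin and the installed `4.8.1` come from the first error line above; the `>=4.1.0` plugin-side specifier is a made-up example of the looser bound a plugin might declare:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pinned = SpecifierSet("==4.6.0")   # exact pin written into the compiled requirements.txt
fuzzy = SpecifierSet(">=4.1.0")    # hypothetical looser bound a plugin might declare

installed = Version("4.8.1")       # the beautifulsoup4 that actually gets installed

print(installed in fuzzy)   # True: a fuzzy specifier would have allowed it
print(installed in pinned)  # False: the exact pin is what makes pip flag it incompatible
```

This is why the strict output of pip-compile leaves pip no leeway: any dependency a plugin resolves to a newer version immediately conflicts with the exact pin.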
Would taking advantage of GitHub Actions' Docker Registry allow for multi-arch support? Based on my limited understanding here, I believe this is something I could also actively work on as I've worked with GitHub's registry before.
I have not played around with GitHub Actions before so I can't speak to what is or isn't possible. Looking around real quick, it seems like there is a tech-preview release of a new tool, https://github.com/docker/buildx, that should be able to integrate multi-arch builds with GitHub Actions (example). But to be honest it still looks a bit troublesome. So far manually building & pushing isn't too much effort with the number of updates I am doing (plus I enjoy local cache; without it, building initially takes several hours on non-VM hardware, so testing it through GitHub Actions doesn't sound too inviting right now :stuck_out_tongue: ).
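For reference, the manual buildx flow being discussed boils down to something like this. This is a sketch only: it assumes the buildx plugin is installed and you're logged into a registry, and the builder name, image tag, and platform list are placeholders:

```shell
# Create and select a builder that supports multi-platform builds
docker buildx create --name multiarch --use

# Build for several architectures at once and push the manifest list
docker buildx build \
  --platform linux/amd64,linux/arm/v6,linux/arm/v7 \
  -t example/docker-flexget:latest \
  --push .
```

The same two commands are essentially what the GitHub Actions examples wrap, which is why the local and CI setups can share one Dockerfile.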
So, I'm curious: what's kept this from becoming an lsio-backed container? (No intent to pawn off the work you've done or anything, honestly just curious.)
Also, when you say plugins, are you referring to adding things like a Trakt List or using Deluge/Transmission?
I skimmed the `requirements.in` from `flexget` and it seems like they needlessly version lock. E.g. `requests>=2.20.0` is in their `requirements.in`, but they lock to a specific version instead of specifying a base version. I wonder if there's a way to get around that? (Or, I guess, if it's necessary to try getting around that.)
https://github.com/Flexget/Flexget/blob/f6f0a435d8796abfbbd9d55180e00983b73959fd/requirements.in#L12
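To make the distinction concrete, this is roughly how the two files relate for that line. The `requests>=2.20.0` input is from the linked `requirements.in`; the compiled pin `2.21.0` is taken from the error output earlier in the thread, so treat the exact numbers as illustrative:

```
# requirements.in -- human-edited input, fuzzy bounds allowed
requests>=2.20.0

# requirements.txt -- generated by pip-compile, every version pinned exactly
requests==2.21.0    # resolved from requests>=2.20.0
```

So the lock lives in the generated file, not the input; pip only ever sees the exact pins.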
As for using GitHub Actions and such: having briefly worked with them, the most troublesome part is actually testing your pipeline "in production" since GitHub doesn't [currently] have a way to test without making unnecessary commits to the repository.
So, I'm curious: what's kept this from becoming an lsio-backed container? (No intent to pawn off the work you've done or anything, honestly just curious.)
No idea really. Maybe when asked on their Discord they're happy to pick this up :slightly_smiling_face:
Also, when you say plugins, are you referring to adding things like a Trakt List or using Deluge/Transmission?
Yes, all Flexget plugins (complete list). Some of those require optional packages.
@cpoppema Jump on our discord and ping one of the team ;-)
While troubleshooting, I find this step quite slow on my Pi.