Open · lilws opened this issue 3 years ago
I am not entirely sure about the Docker angle: if your NAS runs Linux, you should be able to use this directly. Or does the NAS only support images?
If you could share some more info on your setup, I can perhaps help out.
Hi, I use Xpenology; it is a mod of the Synology OS that runs on non-Synology hardware. When I SSH into it and run uname, I get this:
Linux Synology 4.4.59+ #25426 SMP PREEMPT Tue May 12 04:54:55 CST 2020 x86_64 GNU/Linux synology_apollolake_918+
As far as I can tell, it is a fairly bare-bones Linux: there is no apt-get or anything like it to install packages. It has its own Software Center, which I find less than ideal. But with Docker running on it, almost nothing is impossible, since a container is a virtualized environment and can run almost anything, as long as the image supports the hardware (x64 or ARM, for example).
I'm new to Docker too, but as I understand it: devs create an image, built from some Linux distro, containing all the packages and dependencies needed to run the software. It's lightweight, fast, and easy to manage because it's a virtual environment. Users can map any folder of the container out to the host, so data isn't lost if the container fails to run or an error happens. Devs share these images on Docker Hub, and people like me download them, create a container from the image, and run it in Docker.
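The image → container → volume-mapping flow described above can be sketched with a single `docker run`. This is only an illustration: `linuxserver/qbittorrent` is a commonly used community image, and the host paths are placeholders for wherever your NAS keeps its data.

```shell
# Pull the image from Docker Hub (if not already cached) and start a
# container from it. Each -v maps a host folder into the container, so
# the data survives even if the container errors out or is recreated.
docker run -d --name qbittorrent \
  -p 8080:8080 \
  -v /volume1/docker/qbit/config:/config \
  -v /volume1/downloads:/downloads \
  linuxserver/qbittorrent
```

With that mapping in place, deleting and recreating the container leaves everything under /volume1/docker/qbit/config and /volume1/downloads intact.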
These are the 4 containers I'm currently running on my NAS; since qBittorrent lost the last race, I tried rtorrent.
Sorry for the late reply. From what I can tell, this would not be possible with the current version of the script, since autodl needs to call the script path directly to trigger it. Can I ask how you pass torrents from autodl to qBittorrent?
Because these containers are isolated, I am not sure autodl would be able to trigger the script to force reannounces.
Yes, the containers are isolated from each other, but we can always mount a shared folder so they can work together. In the photo above, floodru is Flood UI; I mount its /config to the same location as the rutorrent-flood container. So these two share the same /config folder, and Flood can access the rutorrent socket to do its work.
I was also able to make Flood work with qBittorrent, since it mainly connects via the WebUI address with a username and password. Since two different containers can work together, I think your script could work the same way; it just needs to operate in the same /config as the other container. I'm not a dev, though, so I can't help much beyond that.
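The shared-folder trick described above can be sketched as two `docker run` commands mounting the same host path. The container names, image names, and host path below are only examples of the pattern, not the exact setup in the photo:

```shell
# Both containers mount the same host folder at /config, so one can
# read the other's files (e.g. a socket or a watch folder).
docker run -d --name rutorrent-flood \
  -v /volume1/docker/shared-config:/config \
  crazymax/rtorrent-rutorrent

docker run -d --name floodru \
  -v /volume1/docker/shared-config:/config \
  jesec/flood
```

The same idea would let a qbit-race container see files written by another container, even though their processes stay isolated.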
I'm definitely sure that connecting to qbit over the WebUI across Docker containers would not be a problem. I will explore the filesystem side and see if it is possible. It's not a priority for me right now, but I will let you know if/when I make progress. Cheers
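For the WebUI side, qBittorrent's v2 Web API makes the cross-container case straightforward: log in once to get a session cookie, then call the reannounce endpoint. A minimal sketch with curl, assuming the container is reachable as `qbittorrent:8080` on the Docker network and using placeholder credentials:

```shell
# Log in and store the SID session cookie (credentials are placeholders).
curl -c /tmp/qbit-cookies.txt \
  --data 'username=admin&password=adminadmin' \
  http://qbittorrent:8080/api/v2/auth/login

# Force a tracker reannounce for all torrents (or pass specific
# infohashes separated by '|' instead of 'all').
curl -b /tmp/qbit-cookies.txt \
  --data 'hashes=all' \
  http://qbittorrent:8080/api/v2/torrents/reannounce
```

Since this goes over HTTP, it works regardless of which container (or host) the caller runs in, as long as the WebUI port is reachable.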
Any updates on this? Would be very interested in setting up this tool using docker. I could also help implementing any changes that would be needed. For context, I'm running Qbit and Autobrr in independent containers.
Hi @jaimeferj, one of the main blockers on this for me is that I don't quite understand what is required to "dockerize" this project, in the sense of how most people's networking / container setups work.
Could you explain a bit more about how your autobrr and qbittorrent containers interact, and how you would expect qbit-race to fit into it?
For instance, what would trigger qbit-race in your expected setup?
I use qBittorrent on my NAS with a Docker setup. I chose qbit over rtorrent because it gets updated more frequently and offers many features rtorrent can't match, like queueing, etc.
Now when I try to race on the tracker, I only come in around the top 20 or lower, even though the peer list makes it look very close.
I wonder if it is possible to turn this into a Docker image?