Unmanic / unmanic

Unmanic - Library Optimiser
GNU General Public License v3.0

[Feature Request] Remote Unmanic instances with their own access to the Media configured #261

Open Makeshift opened 2 years ago

Makeshift commented 2 years ago

My understanding is that when adding a remote Unmanic installation, the primary instance dispatches jobs to secondary instances by sending the file to be converted over the network.

In my use-case, I have multiple servers that all have access to the media store directly, and so they do not need to send it via the network. I am aware that I could just have multiple non-connected Unmanic instances and they would read from the .unmanic file to work out what jobs have/haven't been completed on a file, but scanning my library takes a long time and I suspect there would be overlap issues.

Can there be an option to allow secondary instances to use a file path on the secondary instance itself, but still obtain jobs from the primary?
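
A minimal sketch of what the requested behaviour might look like on a secondary node (all names here are hypothetical, not Unmanic's actual API): check a locally configured library root first, and only fall back to pulling the file from the primary.

```python
import os

# Assumed per-worker setting: where this node can see the shared media itself.
LOCAL_LIBRARY_ROOT = "/mnt/media"


def resolve_source(job_relative_path: str, fetch_from_primary) -> str:
    """Return a readable path for the job, preferring direct library access.

    job_relative_path: path of the file relative to the library root,
        as reported by the primary (e.g. "Movies/Movie.mp4").
    fetch_from_primary: callback that downloads the file from the primary
        and returns a local temp path (the current transfer-based behaviour).
    """
    local_path = os.path.join(LOCAL_LIBRARY_ROOT, job_relative_path)
    if os.path.isfile(local_path):
        # This node has its own view of the media store: no transfer needed.
        return local_path
    # Fall back to the existing behaviour: stream the file from the primary.
    return fetch_from_primary(job_relative_path)
```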

aron7676 commented 2 years ago

Dropping in to show my support for this feature as well. I have two centralized NASes and multiple devices on the same network that can run Unmanic. It shouldn't need to pull the file from NAS -> Master Node -> Slave Node -> Master -> NAS. If there were a flag to use a shared path, so the file could go directly NAS -> Slave -> NAS, that would be ideal.

E.g., Master sees \\NASshare\Movie.mp4 in the shared library, passes only the job info to Slave, and tells it to look for the file at \\NASshare\Movie.mp4. Saves a ton of time transferring the file around!
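
For illustration, a per-node prefix map could translate the path the master advertises into wherever this slave mounts the same share (the mapping format and mount points below are made up, not an existing Unmanic setting):

```python
# Hypothetical per-node mapping: master's view of a share -> this node's view.
PATH_MAP = {
    r"\\NASshare": "/mnt/nas",
}


def translate(master_path: str) -> str:
    """Rewrite a path advertised by the master into this node's local path."""
    for prefix, local_prefix in PATH_MAP.items():
        if master_path.startswith(prefix):
            tail = master_path[len(prefix):].replace("\\", "/")
            return local_prefix + tail
    raise FileNotFoundError(f"No local mapping for {master_path}")


# e.g. translate(r"\\NASshare\Movie.mp4") -> "/mnt/nas/Movie.mp4"
```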

Josh5 commented 2 years ago

Saves a ton of time transferring the file around!

It won't save a ton of time. Handing files through the API rather than through a network share adds perhaps a 10% overhead at most, but with the benefit of not needing to manually configure matching network shares on the primary and secondary installations.

There are improvements to be made to this system, but I can guarantee that network shares will not yield big speed gains in the total processing time of 10 files.
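
As a back-of-envelope illustration of that claim, with made-up numbers (a 10 GB file, a 1 Gbit link, a 30-minute transcode):

```python
file_gb = 10                                  # size of the source file
link_gbps = 1                                 # worker NIC speed
transfer_s = file_gb * 8 / link_gbps          # ~80 s each way
transcode_s = 30 * 60                         # a 30-minute transcode

total_with_transfer = transcode_s + 2 * transfer_s   # file in + file out
overhead = total_with_transfer / transcode_s - 1
print(f"transfer adds ~{overhead:.0%} to the job")   # ~9% here
```

When the transcode dominates, shipping the file both ways stays under that rough 10% ceiling; the balance only shifts for short jobs or slow links.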

Makeshift commented 2 years ago

It won't save a ton of time.

In my case, Unmanic is grabbing files from an Rclone Google Drive mount, so the performance saving would be pretty significant, especially if I had a node running on a remote VPS somewhere.

Josh5 commented 2 years ago

The file transfer time for sending a file from your mount to an API and converting it there is going to be the same as the time it takes to convert the file directly from the mount. You will hit the same network bottleneck either way. Unless you are saying that each installation has its own mount of that Google Drive?

Makeshift commented 2 years ago

Correct, in my setup each server performing transcodes has its own drive mount. The decryption process is quite CPU-taxing, so I'd prefer not to do it all on one machine.

willhughes-au commented 2 years ago

Another +1 for this feature.
My NAS has a 10Gbit NIC, but the GPU instances I have each only have 1Gbit NICs.
With the way the remote installation feature currently works, the sending node becomes a bottleneck.

If it could just pass over the path to work on, that would eliminate this bottleneck. I could then be responsible for ensuring the container paths match.
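
A rough sketch of the bandwidth argument (illustrative numbers only; the sending node's 1 Gbit NIC and the worker count are assumptions, as neither is stated above):

```python
nas_gbps = 10
sender_gbps = 1          # assumed: the node currently relaying every file
worker_gbps = 1
workers = 4

# Today, all traffic funnels through the sending node's NIC.
relay_aggregate = min(nas_gbps, sender_gbps)
# With shared paths, each worker would pull from the NAS itself.
direct_aggregate = min(nas_gbps, workers * worker_gbps)

print(relay_aggregate, "Gbit/s via relay vs", direct_aggregate, "Gbit/s direct")
```

Under these assumptions the relay caps the whole cluster at the sender's link, while direct access scales with the number of workers until the NAS NIC saturates.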