nzbgetcom / nzbget

efficient usenet downloader
https://nzbget.com
GNU General Public License v2.0

Feature request: Offloaded postprocessing #99

Open · paul-chambers opened this issue 8 months ago

paul-chambers commented 8 months ago

(cloned from #45 in nzbget-ng/nzbget, reported by @J-Swift)

First, thank you for taking the mantle and maintaining this fork!

Not sure it's been brought up before on this side of the fork, but on the old repo there was this:

https://github.com/nzbget/nzbget/issues/43

I'm in a similar boat to the OP of that issue: I'm running on an underpowered node with good connectivity to a bigger server that would be great to offload the post-processing to. Is this still something that isn't likely to be considered?

paul-chambers commented 8 months ago

Personally, I'd like to take this a step further. A lot of the post-processing of downloads is common to both nzb and torrent clients; it seems sensible to create a common service that could handle it regardless of where the files that need to be unpacked/post-processed originated. One that's just as applicable to torrent clients as nzb ones. Unpackerr has the right idea, but has it 'backwards' in my opinion: it uses a 'pull' model that depends heavily on integration with Sonarr/Radarr, rather than a more flexible 'push' model that would make far fewer assumptions about how it will be used.
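To sketch what I mean by 'push': the downloader notifies the post-processing service when files are complete, instead of the service polling Sonarr/Radarr. Something like the following as an nzbget post-processing extension; the `NZBPP_*` variables and the 93/94 exit codes are nzbget's standard post-processing script conventions, while the remote service, its host, and its `/jobs` endpoint are entirely hypothetical.

```python
#!/usr/bin/env python3
# Hypothetical "push"-model post-processing script for nzbget.
import json
import os
import sys
import urllib.request

# Hypothetical shared post-processing service (placeholder URL).
SERVICE_URL = "http://unpack-host:8080/jobs"

POSTPROCESS_SUCCESS = 93  # nzbget: script finished successfully
POSTPROCESS_ERROR = 94    # nzbget: script failed

def main() -> int:
    # Environment variables nzbget sets for post-processing scripts.
    job = {
        "path": os.environ["NZBPP_DIRECTORY"],
        "name": os.environ["NZBPP_NZBNAME"],
        "status": os.environ.get("NZBPP_TOTALSTATUS", "UNKNOWN"),
        "source": "nzb",  # a torrent client could push the same shape
    }
    req = urllib.request.Request(
        SERVICE_URL,
        data=json.dumps(job).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            ok = 200 <= resp.status < 300
    except OSError as err:
        print(f"[ERROR] push to {SERVICE_URL} failed: {err}")
        return POSTPROCESS_ERROR
    return POSTPROCESS_SUCCESS if ok else POSTPROCESS_ERROR

if __name__ == "__main__":
    sys.exit(main())
```

The service would then own unpacking, pruning, and any hardware-specific acceleration, with no knowledge of Sonarr/Radarr required.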

Taking it a step further, getting completed files from the downloader to the post-processing stage is something else that's not nzb-specific. Better integration with/handoff to transfer tools like rclone would make sense, too.
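As an illustration, that handoff could be as simple as shelling out to rclone once the files are assembled; the remote name here is a placeholder for whatever has been set up via `rclone config`.

```python
import subprocess

def hand_off(local_dir: str, remote: str = "bigserver:incoming") -> None:
    """Move finished files to the post-processing box with rclone.

    `rclone move` removes the local copies after a successful
    transfer; the remote name is a placeholder.
    """
    subprocess.run(
        ["rclone", "move", local_dir, remote, "--transfers", "4"],
        check=True,
    )
```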

It seems logical to me to have nzbget focus on its core functionality: parsing NZBs, downloading usenet articles, using PAR processing to repair them, and finally assembling the files from the pieces.

After that point, the processing pipeline is almost identical between nzbs and torrents, and in my opinion it should be a common mechanism. It would also give a central place to optimize the performance of post-processing, e.g. for different CPU architectures and hardware acceleration blocks. Personally, I 'prune' the set of files to be transferred down to the ones I'll actually use, before the transfer. Is that 'pre-processing' before 'post-processing'? :)
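For what it's worth, that pruning step can be trivial; this sketch drops everything except the media files I'd actually keep (the extension list is just an example).

```python
from pathlib import Path

# Files worth transferring; everything else (samples, .nfo files,
# proof images, etc.) gets pruned before the handoff. Example list.
KEEP = {".mkv", ".mp4", ".srt"}

def prune(download_dir: str) -> None:
    """Delete files that won't be used, before they are transferred."""
    for path in Path(download_dir).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() not in KEEP or "sample" in path.name.lower():
            path.unlink()
```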

woiza commented 4 months ago

What's your opinion on using rar2fs instead of unpacking files?

https://github.com/hasse69/rar2fs
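For context: rar2fs uses FUSE to mount a directory tree of RAR archives as a regular filesystem, so the contents can be read in place without ever being unpacked to disk. Roughly like this (paths are placeholders, and rar2fs/FUSE must be installed):

```python
import subprocess

# Mount the completed-downloads directory at a virtual mount point;
# files inside the RAR archives then appear as ordinary files there.
subprocess.run(
    ["rar2fs", "/downloads/completed", "/mnt/virtual"],
    check=True,
)
```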

luckedea commented 4 months ago

@woiza What's the benefit over the current approach, and what's the potential use case? We do plan to pay more attention to everything related to unpacking (and disk & network) performance in v25, though; I think that's a higher priority based on community needs.

woiza commented 4 months ago

@luckedea Personally, I don't see a benefit over the current approach and am happy with the performance of the latest testing release (direct unpack together with a native unrar build works like a charm). I just thought rar2fs might be something @paul-chambers and others using low-powered devices could try out. I have no idea if they are aware of it.