Closed Tomaster134 closed 3 years ago
I have noted this issue as well here: https://github.com/aliparlakci/bulk-downloader-for-reddit/issues/130
The bug comes from the host server (gfycat), which periodically closes the download connection. The author thinks gfycat does this on purpose, so unfortunately nothing can be done about it on the downloader's side. I personally use Motrix to grab the skipped videos/gifs.
The downloader uses urllib (essentially the most barebones package Python offers for web access), so this isn't too surprising. requests with stream=True handles large file downloads better, and youtube-dl (which supports gfycat, among others) has the further advantage of being able to resume a download from the last point of failure; without that, every failed video download wastes all the bandwidth already spent on it. Besides that, IIRC there are/were several redundant page loads for metadata scattered throughout the code, which would exacerbate the problem.
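For illustration, here's a rough sketch of what streaming plus resume could look like with requests. This is not the downloader's actual code, just the general pattern: keep the partial file, send an HTTP Range header on retry, and append from where the connection dropped (the function names here are hypothetical).

```python
import os
import requests


def resume_headers(resume_at: int) -> dict:
    # Ask the server to start at byte `resume_at` when resuming a partial file.
    return {"Range": f"bytes={resume_at}-"} if resume_at else {}


def download_resumable(url: str, dest: str, chunk_size: int = 1 << 16) -> None:
    # Resume from whatever partial file a previous failed attempt left behind.
    resume_at = os.path.getsize(dest) if os.path.exists(dest) else 0
    with requests.get(url, headers=resume_headers(resume_at),
                      stream=True, timeout=30) as r:
        if resume_at and r.status_code != 206:
            # Server ignored the Range request; start over from byte 0.
            resume_at = 0
        r.raise_for_status()
        with open(dest, "ab" if resume_at else "wb") as f:
            # stream=True + iter_content avoids loading the whole
            # 30-90 MB file into memory at once.
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```

If the server doesn't support Range requests it will answer 200 instead of 206, in which case the sketch falls back to rewriting the file from scratch.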
I'm trying to download some mp4 files that are anywhere between 30 and 90 megabytes, and some of them seem to fail, leaving only a temporary file in the save location.
From the CONSOLE_LOG file: