This is probably the random redirect error that was mentioned recently. It's fairly rare, so it's hard to reproduce and verify a fix, but there should be a fix for this in the next release.
I imagined as much. I remember once opening by hand around ten tabs with wallpapers to download, and if I let them all load at the same time, some of them would fail. This must be similar, only amplified.
Another thing: it's quite random. I've had hundreds of consecutive downloads, or sometimes just ten or so, before it appears.
Secondly, it's not a big deal to call the command again when this error pops up, but scraping the links takes ages, and many times only a few images get downloaded before it fails, so it has to scrape all over again on the next call, and so on. I don't know if this can be changed; I'm just suggesting a change in the workflow.
Maybe if the links could be saved to a file, with new links added incrementally, it would speed up the scraping process. So the script creates a file and copies the links it scrapes into it. The next time it runs, if it finds that file in the directory we set, it only scrapes and copies the new links, and after that it continues as normal.
If this could be done, then the aforementioned bug could be worked around with a loop: have the script run until no new link is added.
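A rough sketch of that link-cache idea (the file name `links.json` and the function names are hypothetical; this is not based on the actual interfacelift-downloader code):

```js
var fs = require('fs');
var path = require('path');

// Sketch of the link-cache idea (hypothetical names, not the actual
// interfacelift-downloader code). Scraped links are stored in a
// links.json file inside the save directory, so a later run only has
// to add links that are not cached yet.
function loadLinkCache(downloadPath) {
  var cacheFile = path.join(downloadPath, 'links.json');
  if (!fs.existsSync(cacheFile)) return [];
  return JSON.parse(fs.readFileSync(cacheFile, 'utf8'));
}

function saveNewLinks(downloadPath, scrapedLinks) {
  var cacheFile = path.join(downloadPath, 'links.json');
  var known = loadLinkCache(downloadPath);
  var newLinks = scrapedLinks.filter(function (link) {
    return known.indexOf(link) === -1; // keep only unseen links
  });
  fs.writeFileSync(cacheFile, JSON.stringify(known.concat(newLinks)));
  return newLinks; // an empty result would end the "run until done" loop
}
```

Whatever `saveNewLinks` returns could drive the retry loop: keep re-running the scrape-and-download pass until it comes back empty.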
Again, thanks a lot for this money-saver (I had even considered paying to get the images), and let me know if you need any help with testing the new code when it's available. (By the way, I'm completely unfamiliar with Python, so maybe I'm just suggesting things that can't be done, or that would take lots of code to achieve. Please bear with me.)
I can recreate this issue every time without fail when I try to scrape all 3840x1080 (dual screen) backgrounds. Any chance of a fix? Great project other than that - had no issue getting all 1920x1080 images :smile:
Hey there.
I have found a way around that issue by creating a Windows batch file.
My file looks like this:
```bat
REM Change drive first, because my save directory is on another drive.
D:
REM Change to the working/save directory.
cd D:\Wallpapers\1920x1080
REM A label for the loop to jump back to.
:loop
REM The actual command ("call" returns control to this script when it finishes).
call interfacelift-downloader 1920x1080
REM When the command above finishes, go back to the :loop label and run it again.
goto loop
```
So the command repeats itself after every error and keeps going forever. I let it run overnight, and by morning the images were downloaded. However, the script never stops on its own, so you need to interrupt it yourself; otherwise it keeps scraping and then skipping all the links, because it finds the images already stored in the directory, and so on.
I hope this helps. I snatched 1920x1080, 1920x1200 and 2440x1600 for my monitors.
By the way, I get the error in EVERY possible resolution, so it must be random.
Yep, a fix is coming down the pipeline for this. I made some good progress this morning, so hopefully I'll have a new release for you soon.
@rmorrin I had a chance to take a look at the 3840x1080 resolution and found why it errored 100% of the time.
If you go to the dual 1080p list and try to download "Bandon Face Rock", you will find that the download produces a 404 error. This event is unhandled in the current version of the script, but it will be handled in the next version. Well, you still won't be able to download that image, but at least it will no longer throw an error and stop the script.
So that problem is actually unrelated to the issue being discussed here.
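For illustration, that handling could look something like this sketch (hypothetical names, not the actual script's source): a 404 response is logged as a failure and the queue moves on instead of throwing.

```js
var http = require('http');

// Sketch of "log the 404 and move on" (hypothetical names, not the
// actual interfacelift-downloader source).
function fetchImage(index, url, next) {
  http.get(url, function (res) {
    if (res.statusCode === 404) {
      console.log('[' + index + '] Failed: ' + url.split('/').pop());
      res.resume();  // discard the body so the socket is released
      return next(); // continue with the next image instead of crashing
    }
    // A real downloader would pipe res into a file write stream here;
    // this sketch just discards the body and moves on.
    res.resume();
    next();
  }).on('error', next);
}
```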
@stevenbenner Thanks for taking a look. I can see the issue with the 404 error.
However, I've just run it again with your latest updates from here, and it now correctly logs an error: `[188] Failed: 03440_bandonfacerock_3840x1080.jpg`, after which it moves on to the next image - so consider the issue resolved.
Thanks again!
Fixed in version 2.1.0.
The new behavior is that the script will skip any file that responds with a redirect, so it won't download that file until the next time you run it. In the future I would like to add code to try to get a new download URL, but I had some issues trying to implement that, so it will have to wait for another day.
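As a rough illustration of that skip behavior (hypothetical names, and not the actual 2.1.0 source):

```js
var http = require('http');

// Sketch of skip-on-redirect (hypothetical names, not the actual
// 2.1.0 source): a 3xx response is skipped rather than followed.
function downloadOrSkip(url, onDone, onSkip) {
  http.get(url, function (res) {
    // A 3xx status means the mirror redirected us instead of serving
    // the image; skip it now and pick it up on a later run.
    if (res.statusCode >= 300 && res.statusCode < 400) {
      res.resume(); // free the socket without reading the body
      return onSkip(url);
    }
    // A real downloader would write res to disk here before reporting
    // completion; this sketch just drains the response.
    res.resume();
    onDone(url);
  });
}
```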
Firstly, great job with this one!
But I get this error out of the blue:
```
C:\Users\Vangelis\AppData\Roaming\npm\node_modules\interfacelift-downloader\lib\downloader.js:41
  filePath = path.join(downloadPath, fileName[0]);
                                             ^
TypeError: Cannot read property '0' of null
    at EventEmitter.<anonymous> (C:\Users\Vangelis\AppData\Roaming\npm\node_modules\interfacelift-downloader\lib\downloader.js:41:50)
    at EventEmitter.emit (events.js:95:17)
    at IncomingMessage.<anonymous> (C:\Users\Vangelis\AppData\Roaming\npm\node_modules\interfacelift-downloader\lib\downloader.js:68:18)
    at IncomingMessage.EventEmitter.emit (events.js:117:20)
    at _stream_readable.js:920:16
    at process._tickCallback (node.js:415:13)
```
Thanks for even building this script. Sorting this bug out would be a lifesaver!
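For context on the trace above: `fileName` at downloader.js:41 looks like the result of a match that can come back `null`, presumably when the server answers with a redirect instead of the image (the "random redirect error" mentioned at the top of the thread). A guard along these lines would avoid the crash; the function and regex here are hypothetical, not the actual source:

```js
var path = require('path');

// Hypothetical guard (not the actual downloader.js source): resolve the
// target path for a download URL, or return null when no file name can
// be parsed, e.g. because the server answered with a redirect page.
function resolveFilePath(downloadPath, downloadUrl) {
  var fileName = /[^\/]+\.jpg$/.exec(downloadUrl); // null when no match
  if (!fileName) {
    return null; // the caller should skip this download instead of crashing
  }
  return path.join(downloadPath, fileName[0]);
}
```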