Could you provide the contents of downloaded.txt and the output of the script?
Hi macearl,
I can't give you the contents of downloaded.txt, as there isn't anything in it. When the script runs, a tmp file is created and then deleted for each page that is downloaded.
The terminal output is as follows:
Download Page 1 -done
Download Wallpapers from Page 1 -done
I'll upload a screenshot soon, as I'm currently wiping my Ubuntu installation. I am not behind a proxy and can access Wallhaven directly with no issues. Another point to note is that I have run chmod a+x on the wallhaven.sh script.
Best regards
Here are my configuration options - everything is default apart from the download location, which has been changed: http://pastebin.com/93yArb5K
Here are the various outputs as well:
wget version is 1.15
I'm back home and have tested the script myself. In short: it is not your fault.
I will look into the changes wallhaven has made to their site ;)
edit: Not sure what the problem is yet. I made some changes (not committed yet) and it does download some pages, but for others it returns a '403 Forbidden' error. Will look into it more tomorrow.
edit2: wallhaven is barely usable at the moment; my guess is they are doing some sort of maintenance/moving. Will look into it again when the site is usable again.
The problem should be fixed now. If the server is slow again, a few error messages will occur (they won't show up because of the -q flag for wget).
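If you want to see those errors yourself, you can temporarily drop the -q flag and add a couple of retries. Something along these lines, loosely based on the page download call in the script ($1 is whatever page path the script passes in):

# Same kind of call as the page download in wallhaven.sh, but without -q so
# wget prints HTTP errors (e.g. 403 Forbidden), and with a few retries in
# case the server is just slow.
wget --tries=3 --waitretry=5 \
     --user-agent="Mozilla/4.5 (X11; U; Linux x86_64; en-US)" \
     --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt \
     -O tmp "http://alpha.wallhaven.cc/$1"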
Hi macearl,
Just thought I'd say thanks for the quick response and turnaround on a new version!
I tried this tonight, and unfortunately the same issues are still occurring. I've removed the '-q' flag for wget, and it now reports '403 Forbidden'. Do you think the site is deliberately blocking the script from running as intended, or am I doing something wrong?
Best regards!
Hi macearl,
I've sat down and had a look at the code. I've made some modifications (without fully understanding the code!) and it is now working for the time being...
Best regards.
Hmm, it worked fine for me, but maybe you could share the changes you made?
Hi macearl,
I'm just heading out the door for work. I will update when I am back!
Best regards
Apologies for the late reply, macearl!
So, I noticed what I thought might be an error on line 113 - there is a missing quote: token="$(cat login | grep 'name="_token"' | sed 's:.*value="::"' | sed 's/.{2}$//')"
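For reference, this is roughly how I would expect that line to look with the quoting fixed (just my guess at the intent, since I haven't tested the login path):

# Guessed fix for line 113: stray quote removed from the first sed expression,
# and the last two characters stripped with '..' (the unescaped {2} seems to be
# treated literally by plain sed).
token="$(cat login | grep 'name="_token"' | sed 's:.*value="::' | sed 's/..$//')"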
The following wget code is what I used to successfully download wallpapers (I did not modify the login function as I do not require it for my use):
Line 131 - wget -q --user-agent="Mozilla/4.5 (X11; U; Linux x86_64; en-US)" --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt -O tmp "http://alpha.wallhaven.cc/$1"
Line 153 - wget -q -U "Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0 SeaMonkey/2.25" --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt $img
Line 154 - cat download.txt | parallel --gnu --no-notice 'cat {} | echo "http://$(egrep -o "wallpapers.*(png|jpg|gif)")" | wget -q -U "Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0 SeaMonkey/2.25" --keep-session-cookies --load-cookies=cookies.txt --referer=http://alpha.wallhaven.cc/wallpaper/{} -i -'
Line 161 - cat download.txt | parallel --gnu --no-notice 'wget -q -U "Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0 SeaMonkey/2.25" --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt --referer=alpha.wallhaven.cc http://alpha.wallhaven.cc/wallpaper/{}'
Line 162 - cat download.txt | parallel --gnu --no-notice 'cat {} | echo "http://$(egrep -o "wallpapers.*(png|jpg|gif)")" | wget -q -U "Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0 SeaMonkey/2.25" --keep-session-cookies --load-cookies=cookies.txt --referer=http://alpha.wallhaven.cc/wallpaper/{} -i -'
Line 200 - favnumber="$(wget -q -U "Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0 SeaMonkey/2.25" --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt --referer=alpha.wallhaven.cc http://alpha.wallhaven.cc/favorites -O - | grep -A 1 "Favorites" | grep -B 1 "" | sed -n '2{p;q}' | sed 's/<[^>]+>/ /g')"
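In case it helps, the pattern shared by these changes boils down to something like the following (a simplified, hypothetical wrapper, not the actual script code):

# Hypothetical helper showing the pattern used in the lines above: a full
# browser user agent, a shared cookie jar, and an explicit http:// referer.
UA="Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0 SeaMonkey/2.25"
fetch() {
    wget -q -U "$UA" \
         --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt \
         --referer="http://alpha.wallhaven.cc" "$@"
}
# example: fetch -O tmp "http://alpha.wallhaven.cc/$1"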
OK, so here are my thoughts on your changes:
Hi, I had the same issue.
Replacing --referer=alpha.wallhaven.cc with --referer=http://alpha.wallhaven.cc in line 133 solved it.
I noticed other lines have the same referer without http://, but maybe those aren't actually used to retrieve the data.
Added the http:// prefix to all referer options where it was missing; let's see if that fixes the problem.
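For anyone following along, the effective change per call is just the referer value; here is a sketch of the corrected wallpaper page download (the wallpaper id is only a placeholder):

# The only change is --referer=alpha.wallhaven.cc -> --referer=http://alpha.wallhaven.cc,
# so the server receives a well-formed referer URL instead of a bare hostname.
id=12345   # placeholder wallpaper id, for illustration only
wget -q -U "Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0 SeaMonkey/2.25" \
     --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt \
     --referer=http://alpha.wallhaven.cc "http://alpha.wallhaven.cc/wallpaper/$id"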
Tried this last week through Cygwin successfully using the latest version!
Hi macearl,
I've been having issues running this under Ubuntu, with no success.
I do receive feedback stating 'Download Page 1 - done' and 'Download Wallpapers from Page 1 - done'; however, the directory set up to save the images into contains only 'downloaded.txt'. The response in the terminal window seems a little too fast for anything to have actually been downloaded.
I've run the script using ./wallhaven.sh.
Any help would be greatly appreciated and thank you in advance.
Best regards
LILABOC