Xerbo opened 4 years ago
Would this be why the current script isn't working? The last time I successfully used it was before this announcement, but when I went to use it today, I couldn't get it working.
This doesn't seem to be a problem with the script but rather with FurAffinity: with quiet mode turned off, wget exits with `2020-02-27 22:09:03 ERROR 503: Service Temporarily Unavailable`.
Which most likely means that FurAffinity has blacklisted this script's user-agent, or that it's failing some sort of CloudFlare anti-bot test.
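If it really is a user-agent blacklist, a quick way to test is to retry the same request with a browser-like `User-Agent` and see whether the 503 goes away (a CloudFlare challenge would typically still fail). Here's a minimal sketch using Python's standard library — the UA string and URL are just examples, not anything the script itself uses; wget users can do the same with its real `--user-agent=` flag:

```python
import urllib.request

# Example browser-like User-Agent string (any current browser UA works).
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/80.0.3987.132 Safari/537.36")

def make_request(url: str) -> urllib.request.Request:
    """Build a request that mimics a normal browser's User-Agent."""
    return urllib.request.Request(url, headers={"User-Agent": BROWSER_UA})

# Fetching would then be: urllib.request.urlopen(make_request(url))
req = make_request("https://www.furaffinity.net/")
print(req.get_header("User-agent"))
```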
When can I download the python version?
Any plans on adding the -n function (or something better) back? I just found it very useful whenever I needed to update someone's gallery based on just their latest submissions.
Would this be a good enough replacement?
When downloading favorites, the python tool indicates "Downloading page
@jkmartindale kindly fixed this in #43
- `-a attempts`: how many connection retry attempts before exiting; -1 for unlimited, ? is default.
- `-t timeout`: wait this long in seconds before another connection retry attempt; ? is default.
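Those two flags together amount to a simple retry loop. A minimal sketch of the idea — my own illustration, not the script's actual code; `fetch_with_retries` and its parameters are hypothetical names:

```python
import time

def fetch_with_retries(fetch, attempts=3, timeout=5):
    """Call fetch() until it succeeds, retrying on connection errors.
    attempts=-1 means retry forever; timeout is the pause between tries."""
    tried = 0
    while True:
        try:
            return fetch()
        except OSError:
            tried += 1
            if attempts != -1 and tried >= attempts:
                raise  # out of attempts, give up
            time.sleep(timeout)
```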
I would love a way to insert a pause between downloads, something like five to fifteen seconds, to ease up on how much I'd otherwise be hitting them.
By default it just feels like it hits the server a bit too fast overall, especially as I would like to get back to my long-term local archiving habit, and I don't want them to block me just for catching up on my archives.
Also, is it possible to also place the created JSON files in a subfolder, or otherwise not keep them once downloading is finished?
I've downloaded some large (>2000 submissions) galleries and have had no problems with rate limiting yet, but adding a delay would still be a good idea. As for putting the meta files in a different directory: good idea, it would really clean up the output folder.
I'll get on this tomorrow, should be pretty easy to do.
@Xial done, see 0f0fe3e6 and 85cd3cd.
Looking forward to giving those a go later today. Thank you! :)
> Would this be a good enough replacement?
Perhaps an option to only process a specific number of pages would be appropriate. Occasionally, I might have saved one or two images by hand from the recent stuff, but then notice that the artist just sprayed 30 or 40 pictures up all at once and realize it'd be better to just automate the process.
It could also mitigate things like this when refreshing a gallery:
```
...
Skipping "Bea", since it's already downloaded
Downloading page 10
Skipping "Golden Birb", since it's already downloaded
...
```
:)
I'm wondering if it would be possible to download a specific folder from a user instead of their entire gallery. I know an old browser extension was able to do it, but it doesn't work with the new theme.
I noticed there's a `--start` option that allows users to pick a page number to start from.
Could there be a `--stop-at` option, so that once the page count goes beyond a certain point, the script stops downloading?
As an example, there's one user who has uploaded lots of art over the years, and having the script go through 30+ pages just to announce that it's skipping files already present makes me feel like a bad citizen. However, downloading 60 images by hand for one artist is a little tedious.
Thanks. :)
I just want to say: thank you for this tool, it's so much easier than doing it manually for 3k+ various images :3
And yes, it works on: Arch Linux and Android (Termux) with Python 3.10.x
It's finally happening: when finished, all current code will be moved to a new branch and the python version will occupy the master branch.
All existing functionality will be migrated; this will also allow native Windows support and easier development.