Thanks for the contribution! :+1:
I'm actually surprised someone else is using this tool, as the project doesn't even have a README. I made it for my personal use; it works wonders here, and I'm quite proud of parts of the implementation, some of which were quite challenging to me (things like the apt-get-like command line, the custom hooks interface, parallel retrieval of web pages, the cache and download mechanics, etc.).
But I wonder... how does it work there? Did you have any trouble installing or using it? Feel free to open issues or to contribute more to the project! (It doesn't have to be code: the project needs a decent README and an update to the known-games database.)
Hi,
I found your project because I was using https://github.com/lukas2511/humblebackup before (a while back), but it seems to have stopped working. I searched for one updated a bit more recently, but that one didn't work either, so I kept looking and found this project among others; it was the one that worked right away. ;)
Anyway, I might add some sort of "download all the things" functionality if I find time, because I'm actually looking for a way to download all my purchases at once and refresh my local files when there are new ones. That is, iterate over all platforms (and maybe even 32/64-bit), and if there is more than one type of download, download all of them. Something like the sketch below.
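Rough pseudocode of the idea (the helper names here are hypothetical placeholders, not this project's actual CLI):

```sh
# Pseudocode sketch only -- all_purchases and download_if_newer are
# hypothetical placeholders, not real commands.
for purchase in $(all_purchases); do
    for platform in linux windows mac android ebook; do
        for arch in 32 64; do
            # fetch every download type, skipping files already up to date
            download_if_newer "$purchase" "$platform" "$arch"
        done
    done
done
```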
Anyway, great project. :+1:
Glad to hear it :)
It's been a while since I last bought a new HB, so you were lucky there were no breaking changes on the HB website in the last few months and my script is still functional.
As for your "download it all", using a combination of `--list`, `--show` and `--download` you can easily do that with a shell script. All output was carefully chosen to be easily parsed using `cut`, `grep`, `awk` and friends. And there is always `--json` if you want.
If you write such a script, please upload it; it would be a welcome addition :)
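Something along these lines, as an untested sketch (I'm assuming here that `--download` accepts a game name the same way `--show` does; check the actual invocation first):

```sh
# Untested sketch: try to download every game in the library.
# Assumes --download takes a game name like --show does.
for game in $(humblebundle --list); do
    humblebundle --download "$game" || echo "failed: $game" >&2
done
```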
This one-liner might whet your appetite:
```sh
for game in $(humblebundle --list); do humblebundle --show "$game" --json | grep '"web":' | cut -d'"' -f4; done
```
I actually only needed the 'ebook' type, but according to the page source there's also 'comedy'.
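Extending that, the collected URLs could be piped straight into wget (untested sketch; I'm assuming the extracted web URLs are directly fetchable without extra authentication):

```sh
# Untested: collect every 'web' URL and let wget fetch them all,
# keeping the server-provided file names.
for game in $(humblebundle --list); do
    humblebundle --show "$game" --json | grep '"web":' | cut -d'"' -f4
done | wget --content-disposition -i -
```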