Bionus / imgbrd-grabber

Very customizable imageboard/booru downloader with powerful filenaming features.
https://www.bionus.org/imgbrd-grabber/
Apache License 2.0
2.55k stars · 216 forks

--return-count on command line doesn't work with Gelbooru #902

Closed: NewWorld closed this issue 7 years ago

NewWorld commented 7 years ago

What steps will reproduce the problem?

Run in shell:

Grabber  -c -t "inugami_kira" -s "gelbooru.com" --return-count

Also tried with providing username and password.

What is the expected behavior? What do you get instead?

What I get is -1 as the output. What I expect is the number of images to be returned, that is about 393 (minus any blacklisted images).

How often does this problem occur?

Every time, with all tags.

What version of the program are you using? On what operating system?

v5.2.4, on ArchLinux

Please provide any additional information below

--ri and --download still work if an -m argument is provided. Searching for the tag in the GUI gets the images and shows the correct image count.

PS: Awesome program, many thanks for creating it.

Bionus commented 7 years ago

Well, I believe that's the first issue about the CLI :tada: (so either nobody uses it, or it has few bugs)

It seems it was caused by the program loading the wrong URL to get the image count (the XML API vs. the HTML API). I pushed a fix to the develop branch. I don't know how you got the program since you're on Arch, but if you built it from source, switching to that branch and rebuilding should fix the problem.

> PS: Awesome program, many thanks for creating it.

No problem, I'm happy to see that the program can help other people. :smile: Feel free to open more issues if you run into any other bug or have improvement suggestions also!

NewWorld commented 7 years ago

Thanks for such a quick fix! I much prefer the CLI because it can be automated. I have 200 tags I want to download, and doing that through the GUI one by one would take too long. I spent 8 hours today writing a Python script to automate downloading tags... it would have taken only 2 hours if I had known that Grabber already skips duplicates (I didn't see this anywhere in the documentation)! I built a whole system to store and check hashes for dupes haha
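The automation described above can be sketched in a few lines. This is a hypothetical example, not the user's actual script: it assumes the `Grabber` binary is on `PATH` and uses only the flags mentioned in this thread (`-c`, `-t`, `-s`, `-m`, `--download`); the tag list and limit are placeholders.

```python
import subprocess

def build_command(tag, site="gelbooru.com", limit=100):
    """Build one Grabber CLI invocation for a tag (flags per this thread)."""
    return ["Grabber", "-c", "-t", tag, "-s", site, "-m", str(limit), "--download"]

tags = ["inugami_kira", "landscape"]  # placeholder tag list

for tag in tags:
    cmd = build_command(tag)
    print(" ".join(cmd))
    # Uncomment to actually run the download:
    # subprocess.run(cmd, check=True)
```

Building the command as a list (rather than a shell string) avoids quoting problems with tags that contain special characters.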

When I test the script some more I will send it to you as a pull request in the Wiki.... maybe you will think it would be helpful for others.

Thanks again.

Bionus commented 7 years ago

Yeah, there's an MD5 list used to ignore them. If you only use the CLI, you may miss out on many features that are pretty much only accessible from the GUI's settings window. Unfortunately, there are so many options that it'd take a lot of work to expose them all on the command line.
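The MD5-based skip works roughly like this. The sketch below is an illustration of the idea, not Grabber's internal implementation:

```python
import hashlib

def file_md5(path):
    """MD5 of a file's contents, read in chunks to handle large images."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def should_skip(path, seen):
    """Return True if this file's hash was already seen; record it otherwise."""
    digest = file_md5(path)
    if digest in seen:
        return True
    seen.add(digest)
    return False
```

Keeping a single set of hex digests is enough for deduplication; persisting that set between runs gives the same effect as Grabber's MD5 list.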

Another thing you could have done is generate your own IGL file (it's just JSON), load it in the GUI, and download from there. Example of an IGL file:

{
    "batchs": [
        {
            "filename": "%search%/%md5%.%ext%",
            "getBlacklisted": false,
            "page": 1,
            "path": "/home/Bionus/Grabber",
            "perpage": 20,
            "site": "safebooru.org",
            "tags": [
                "landscape"
            ],
            "total": 20
        }
    ],
    "uniques": [
    ],
    "version": 3
}
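Generating such a file for a long tag list is straightforward. A minimal sketch, assuming the field names and `"version": 3` shown in the example above; the tag list, site, and path are placeholders:

```python
import json

def make_igl(tags, site="safebooru.org", path="/home/Bionus/Grabber", total=20):
    """Build an IGL structure with one batch per tag (fields per the example above)."""
    return {
        "batchs": [
            {
                "filename": "%search%/%md5%.%ext%",
                "getBlacklisted": False,
                "page": 1,
                "path": path,
                "perpage": 20,
                "site": site,
                "tags": [tag],
                "total": total,
            }
            for tag in tags
        ],
        "uniques": [],
        "version": 3,
    }

igl = make_igl(["landscape", "forest"])
print(json.dumps(igl, indent=4, sort_keys=True))
# Save the output to a file and load it from the GUI to queue all batches at once.
```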

But scripting also lets you get all the images' URLs and download them through a better file downloader than Grabber may be. :smile:

> When I test the script some more I will send it to you as a pull request in the Wiki.... maybe you will think it would be helpful for others.

Please do, I'll be glad to see the current documentation improved :+1: