Confuzu / CivitAI-Model-grabber

The script bulk-downloads both models (LoRA, LyCORIS, embeddings, etc.) and related images from a given CivitAI username.
GNU General Public License v3.0

Not downloading every model / Suggestions #3

Closed · ResleyZ closed this issue 7 months ago

ResleyZ commented 8 months ago

Hello, first of all, I want to say that I absolutely love this; it saves an unbelievable amount of time.

I've noticed that with some users, it doesn't download all models. For example, the user Dolirama126 has a total of 684 models uploaded, yet when I run the program using my API key, I only get 94 folders, so quite a few models are missing.

This includes LoRAs, embeddings, and (from other creators) also checkpoints, and it's seemingly random which models do and don't get downloaded.

I also get the following error when I use the program. I'm not sure whether it's what prevents all models from downloading, or whether it's something different:

Traceback (most recent call last): 
  File "F:\AI\CivitAI-Model-grabber-main\civitAI_Model_downloader.py", line 299, in <module>
    if metadata['totalPages'] > 1:
       ~~~~~~~~^^^^^^^^^^^^^^ 
KeyError: 'totalPages'

Which refers to:

# Check for pagination and handle subsequent pages
if metadata['totalPages'] > 1:
    handle_pagination(metadata)
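
A defensive rewrite of that check (a minimal sketch, not the maintainer's actual fix) would treat a missing 'totalPages' key as a single page instead of crashing:

    def total_pages_of(metadata: dict) -> int:
        """Return the page count from a CivitAI API 'metadata' block.

        Falls back to 1 when 'totalPages' is absent, so the pagination
        check above cannot raise KeyError.
        """
        return metadata.get('totalPages', 1)

    # Usage in place of the crashing line:
    #     if total_pages_of(metadata) > 1:
    #         handle_pagination(metadata)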

Furthermore, I have three suggestions that would make downloading and organizing models much easier (for me at least).

Firstly, is it possible to have my API key stored somewhere, instead of entering it every time?

Secondly, would it be possible to "queue" several creators? At the moment I have several xxx_downloaded folders, and if I want to update them, I have to enter the username and my API key every single time, which turns into a bit of work if you have a lot of folders. If possible, it would be amazing if I could queue several users, or point to a file with the usernames in it.

And finally, for the creators that make multiple types of models, would it be possible to download these into separate folders? The structure would then look something like this:

        Creator/
        ├─ Lora/
        │  ├─ Lora1/
        │  │  ├─ Lora1.safetensors
        │  │  ├─ Lora1.jpg
        │  │
        │  ├─ Lora2/
        │  │  ├─ Lora2.safetensors
        │  │  ├─ Lora2.jpg
        │  │
        ├─ Embedding/
        │  ├─ Embedding1/
        │  │  ├─ Embedding1.pt
        │  │  ├─ Embedding1.jpg
        │  │
        │  ├─ Embedding2/
        │  │  ├─ Embedding2.pt
        │  │  ├─ Embedding2.jpg
      
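None of this exists in the script yet, but here is a minimal sketch of how suggestions 1 and 2 (plus the folder layout drawn above) could fit together; the config file name and all helper names are hypothetical:

    import json
    from pathlib import Path

    CONFIG_PATH = Path("grabber_config.json")  # hypothetical file name

    def load_config() -> dict:
        """Suggestions 1 & 2: read the API key and a queue of creators once.

        Example file contents:
            {"api_key": "...", "usernames": ["Dolirama126", "SetsuArts"]}
        """
        return json.loads(CONFIG_PATH.read_text(encoding="utf-8"))

    def target_dir(creator: str, model_type: str, model_name: str) -> Path:
        """Suggestion 3: the Creator/<Type>/<ModelName>/ layout drawn above."""
        return Path(creator) / model_type / model_name

    if __name__ == "__main__":
        config = load_config()
        for user in config["usernames"]:
            # The real downloader would run here with config["api_key"];
            # this sketch only shows where each file would land.
            print(target_dir(user, "Lora", "Lora1"))
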
I'm sorry if this is a bit much to ask, but as I'm a bit of a data hoarder, this program has made my life so much easier, so thank you again!
Confuzu commented 8 months ago

As for the bug, I'll look into it. As for your ideas, I'll see if and how I can implement them without unnecessarily enlarging or rewriting the script. I promise nothing for now.

Confuzu commented 8 months ago

Idea no. 1: later, because with the new multiple-usernames feature you now only have to copy & paste the key once.
Idea no. 2: multiple users are included in the update.
Idea no. 3: I have to take a closer look at how I might be able to implement this.

ElRandom78 commented 7 months ago

Hi everyone, hope you're all having a good, quality time with your CivitAI stuff. :) I've encountered the same problem ResleyZ noticed: unfortunately, the script downloads only a portion of a CivitAI creator's full set of files.

For some users, I have also encountered the error:

Traceback (most recent call last):
  File "F:\AI\CivitAI-Model-grabber-main\civitAI_Model_downloader.py", line 299, in <module>
    if metadata['totalPages'] > 1:
KeyError: 'totalPages'

In other cases there is no error, but still only a portion of the files gets downloaded, not the whole set.

In case it helps, I'm posting an example of a download I attempted with the latest update of the grabber:

[Screenshot: Try_downloading_SetusArts_files]

The attempt concerned the user SetsuArts. His CivitAI page lists a total of 180 files; today I tried to download them, but the grabber downloaded only 53/180 files.

It's as if the script tallies an incorrect total number of files in the first place.

Another thing is the end of pagination, maybe related to the next_page variable (but I'm not sure).

It's just a hypothesis, but maybe pagination stops at an incorrect end because of that parameter?
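
For reference, a cursor-style loop that follows the API's metadata.nextPage value until none is returned (a minimal sketch assuming the public /api/v1/models endpoint and bearer-token auth, not the script's actual code) would look like this:

    import requests

    def fetch_all_models(username: str, api_key: str) -> list[dict]:
        """Follow metadata['nextPage'] until the API stops providing one.

        Sketch only: assumes the 'items'/'metadata' response shape of
        the public models endpoint; error handling omitted.
        """
        url = "https://civitai.com/api/v1/models"
        params = {"username": username}
        headers = {"Authorization": f"Bearer {api_key}"}
        items: list[dict] = []
        while url:
            data = requests.get(url, params=params, headers=headers,
                                timeout=60).json()
            items.extend(data.get("items", []))
            # 'nextPage' is a full URL, so later requests need no params.
            url = data.get("metadata", {}).get("nextPage")
            params = None
        return items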

I hope this can be of some help for the fix, and thank you again for this tool. It's fantastic, really.

Confuzu commented 7 months ago

The problem was the new NSFW filter of CivitAI. I updated the code.
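
The thread doesn't show the actual patch, but the likely shape of such a fix (a sketch, assuming the nsfw query parameter of the public models endpoint) is to request NSFW items explicitly, so a server-side default filter doesn't silently shrink the result set:

    import requests

    def list_models_page(username: str, api_key: str) -> dict:
        # Sketch: ask for NSFW results explicitly; without this, a
        # default server-side filter can hide part of a creator's models.
        return requests.get(
            "https://civitai.com/api/v1/models",
            params={"username": username, "nsfw": "true"},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=60,
        ).json()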

ElRandom78 commented 7 months ago

Thank you from the bottom of my heart for this fast update, sir! I already made several tries yesterday with users like Setsuarts, flyx3, eft, narugo1992, etc., and the code worked perfectly: it downloaded the entirety of the files they posted.

To sum up, I tested the grabber with creators who have totals of 180, 110, 715, and 1103 files on their CivitAI pages. So yeah, I ran several tests with different amounts, and in all these cases everything went smoothly.

For now, I've encountered an error only with the user nochekaiser881, who has more than 3000 files on his CivitAI page.

I'm posting the CMD error below:

[Screenshot: error_during_download_user_nochekaiser881]

The code downloaded only a bit more than 300 of the 3280 files.

Can you please check and try with this user on your side as well?

And thanks again for this update.

Confuzu commented 7 months ago

The API could not deliver, the script could not handle that, and it crashed with that error. I'll try to improve the error handling so this does not happen again.
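
Such error handling usually amounts to a retry loop with backoff around each request (a minimal sketch, not the committed fix; names are illustrative):

    import time
    import requests

    def get_json_with_retries(url: str, params: dict | None = None,
                              retries: int = 5) -> dict:
        """Retry a flaky API call with exponential backoff instead of crashing.

        Sketch only: the real script would also log failures and resume
        partially finished downloads.
        """
        for attempt in range(retries):
            try:
                resp = requests.get(url, params=params, timeout=60)
                resp.raise_for_status()
                return resp.json()
            except (requests.RequestException, ValueError):
                if attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
        return {}  # unreachable; keeps type checkers happy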

sergiosaurio commented 7 months ago

I'm trying to download models from the same user (nochekaiser881), but I got this output:

[Screenshot 2024-04-10 153400]

I noticed that some of his models are still in early access. Might that be a problem?

Confuzu commented 7 months ago

Yes, that could be the problem: if you don't have access to the early-access content, then the API can't deliver it either.
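
A downloader could skip gated items instead of failing on them. A sketch, with the caveat that the earlyAccessTimeFrame field on model versions is an assumption about the API's response shape:

    # Sketch: keep only versions that are publicly available.
    # Assumption: a model version dict carries 'earlyAccessTimeFrame',
    # which is 0 once the version has left early access.
    def downloadable_versions(model: dict) -> list[dict]:
        return [
            version
            for version in model.get("modelVersions", [])
            if not version.get("earlyAccessTimeFrame", 0)
        ]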

Confuzu commented 7 months ago

I will close this one. For every new bug, please open a new issue.