kjsman / stable-diffusion-pytorch

Yet another PyTorch implementation of Stable Diffusion (probably easy to read)
MIT License

[Enhancement] automate weights download without user action #3

Open mspronesti opened 2 years ago

mspronesti commented 2 years ago

Hello @kjsman, this is more a feature proposal than an actual issue. Instead of requiring the user to download and unpack the tar file containing the weights and the vocabulary from your Hugging Face Hub repository, one can make the model_loader and the Tokenizer download and cache them directly.

For the first part, it only requires replacing torch.load(...) here (and in the other 3 functions in the same file) with

torch.hub.load_state_dict_from_url(weights_url, check_hash=True)

All it takes on your side is to upload the 4 .pt files to the Hugging Face Hub (not in an archive), and that's it.
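For readers unfamiliar with what torch.hub.load_state_dict_from_url does, the underlying pattern is "download to a cache directory once, reuse afterwards". A minimal standard-library sketch of that pattern (the function name cached_download is illustrative, not part of either library):

```python
import os
from urllib.request import urlretrieve


def cached_download(url, cache_dir, filename):
    """Download url into cache_dir once; later calls reuse the cached file."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, filename)
    if not os.path.exists(path):
        # urlretrieve returns (local_filename, headers); keep only the path
        path, _ = urlretrieve(url, path)
    return path
```

torch.hub.load_state_dict_from_url adds a few conveniences on top of this: a default cache under ~/.cache/torch/hub/checkpoints, map_location support, and check_hash=True, which verifies a hash embedded in the filename against the downloaded bytes.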

As for the tokenizer, it just takes adding a default_bpe() method / function:

import os
from functools import lru_cache
from urllib.request import urlretrieve


@lru_cache()
def default_bpe():
    p = os.path.join(
        os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz"
    )

    if os.path.exists(p):
        return p

    # urlretrieve returns a (filename, headers) tuple,
    # so unpack it and return only the local path
    filename, _ = urlretrieve(
        "https://github.com/openai/CLIP/blob/main/clip/bpe_simple_vocab_16e6.txt.gz?raw=true",
        "bpe_simple_vocab_16e6.txt.gz",
    )
    return filename

Another option, if you prefer to keep your vocab.json and merges.txt, is to upload them as well to the Hugging Face Hub (not in a tar file) or directly to GitHub, like the original repository does with its vocab.
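That alternative could look like the sketch below. The URLs are placeholders (the real ones depend on where the files end up being hosted), and tokenizer_files is an illustrative name, not an existing function in the repository:

```python
import os
from urllib.request import urlretrieve

# Hypothetical locations -- replace with the actual Hub or GitHub raw URLs.
VOCAB_URL = "https://example.com/vocab.json"
MERGES_URL = "https://example.com/merges.txt"


def tokenizer_files(cache_dir, vocab_url=VOCAB_URL, merges_url=MERGES_URL):
    """Fetch vocab.json and merges.txt into cache_dir once; return their paths."""
    os.makedirs(cache_dir, exist_ok=True)
    paths = []
    for url, name in [(vocab_url, "vocab.json"), (merges_url, "merges.txt")]:
        target = os.path.join(cache_dir, name)
        if not os.path.exists(target):
            # urlretrieve returns (filename, headers); keep only the path
            target, _ = urlretrieve(url, target)
        paths.append(target)
    return tuple(paths)
```

The Tokenizer could then call this once at construction time instead of assuming the files already exist in data/.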

If you like the idea, I will open a PR; otherwise, please let me know if you have a better approach, or close this issue if you are not interested in this feature 😄

kjsman commented 2 years ago

Hello,

First of all, thank you for your idea! The notification email bounced from my inbox, so I couldn't reply quickly... 😓

I agree that we can do better for downloading/loading models, but I want to keep the data/ directory: I think it's straightforward for users who want to look at, change, load a finetuned, or finetune the model (we don't support conversion and training now, but we might someday).

Maybe we can:

I think we should use the same approach for the tokenizer. Yes, everyone uses CLIP's default tokenizer without edits, but:

I'll upload checkpoint files in the near future and mention you; I might change some structures, so I'm not sure I can do it right now.

mspronesti commented 2 years ago

Hello @kjsman, thanks for the answer. I'll wait for the checkpoint files so that we can discuss possible enhancements more concretely, if you like 😄