Open mspronesti opened 2 years ago
Hello,
First of all, thank you for your idea! The notification email bounced from my inbox, so I couldn't reply quickly... 😓
I agree that we can do better for downloading/loading models, but I want to keep the data/ directory: I think it's straightforward for users who want to {look at, change, load finetuned, finetune} models (yes, we don't support conversion and training now, but we might someday).
Maybe we can keep the data/ directory, but when the model files are not present in data/, download them into it. I think we should use the same approach for the tokenizer. Yes, everyone uses CLIP's default tokenizer without edits, but the same reasoning applies.
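Something like this fallback could work (a minimal sketch; the base URL and helper names below are placeholders, not actual project code):

```python
import os
import urllib.request

# Placeholder URL -- the actual hosting location isn't settled in this thread.
BASE_URL = "https://huggingface.co/<repo>/resolve/main"

def data_path(filename, data_dir="data"):
    """Expected local path of a model/tokenizer file inside the data/ directory."""
    return os.path.join(data_dir, filename)

def ensure_file(filename, data_dir="data"):
    """Keep the data/ layout, but download a file into it when it is missing."""
    path = data_path(filename, data_dir)
    if not os.path.exists(path):
        os.makedirs(data_dir, exist_ok=True)
        urllib.request.urlretrieve(f"{BASE_URL}/{filename}", path)
    return path
```

This way existing users who already have (or edited) the files in data/ are unaffected, and new users get them downloaded automatically.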
I'll upload the checkpoint files in the near future and mention you; I might change some structures, so I'm not sure I can do it right now.
Hello @kjsman, thanks for the answer. I'll wait for the checkpoint files so that we can discuss possible enhancements more concretely, if you like 😄
Hello @kjsman, this is more a feature proposal than an actual issue. Instead of requiring the user to download and unpack the tar file containing the weights and the vocabulary from your Hugging Face Hub repository, one can directly make the `model_loader` and the `Tokenizer` download and cache them.

For the first part, it only requires replacing `torch.load(...)` here (and in the other 3 functions in the same file) with a call that first downloads and caches the checkpoint. All it takes on your side is to upload the 4 `.pt` files to the Hugging Face Hub (not in a zipped file) and that's it.

As regards the tokenizer, it just takes adding a `default_bpe()` method/function. Another option, if you prefer to keep your vocab.json and merges.txt, is to upload them as well to the Hugging Face Hub (not in a tar file), or directly to GitHub like the original repository does with its vocab.
If you like the idea, I will open a new PR; otherwise, please let me know if you have a better one, or close this issue if you are not interested in this feature 😄