Closed ss32 closed 1 year ago
It appears that model is configured in a few places:
~/deepLearning/dalle2-laion$ grep -rnw "ViT-L/14"
models/prior_config.json:5: "model": "ViT-L/14"
models/second_decoder_config.json:41: "model": "ViT-L/14"
configs/gradio.example.json:46: "model": "ViT-L/14"
configs/upsampler.example.json:46: "model": "ViT-L/14"
configs/variation.example.json:33: "model": "ViT-L/14"
notebooks/dalle2_laion_alpha.ipynb:436: " clip=OpenAIClipAdapter(\"ViT-L/14\"),\n",
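Since the same "ViT-L/14" string is duplicated across several config files, a small consistency check can catch a mismatch before inference. This is just a sketch: the dicts below are hypothetical stand-ins mirroring the `"model"` field in files like `models/prior_config.json` from the grep output; in the real repo you would `json.load()` each file instead.

```python
import json

def clip_model_names(configs):
    """Return the set of CLIP model names used across config dicts."""
    return {cfg["model"] for cfg in configs.values()}

# Hypothetical in-memory stand-ins for the files found by grep above.
configs = {
    "models/prior_config.json": json.loads('{"model": "ViT-L/14"}'),
    "models/second_decoder_config.json": json.loads('{"model": "ViT-L/14"}'),
    "configs/gradio.example.json": json.loads('{"model": "ViT-L/14"}'),
}

names = clip_model_names(configs)
assert names == {"ViT-L/14"}, f"configs disagree on the CLIP model: {names}"
print(names)
```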
I’m not home right now so I can’t get on my laptop to look this up. What you’re seeing is the clip-anytorch package not finding a link to download the clip model. It’s not directly created by us so I can’t give a solution off the top of my head. It could have been removed from the registry of models, but that would be strange.
Ah, your clip-anytorch is out of date. You need to upgrade to clip-anytorch==2.4.0 to have ViT-L/14.
Yea, it looks like the version isn't specified in the main repo.
That should probably be added. I'll let lucid know.
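Until the pin lands upstream, a minimal workaround is to pin the dependency yourself, e.g. a `requirements.txt` entry like the following (version taken from the comment above; earlier clip-anytorch releases lack the ViT-L/14 registry entry):

```
clip-anytorch==2.4.0
```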
Thanks! Specifying the version fixed it.
Glad it's working!
Running the default example_inference.py results in a runtime error due to a specified model that does not exist. The full traceback is:

This is pulled directly from the Hugging Face repo, so that one likely needs to be corrected. I will update here if I figure out a workaround.