metanoder closed this issue 9 months ago
Hi,
Have you downloaded the checkpoint from my Hugging Face? If not, you can try downloading it from there, as I have provided all the necessary files. Then put them into the 'checkpoints' folder.
If you encounter any issues while downloading CLIP, you can consider downloading it from CLIP's Hugging Face page. Once the download is complete, remember to modify lines 26 and 34 in the config file so that they point to the correct CLIP path.
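For reference, here is a minimal sketch of how one could fetch the CLIP files into the local checkpoints folder. The repo id `openai/clip-vit-large-patch14` and the destination path are assumptions based on the paths mentioned in this thread, not the project's official instructions:

```python
# Minimal sketch: download the CLIP files into the local checkpoints folder.
# Assumes huggingface_hub is installed (pip install huggingface_hub) and that
# the project expects the files under ./checkpoints/clip/clip-vit-large-patch14.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="openai/clip-vit-large-patch14",                 # CLIP's Hugging Face repo
    local_dir="./checkpoints/clip/clip-vit-large-patch14",   # local path the config should point to
)
```

After this finishes, lines 26 and 34 of the config file should point at that local folder (adjust the path if your layout differs).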
I tried re-cloning your Hugging Face checkpoints repo, but the problem persisted. After downloading from CLIP's repo and placing it in checkpoint/clip, the model started running.
Thanks for the response and links. Your project is quite impressive! Bravo!
An ancillary side note: Windows users running WSL2 + Ubuntu 22.04 will run into issues. As discussed in issue 786, it is just a matter of adding this to your .bashrc: export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
cheers!
Thanks for using the project and for the suggestion. I will include it in the TIPS section of the GitHub page and link to this issue.
cheers!
After installation, when running the text2mesh inference (or any inference), I get this error:
OSError: ./checkpoints/clip/clip-vit-large-patch14 does not appear to have a file named config.json. Checkout 'https://huggingface.co/./checkpoints/clip/clip-vit-large-patch14/None' for available files.
Clicking the link results in a Hugging Face 404, and no content is stored there. Can you point me to where I can find the config.json for clip-vit-large-patch14? Or how do I debug this?
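For context, here is roughly the check I am running to see what the local CLIP folder actually contains. This is only a debugging sketch; the loading call via transformers is an assumption about what the project does internally, and the path is taken from the error message above:

```python
# Debugging sketch: check whether the local CLIP folder contains the files
# that transformers expects (config.json, weights, tokenizer files).
import os

clip_dir = "./checkpoints/clip/clip-vit-large-patch14"  # path from the error message
print("exists:", os.path.isdir(clip_dir))
print("contents:", os.listdir(clip_dir) if os.path.isdir(clip_dir) else "n/a")

# If config.json is present, transformers should load from the local folder
# instead of trying (and failing) to resolve the path as a Hub repo id.
from transformers import CLIPModel

model = CLIPModel.from_pretrained(clip_dir)
```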