StellarCheng closed this issue 1 year ago.
If you are not running this at large volume, you can try the online demo. Clicking the links below each result takes you to a dataset page where you can download the files.
If you want to do this for a large set of descriptions, you will need to download the embeddings; you can check the relevant code here on how to use them to retrieve results. Each returned entry (an example is pasted in the code comments) has a `glb` key, and you can download the file from this URI: https://huggingface.co/datasets/allenai/objaverse/resolve/main/{glb}
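As a minimal sketch of the download step, assuming an entry shaped like the example in the code comments (the entry contents and file names here are hypothetical placeholders), the URI above can be filled in and fetched like this:

```python
import urllib.request

# URI pattern from the answer above; {glb} is filled from the entry's "glb" key.
OBJAVERSE_URL = "https://huggingface.co/datasets/allenai/objaverse/resolve/main/{glb}"

def glb_url(entry):
    """Build the download URL from a retrieved entry's 'glb' key."""
    return OBJAVERSE_URL.format(glb=entry["glb"])

# Hypothetical entry -- real ones come back from the retrieve function.
entry = {"glb": "glbs/some-shard/some-model.glb"}
url = glb_url(entry)
print(url)

# To actually fetch the file (requires network access):
# urllib.request.urlretrieve(url, "some-model.glb")
```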
Edit: in your case, you should pass a CLIP text embedding to the retrieve function.
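Conceptually, passing a CLIP text embedding to the retrieve function amounts to a cosine-similarity top-N search over the downloaded shape embeddings. Here is a minimal numpy sketch of that idea; the function name, array shapes, and variable names are all assumptions, not the repo's actual API:

```python
import numpy as np

def retrieve_top_n(text_emb, shape_embs, n=5):
    """Return indices of the n shape embeddings most similar to text_emb.

    text_emb:   (d,) CLIP text embedding of the query
    shape_embs: (num_shapes, d) matrix of precomputed shape embeddings
    """
    # Normalize so the dot product equals cosine similarity.
    text_emb = text_emb / np.linalg.norm(text_emb)
    shape_embs = shape_embs / np.linalg.norm(shape_embs, axis=1, keepdims=True)
    sims = shape_embs @ text_emb
    # Indices of the top-n scores, highest similarity first.
    return np.argsort(-sims)[:n]

# Toy example with random embeddings (d=8, 100 shapes).
rng = np.random.default_rng(0)
shape_embs = rng.normal(size=(100, 8))
query = shape_embs[42] + 0.01 * rng.normal(size=8)  # query close to shape 42
top = retrieve_top_n(query, shape_embs, n=3)
print(top)
```

With real data, `text_emb` would come from the OpenCLIP text encoder and `shape_embs` from the downloaded embedding files.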
Thank you for your response. So that means I can use the code in example.py to get the text or image embeddings?
Yes, you can use OpenCLIP (ViT-bigG-14) for the text and image embeddings.
Thanks, that's clear~
Hello author, thank you for your excellent work. I would like to know how to easily retrieve the desired mesh files using your code. For example, given the text description 'a vintage American sports car', I only want to retrieve the top-N matching shape files. Could you please guide me on how to obtain the corresponding files? Thank you.