What happened?
Hugging Face is blocked here, so the only way I can get an OpenCLIP model is from [https://www.modelscope.cn/models/AI-ModelScope/CLIP-ViT-H-14-laion2B-s32B-b79K/files]. Its embedding dimension is 1024. I used it as the model for OpenCLIPEmbeddingFunction. Adding images to a collection seems to work, but when I query with a single image the result is random (every run returns a different image).
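One possible explanation for this symptom is that the local weights never actually load, so open_clip falls back to a randomly initialized model and every embedding is effectively noise. A quick sanity check is to embed the same image twice and compare the vectors. The sketch below uses NumPy as a stand-in for the real embedder (no chromadb/open_clip needed) just to show why random embeddings make retrieval order random; `cosine_sim`, `stored`, and the query vectors are illustrative names, not part of either library.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Embedding stored in the collection at insert time (1024-dim, like ViT-H-14).
stored = rng.normal(size=1024)

# A deterministic embedder reproduces the same vector for the same image,
# so self-similarity is ~1.0 and the query finds the right image.
query_same = stored.copy()
print(cosine_sim(stored, query_same))   # ~1.0

# A broken embedder (e.g. weights that failed to load, random init) gives a
# fresh random vector each call: similarity to the stored vector is near 0,
# so nearest-neighbor ranking is effectively random on every query.
query_bad = rng.normal(size=1024)
print(cosine_sim(stored, query_bad))    # near 0
```

If embedding the same image twice with the real OpenCLIPEmbeddingFunction produces different vectors, the model weights are almost certainly not being loaded from the ModelScope files.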
Versions
OS: Windows 10
chromadb: 0.4.18
open_clip: 2.23.0
torch: 2.1.1+cu118
Model config: \venv\Lib\site-packages\open_clip\model_configs\ViT-H-14.json (the model files from ModelScope are placed under /ViT-H-14)
bug.zip
Relevant log output
No response