ShinasShaji closed this issue 1 year ago.
There's an issue with the Sentence Transformers models on Hugging Face. The default model for Immich is clip-ViT-B-32 (a quick standalone check is sketched below).
Edit: the Hugging Face devs are working on it: https://twitter.com/huggingface/status/1655760648926642178
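For anyone who wants to confirm the outage independently of Immich, here is a minimal sketch (my own check, not Immich's actual code) that tries to fetch the same model with the sentence-transformers package; it assumes `pip install sentence-transformers` has been run:

```python
# Minimal sketch: try to download/load the same CLIP model that Immich's
# machine-learning container requests from Hugging Face.
# Assumes the sentence-transformers package is installed.
from sentence_transformers import SentenceTransformer

try:
    # This fetches "clip-ViT-B-32" from Hugging Face if it is not cached locally.
    model = SentenceTransformer("clip-ViT-B-32")
    print("Model downloaded and loaded successfully.")
except Exception as exc:
    # During the outage, the download error surfaces here.
    print(f"Model load failed: {exc}")
```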
Awesome, I shall close this issue for now and await an update from Hugging Face.
On second thought, I'll keep this issue open until there's a fix, in case other Immich users are facing the same problem and wondering what's going on.
The Sentence Transformers models on Hugging Face are back online, and the model Immich uses is accessible again. CLIP encoding is working. Closing the issue.
The bug
After doing a clean install of Immich v1.55.0 using docker-compose, I noticed in the logs that CLIP encoding was repeatedly failing for each image upload. I have attached the logs below:
I'm not very well-versed in this, but opening the API link does prompt me for a username and password.
I have made only a few modifications to the .env file.
I assume this is the CLIP model that Immich uses? openai/clip-vit-base-patch32
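In case it helps with debugging, here is a minimal sketch (my own guess at a sanity check, not Immich's actual code) that asks the Hugging Face Hub whether openai/clip-vit-base-patch32 is currently being served; it assumes the huggingface_hub package is installed:

```python
# Minimal sketch: ask the Hugging Face Hub whether the CLIP model
# referenced above is currently reachable. Not Immich's own code;
# assumes `pip install huggingface_hub` has been run.
from huggingface_hub import HfApi

api = HfApi()
try:
    api.model_info("openai/clip-vit-base-patch32")
    print("openai/clip-vit-base-patch32 is reachable on the Hub.")
except Exception as exc:
    # During the outage, the failure would show up here.
    print(f"Could not fetch model metadata: {exc}")
```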
The OS that Immich Server is running on
Windows 11 Pro 22H2 (with Hyper-V)
Version of Immich Server
v1.55.0
Version of Immich Mobile App
Irrelevant?
Platform with the issue
Your docker-compose.yml content
Your .env content
Reproduction steps
Additional information
Really happy with Immich so far, it's amazing! The global map is sweet, and I'm looking forward to face recognition!