yaman opened this issue 1 year ago
Anyone in the void?
Hey @yaman! Sorry, I was away from the project.
Would love to have this! This is quite a neat workaround!
Can you push the ONNX Model weights to Huggingface Hub and raise a PR with that? That way, you always retain the attribution for doing the ONNX export.
I can help you get started with both. Here is a calendar link if that's easier: https://cal.com/nirant-kasliwal-qdrant/30min
Hi @NirantK,
Sorry for the late reply, I caught the flu and it knocked me out.
Let me give the latest updates:
After following up on the issue with the hf-optimum team, my workaround is no longer necessary; they fixed the problem in https://github.com/huggingface/optimum/issues/1519 on their main branch (though it might not be released yet).
I have already created an HF repo (https://huggingface.co/canavar/clip-ViT-B-32-multilingual-v1-ONNX), but I was waiting for a response from the model owners about pushing to the original model repository (if possible), with no luck. I will upload the ONNX version of the model to my HF repo and let you know.
thanks
Hi @NirantK again,
I pushed the model to https://huggingface.co/canavar/clip-ViT-B-32-multilingual-v1-ONNX. Do you want me to raise a PR to the fastembed repo?
I'd love it if you could PR it! That'll go much faster!
We've added image embedding support (including CLIP) in v0.3.0, though not a multilingual version yet.
I exported clip-ViT-B-32-multilingual-v1 to ONNX with some modifications (no effect on the output embeddings).
The hf optimum ONNX export can handle this model's (0) Transformer and (1) Pooling modules, but it cannot extend the exported graph with the provided Dense layer. What I did was create a model that combines the 3 layers, as follows:
CombinedModel
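A minimal sketch of that wrapper, assuming the standard sentence-transformers module interface where each module takes and returns a features dict (the exact forward signature here is illustrative):

```python
import torch
from torch import nn


class CombinedModel(nn.Module):
    """Chains the (0) Transformer, (1) Pooling and (2) Dense modules
    of the sentence-transformers model into a single forward pass."""

    def __init__(self, transformer, pooling, dense):
        super().__init__()
        self.transformer = transformer
        self.pooling = pooling
        self.dense = dense

    def forward(self, input_ids, attention_mask):
        features = {"input_ids": input_ids, "attention_mask": attention_mask}
        features = self.transformer(features)  # adds token_embeddings
        features = self.pooling(features)      # adds sentence_embedding
        features = self.dense(features)        # applies the Dense projection
        return features["sentence_embedding"]
```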
Combine dense with original model
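For example (module indices follow the model card's (0) Transformer / (1) Pooling / (2) Dense layout; `combined` is just a name for the wrapper instance):

```python
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

# the three modules of the sentence-transformers pipeline
combined = CombinedModel(st_model[0], st_model[1], st_model[2])
combined.eval()
```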
Export combined model to onnx
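Something along these lines with torch.onnx.export (the file name, opset version and dynamic axes are just an example):

```python
tokenizer = st_model.tokenizer  # the underlying HF tokenizer
dummy = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")

torch.onnx.export(
    combined,
    (dummy["input_ids"], dummy["attention_mask"]),
    "clip-ViT-B-32-multilingual-v1.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["sentence_embedding"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "sentence_embedding": {0: "batch"},
    },
    opset_version=14,
)
```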
Compare both the original and ONNX model outputs:
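For instance, loading the exported file with onnxruntime and checking it against SentenceTransformer.encode (the test sentences and tolerance are illustrative):

```python
import numpy as np
import onnxruntime as ort

texts = ["a photo of a cat", "ein Foto einer Katze"]

# reference embeddings from the original sentence-transformers model
reference = st_model.encode(texts)

# embeddings from the exported ONNX graph
session = ort.InferenceSession("clip-ViT-B-32-multilingual-v1.onnx")
encoded = tokenizer(texts, padding=True, return_tensors="np")
onnx_embeddings = session.run(
    ["sentence_embedding"],
    {
        "input_ids": encoded["input_ids"],
        "attention_mask": encoded["attention_mask"],
    },
)[0]

print(np.allclose(reference, onnx_embeddings, atol=1e-5))
```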
Output:
I would really like to contribute the ONNX model so that novices like me can use the ONNX version easily. I did not find a CONTRIBUTING guide; however, I can contribute the model with your directions.