-
I created the index using the sample.ipynb file below:
https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-…
-
Ollama [added support for embedding models like BERT](https://github.com/ollama/ollama/issues/327). This is much faster than using a generative model, such as llama2, which is currently the default in…
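Since Ollama exposes embeddings over its local REST API, a request to the default `/api/embeddings` endpoint can be sketched with the standard library alone. The model name and localhost port are assumptions (11434 is Ollama's default); the actual call is left commented out so the sketch stays self-contained.

```python
import json
import urllib.request

# Ollama's default local endpoint for embeddings (assumed running on 11434).
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_embedding_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's embeddings endpoint."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

# Example (requires a running Ollama server with the model pulled):
# req = build_embedding_request("all-minilm", "hello world")
# with urllib.request.urlopen(req) as resp:
#     embedding = json.loads(resp.read())["embedding"]
```

A dedicated embedding model like `all-minilm` returns a fixed-size vector directly, which is why it is much faster than extracting embeddings from a generative model.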
-
## The Bug
I'm trying to declare a field as a vector field by setting `vector_options` on the `Field`.
Pydantic then forces me to annotate the field with a proper type.
But with any possibl…
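One common way to satisfy a validator that insists on a proper type annotation is to give the field a concrete element type (e.g. `list[float]`) and attach the vector configuration as metadata via `typing.Annotated`. The sketch below uses only the standard library, and `VectorOptions` is a hypothetical stand-in for the library's real `vector_options` object:

```python
from dataclasses import dataclass
from typing import Annotated, get_type_hints

# Hypothetical marker standing in for the library's vector options.
@dataclass(frozen=True)
class VectorOptions:
    dim: int
    algorithm: str = "FLAT"

@dataclass
class Document:
    # The field carries a concrete type (list[float]) that satisfies
    # type checking, while the vector metadata rides along in Annotated.
    embedding: Annotated[list[float], VectorOptions(dim=768)]

# The metadata is recoverable at runtime with include_extras=True:
hints = get_type_hints(Document, include_extras=True)
opts = hints["embedding"].__metadata__[0]
```

Pydantic models can consume `Annotated` metadata the same way, so the vector options need not replace the type annotation, only accompany it.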
-
### What is the issue with the HTML Standard?
I discussed this with @fantasai.
> TL;DR - Could ``, ``, and possibly `dir` be updated to deal with unbalanced/malformed Unicode bidi control characters…
-
```
class TextEncoder(nn.Module):
    def __init__(self, clip_model, device):
        super().__init__()
        self.transformer = clip_model.transformer
        self.positional_embedding = clip_…
-
### System Info
```shell
Collecting environment information...
WARNING 11-10 14:19:08 _custom_ops.py:14] Failed to import from vllm._C with ImportError('/mnt/bbuf/vllm-backup/vllm/_C.abi3.so: undef…
-
The `environmentVariables` input gets ignored:

```yaml
acrName: ${{ env.REGISTRY_NAME }}
acrPassword: ${{ secrets.VIDEOCONTENTMODERATIONBACKEND_REGISTRY_PASSWORD }}
acrUsername: ${{ secrets.VIDEO…
```
-
### 🐛 Describe the bug
When I use the MPS backend, the output turns into NaN values for a simple encoder similar to the tutorial on PyTorch.org. This happened even after I tried converting the tensors to float32.
```
…
-
I have gone through the example: opensearch-py-ml/examples/demo_deploy_cliptextmodel.html
The model is correctly registered in the OpenSearch cluster, but the final command of the example:
ml_client.depl…
-
I'm encountering an issue with the dimensions of the text encoder output in a fine-tuned CLIP model. The output of my fine-tuned CLIP model based on RN50 is (1, 1024), whereas the output from CLIPTex…
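A likely source of this kind of mismatch is CLIP's text projection: the transformer's hidden width and the joint embedding dimension differ, and only the projected vector lands in the (1, 1024) space. A minimal pure-Python sketch, assuming the usual RN50 CLIP widths (transformer width 512, joint embedding dimension 1024):

```python
import random

def project(vec, weight):
    """Apply a linear projection: out[j] = sum_i vec[i] * weight[i][j].

    In CLIP, a text-projection matrix maps the transformer's hidden width
    into the joint image-text embedding space, so comparing a projected
    output against an unprojected hidden state gives mismatched dims.
    """
    rows, cols = len(weight), len(weight[0])
    assert len(vec) == rows, f"expected {rows}-dim input, got {len(vec)}"
    return [sum(vec[i] * weight[i][j] for i in range(rows)) for j in range(cols)]

random.seed(0)
hidden, joint = 512, 1024  # assumed widths for an RN50-based CLIP
w = [[random.gauss(0, 0.02) for _ in range(joint)] for _ in range(hidden)]
out = project([1.0] * hidden, w)  # 1024-dim, matching the (1, 1024) output
```

If one pipeline returns the pre-projection hidden state and the other returns the projected embedding, applying (or skipping) the projection consistently should reconcile the shapes.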