SteveSandersonMS opened 5 months ago
Also, could you clarify how to specify the similarity metric (distance function) with the Chroma client?
Docs: https://docs.trychroma.com/usage-guide#changing-the-distance-function
I was getting zero results to all queries until I changed the similarity metric to cosine, but was only able to do that by guessing the HTTP API and invoking it manually.
cc: @roji, @dmytrostruk, @westey-m
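For reference, a minimal sketch of the kind of manual HTTP call this involves. Chroma selects the distance function per collection via the `hnsw:space` metadata key (`"l2"` is the default; `"cosine"` and `"ip"` are the alternatives); the endpoint path and payload field names below are assumptions based on Chroma's v1 REST API, not something confirmed in this thread:

```python
import json

# Sketch: build the request body for creating a Chroma collection that
# uses cosine distance. The "hnsw:space" metadata key is how Chroma
# picks the distance function; endpoint path assumed from the v1 API.
def create_collection_payload(name, space="cosine"):
    return {
        "name": name,
        "metadata": {"hnsw:space": space},  # distance function lives here
        "get_or_create": True,
    }

body = json.dumps(create_collection_payload("sk-memory"))
# POST this body to e.g. http://localhost:8000/api/v1/collections
```

Note that the space is fixed at collection creation time, so an SK-level setting would need to be applied when the collection is first created.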
However, one of the value props of Chroma (vs other vector dbs) is that it can handle embedding generation for you natively via a built-in model. I was expecting that by not specifying an embedding generator, it would use the built-in model.
@SteveSandersonMS This capability is supported in the Chroma Python and JS SDKs because they ship classes that integrate with different embedding generation providers (e.g. OpenAI, Google Gemini, etc.).
There is no Chroma .NET SDK, so in the .NET version of SK we implemented our own ChromaClient,
which talks to the Chroma Backend API. I just checked their OpenAPI spec and some of their backend server code, and I can't find any capability to inject an embedding model, or anything like that, to enable embedding by default.
So, from the SK point of view, ITextEmbeddingGenerationService
plays the same role as the embedding-function injection in the Chroma Python/JS SDKs. Let me know if that makes sense.
Also, could you clarify how to specify the similarity metric (distance function) with the Chroma client?
I believe it's not supported yet, and we should make it configurable.
Thanks for clarifying. I see now that when the Chroma docs say it will handle embedding automatically, they actually mean their client-side APIs will do that, and the server does not.
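To make the client-side behaviour concrete: Chroma's Python SDK accepts an `embedding_function`, which is just a callable mapping a list of documents to a list of vectors (the built-in default is a local sentence-transformers model). A self-contained stand-in for that callable shape, with the real model replaced by a toy character-hash so no model download is needed:

```python
# Sketch of the callable shape Chroma's Python client expects for
# embedding_function: List[str] -> List[List[float]]. The real default
# runs a local sentence-transformers model client-side; this toy version
# just hashes characters so the example runs anywhere.
class ToyEmbeddingFunction:
    def __init__(self, dim=4):
        self.dim = dim

    def __call__(self, texts):
        return [
            [sum(ord(c) for c in t[i::self.dim]) % 100 / 100.0
             for i in range(self.dim)]
            for t in texts
        ]

embed = ToyEmbeddingFunction()
vectors = embed(["hello", "world"])  # one fixed-length vector per document
```

This is why the "automatic" embedding never reaches the server: the SDK computes vectors locally and only ever sends numbers over the wire, which is exactly the role ITextEmbeddingGenerationService fills in SK.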
[metric] I believe it's not supported yet, and we should make it configurable.
OK great. I'll update the issue title to reflect this.
I also didn't spot any way of passing "where" filters when querying. Apologies if I'm just failing to recognize correct usage patterns.
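For what it's worth, the backend query endpoint does appear to accept a "where" metadata filter; a sketch of the raw payload (field names assumed from Chroma's v1 REST API, so treat this as a guess rather than confirmed usage):

```python
# Sketch: raw body for Chroma's collection query endpoint, including a
# "where" metadata filter. Field names assumed from the v1 REST API; the
# SK ChromaClient would need to surface "where" for SK users to reach it.
def query_payload(embedding, n_results=5, where=None):
    body = {"query_embeddings": [embedding], "n_results": n_results}
    if where is not None:
        body["where"] = where  # e.g. {"source": "docs"} or {"$and": [...]}
    return body

body = query_payload([0.1, 0.2, 0.3], where={"source": "docs"})
# POST the JSON-encoded body to the collection's /query endpoint
```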
I would like to use cosine for similarity search. For Milvus in LlamaIndex I do it like this:
milvus_python = Milvus(collection_name=collectionName, embedding_function=oembed, index_params={"metric_type": "COSINE"}, connection_args={"host": "10.0.0.49", "port": 19530})
If you see this link https://stackoverflow.com/questions/77794024/searching-existing-chromadb-database-using-cosine-similarity, the default is L2, which is not optimal for full-text similarity search.
So this would be a great addition to Semantic Kernel. I tried the default (which is L2) with Semantic Kernel and it does not yield results as good as it does in Python with cosine distance specified.
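To illustrate why the default matters: L2 distance is sensitive to vector magnitude while cosine distance is not, so two embeddings pointing in the same direction but with different norms look far apart under L2. A quick self-contained check:

```python
import math

def l2(a, b):
    # Euclidean (L2) distance: grows with any difference in magnitude.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    # 1 - cosine similarity: depends only on direction, not magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]  # same direction, twice the magnitude

print(l2(a, b))               # > 3.7: L2 treats these as distant
print(cosine_distance(a, b))  # 0.0: cosine treats them as identical
```

Unless the embedding model emits unit-normalized vectors (in which case the two metrics rank results identically), an L2 default can badly reorder text-similarity results, which matches the behaviour described above.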
@matthewbolanos I think it makes sense to include this in the memory connector update and apply it to all memory connectors. Do you agree?
Update: this is the real issue.
The docs for the Chroma connector suggest usage like this:
Configuring a text embedding generator seems to be mandatory, since if you don't, then at runtime this fails with
"ITextEmbeddingGenerationService dependency was not provided".
However, one of the value props of Chroma (vs other vector dbs) is that it can handle embedding generation for you natively via a built-in model. I was expecting that by not specifying an embedding generator, it would use the built-in model.
Is this possible to use with SK?