microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

[Issue]: QdrantRetrieveUserProxyAgent is missing support for text-embedding-ada-002 embedding model #1282

Open Halpph opened 7 months ago

Halpph commented 7 months ago

Describe the issue

Issue Overview: The proposal to implement QdrantRetrieveUserProxyAgent has been carried out successfully. However, when attempting to use the feature, it turns out that the current implementation relies on qdrant_client, which in turn depends on fastembed. Consequently, only the specific set of models listed in SUPPORTED_EMBEDDING_MODELS is supported:

SUPPORTED_EMBEDDING_MODELS: Dict[str, Tuple[int, models.Distance]] = {
    "BAAI/bge-base-en": (768, models.Distance.COSINE),
    "sentence-transformers/all-MiniLM-L6-v2": (384, models.Distance.COSINE),
    "BAAI/bge-small-en": (384, models.Distance.COSINE),
    "BAAI/bge-small-en-v1.5": (384, models.Distance.COSINE),
    "BAAI/bge-base-en-v1.5": (768, models.Distance.COSINE),
    "intfloat/multilingual-e5-large": (1024, models.Distance.COSINE),
}
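To make the limitation concrete, here is a minimal stand-alone reproduction of that lookup (a two-entry mirror of the table above, with the distance enum replaced by a plain string so the sketch is self-contained; `lookup` is an illustrative helper, not autogen code):

```python
from typing import Dict, Tuple

# Two-entry mirror of SUPPORTED_EMBEDDING_MODELS; the distance is a plain
# string here instead of qdrant_client's models.Distance enum.
SUPPORTED_EMBEDDING_MODELS: Dict[str, Tuple[int, str]] = {
    "BAAI/bge-small-en": (384, "Cosine"),
    "BAAI/bge-base-en": (768, "Cosine"),
}

def lookup(model_name: str) -> Tuple[int, str]:
    # Illustrative helper: fails for any model fastembed does not ship,
    # which is exactly what happens with text-embedding-ada-002.
    if model_name not in SUPPORTED_EMBEDDING_MODELS:
        raise ValueError(f"Unsupported embedding model: {model_name}")
    return SUPPORTED_EMBEDDING_MODELS[model_name]

print(lookup("BAAI/bge-small-en"))  # (384, 'Cosine')
# lookup("text-embedding-ada-002") raises ValueError
```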

Enhancement Proposal: Support should be extended to embedding models beyond the current list. A reference implementation, inspired by the approach taken in Issue 253, is provided below:

from typing import List

from litellm import embedding as test_embedding
from qdrant_client import QdrantClient
from qdrant_client.http.models import FieldCondition, Filter, MatchText, SearchRequest

# Embed the query texts with OpenAI's ada model via litellm
embed_response = test_embedding(model='text-embedding-ada-002', input=query_texts)

# Collect one vector per input text
all_embeddings: List[List[float]] = []
for item in embed_response['data']:
    all_embeddings.append(item['embedding'])

# Build one filtered search request per query embedding
search_queries: List[SearchRequest] = []

for embedding in all_embeddings:
    search_queries.append(
        SearchRequest(
            vector=embedding,
            filter=Filter(
                must=[
                    FieldCondition(
                        key="page_content",
                        match=MatchText(
                            text=search_string,
                        )
                    )
                ]
            ),
            limit=n_results,
            with_payload=True,
        )
    )

# `client` is an existing QdrantClient instance
search_response = client.search_batch(
    collection_name="{your collection name}",
    requests=search_queries,
)

This adds a dependency on litellm, but I think a contribution along these lines would greatly benefit the community by expanding the set of supported models and enhancing the overall utility of QdrantRetrieveUserProxyAgent.
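The proposal above could also be generalized into a hook: instead of a model name checked against fastembed's list, the agent would accept any callable mapping texts to vectors. A minimal sketch (`EmbeddingFunction` and `build_search_vectors` are hypothetical names, not existing autogen API):

```python
from typing import Callable, List

# Hypothetical hook: any callable that maps a list of texts to one
# embedding vector per text would be accepted, so ada (via litellm),
# sentence-transformers, or anything else could be plugged in.
EmbeddingFunction = Callable[[List[str]], List[List[float]]]

def build_search_vectors(
    query_texts: List[str],
    embedding_function: EmbeddingFunction,
) -> List[List[float]]:
    # Each returned vector would then become one SearchRequest,
    # as in the snippet above.
    vectors = embedding_function(query_texts)
    if len(vectors) != len(query_texts):
        raise ValueError("embedding function must return one vector per text")
    return vectors

# Stand-in embedding function for illustration; in practice this would
# wrap the litellm embedding() call shown above.
def dummy_embed(texts: List[str]) -> List[List[float]]:
    return [[float(len(t)), 0.0] for t in texts]

print(build_search_vectors(["hello", "qdrant"], dummy_embed))
# [[5.0, 0.0], [6.0, 0.0]]
```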

Steps to reproduce

No response

Screenshots and logs

No response

Additional Information

No response

ekzhu commented 7 months ago

We are currently understaffed on the RAG front. Would you be willing to submit a PR to fix this issue?

ykim-isabel commented 7 months ago

We've come up with a complete pull request for this issue using any general embedding function that returns a list of embeddings. We'll post our pull request in the next few hours.

vitorsabbagh commented 7 months ago

> We've come up with a complete pull request for this issue using any general embedding function that returns a list of embeddings. We'll post our pull request in the next few hours.

Is this implemented?

ykim-isabel commented 6 months ago

We'll submit the draft pull request for review.

joshkyh commented 6 months ago

I have also just tried to use the ada embedding model by performing the vectorization of the chunks outside of AutoGen, using LlamaIndex, and then querying the populated Qdrant database with the notebook example at https://github.com/microsoft/autogen/blob/main/notebook/agentchat_qdrant_RetrieveChat.ipynb, setting "embedding_model": "default" and docs_path: None.

However, the error message says that embedding_model must be one of the 12 options listed at https://qdrant.github.io/fastembed/examples/Supported_Models/, and ada is not among them.
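Until the restriction is lifted, one workaround is to keep both vectorization and retrieval outside the agent: conceptually, the retrieval step is just a cosine-similarity ranking over the stored vectors, which Qdrant normally performs server-side. A stdlib sketch of that ranking (`stored` stands in for the populated collection; all names are illustrative):

```python
import math
from typing import List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    # Cosine similarity between two vectors: dot product over the
    # product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: List[float], stored: List[Tuple[str, List[float]]], k: int) -> List[str]:
    # Return the ids of the k stored vectors most similar to the query,
    # i.e. what a Qdrant cosine search does over the collection.
    ranked = sorted(stored, key=lambda item: cosine(query, item[1]), reverse=True)
    return [chunk_id for chunk_id, _ in ranked[:k]]

# `stored` stands in for chunks embedded externally (e.g. with ada via
# LlamaIndex) and upserted into Qdrant.
stored = [("doc-a", [1.0, 0.0]), ("doc-b", [0.0, 1.0]), ("doc-c", [0.7, 0.7])]
print(top_k([1.0, 0.1], stored, 2))  # ['doc-a', 'doc-c']
```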