run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai

Unable to Retrieve Chat Responses from Multiple Indexes. #13125

Open rk-spirinova opened 2 months ago

rk-spirinova commented 2 months ago

Question

Hello everyone,

I'm currently running into an issue while trying to retrieve chat responses from multiple indexes in my project. Here's a brief overview of the situation:

Problem: Despite a fair amount of research, I haven't found a suitable way to fetch chat responses from multiple indexes simultaneously.

Objective: I want to efficiently collect chat responses from several indexes within my application to improve the user experience.

Below is a snippet of the code I'm currently using to handle chat requests and create indexes:

async def handle_chat_request(request: Request, data: ChatData) -> StreamingResponse:
    if data.config.model in OLLAMA_MODELS:
        return await _ollama_chat(request, data)
    elif data.config.model in OPENAI_MODELS:
        return await _openai_chat(request, data)
    else:
        raise HTTPException(status_code=400, detail="Invalid Model Name.")

async def _openai_chat(request: Request, data: ChatData) -> StreamingResponse:
    print("Received an OpenAI chat request:", request, data)
    Settings.llm = OpenAI(model=data.config.model, temperature=0.2)
    Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
    return await _chat_stream(request, data)

async def _chat_stream(request: Request, data: ChatData) -> StreamingResponse:
    try:
        index = _get_or_create_index(data.config.model, data.datasource)
        chat_engine = index.as_chat_engine()
        response = chat_engine.stream_chat(data.message, [])

        async def event_generator():
            for token in response.response_gen:
                if await request.is_disconnected():
                    break
                yield convert_sse(token)
            yield convert_sse({"done": True})

        return StreamingResponse(
            event_generator(),
            media_type="text/event-stream",
        )
    except Exception as e:
        full_exception = traceback.format_exc()
        logger.error(f"{data.config.model} chat error: {e}\n{40 * '~'} \n{full_exception}")
        raise HTTPException(status_code=500, detail="Internal Server Error")

Initially, I tried combining documents from multiple sources into a single index. However, that approach hurt the accuracy of the responses, so it needs to be refactored. Below is the code snippet illustrating the initial approach:

def generate_web_index(website_url: str):
    return BeautifulSoupWebReader().load_data(urls=[website_url])

document1 = SimpleDirectoryReader(ds_data_dir).load_data()
document2 = generate_web_index("https://github.com")

index = VectorStoreIndex.from_documents(
    document1 + document2,
    show_progress=True,
    storage_context=StorageContext.from_defaults(),
    embed_model=Settings.embed_model,
)
index.storage_context.persist(ds_storage_dir)

I'm seeking advice on how to improve this approach and handle multiple indexes effectively. Any insights or suggestions would be greatly appreciated.

Thank you.

dosubot[bot] commented 2 months ago

To address the challenge of fetching chat responses from multiple indexes simultaneously while maintaining accuracy and enhancing user experience, consider the following approach:

  1. Maintain Separate Indexes: Keep your data in separate indexes per source or response type. This preserves data integrity and improves the accuracy of responses.

  2. Implement Asynchronous Fetching: Use asynchronous programming to query each index concurrently, so overall latency is closer to that of the slowest index rather than the sum of all of them (a minimal sketch of this pattern follows the list).

  3. Aggregate Results Thoughtfully: After fetching the responses, combine them in a way that preserves their relevance and accuracy.

  4. Utilize Advanced Chat Engines: Consider an engine such as CitationsContextChatEngine from LlamaIndex, which is designed for context-aware responses enriched with citations and related documents.

  5. Caching for Efficiency: Cache frequently accessed data to reduce repeated retrieval work and improve efficiency.
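
As a rough illustration of points 1-3, here is a minimal sketch that keeps two separate indexes and queries them concurrently with LlamaIndex's async query API. The directory paths, the example question, and the final aggregation step are placeholders to adapt to your application:

import asyncio
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# 1. Maintain a separate index per data source (paths are placeholders).
docs_a = SimpleDirectoryReader("data/source_a").load_data()
docs_b = SimpleDirectoryReader("data/source_b").load_data()
index_a = VectorStoreIndex.from_documents(docs_a)
index_b = VectorStoreIndex.from_documents(docs_b)

async def query_all_indexes(question: str) -> dict:
    engines = [index_a.as_query_engine(), index_b.as_query_engine()]
    # 2. Fetch from every index concurrently via the async query API.
    responses = await asyncio.gather(*(engine.aquery(question) for engine in engines))
    # 3. Aggregate thoughtfully: here the answers are simply kept keyed by source.
    return {f"index_{i}": str(response) for i, response in enumerate(responses)}

print(asyncio.run(query_all_indexes("What do the two sources say about X?")))

Depending on how the sources relate, you could instead synthesize a single combined answer from the per-index results (for example with a SubQuestionQueryEngine) rather than returning them separately.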

Here's a simplified example of how to implement concurrent fetching using an advanced chat engine:

import asyncio
from llama_index.packs.cohere_citation_chat.citations_context_chat_engine import CitationsContextChatEngine

async def fetch_and_process_chat_responses(queries):
    # Engine construction details (index, LLM, etc.) are omitted here;
    # configure it for your own data before using this pattern.
    chat_engine = CitationsContextChatEngine()  # Advanced engine for enriched responses

    tasks = []
    for query in queries:
        # Use the async chat API (achat) so the queries actually run concurrently.
        task = asyncio.create_task(chat_engine.achat(query))
        tasks.append(task)

    responses = await asyncio.gather(*tasks)
    for response in responses:
        print(f"Chat Response with Citations and Documents: {response}")
        # Process and aggregate responses as needed

# Example usage
queries = ['Query 1', 'Query 2', 'Query 3']
asyncio.run(fetch_and_process_chat_responses(queries))

This approach leverages asynchronous programming to concurrently fetch and process data from multiple sources, enriching the chat responses with citations and related documents. Depending on your specific needs, you may choose a different chat engine provided by LlamaIndex to better suit your application's requirements.

**Sources**

- [[Feature Request]: UX: Varied Latency in Retrieval](https://github.com/run-llama/llama_index/issues/6623)
- [llama-index-packs/llama-index-packs-cohere-citation-chat/llama_index/packs/cohere_citation_chat/utils.py](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-cohere-citation-chat/llama_index/packs/cohere_citation_chat/utils.py)
- [llama-index-packs/llama-index-packs-cohere-citation-chat/examples/example.py](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-cohere-citation-chat/examples/example.py)
- [llama-index-core/llama_index/core/chat_engine/__init__.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/chat_engine/__init__.py)
- [llama-index-packs/llama-index-packs-cohere-citation-chat/llama_index/packs/cohere_citation_chat/citations_context_chat_engine.py](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-cohere-citation-chat/llama_index/packs/cohere_citation_chat/citations_context_chat_engine.py)
- [llama-index-legacy/llama_index/legacy/llms/openai_utils.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/llms/openai_utils.py)
- [benchmarks/agent/agent_utils.py](https://github.com/run-llama/llama_index/blob/main/benchmarks/agent/agent_utils.py)
- [llama-index-packs/llama-index-packs-panel-chatbot/llama_index/packs/panel_chatbot/app.py](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-panel-chatbot/llama_index/packs/panel_chatbot/app.py)

logan-markewich commented 2 months ago

A few notes

async def _openai_chat(request: Request, data: ChatData) -> StreamingResponse:
    print("Received an OpenAI chat request:", request, data)
    # Create the LLM/embedding model per request instead of mutating the global
    # Settings object, and pass them down explicitly.
    llm = OpenAI(model=data.config.model, temperature=0.2)
    embed_model = OpenAIEmbedding(model="text-embedding-3-small")
    return await _chat_stream(request, data, llm, embed_model)

async def _chat_stream(request: Request, data: ChatData, llm, embed_model) -> StreamingResponse:
    try:
        index = _get_or_create_index(data.config.model, data.datasource)
        # The llm/embed_model are passed straight into the chat engine rather than
        # being read from global Settings.
        chat_engine = index.as_chat_engine(llm=llm, embed_model=embed_model)
        response = chat_engine.stream_chat(data.message, [])

        async def event_generator():
            for token in response.response_gen:
                if await request.is_disconnected():
                    break
                yield convert_sse(token)
            yield convert_sse({"done": True})

        return StreamingResponse(
            event_generator(),
            media_type="text/event-stream",
        )
    except Exception as e:
        full_exception = traceback.format_exc()
        logger.error(f"{data.config.model} chat error: {e}\n{40 * '~'} \n{full_exception}")
        raise HTTPException(status_code=500, detail="Internal Server Error")

Also, in a server setting like this, I wouldn't be saving to/from disk. This can be pretty slow (and also blocks the async event loop). I would use a remote vector db like Qdrant, Weaviate, etc.
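
For illustration, here is a minimal sketch of persisting the index to a remote Qdrant collection instead of local disk, assuming the llama-index-vector-stores-qdrant integration is installed and reusing document1/document2 and Settings from the earlier snippet; the URL and collection name are placeholders:

import qdrant_client
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Placeholder connection details; point these at your own Qdrant deployment.
client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="chat_docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build the index once; the embeddings live in Qdrant, not on local disk.
index = VectorStoreIndex.from_documents(
    document1 + document2,
    storage_context=storage_context,
    embed_model=Settings.embed_model,
)

# Later (e.g. inside _get_or_create_index), reattach to the same collection
# without re-indexing:
index = VectorStoreIndex.from_vector_store(vector_store, embed_model=Settings.embed_model)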