QuivrHQ / quivr

Opinionated RAG for integrating GenAI in your apps 🧠 Focus on your product rather than the RAG. Easy integration in existing products with customisation! Any LLM: GPT-4, Groq, Llama. Any Vectorstore: PGVector, Faiss. Any Files. Any way you want.
https://core.quivr.com

Bug: Streaming Response for model "gpt-3.5-turbo-0613" #668

Closed: ghost closed this issue 1 year ago

ghost commented 1 year ago

Bug: Streaming Response for model "gpt-3.5-turbo-0613"

Git SHA: 6a436d822c4ed0f1c66e83f5b5a1ba06c8e85a6e

Operating System:

Windows 11, No WSL / No WSL2

Docker and Docker Compose versions:

Supabase version and setup:

Detailed steps to reproduce the issue:

Run the project using Docker.

Make a chat request using model "gpt-3.5-turbo-0613". Observe that the streaming response is not working as expected.

Problem Description:

In the backend route file backend/routes/chat_route.py, the condition that checks whether chat_question.model is present in the streaming_compatible_models list does not include "gpt-3.5-turbo-0613", so requests for that model are forwarded to the non-streaming handler.

Code Snippets:

# backend/routes/chat_route.py
if chat_question.model not in streaming_compatible_models:
    # Forward the request to the none streaming endpoint
    return await create_question_handler(
        request, chat_question, chat_id, current_user
    )
# backend/util/constants.py
openai_function_compatible_models = [
    "gpt-3.5-turbo-0613",
    "gpt-4-0613",
]

streaming_compatible_models = ["gpt-3.5-turbo, gpt4all-j-1.3"]

private_models = ["gpt4all-j-1.3"]

As a result, streaming does not work for this model: the request silently falls back to the non-streaming endpoint. Any feedback or suggested fixes would be appreciated.
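
A possible fix, sketched here and not verified against the current codebase, would be to split the comma-joined string in backend/util/constants.py into separate entries and add the 0613 snapshot (whether that snapshot is meant to be streaming-compatible is an assumption):

# backend/util/constants.py (sketch of a possible fix, not verified upstream)
# Separate the single comma-joined string into individual model names and
# add the 0613 snapshot so the streaming check in chat_route.py accepts it.
streaming_compatible_models = [
    "gpt-3.5-turbo",
    "gpt-3.5-turbo-0613",
    "gpt4all-j-1.3",
]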

ghost commented 1 year ago

I think "gpt-3.5-turbo-0613" is a streaming-compatible model? But when I modified the list, a new problem appeared.

github-actions[bot] commented 1 year ago

Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.