Closed: travisgu closed this issue 4 days ago
Hey there, @travisgu! I'm here to assist you with any issues you're experiencing while using Quivr. I can help you solve bugs, answer questions, and even guide you on how to become a contributor to the project. Let's get started!
To resolve the issue of blank chat responses when using the Ollama qwen:72b model in Quivr, ensure that the prompt and assistant columns in the Supabase chat history table are being correctly populated. Here are the steps to address this:
1. Check the generate_stream and generate_answer methods: Ensure these methods are correctly saving the responses to the database.
2. Update the save_answer method: Ensure this method correctly updates the prompt and assistant columns in the Supabase chat history table.
Here is an example of how you might update the save_answer method:
def save_answer(self, question, response_tokens, streamed_chat_history, save_answer):
    # Combine response tokens into a single response
    response = ''.join(response_tokens)
    # Update the chat history with the response
    streamed_chat_history.assistant = response
    # Save the chat history to the database
    if save_answer:
        chat_service.save_chat_history(
            chat_id=question.chat_id,
            question=question.question,
            answer=response,
            prompt=streamed_chat_history.prompt,
            assistant=streamed_chat_history.assistant,
        )
3. Check the .env configuration: Ensure OLLAMA_API_BASE_URL is uncommented and correctly points to the local URL for the Ollama API:
# LOCAL
OLLAMA_API_BASE_URL=http://host.docker.internal:11434 # Uncomment to activate ollama. This is the local url for the ollama api
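If the backend container cannot reach that URL, the request can fail silently and the assistant column stays empty even though the UI shows no error. As a rough sketch of how to confirm reachability from inside the backend container, assuming Python and the requests package are available there (Ollama's /api/tags endpoint lists the locally installed models):

import os
import requests

# Use the same variable the backend reads; fall back to the value from the .env example above.
base_url = os.getenv("OLLAMA_API_BASE_URL", "http://host.docker.internal:11434")

# GET /api/tags is Ollama's model-listing endpoint; a 200 here means the URL is reachable.
resp = requests.get(f"{base_url}/api/tags", timeout=10)
resp.raise_for_status()
print("Reachable. Models:", [m["name"] for m in resp.json().get("models", [])])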
Ensure that the generate_answer and generate_stream methods in the backend/modules/brain/qa_headless.py file are correctly handling the chat history updates and saving the responses to the database [1][2][3].
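As a rough sketch of what that streaming side could look like, here is how a generate_stream-style method might accumulate tokens and hand them to the save_answer method shown above; the llm attribute and its astream call are assumptions for illustration, not the exact Quivr implementation:

async def generate_stream(self, question, streamed_chat_history, save_answer=True):
    response_tokens = []

    # Collect tokens as the model streams them and keep the partial answer up to date.
    async for chunk in self.llm.astream(question.question):  # self.llm is a hypothetical LLM handle
        token = chunk.content if hasattr(chunk, "content") else str(chunk)
        response_tokens.append(token)
        streamed_chat_history.assistant = "".join(response_tokens)
        yield streamed_chat_history  # push the partial answer to the client

    # Persist the full answer once streaming has finished.
    self.save_answer(question, response_tokens, streamed_chat_history, save_answer)

The important detail is the final save_answer call: if tokens are streamed to the client but never written back, the prompt and assistant columns in Supabase stay empty, which matches the blank-response symptom reported in this issue.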
To continue talking to Dosu, mention @dosu.
I didn't change any code and I was running Quivr with the Docker image. The URL configuration is correct. I can see the prompt in the Ollama log file, and Ollama returns 200 for the API call. @dosu
What's the model name you use? I find that it should be prefixed with ollama/.
Right, the model name is "ollama/qwen:72b".
Btw, I am running Quivr with docker-compose on my laptop; I am not sure if this is the cause.
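For anyone landing here with the same symptom: as far as I know the Quivr backend routes completions through LiteLLM, and the ollama/ prefix is what tells LiteLLM to dispatch to the Ollama provider instead of treating the name as an OpenAI model. A standalone sanity check, assuming the litellm package and the same base URL as above:

from litellm import completion

response = completion(
    model="ollama/qwen:72b",  # the ollama/ prefix selects the Ollama provider
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    api_base="http://host.docker.internal:11434",  # same value as OLLAMA_API_BASE_URL
)
print(response.choices[0].message.content)

If the prefix is missing, the call can fail in a way the UI swallows, which would show up as exactly this kind of empty assistant message.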
Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.
What happened?
When I tried to use Quivr with the Ollama qwen:72b model, the chat response is always empty. There is no error showing in the log or on the web page. I checked the Supabase chat history table and the assistant text is empty.
Relevant log output
Twitter / LinkedIn details
No response