Closed: tammenha closed this issue 5 months ago
🤖
Hey there @tammenha! Great to see you back. Hope you're doing well.
Based on the context provided, it seems you're encountering an issue similar to those reported in Issue #11029 and Issue #9549.
The TypeError you're encountering is likely caused by concatenating a string with a None value; in Python, you cannot concatenate a string with None directly. This can happen with ChatMessage objects where content=None: if you try to concatenate the content of such a message with a string elsewhere in your code, you'll hit this error.
To fix this, ensure you never concatenate None with a string: check whether the value is None before concatenating and handle that case explicitly. For example:
# Fall back to the bare string when content is None.
if content is not None:
    result = some_string + content
else:
    result = some_string
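A more compact equivalent, assuming content is either a string or None, coalesces None to an empty string:

# None becomes "", so the concatenation always succeeds.
result = some_string + (content or "")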
Issue #11029 was fixed in llama-index-llms-vllm==0.1.4. However, you mentioned that you're using LlamaIndex version 0.10.10, and the latest version is v0.10.11, so the issue you're encountering may already be fixed. I'd recommend updating to the latest version and seeing if the issue persists.
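If you're unsure how to upgrade, a command along these lines should work for a pip-based install:

pip install --upgrade llama-index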
If the issue still persists after updating, could you please provide more information about the context in which you're trying to concatenate a string with the content of a ChatMessage? That would help in providing a more precise solution.
lame, why is localai sending None in deltas during streaming lol
will need to update the OpenAI LLM class to fix
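Until the class is patched, a client-side workaround is to coalesce None deltas to empty strings while accumulating the stream yourself. Here is a minimal sketch against the raw OpenAI-compatible API, bypassing llama_index entirely; the base_url, api_key, and model name are placeholder assumptions for a LocalAI setup:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="mixtral",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

text = ""
for chunk in stream:
    if not chunk.choices:
        continue
    # LocalAI can emit chunks whose delta.content is None (e.g. the final
    # chunk carrying finish_reason), so coalesce before appending.
    delta = chunk.choices[0].delta.content
    text += delta or ""

print(text)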
@logan-markewich I am still suffering from this today with OpenAI-Like via LiteLLM → Ollama → Mixtral:8x7B, even with this change: https://github.com/run-llama/llama_index/pull/14365.
Bug Description
When generating a response stream (i.e. with print_response_stream()), I get a TypeError as the generation of the response concludes. The response is fully generated at that point, but I still get the error; for details, see the logs below.
I'm using LlamaIndex with LocalAI, but I'm not sure if that's related.
These are the models I used:
https://huggingface.co/TheBloke/em_german_leo_mistral-GGUF
https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF
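For context, a minimal streaming setup along these lines reproduces the shape of the problem. All names, paths, and endpoints below are illustrative placeholders rather than the reporter's exact configuration, and OpenAILike requires the llama-index-llms-openai-like package:

from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai_like import OpenAILike

# Point LlamaIndex at a LocalAI (OpenAI-compatible) server.
Settings.llm = OpenAILike(
    model="mixtral",                      # placeholder model name
    api_base="http://localhost:8080/v1",  # placeholder LocalAI endpoint
    api_key="not-needed",
    is_chat_model=True,
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(streaming=True)

response = query_engine.query("What does the document say?")
response.print_response_stream()  # TypeError surfaces as the stream finishes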
Version
0.10.10
Steps to Reproduce
Relevant Logs/Tracebacks