Closed MrLinWang closed 1 year ago
🤖
Hello,
Thank you for reaching out and providing detailed information about the issue you're facing. The `JSONDecodeError` you're encountering typically occurs when there's a problem with the JSON data being parsed. In this case, it seems to happen when the `ChatOllama` model tries to process the response from Ollama's generate endpoint.
Given that you're using LangChain version 0.0.320, I would first recommend updating to the latest version, which is 0.0.321. There might have been bug fixes or improvements that could potentially resolve the issue you're facing.
You can update LangChain by running the following command:
```bash
pip install --upgrade langchain
```
After updating, please try running your code again to see if the issue persists.
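If it helps, a quick way to confirm which version is actually active after the upgrade (just a sanity check, assuming you run it in the same interpreter that executes your script):

```python
import langchain

# Confirm the upgrade took effect in this environment.
print(langchain.__version__)
```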
If the error still occurs, it would be helpful to have a bit more information to diagnose the issue. Could you please provide the full traceback of the error? This would give us more context about where exactly the error is happening.
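For reference, one way to capture the full traceback is to wrap the failing call; the sketch below just uses `json.loads("")` as a stand-in, since it raises the same `Expecting value: line 1 column 1 (char 0)` error:

```python
import json
import traceback

try:
    # Replace this stand-in with the call that actually fails in your script.
    json.loads("")
except Exception:
    traceback.print_exc()  # Paste this full traceback into the issue.
```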
Additionally, it would be helpful to know the exact input you're passing to the `ChatOllama` model. If there's an issue with the input data, it could potentially cause a `JSONDecodeError`.
Lastly, please ensure that your Python environment has all the necessary dependencies installed and up to date. The `ChatOllama` model has several dependencies, as mentioned in the context above.
Please let me know if updating LangChain and checking the input data and dependencies help resolve the issue. If not, the additional information requested will help us further investigate the problem.
Best regards.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot, the issue still persists after upgrading from langchain 0.0.320 to the latest version, 0.0.321.
🤖
Hello,
Thank you for reaching out and providing detailed information about the issue you're facing. The `JSONDecodeError` you're encountering typically occurs when there's an issue with the JSON data being parsed. In this case, the error appears to be occurring within the `ChatOllama` model.
Here are some steps you can take to debug and potentially resolve the issue:
1. Check the `output` variable in the `format_response_payload` method: this method is responsible for formatting the response from the model. You can add a print statement before the `json.loads(output)` line to see what the `output` variable contains. If it's not a valid JSON string, this could be the source of the error.

```python
def format_response_payload(self, output: bytes) -> str:
    """Formats response"""
    print(output)  # Add this line to debug
    return json.loads(output)["output"]
```
2. Check the `response_payload` variable in the `_call` method: this method is responsible for making the call to the model and receiving the response. You can add a print statement before the `self.content_formatter.format_response_payload(response_payload)` line to see what the `response_payload` variable contains.

```python
def _call(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> str:
    """Call out to an AzureML Managed Online endpoint."""
    _model_kwargs = self.model_kwargs or {}
    request_payload = self.content_formatter._format_request_payload(
        messages, _model_kwargs
    )
    response_payload = self.http_client.call(request_payload, **kwargs)
    print(response_payload)  # Add this line to debug
    generated_text = self.content_formatter.format_response_payload(
        response_payload
    )
    return generated_text
```
3. Check the structure of the JSON data returned by the model. The `ChatOllama` model is expected to return JSON data with the following structure:

```json
{
  "response": "<response text>",
  "done": <boolean>
}
```

If the JSON data returned by the model doesn't match this structure, it could be causing the error. You can check this by printing out the keys of the JSON object in the `format_response_payload` method.

```python
def format_response_payload(self, output: bytes) -> str:
    """Formats response"""
    json_obj = json.loads(output)
    print(json_obj.keys())  # Add this line to debug
    return json_obj["output"]
```
If the "output" key does not exist in the JSON object, you'll need to modify the code to handle this case.
Please try these debugging steps and let me know what you find. If the issue persists, we can explore further solutions.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
🤖
Hello,
Thank you for reaching out and providing detailed information about the issue you're facing. The `JSONDecodeError` you're encountering typically occurs when there's an issue with the JSON data being parsed. In this case, the error could be occurring in the `ChatOllama` model, specifically in the `format_response_payload` method of the `LlamaContentFormatter` class or in the `_call` method of the `AzureMLChatOnlineEndpoint` class.
Here are some steps to debug and resolve the issue:
1. Check the `output` variable in the `format_response_payload` method. Make sure it is a valid JSON string. You can print it out before the `json.loads(output)` line to see what it contains.

```python
def format_response_payload(self, output: bytes) -> str:
    """Formats response"""
    print(output)  # Add this line to debug
    return json.loads(output)["output"]
```
2. Check the `response_payload` variable in the `_call` method. Make sure it is a valid JSON string. You can print it out before the `self.content_formatter.format_response_payload(response_payload)` line to see what it contains.

```python
def _call(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> str:
    """Call out to an AzureML Managed Online endpoint.

    Args:
        messages: The messages in the conversation with the chat model.
        stop: Optional list of stop words to use when generating.

    Returns:
        The string generated by the model.

    Example:
        .. code-block:: python

            response = azureml_model("Tell me a joke.")
    """
    _model_kwargs = self.model_kwargs or {}
    request_payload = self.content_formatter._format_request_payload(
        messages, _model_kwargs
    )
    response_payload = self.http_client.call(request_payload, **kwargs)
    print(response_payload)  # Add this line to debug
    generated_text = self.content_formatter.format_response_payload(
        response_payload
    )
    return generated_text
```
If the JSON string is not valid, you need to check where it is coming from and fix the issue there. It could be that the AzureML Managed Online endpoint is returning an invalid JSON string, or there could be an issue with how the `request_payload` is being created in the `_format_request_payload` method.
If the JSON string is valid but the `json.loads(output)["output"]` line is still throwing an error, it could be that the JSON object does not have an "output" key. You can check this by printing out the keys of the JSON object.
```python
def format_response_payload(self, output: bytes) -> str:
    """Formats response"""
    json_obj = json.loads(output)
    print(json_obj.keys())  # Add this line to debug
    return json_obj["output"]
```
If the "output" key does not exist in the JSON object, you need to modify the code to handle this case.
Please try these debugging steps and let me know if you're able to resolve the issue or if you need further assistance.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
I tried using the ChatOpenAI API, and it worked fine. It seems like there might be a bug in the ChatOllama API. Here is the code I used with the ChatOpenAI API:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt_text = """You are an assistant tasked with summarizing tables and text. \
Give a concise summary of the table or text. Table or text chunk: {element} """
prompt = ChatPromptTemplate.from_template(prompt_text)
model = ChatOpenAI()
summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()

# Apply to text (text_elements is produced earlier in the cookbook's partitioning step)
texts = [i.text for i in text_elements if i.text != ""]
text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5})
```
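For comparison, the `ChatOllama` variant that triggers the error presumably looks something like the sketch below (based on the cookbook; the model tag `llama2:13b-chat` is an assumption, substitute whichever model you have pulled):

```python
from langchain.chat_models import ChatOllama
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "You are an assistant tasked with summarizing tables and text. "
    "Give a concise summary of the table or text. Table or text chunk: {element}"
)
# Assumption: a locally pulled model tag; swap in the one you actually use.
model = ChatOllama(model="llama2:13b-chat", temperature=0)
summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()

print(summarize_chain.invoke("The quick brown fox jumps over the lazy dog."))
```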
Hello, I am using code similar to yours, but at runtime it keeps failing to reach Hugging Face to download model files, and when it does succeed the run takes a very long time. How can I solve this problem? The error is: urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /microsoft/table-transformer-structure-recognition/resolve/main/config.json (Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)')))
In case people are struggling with `JSONDecodeError: Expecting value: line 1 column 1 (char 0)`: in my case the issue was that the Ollama API was returning a response that started with a space before the opening brace (i.e. ` {` rather than `{`). All I had to do was call `.strip()` on the response before attempting to JSON-decode it.
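For anyone wanting to apply the same workaround, a minimal sketch (`raw_response` is a hypothetical stand-in for whatever string your code receives back from the endpoint):

```python
import json


def decode_response(raw_response: str) -> dict:
    """Strip stray leading/trailing whitespace before JSON-decoding."""
    return json.loads(raw_response.strip())


# Hypothetical example body with a leading space, as described above.
print(decode_response(' {"response": "Hello!", "done": true}'))
```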
System Info
Python Version: 3.10.4
LangChain Version: 0.0.320
OS: Ubuntu 18.04
Who can help?
@hwchase17 @agola11
Information
Related Components
Reproduction
When running the official cookbook code...
The following error was returned:
After attempting:
Returned the same error as before.
It seems that the error is occurring in the “ChatOllama model”.
Expected behavior
Attempting to reproduce the effects in the Cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb.