langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

/langchain_experimental/llms/ollama_functions.py", line 400, in _generate raise ValueError( ValueError: 'llama3' did not respond with valid JSON. #23156

Open baskargopinath opened 1 month ago

baskargopinath commented 1 month ago

Checked other resources

Example Code

from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from typing import Optional
import json

# Schema for structured response
class AuditorOpinion(BaseModel):
    opinion: Optional[str] = Field(
        None,
        description="The auditor's opinion on the financial statements. Values are: 'Unqualified Opinion', "
                    "'Qualified Opinion', 'Adverse Opinion', 'Disclaimer of Opinion'."
    )

def load_markdown_file(file_path):
    with open(file_path, 'r') as file:
        return file.read()

path = "data/auditor_opinion_1.md"
markdown_text = load_markdown_file(path)

# Prompt template
prompt = PromptTemplate.from_template(
"""
what is the auditor's opinion

Human: {question}
AI: """
)

# Chain
llm = OllamaFunctions(model="llama3", format="json", temperature=0)
structured_llm = llm.with_structured_output(AuditorOpinion)
chain = prompt | structured_llm
alex = chain.invoke(markdown_text)

response_dict = alex.dict()

# Serialize the dictionary to a JSON string with indentation for readability
readable_json = json.dumps(response_dict, indent=2, ensure_ascii=False)

# Print the readable JSON
print(readable_json)

Error Message and Stack Trace (if applicable)

langchain_experimental/llms/ollama_functions.py", line 400, in _generate
    raise ValueError(
ValueError: 'llama3' did not respond with valid JSON. 

Description

Trying to get structured output from markdown text using `with_structured_output`.
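For context on what the error means: the experimental wrapper parses the model's raw chat reply as JSON and raises this ValueError when parsing fails, so an empty reply is enough to trigger it. A minimal stdlib-only sketch of that check (the helper name `parse_tool_reply` is hypothetical, not the library's API):

```python
import json


def parse_tool_reply(chat_generation_content: str, model: str = "llama3") -> dict:
    """Approximates the check around line 400 of ollama_functions.py:
    the raw reply must parse as JSON, otherwise a ValueError is raised."""
    try:
        return json.loads(chat_generation_content)
    except json.JSONDecodeError:
        raise ValueError(f"'{model}' did not respond with valid JSON.")


# An empty reply, as seen in the debugging output below, fails the parse:
try:
    parse_tool_reply("")
except ValueError as e:
    print(e)  # 'llama3' did not respond with valid JSON.
```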

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27) [Clang 15.0.0 (clang-1500.3.9.4)]

Package Information

langchain_core: 0.2.9
langchain: 0.2.5
langchain_community: 0.2.5
langsmith: 0.1.79
langchain_experimental: 0.0.61
langchain_google_genai: 1.0.5
langchain_google_vertexai: 1.0.4
langchain_mistralai: 0.1.8
langchain_openai: 0.1.8
langchain_text_splitters: 0.2.0
langchainhub: 0.1.16

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langgraph
langserve

chaehonglee commented 1 month ago

also hitting this

zs856 commented 4 weeks ago

Same here.
I just did some debugging. The `_generate` method seems to always return an empty string. The following code block is from around line 398 of langchain_experimental/llms/ollama_functions.py:

system_message = system_message_prompt_template.format(
    tools=json.dumps(functions, indent=2)
)
print(f"{messages=}")
response_message = super()._generate(
    [system_message] + messages, stop=stop, run_manager=run_manager, **kwargs
)
print(f"{response_message=}")
chat_generation_content = response_message.generations[0].text
print(f"{chat_generation_content=}")
messages=[HumanMessage(content='what is the weather in shenzhen', id='1a5b3d30-aa81-4fe4-acf4-4dec3c7b60f7'), AIMessage(content='', id='run-3d3484d6-f615-42a1-83c3-57da16dde58f-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in shenzhen'}, 'id': 'call_b858f03795424bee96f12ac819572eb2'}]), ToolMessage(content='[{"url": "https://world-weather.info/forecast/china/shenzhen/june-2024/", "content": "Detailed \\u26a1 Shenzhen Weather Forecast for June 2024 - day/night \\ud83c\\udf21\\ufe0f temperatures, precipitations - World-Weather.info. Add the current city. Search. Weather; Archive; Widgets \\u00b0F. World; China; Guangdong; Weather in Shenzhen; Weather in Shenzhen in June 2024. ... 24 +88\\u00b0 +82\\u00b0 25 +86\\u00b0 +82\\u00b0 26 ..."}]', name='tavily_search_results_json', id='1c92f82d-4e5d-4880-993c-c7c0967c85e4', tool_call_id='call_b858f03795424bee96f12ac819572eb2')]
response_message=ChatResult(generations=[ChatGeneration(generation_info={'model': 'qwen2:7b-instruct', 'created_at': '2024-06-24T05:57:25.530385494Z', 'message': {'role': 'assistant', 'content': ''}, 'done': True, 'total_duration': 10381721517, 'load_duration': 884395, 'prompt_eval_count': 164, 'prompt_eval_duration': 10253678000, 'eval_count': 1, 'eval_duration': 49000}, message=AIMessage(content=''))], llm_output=None)
chat_generation_content=''

Error: ValueError: 'qwen2:7b-instruct' did not respond with valid JSON. Please try again. Response:
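Since the failure is an empty reply rather than malformed JSON, one possible workaround is to re-invoke the model when the reply comes back empty or unparseable, instead of failing on the first attempt. A minimal stdlib-only sketch (the `parse_with_retry` helper and its `generate` callback are hypothetical, not part of the library):

```python
import json


def parse_with_retry(generate, max_retries: int = 2) -> dict:
    """Call `generate()` (a zero-argument callable returning the model's raw
    reply) up to max_retries + 1 times, returning the first reply that parses
    as JSON; raise ValueError if none does."""
    last = ""
    for _ in range(max_retries + 1):
        last = generate()
        if last.strip():
            try:
                return json.loads(last)
            except json.JSONDecodeError:
                pass  # non-JSON reply: fall through and retry
    raise ValueError(f"model did not respond with valid JSON. Response: {last!r}")


# Simulated model that returns '' on the first call, then valid JSON:
replies = iter(["", '{"opinion": "Unqualified Opinion"}'])
print(parse_with_retry(lambda: next(replies)))  # → {'opinion': 'Unqualified Opinion'}
```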