langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com

SqliteCache fails with ChatOpenAI.with_structured_output(method="json_schema") #27264

Open · bjschnei opened this issue 1 month ago

bjschnei commented 1 month ago


Example Code


from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

from dotenv import load_dotenv

_ = load_dotenv()

class Limerick(BaseModel):
    limerick: str

def main():
    set_llm_cache(SQLiteCache())  # SQLiteCache defaults to a local .langchain.db file
    llm = ChatOpenAI(model_name="gpt-4o-2024-08-06")
    structured_llm = llm.with_structured_output(Limerick, method="json_schema")
    for _ in range(2):
        # First call populates the cache; second call reads it back and raises.
        result = structured_llm.invoke([SystemMessage(content="Write a limerick")])
        print(result)

if __name__ == "__main__":
    main()

Error Message and Stack Trace (if applicable)

File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3022, in invoke input = context.run(step.invoke, input, config, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5354, in invoke return self.bound.invoke( ^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke self.generate_prompt( File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate raise e File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate self._generate_with_cache( File "/usr/local/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 818, in _generate_with_cache return ChatResult(generations=cache_val) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/pydantic/main.py", line 212, in init validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatResult generations.0 Input should be a valid dictionary or instance of ChatGeneration [type=model_type, input_value=Generation(text='{"lc": 1...id_tool_calls": []}}}}'), input_type=Generation] For further information visit https://errors.pydantic.dev/2.9/v/model_type

Description

Trying to use ChatOpenAI(model_name="gpt-4o-2024-08-06") with method="json_schema". The first invocation (cache miss) succeeds; the failure occurs on the second invocation, when the cached generation is read back. The issue is not seen with method="json_mode".
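As a point of comparison, here is the repro's invocation with only the method argument changed, which per the report avoids the bug. This is a hedged sketch rather than a verified fix: with method="json_mode" the schema is not enforced server-side, so the prompt must spell out the expected JSON shape (OpenAI's JSON mode also requires the word "JSON" to appear in the messages).

# Drop-in variant of the repro above; reuses llm, Limerick, and SystemMessage.
structured_llm = llm.with_structured_output(Limerick, method="json_mode")
result = structured_llm.invoke(
    [SystemMessage(content='Write a limerick. Respond in JSON with a single key "limerick".')]
)
print(result)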

System Info

System Information

OS: Linux
OS Version: #1 SMP PREEMPT_DYNAMIC Mon Aug 12 08:48:58 UTC 2024
Python Version: 3.12.7 (main, Oct 1 2024, 22:28:49) [GCC 12.2.0]

Package Information

langchain_core: 0.3.10
langchain: 0.3.3
langchain_community: 0.3.2
langsmith: 0.1.133
langchain_openai: 0.2.2
langchain_text_splitters: 0.3.0
langgraph: 0.2.35

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.10.9
async-timeout: Installed. No version info available.
dataclasses-json: 0.6.7
httpx: 0.27.2
jsonpatch: 1.33
langgraph-checkpoint: 2.0.1
numpy: 1.26.4
openai: 1.51.2
orjson: 3.10.7
packaging: 24.1
pydantic: 2.9.2
pydantic-settings: 2.5.2
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.35
tenacity: 8.5.0
tiktoken: 0.8.0
typing-extensions: 4.12.2

bjschnei commented 1 month ago

I think the root cause of this may be that the parsed Pydantic object is stored on the message (under additional_kwargs["parsed"]) but Pydantic types are not Serializable:

See below the part that shows: {"parsed": {"lc": 1, "type": "not_implemented"

{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "output", "ChatGeneration"], "kwargs": {"text": "{\"limerick\":\"There once was a man from Peru,\nWho dreamt he was eating his shoe.\nHe awoke with a fright,\nIn the middle of the night,\nTo find his sock in a stew!\"}", "generation_info": {"finish_reason": "stop", "logprobs": null}, "type": "ChatGeneration", "message": {"lc": 1, "type": "constructor", "id": ["langchain", "schema", "messages", "AIMessage"], "kwargs": {"content": "{\"limerick\":\"There once was a man from Peru,\nWho dreamt he was eating his shoe.\nHe awoke with a fright,\nIn the middle of the night,\nTo find his sock in a stew!\"}", "additional_kwargs": {"parsed": {"lc": 1, "type": "not_implemented", "id": ["main", "Limerick"], "repr": "Limerick(limerick='There once was a man from Peru,\nWho dreamt he was eating his shoe.\nHe awoke with a fright,\nIn the middle of the night,\nTo find his sock in a stew!')"}

juuunoz commented 1 month ago

Hi, my name is Juno. I'm working with a team at UTSC as part of a group project to contribute to LangChain. We're interested in this issue and were hoping to take a crack at it.

FarhanChowdhury248 commented 1 month ago

@cbornet I am working with @juuunoz on this matter. We investigated this issue a bit and found that it stems from the fact that Pydantic types do not inherit from the Serializable class, so we run into issues when trying to use langchain_core.load.load.Reviver to bring these values out of the cache and back into the Pydantic type they were originally defined as (see the sketch below).
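To illustrate (a sketch against the langchain_core 0.3.x versions listed above): loads() raises on "not_implemented" placeholders, which is what lands SQLAlchemyCache.lookup in its exception handler.

from langchain_core.load import loads

# A "not_implemented" placeholder shaped like the one nested in the cached blob.
blob = '{"lc": 1, "type": "not_implemented", "id": ["main", "Limerick"], "repr": "Limerick(...)"}'

try:
    loads(blob)
except NotImplementedError as err:
    # The Reviver refuses to revive placeholders; the cache catches this and
    # falls back to wrapping the raw text in a plain Generation, which then
    # fails ChatResult validation (the error reported in this issue).
    print(err)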

We saw that you have done a lot of work on the load.py file that handles a lot of the functionality stated above, and so we wanted to get your opinion on how we should proceed with this issue. We came up with a few ideas:

Do you have any thoughts on which changes should be considered and how we may approach them? Also, is there anyone else that we should be consulting regarding this matter?

FarhanChowdhury248 commented 1 month ago

@baskaryan You seem to have worked with langchain_community/cache.py. Do you know why many of the caches do not support ChatGeneration? Specifically regarding this issue, I am looking at the following code in SQLAlchemyCache.lookup:

  try:
      return [loads(row[0]) for row in rows]
  except Exception:
      logger.warning(
          "Retrieving a cache value that could not be deserialized "
          "properly. This is likely due to the cache being in an "
          "older format. Please recreate your cache to avoid this "
          "error."
      )
      # In a previous life we stored the raw text directly
      # in the table, so assume it's in that format.
      return [Generation(text=row[0]) for row in rows]

The exception handler returns a list of plain Generation objects for legacy reasons. Is it feasible to return ChatGeneration objects instead?
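For concreteness, a hypothetical sketch of that change (not a confirmed fix: it would let ChatResult(generations=...) validate for chat models, though the structured "parsed" payload would still be lost):

  # AIMessage comes from langchain_core.messages,
  # ChatGeneration from langchain_core.outputs.
  try:
      return [loads(row[0]) for row in rows]
  except Exception:
      logger.warning(
          "Retrieving a cache value that could not be deserialized "
          "properly. This is likely due to the cache being in an "
          "older format. Please recreate your cache to avoid this "
          "error."
      )
      # Hypothetical: wrap the raw text in an AIMessage inside a
      # ChatGeneration so ChatResult validation passes for chat models.
      return [
          ChatGeneration(message=AIMessage(content=row[0])) for row in rows
      ]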