langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
92.51k stars 14.81k forks

Issue: langchain agent doesn't work with the new JSON mode of gpt-4-1106-preview #13003

Closed fisipro2 closed 5 months ago

fisipro2 commented 10 months ago

Issue you'd like to raise.

When using the new GPT JSON mode by setting `response_format={"type": "json_object"}`, the LangChain agent fails to parse the OpenAI output. Is there any plan to support that?

Suggestion:

No response

dosubot[bot] commented 10 months ago

🤖

Hello @fisipro2,

Great to see you again here in the LangChain repository! I hope you're doing well. Looking forward to assisting you with your issue.

It seems like you've got this under control, if you want help or have specific questions, let me know what I can do for you!

bhargav-11 commented 10 months ago

bump

shariqanwar20 commented 10 months ago

bump

GhaithDek commented 10 months ago

looks like we have a better answer here: https://github.com/langchain-ai/langchain/issues/12953#issuecomment-1796524810 or here: https://js.langchain.com/docs/integrations/chat/openai

zedle commented 10 months ago

> looks like we have a better answer here: #12953 (comment) or here: https://js.langchain.com/docs/integrations/chat/openai

The kwarg method works, but breaks agents.

ForrestTrepte commented 9 months ago

In this notebook under the JSON mode section, the response format is set in Python like this:

chat = ChatOpenAI(model="gpt-3.5-turbo-1106").bind(
    response_format={"type": "json_object"}
)

It seems to work. If I removed "Return a JSON list." from the SystemMessage, I received an error confirming that the response_format was used.
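As a hedged illustration of the pattern above: the `response_format` payload is what gets bound to the model, and with JSON mode on, the raw model output is guaranteed to be valid JSON. The sample reply below stands in for a real API response (no API call is made here), and the prompt must still mention JSON somewhere, which is why removing "Return a JSON list." triggers an error.

```python
import json

# The payload bound to the model via .bind() in the snippet above.
response_format = {"type": "json_object"}

# With JSON mode enabled, the raw output parses directly with json.loads.
# A hypothetical sample reply stands in for the real API call.
sample_reply = '{"colors": ["red", "green", "blue"]}'
data = json.loads(sample_reply)
print(data["colors"])  # ['red', 'green', 'blue']
```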

MichaelPeterJoyce commented 9 months ago

Bumping this!

llm = AzureChatOpenAI(
    model_kwargs={"response_format": {type: "json_object"}},
)

When running this via AgentExecutor, I get the following error:

File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
TypeError: keys must be str, int, float, bool or None, not type

jsfaber commented 9 months ago

Bump.

LizaKing0 commented 9 months ago

> Bumping this!
>
> llm = AzureChatOpenAI(
>     model_kwargs={"response_format": {type: "json_object"}},
> )
>
> When running this via AgentExecutor, I get the following error:
>
> TypeError: keys must be str, int, float, bool or None, not type

This particular case is because you're missing quotes around the `type` key. It should be:

model_kwargs={"response_format": {"type": "json_object"}}
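The traceback above can be reproduced directly with the standard library: with the key unquoted, Python evaluates `type` as the built-in class and uses the class object itself as a dict key, which `json.dumps` rejects when the request payload is serialized. A minimal sketch:

```python
import json

# The failing call passed the built-in `type` (a class object) as a dict key.
bad = {"response_format": {type: "json_object"}}
try:
    json.dumps(bad)
except TypeError as e:
    print(e)  # keys must be str, int, float, bool or None, not type

# Quoting the key makes the payload serializable.
good = {"response_format": {"type": "json_object"}}
print(json.dumps(good))  # {"response_format": {"type": "json_object"}}
```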

aiwalter commented 8 months ago

It's still not working, no? Or is it working for anyone?

cklapperich commented 8 months ago

@aiwalter it seems to be working for me in Python with the latest release, though I haven't tested it with agents yet. :)