assafelovic / gpt-researcher

LLM-based autonomous agent that does comprehensive online research on any given topic
https://gptr.dev
Apache License 2.0

Error: ChatOpenAI model none is not an allowed value #801

Open · abrebion opened this issue 2 weeks ago

abrebion commented 2 weeks ago

Describe the bug

Hi, I'm testing the multi-agents example from https://docs.gptr.dev/docs/gpt-researcher/langgraph, but I get this error message in LangGraph Studio after the initial research step. It looks like the Editor/Planner agent fails with:

ValidationError: 1 validation error for ChatOpenAI
model
  none is not an allowed value (type=type_error.none.not_allowed)
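For reference, this is the error pydantic raises whenever ChatOpenAI is constructed with model=None, so something upstream is passing an empty model name. A minimal sketch that reproduces it (assuming langchain-openai is installed):

```python
# Minimal reproduction of the reported error: ChatOpenAI's pydantic model
# rejects None for the "model" field before anything else runs.
from langchain_openai import ChatOpenAI

ChatOpenAI(model=None)
# pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
# model
#   none is not an allowed value (type=type_error.none.not_allowed)
```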


danieldekay commented 2 weeks ago

Can you be more specific about where this error comes from? Do you have a log file or trace to share?

abrebion commented 2 weeks ago

Yes, here's a screenshot and the log of the error:

[screenshot of the error]


langgraph-api-1       | EDITOR: Planning an outline layout based on initial research...
langgraph-api-1       | 2024-08-26T11:41:01.652595Z [info     ] GET /assistants/fe096781-5601-53d2-b2f6-0d3403f7e9ca/schemas 200 2ms [langgraph_api.server] api_revision=20fd12a api_variant=licensed filename=runners.py func_name=run latency_ms=2 lineno=118 method=GET path=/assistants/fe096781-5601-53d2-b2f6-0d3403f7e9ca/schemas path_params={'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca'} route=/assistants/{assistant_id}/schemas status=200
langgraph-api-1       | 2024-08-26T11:41:01.690703Z [info     ] Run Error                      [langgraph_api.shared.stream] api_revision=20fd12a api_variant=licensed filename=stream.py func_name=to_sse lineno=270
langgraph-api-1       | Traceback (most recent call last):
langgraph-api-1       |   File "/api/langgraph_api/shared/stream.py", line 260, in to_sse
langgraph-api-1       |   File "/api/langgraph_api/shared/stream.py", line 254, in astream_state
langgraph-api-1       |   File "/api/langgraph_api/shared/stream.py", line 128, in astream_state
langgraph-api-1       |   File "/api/langgraph_api/shared/asyncio.py", line 38, in wait_if_not_done
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1247, in astream_events
langgraph-api-1       |     async for event in event_stream:
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py", line 1005, in _astream_events_implementation_v2
langgraph-api-1       |     await task
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py", line 965, in consume_astream
langgraph-api-1       |     async for _ in event_streamer.tap_output_aiter(run_id, stream):
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py", line 204, in tap_output_aiter
langgraph-api-1       |     async for chunk in output:
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1270, in astream
langgraph-api-1       |     _panic_or_proceed(all_futures, loop.step, asyncio.TimeoutError)
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1450, in _panic_or_proceed
langgraph-api-1       |     raise exc
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 76, in arun_with_retry
langgraph-api-1       |     async for _ in task.proc.astream(task.input, task.config):
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3286, in astream
langgraph-api-1       |     async for chunk in self.atransform(input_aiter(), config, **kwargs):
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3269, in atransform
langgraph-api-1       |     async for chunk in self._atransform_stream_with_config(
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2161, in _atransform_stream_with_config
langgraph-api-1       |     chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
langgraph-api-1       |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py", line 181, in tap_output_aiter
langgraph-api-1       |     first = await py_anext(output, default=sentinel)
langgraph-api-1       |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 78, in anext_impl
langgraph-api-1       |     return await __anext__(iterator)
langgraph-api-1       |            ^^^^^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3239, in _atransform
langgraph-api-1       |     async for output in final_pipeline:
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1314, in atransform
langgraph-api-1       |     async for ichunk in input:
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1332, in atransform
langgraph-api-1       |     async for output in self.astream(final, config, **kwargs):
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 875, in astream
langgraph-api-1       |     yield await self.ainvoke(input, config, **kwargs)
langgraph-api-1       |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langgraph/utils.py", line 124, in ainvoke
langgraph-api-1       |     ret = await asyncio.create_task(
langgraph-api-1       |           ^^^^^^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/multi_agents/agents/editor.py", line 58, in plan_research
langgraph-api-1       |     response = await call_model(prompt=prompt, model=task.get("model"), response_format="json", api_key=self.headers.get("openai_api_key"))
langgraph-api-1       |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/multi_agents/agents/utils/llms.py", line 14, in call_model
langgraph-api-1       |     response = ChatOpenAI(model=model, max_retries=max_retries, model_kwargs=optional_params, api_key=api_key).invoke(lc_messages).content
langgraph-api-1       |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 113, in __init__
langgraph-api-1       |     super().__init__(*args, **kwargs)
langgraph-api-1       |   File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
langgraph-api-1       |     raise validation_error
langgraph-api-1       | pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
langgraph-api-1       | model
langgraph-api-1       |   none is not an allowed value (type=type_error.none.not_allowed)
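The last two application frames point at the cause: in multi_agents/agents/editor.py, plan_research passes model=task.get("model") into call_model, and that lookup is returning None, so ChatOpenAI gets built with model=None. A hypothetical guard along these lines would fall back to a default instead of crashing (a sketch, not the project's actual code; the default model name is an assumption):

```python
# Hypothetical, simplified version of call_model from
# multi_agents/agents/utils/llms.py with a guard for a missing model name.
from langchain_openai import ChatOpenAI

DEFAULT_MODEL = "gpt-4o"  # assumption: any chat model your API key can access

def call_model(prompt, model=None, max_retries=2, api_key=None):
    if not model:
        # task.get("model") returned None: the task config has no "model" key
        model = DEFAULT_MODEL
    chat = ChatOpenAI(model=model, max_retries=max_retries, api_key=api_key)
    return chat.invoke(prompt).content
```
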
danieldekay commented 2 weeks ago

Is your .env file correct and in the right place?

ElishaKay commented 2 weeks ago

Plus 1 on @danieldekay's recommendation

Welcome @abrebion! Try this:

Add a .env file to the root folder with these values:

OPENAI_API_KEY=
TAVILY_API_KEY=

Get the keys here:
https://app.tavily.com/sign-in
https://platform.openai.com/api-keys

Then restart docker with: docker compose up --build
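If you want to confirm the keys actually made it into the running environment, here is a quick hypothetical check (run it inside the container, or wherever the app starts):

```python
# Sanity check that the .env values were actually loaded.
import os

for key in ("OPENAI_API_KEY", "TAVILY_API_KEY"):
    print(key, "is set" if os.getenv(key) else "is MISSING")
```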

May the force be with you 🙏

abrebion commented 2 weeks ago

Hi, I already have the .env populated in the multi_agents subfolder (same level as langgraph.json) with the 2 keys:

OPENAI_API_KEY=
TAVILY_API_KEY=

Actually, the first step (initial research) works, as you can see in the screenshot.
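That detail narrows things down: judging by the traceback, the editor step reads its model name from the task config (model=task.get("model")), so it's worth checking that the config the multi-agents example loads actually defines one. A hypothetical check, assuming the example reads a task.json next to langgraph.json (adjust the path if yours differs):

```python
# Hypothetical check: does the task config define the "model" the editor needs?
import json

with open("multi_agents/task.json") as f:  # assumed path; adjust to your setup
    task = json.load(f)

print("model =", task.get("model"))  # None here would reproduce the crash
```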

ElishaKay commented 2 weeks ago

@abrebion

Looks like you're running langgraph-api as a server.

Would the gpt-researcher full-stack app meet your needs instead?

Try this:

Add a .env file to the root folder with these values:

OPENAI_API_KEY=
TAVILY_API_KEY=

Get the keys here:
https://app.tavily.com/sign-in
https://platform.openai.com/api-keys

Then restart docker with: docker compose up --build

Visit localhost:3000

May the force be with you 🙏