cohere-ai / notebooks

Code examples and jupyter notebooks for the Cohere Platform

[UnprocessableEntityError]: status_code: 422, body: data=None message='No valid tool call or response generated' #237

Closed — Zephyruss1 closed this 1 week ago

Zephyruss1 commented 2 weeks ago

Issue Type

Bug Report

Notebook Name

No response

Description

I'm trying to run the Cohere model command-r-plus with with_structured_output on LangChain, but I'm receiving the error UnprocessableEntityError: status_code: 422, body: data=None message='No valid tool call or response generated'

So I think the problem comes from langchain_cohere.chat_models.ChatCohere --> def with_structured_output

I expected the invoke call to succeed.

Steps to Reproduce (for bugs)

No response

System Information

Additional Context

Code Snippet:

import getpass
import os
from dotenv import load_dotenv

from typing import List  # List comes from typing; pydantic does not export it

from pydantic import BaseModel, Field

from langchain_core.prompts import ChatPromptTemplate
from langchain_cohere import ChatCohere

dotenv = load_dotenv(r"/home/zephyrus/Desktop/LangChain/project.env")

def _set_env(var: str):
    # Prompt for the variable only if it is not already set (e.g. via project.env).
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("COHERE_API_KEY")

os.environ["LANGCHAIN_TRACKING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "Plan and Execute"

class Plan(BaseModel):
    """Plan to follow in future"""
    steps: List[str] = Field(
        description="different steps to follow, should be in sorted order"
    )

planner_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """For the given objective, come up with a simple step by step plan. \
This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps. \
The result of the final step should be the final answer. Make sure that each step has all the information needed - do not skip steps.""",
        ),
        ("placeholder", "{messages}"),
    ]
)
planner = planner_prompt | ChatCohere(
    model="command-r-plus"
).with_structured_output(Plan)

planner.invoke(
    {
        "messages": [
            ("user", "what is the hometown of the current Australia open winner?")
        ]
    }
)

Output:

UnprocessableEntityError: status_code: 422, body: data=None message='No valid tool call or response generated'

Traceback:

UnprocessableEntityError                  Traceback (most recent call last)
Cell In[25], line 1
----> 1 planner.invoke(
      2     {
      3         "messages": [
      4             ("user", "what is the hometown of the current Australia open winner?")
      5         ]
      6     }
      7 )

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_core/runnables/base.py:3024, in RunnableSequence.invoke(self, input, config, **kwargs)
   3022             input = context.run(step.invoke, input, config, **kwargs)
   3023         else:
-> 3024             input = context.run(step.invoke, input, config)
   3025 # finish the root run
   3026 except BaseException as e:

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_core/runnables/base.py:5354, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   5348 def invoke(
   5349     self,
   5350     input: Input,
   5351     config: Optional[RunnableConfig] = None,
   5352     **kwargs: Optional[Any],
   5353 ) -> Output:
-> 5354     return self.bound.invoke(
   5355         input,
   5356         self._merge_configs(config),
   5357         **{**self.kwargs, **kwargs},
   5358     )

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    275 def invoke(
    276     self,
    277     input: LanguageModelInput,
   (...)
    281     **kwargs: Any,
    282 ) -> BaseMessage:
    283     config = ensure_config(config)
    284     return cast(
    285         ChatGeneration,
--> 286         self.generate_prompt(
    287             [self._convert_input(input)],
    288             stop=stop,
    289             callbacks=config.get("callbacks"),
    290             tags=config.get("tags"),
    291             metadata=config.get("metadata"),
    292             run_name=config.get("run_name"),
    293             run_id=config.pop("run_id", None),
    294             **kwargs,
    295         ).generations[0][0],
    296     ).message

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:786, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    778 def generate_prompt(
    779     self,
    780     prompts: list[PromptValue],
   (...)
    783     **kwargs: Any,
    784 ) -> LLMResult:
    785     prompt_messages = [p.to_messages() for p in prompts]
--> 786     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    641         if run_managers:
    642             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643         raise e
    644 flattened_outputs = [
    645     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    646     for res in results
    647 ]
    648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    630 for i, m in enumerate(messages):
    631     try:
    632         results.append(
--> 633             self._generate_with_cache(
    634                 m,
    635                 stop=stop,
    636                 run_manager=run_managers[i] if run_managers else None,
    637                 **kwargs,
    638             )
    639         )
    640     except BaseException as e:
    641         if run_managers:

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:851, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    849 else:
    850     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851         result = self._generate(
    852             messages, stop=stop, run_manager=run_manager, **kwargs
    853         )
    854     else:
    855         result = self._generate(messages, stop=stop, **kwargs)

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/langchain_cohere/chat_models.py:597, in ChatCohere._generate(self, messages, stop, run_manager, **kwargs)
    592     return generate_from_stream(stream_iter)
    594 request = get_cohere_chat_request(
    595     messages, stop_sequences=stop, **self._default_params, **kwargs
    596 )
--> 597 response = self.client.chat(**request)
    599 generation_info = self._get_generation_info(response)
    600 if "tool_calls" in generation_info:

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/cohere/client.py:103, in experimental_kwarg_decorator.<locals>._wrapped(*args, **kwargs)
     98 if check_kwarg(deprecated_kwarg, kwargs):
     99     logger.warning(
    100         f"The `{deprecated_kwarg}` parameter is an experimental feature and may change in future releases.\n"
    101         "To suppress this warning, set `log_warning_experimental_features=False` when initializing the client."
    102     )
--> 103 return func(*args, **kwargs)

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/cohere/client.py:35, in validate_args.<locals>._wrapped(*args, **kwargs)
     33 def _wrapped(*args: typing.Any, **kwargs: typing.Any) -> typing.Any:
     34     check_fn(*args, **kwargs)
---> 35     return method(*args, **kwargs)

File ~/Desktop/LangChain/project_env/lib/python3.12/site-packages/cohere/base_client.py:1101, in BaseCohere.chat(self, message, accepts, model, preamble, chat_history, conversation_id, prompt_truncation, connectors, search_queries_only, documents, citation_quality, temperature, max_tokens, max_input_tokens, k, p, seed, stop_sequences, frequency_penalty, presence_penalty, raw_prompting, return_prompt, tools, tool_results, force_single_step, response_format, safety_mode, request_options)
   1091     raise NotFoundError(
   1092         typing.cast(
   1093             typing.Optional[typing.Any],
   (...)
   1098         )
   1099     )
   1100 if _response.status_code == 422:
-> 1101     raise UnprocessableEntityError(
   1102         typing.cast(
   1103             UnprocessableEntityErrorBody,
   1104             construct_type(
   1105                 type_=UnprocessableEntityErrorBody,  # type: ignore
   1106                 object_=_response.json(),
   1107             ),
   1108         )
   1109     )
   1110 if _response.status_code == 429:
   1111     raise TooManyRequestsError(
   1112         typing.cast(
   1113             TooManyRequestsErrorBody,
   (...)
   1118         )
   1119     )
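
For what it's worth, bypassing with_structured_output and binding the schema as a tool directly shows whether the model emits a tool call at all (a sketch reusing the Plan schema and planner_prompt from above; the manual parsing at the end is only illustrative):

# Sketch: skip the with_structured_output wrapper and inspect raw tool calls.
llm = ChatCohere(model="command-r-plus").bind_tools([Plan])

msg = (planner_prompt | llm).invoke(
    {"messages": [("user", "what is the hometown of the current Australia open winner?")]}
)

if msg.tool_calls:
    # tool_calls carry the already-parsed arguments as a dict.
    plan = Plan(**msg.tool_calls[0]["args"])
    print(plan.steps)
else:
    print("No tool call produced; raw content:", msg.content)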
Zephyruss1 commented 2 weeks ago

@ai-yann

ai-yann commented 2 weeks ago

Thanks for the detailed issue.

What notebook were you trying to run?

Is there a specific reason you're using LangChain for this? With the Cohere SDK (v2) it is a pleasant experience :)

We do support LangChain, but unless you're referencing a particular notebook, we should probably tackle that as an issue in langchain-ai/langchain-cohere. And if you go that route, feel free to post the issue link here as well, and I'll take a look in that context.
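
For example, the v2 SDK can request JSON constrained to a schema directly, without the LangChain wrapper. A minimal sketch following the structured-outputs pattern (the schema mirrors the Plan model above; ClientV2 reads COHERE_API_KEY from the environment, and it's worth checking the current docs for the exact response_format shape):

import cohere

co = cohere.ClientV2()

res = co.chat(
    model="command-r-plus",
    messages=[
        {
            "role": "user",
            "content": "Plan the steps to find the hometown of the current Australian Open winner.",
        }
    ],
    response_format={
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {"steps": {"type": "array", "items": {"type": "string"}}},
            "required": ["steps"],
        },
    },
)

print(res.message.content[0].text)  # a JSON string matching the schema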


Zephyruss1 commented 1 week ago

Hello @ai-yann,

I'm using a Jupyter notebook embedded in PyCharm.

I'll try the Cohere SDK v2, thanks for the recommendation. I've also posted this issue at langchain-cohere-issue

Thanks.

ai-yann commented 1 week ago

Moving this to langchain-cohere #105 since this is a LangChain-specific issue. I'll copy over the details there to help track and resolve this.