Closed lin-calvin closed 6 months ago
🤖
Hey @calvinweb! Good to see you again. I hope you're cracking the code like a pro today. Let's dive into this, shall we?
The `llm.bind(stop=["\nObservation"])` line in the `json_chat` agent is indeed meaningful. It binds the language model to a stop sequence, in this case `"\nObservation"`. This means the language model will stop generating further output when it encounters this sequence.
Even though the `json_chat` agent relies on JSON to format its logic, the stop sequence is not related to the JSON format. It is part of the language model's generation process: the model generates text and, when it encounters the stop sequence, stops generating further text. This is independent of how the generated text is formatted or used afterwards.
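To make this concrete, here is a small self-contained sketch (illustrative only, not LangChain code) of what a stop sequence does: generation is truncated at the first occurrence of the marker, regardless of whether the text before it happens to be JSON.

```python
def apply_stop(text: str, stop: list[str]) -> str:
    """Return text truncated at the earliest stop sequence, if any."""
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

raw = '{"action": "search", "action_input": "weather"}\nObservation: sunny'
print(apply_stop(raw, ["\nObservation"]))
# prints: {"action": "search", "action_input": "weather"}
```

The JSON payload survives intact; only the trailing `Observation:` text (which the agent fills in itself after running the tool) is cut off.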
Here is the relevant code:

```python
llm_with_stop = llm.bind(stop=["\nObservation"])
```
In this line, `llm` is the language model, and `bind` is a method that binds the language model to a stop sequence. The stop sequence is passed as a list of strings; in this case, `["\nObservation"]`.
In the context of language models, the `stop` parameter specifies a list of strings that the model should treat as end-of-text markers. When the model encounters any of these strings during generation, it stops generating further text.
This is evident from the following code snippet:

```python
if isinstance(self.llm, BaseLanguageModel):
    return self.llm.generate_prompt(
        prompts,
        stop,
        callbacks=callbacks,
        **self.llm_kwargs,
    )
else:
    results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
        cast(List, prompts), {"callbacks": callbacks}
    )
```
Here, if `self.llm` is an instance of `BaseLanguageModel`, the `stop` parameter is passed directly to the `generate_prompt` method. Otherwise, `stop` is bound to `self.llm` before the `batch` method is called.
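The dispatch pattern above can be sketched with stand-in classes (the names below are illustrative, not LangChain's): legacy models take `stop` as a direct argument, while runnable-style models get it bound first.

```python
from typing import List, Optional

class LegacyLLM:
    """Stand-in for a BaseLanguageModel-style class: stop passed directly."""
    def generate_prompt(self, prompts: List[str], stop: Optional[List[str]]) -> List[str]:
        return [f"{p} [stop={stop}]" for p in prompts]

class RunnableLLM:
    """Stand-in for a runnable-style model: stop is bound before batching."""
    def __init__(self, stop: Optional[List[str]] = None):
        self.stop = stop
    def bind(self, stop: Optional[List[str]]) -> "RunnableLLM":
        return RunnableLLM(stop=stop)
    def batch(self, prompts: List[str]) -> List[str]:
        return [f"{p} [stop={self.stop}]" for p in prompts]

def run(llm, prompts, stop):
    # Mirrors the isinstance branch shown in the snippet above.
    if isinstance(llm, LegacyLLM):
        return llm.generate_prompt(prompts, stop)
    return llm.bind(stop=stop).batch(prompts)

print(run(LegacyLLM(), ["hi"], None))    # prints: ['hi [stop=None]']
print(run(RunnableLLM(), ["hi"], None))  # prints: ['hi [stop=None]']
```

Both paths end up passing the same `stop` value to the underlying model; only the plumbing differs.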
You can find more information about this in the LangChain codebase.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
This is also breaking certain LLMs (e.g. using Bedrock with Llama2 or Titan, these preconfigured stop sequences aren't allowed):

```
Stop sequence key name for meta is not supported.
```
> Stop sequence key name for meta is not supported

I am also getting this error using the Bedrock meta model. Could you let me know if you have any solution? @steve-carpenter
Pass `stop_sequence=False` when constructing the agent.
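A self-contained miniature of what that toggle does (names are illustrative, not LangChain's API): with `stop_sequence=False` the model is used as-is, so no custom stop sequence is ever sent to providers like Bedrock Llama2/Titan that reject them; otherwise the default `"\nObservation"` stop is bound.

```python
def build_agent_llm(llm_call, stop_sequence=True):
    """Wrap a model callable, optionally binding a stop sequence."""
    if stop_sequence is False:
        # No stop bound: avoids provider errors about unsupported stop sequences.
        return llm_call
    stops = ["\nObservation"] if stop_sequence is True else stop_sequence
    def bound(prompt):
        out = llm_call(prompt)
        for s in stops:
            i = out.find(s)
            if i != -1:
                out = out[:i]
        return out
    return bound
```

For reference, recent LangChain versions expose a `stop_sequence` argument on `create_json_chat_agent` for this purpose, accepting either a boolean or a custom list of stop strings.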
How can I determine which models support stop sequences and which don't? @calvinweb
I'm using crewai and most of the models failed. Only the Claude model worked for me; the others gave me related errors:

```
[stop_sequences] is not permitted
Malformed input request: : string [Observation] does not match pattern ^(\|+|User:)
```
This worked for me:

```python
from langchain_community.chat_models import BedrockChat

llm = BedrockChat(model_id="anthropic.claude-3-haiku-20240307-v1:0")
```
Checked other resources
Example Code
-
Description
https://github.com/langchain-ai/langchain/blob/ef75bb63ce5cc4fb76ba1631ebe582f56103ab7e/libs/langchain/langchain/agents/json_chat/base.py#L151 this seems unnecessary because `json_chat` depends on JSON to work, not on this sort of completion stop.
System Info
-
Related Components