Closed DhruvCMH closed 2 weeks ago
On further inspection I noticed that, when I run this with the default `gpt-4o-mini` model in debug mode, I see this in the console:
```
[-1:checkpoint] State at the end of step -1:
{'messages': []}
[0:tasks] Starting step 0 with 1 task:
- __start__ -> {'messages': [HumanMessage(content='hi')]}
[0:writes] Finished step 0 with writes to 1 channel:
- messages -> [HumanMessage(content='hi')]
[0:checkpoint] State at the end of step 0:
{'messages': [HumanMessage(content='hi', id='e2b1b41b-7a36-4f13-b186-2f14c6c2e959')]}
[1:tasks] Starting step 1 with 1 task:
- agent -> {'is_last_step': False,
'messages': [HumanMessage(content='hi', id='e2b1b41b-7a36-4f13-b186-2f14c6c2e959')]}
[1:writes] Finished step 1 with writes to 1 channel:
- messages -> [AIMessage(content='Hello there! 🍷 How can I help you today? Are you looking for some delightful wine recommendations, perhaps a tasty recipe to pair with your favorite bottle, or maybe some local happenings around Yountville? Let’s chat!', response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_483d39d857'}, id='run-1e389208-ca9d-40e5-bb83-fa6bf76df051', usage_metadata={'input_tokens': 1155, 'output_tokens': 47, 'total_tokens': 1202})]
[1:checkpoint] State at the end of step 1:
{'messages': [HumanMessage(content='hi', id='e2b1b41b-7a36-4f13-b186-2f14c6c2e959'),
AIMessage(content='Hello there! 🍷 How can I help you today? Are you looking for some delightful wine recommendations, perhaps a tasty recipe to pair with your favorite bottle, or maybe some local happenings around Yountville? Let’s chat!', response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_483d39d857'}, id='run-1e389208-ca9d-40e5-bb83-fa6bf76df051', usage_metadata={'input_tokens': 1155, 'output_tokens': 47, 'total_tokens': 1202})]}
[14/Sep/2024 11:09:57] "GET /stream_reply_from_chatbot/?session_id=4d40c59a-1697-446c-b07e-cdaec1c8ced8&user_message=hi HTTP/1.1" 200 583
```
However, when I run this with the fine-tuned model, I see this:
```
[-1:checkpoint] State at the end of step -1:
{'messages': []}
[0:tasks] Starting step 0 with 1 task:
- __start__ -> {'messages': [HumanMessage(content='hi')]}
[0:writes] Finished step 0 with writes to 1 channel:
- messages -> [HumanMessage(content='hi')]
[0:checkpoint] State at the end of step 0:
{'messages': [HumanMessage(content='hi', id='4409af73-5303-4fec-b5e3-5d55a8bcfc9f')]}
[1:tasks] Starting step 1 with 1 task:
- agent -> {'is_last_step': False,
'messages': [HumanMessage(content='hi', id='4409af73-5303-4fec-b5e3-5d55a8bcfc9f')]}
[14/Sep/2024 10:19:11] "GET /stream_reply_from_chatbot/?session_id=53159df5-7a6b-451b-bad4-0d27ab0f4f04&user_message=hi HTTP/1.1" 200 233
```
It seems that the agent reaches `[1:tasks] Starting step 1 with 1 task:` in both cases, but with the fine-tuned model it is unable to move on to the next step.
Update:
I noticed that removing the trimmer from `messages_filter = custom_filter_messages | trimmer` makes the model work fine. The hang was caused by the trimmer: its `token_counter` was defined for the `gpt-4o-mini` model.
Since I am now using the fine-tuned model, no token counter was available for the new model's name. To fix this, we have to create a new LLM instance of the `gpt-4o-mini` model and pass that as the `token_counter` instead.
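The failure mode and the fix can be sketched in plain Python. All names below are illustrative stand-ins, not the actual tiktoken/LangChain internals: the idea is that a token counter resolves a tokenizer encoding from the model name, a fine-tuned model's name is missing from that mapping, and falling back to a known base model (which is what passing a separate `gpt-4o-mini` instance as `token_counter` achieves) sidesteps the failed lookup.

```python
# Illustrative subset of a model-name -> encoding mapping; the real
# lookup lives inside the tokenizer library, not in user code.
MODEL_TO_ENCODING = {"gpt-4o-mini": "o200k_base"}


def encoding_for_model(model_name: str) -> str:
    """Map a model name to its tokenizer encoding; unknown names raise."""
    try:
        return MODEL_TO_ENCODING[model_name]
    except KeyError:
        raise KeyError(f"Could not map {model_name!r} to an encoding")


def count_tokens(text: str, model_name: str, fallback: str = "gpt-4o-mini") -> int:
    """Count tokens, falling back to the base model's encoding when the
    (fine-tuned) model name is unknown -- the essence of the fix."""
    try:
        encoding = encoding_for_model(model_name)
    except KeyError:
        encoding = encoding_for_model(fallback)
    # `encoding` would select the real tokenizer; a whitespace split
    # keeps this sketch dependency-free.
    assert encoding
    return len(text.split())
```

With a hypothetical fine-tuned model name, `encoding_for_model` raises, while `count_tokens` succeeds because it falls back to `gpt-4o-mini` for counting.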
Description

I used the `gpt-4o-mini` model with my agent, and it worked perfectly fine. I then fine-tuned the `gpt-4o-mini` model, and replaced the `gpt-4o-mini` model name with the name of the fine-tuned model.