Description:
When using livekit-plugins-anthropic with the voice_assistant agent, I've encountered an issue where the LLM repeats the exact same streamed text after triggering a function call.
Steps to reproduce:
1. Set up a project using livekit-plugins-anthropic and the voice_assistant agent.
2. Implement a scenario that triggers a function call (a minimal sketch is included under Additional information below).
3. Observe the LLM's output before and after the function call.
Expected behavior:
The LLM should continue generating new, relevant text after the function call.
Actual behavior:
The LLM repeats the exact same stream of text that was output before the function call.
Environment:
Python version: 3.12.4
livekit-plugins-anthropic: installed from the latest code on GitHub via
pip install 'git+https://github.com/livekit/agents.git#subdirectory=livekit-plugins/livekit-plugins-anthropic'
Additional information:
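Below is a minimal sketch of the setup I'm describing. The import paths and function-calling pattern follow the voice_assistant examples in the agents repo; the get_weather tool and the STT/TTS/VAD choices are placeholders, not part of my actual project.

```python
# Minimal reproduction sketch. Assumptions: imports follow the
# voice_assistant examples in livekit/agents; get_weather and the
# STT/TTS/VAD providers are placeholders.
from typing import Annotated

from livekit.agents import JobContext, WorkerOptions, cli, llm
from livekit.agents.voice_assistant import VoiceAssistant
from livekit.plugins import anthropic, deepgram, openai, silero


class AssistantFnc(llm.FunctionContext):
    # Hypothetical tool; any function call appears to trigger the repetition.
    @llm.ai_callable(description="Get the current weather for a location")
    async def get_weather(
        self,
        location: Annotated[str, llm.TypeInfo(description="City name")],
    ) -> str:
        return f"It is sunny in {location}."


async def entrypoint(ctx: JobContext):
    await ctx.connect()

    assistant = VoiceAssistant(
        vad=silero.VAD.load(),
        stt=deepgram.STT(),
        llm=anthropic.LLM(),  # plugin under test
        tts=openai.TTS(),
        fnc_ctx=AssistantFnc(),
    )
    assistant.start(ctx.room)
    # Ask something that invokes get_weather: the text streamed before the
    # function call is streamed (and spoken) again after the tool result
    # is returned.


if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```

With this setup, any request that causes the tool to be invoked reproduces the problem: the text streamed before the function call is repeated verbatim once the function result comes back.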