Closed xingwanying closed 2 months ago
The latest updates on your projects.
Name | Status | Preview | Comments | Updated (UTC) |
---|---|---|---|---|
petercat | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Sep 11, 2024 6:29am |
This PR introduces functionality to track token usage during streamed chats. It includes changes to the Assistant component, server-side event handling, and the OpenAI client.
File | Summary |
---|---|
assistant/src/Assistant/index.md | Updated the token value in the Assistant component. |
server/agent/base.py | Added handling for the 'on_chat_model_end' event to track token usage. |
server/agent/llm/clients/openai.py | Modified the OpenAI client to enable usage streaming and refactored the client initialization. |
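The event-handling change above can be sketched as follows. This is a minimal illustration of accumulating token usage from `on_chat_model_end` events in a LangChain-style `astream_events` stream; the event and `usage_metadata` field shapes are assumptions for illustration, not the PR's actual code.

```python
def accumulate_token_usage(events):
    """Sum token counts from 'on_chat_model_end' events in an event stream.

    Assumes each end event carries a usage_metadata dict shaped like the
    one LangChain attaches to AIMessage (input/output/total token counts).
    """
    usage = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
    for event in events:
        # Only chat-model-end events report usage; skip stream chunks etc.
        if event.get("event") != "on_chat_model_end":
            continue
        metadata = event.get("data", {}).get("output", {}).get("usage_metadata")
        if not metadata:
            continue
        for key in usage:
            usage[key] += metadata.get(key, 0)
    return usage
```

For streamed OpenAI completions, usage only appears in the final chunk when usage streaming is enabled on the client (the refactor in `openai.py` above), which is why the handler tolerates events without `usage_metadata`.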
Attention: Patch coverage is 35.71429% with 9 lines in your changes missing coverage. Please review.
Files with missing lines | Patch % | Lines |
---|---|---|
server/agent/base.py | 0.00% | 5 Missing ⚠️ |
server/agent/llm/clients/openai.py | 55.55% | 4 Missing ⚠️ |
Files with missing lines | Coverage Δ |
---|---|
server/agent/llm/clients/openai.py | 78.94% <55.55%> (+3.94%) ⬆️ |
server/agent/base.py | 24.27% <0.00%> (-1.24%) ⬇️ |
LG