petercat-ai / petercat

A conversational Q&A agent configuration system, self-hosted deployment solution, and convenient all-in-one application SDK that lets you create intelligent Q&A bots for your GitHub repositories.
https://petercat.ai
MIT License
605 stars · 18 forks

feat: track token usage when stream chat #372

Closed · xingwanying closed this 2 months ago

xingwanying commented 2 months ago

[image attached]

vercel[bot] commented 2 months ago

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| :--- | :--- | :--- | :--- | :--- |
| petercat | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Sep 11, 2024 6:29am |
petercat-assistant[bot] commented 2 months ago

Walkthrough

This PR introduces functionality to track token usage during stream chats. It includes changes to the Assistant component, server-side event handling, and the OpenAI client.

Changes

| File | Summary |
| :--- | :--- |
| assistant/src/Assistant/index.md | Updated the token value in the Assistant component. |
| server/agent/base.py | Added handling for the `on_chat_model_end` event to track token usage. |
| server/agent/llm/clients/openai.py | Modified the OpenAI client to enable usage streaming and refactored the client initialization. |
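For context, a minimal sketch of the pattern these two changes describe: enabling usage streaming on the OpenAI chat client and reading token counts from the `on_chat_model_end` event emitted by LangChain's `astream_events` API. This is not the PR's actual code; the model name, prompt, and `version="v2"` event schema are illustrative assumptions.

```python
# Sketch only: surface token usage from a streamed chat via astream_events.
import asyncio
from langchain_openai import ChatOpenAI

async def main():
    # stream_usage=True asks OpenAI to append a usage chunk to the stream
    # (stream_options={"include_usage": True} under the hood).
    llm = ChatOpenAI(model="gpt-4o-mini", streaming=True, stream_usage=True)

    async for event in llm.astream_events("Hello, world!", version="v2"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            # Incremental content chunks arrive here.
            print(event["data"]["chunk"].content, end="", flush=True)
        elif kind == "on_chat_model_end":
            # Once the stream finishes, the aggregated message carries
            # usage_metadata, which is the value being tracked.
            message = event["data"]["output"]
            usage = getattr(message, "usage_metadata", None)
            if usage:
                print(f"\ntokens: in={usage['input_tokens']} "
                      f"out={usage['output_tokens']} "
                      f"total={usage['total_tokens']}")

asyncio.run(main())
```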
codecov[bot] commented 2 months ago

Codecov Report

Attention: Patch coverage is 35.71429% with 9 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| :--- | :--- | :--- |
| server/agent/base.py | 0.00% | 5 Missing ⚠️ |
| server/agent/llm/clients/openai.py | 55.55% | 4 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
| :--- | :--- |
| server/agent/llm/clients/openai.py | 78.94% <55.55%> (+3.94%) ⬆️ |
| server/agent/base.py | 24.27% <0.00%> (-1.24%) ⬇️ |
RaoHai commented 2 months ago

LG