BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

Linting Refactor: New `ModelResponseChunk` for streaming #4219

Open krrishdholakia opened 1 week ago

krrishdholakia commented 1 week ago

Title

Introduces a new pydantic object, `ModelResponseChunk`, for streaming calls.

Relevant issues

Reference: https://github.com/BerriAI/litellm/issues/4206

This fixes the linting errors on `ModelResponse` that were caused by the same object being used for both streaming and non-streaming calls.

e.g. note the lack of linting errors for `.message` in the screenshot below.

[Screenshot 2024-06-15 at 2:42:54 PM]
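For context, here is a minimal sketch of why splitting streaming and non-streaming responses into separate pydantic objects removes the ambiguity a type checker sees. The class and field names below are simplified assumptions for illustration, not litellm's actual definitions:

```python
# Illustrative sketch only -- simplified, not litellm's real class definitions.
# When a single ModelResponse serves both call styles, its `choices` must hold a
# union of message-style and delta-style entries, so `.message` access is flagged.
# Splitting the types gives each call path an unambiguous choice type.
from typing import List, Optional
from pydantic import BaseModel


class Message(BaseModel):
    role: str = "assistant"
    content: Optional[str] = None


class Delta(BaseModel):
    role: Optional[str] = None
    content: Optional[str] = None


class Choices(BaseModel):
    index: int = 0
    message: Message = Message()          # non-streaming: full message


class StreamingChoices(BaseModel):
    index: int = 0
    delta: Delta = Delta()                # streaming: incremental delta


class ModelResponse(BaseModel):
    choices: List[Choices] = []           # only message-style choices


class ModelResponseChunk(BaseModel):
    choices: List[StreamingChoices] = []  # only delta-style choices
```

With the split, non-streaming code reads `response.choices[0].message` and streaming code reads `chunk.choices[0].delta`, and neither access needs to narrow a union type.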

Type

🧹 Refactoring

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes

[Screenshot 2024-06-15 at 2:42:54 PM]
vercel[bot] commented 1 week ago

The latest updates on your projects. Learn more about Vercel for Git.

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | ❌ Failed (Inspect) | | | Jun 16, 2024 8:19am |
krrishdholakia commented 1 week ago

This keeps the same response shape as ModelResponse for streaming.

The only change is moving it to a response object of its own (fixing the linting errors).

This is not considered a breaking change.


We should make sure this is in the release notes for the next few releases, so people are aware of the change.
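As a rough usage sketch (assuming the standard `litellm.completion(..., stream=True)` pattern), consumer code does not need to change; the chunks simply carry a type whose choices expose `.delta`:

```python
# Hedged usage sketch. Runtime behaviour is unchanged by the refactor; the
# benefit is that a type checker now sees a streaming chunk type whose choices
# only expose `.delta`, so this access no longer trips a message/delta union.
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in response:                  # each item is a streaming chunk object
    delta = chunk.choices[0].delta      # .delta is valid on streaming chunks
    if delta.content:
        print(delta.content, end="")
```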