BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

(fix) add linting check to ban creating `AsyncHTTPHandler` during LLM calling #6855

Closed · ishaan-jaff closed 6 hours ago

ishaan-jaff commented 14 hours ago

Relevant issues

Type

🆕 New Feature 🐛 Bug Fix 🧹 Refactoring 📖 Documentation 🚄 Infrastructure ✅ Test

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If there are UI changes, attach a screenshot/GIF of the working UI fixes

Description by Korbit AI

What change is being made?

Add a linting check to ensure that `AsyncHTTPHandler` is not instantiated directly during LLM calls, and replace its usage with `get_async_httpx_client`.
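
For context, here is a minimal sketch of the before/after pattern being enforced. The import path and the `llm_provider` argument are assumptions inferred from the PR text, not verified against this PR's diff:

```python
# Hedged sketch of the pattern this PR enforces. The import path and the
# arguments to get_async_httpx_client are assumptions, not confirmed API.
from litellm.llms.custom_httpx.http_handler import (
    AsyncHTTPHandler,
    get_async_httpx_client,
)

# Banned by the new lint check: creating a fresh client inside LLM calling code.
# client = AsyncHTTPHandler(timeout=600.0)

# Preferred: obtain the client through the shared factory, so creation logic
# (timeouts, connection pooling, client reuse) is consolidated in one place.
client = get_async_httpx_client(llm_provider="anthropic")  # hypothetical argument
```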

Why are these changes being made?

Direct instantiation of AsyncHTTPHandler can lead to inconsistencies and issues in the handling of HTTP requests. Using get_async_httpx_client ensures a more standardized and safer approach by consolidating client creation logic. This change introduces a new test, ensure_async_clients_test.py, to automatically catch any violations in the codebase, thereby enforcing best practices and code maintainability.
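
Such a check can be implemented as a simple source scan run under the test suite. The sketch below is illustrative only; the actual `ensure_async_clients_test.py` may differ in the scanned paths, allowlist, and matching logic:

```python
# Illustrative sketch (not the actual ensure_async_clients_test.py) of a
# lint-style test that bans direct AsyncHTTPHandler construction. The scanned
# directory and the allowlist entries are assumptions for demonstration.
import os
import re

BANNED = re.compile(r"\bAsyncHTTPHandler\s*\(")
# Files that may legitimately construct the handler, e.g. the factory module.
ALLOWLIST = {"http_handler.py"}


def test_no_direct_async_http_handler() -> None:
    violations = []
    for root, _, files in os.walk("litellm/llms"):
        for name in files:
            if not name.endswith(".py") or name in ALLOWLIST:
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                for lineno, line in enumerate(f, start=1):
                    if BANNED.search(line):
                        violations.append(f"{path}:{lineno}: {line.strip()}")
    assert not violations, (
        "Direct AsyncHTTPHandler instantiation found; "
        "use get_async_httpx_client instead:\n" + "\n".join(violations)
    )
```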

Is this description stale? Ask me to generate a new description by commenting /korbit-generate-pr-description

vercel[bot] commented 14 hours ago

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Nov 22, 2024 3:03am |