BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: Add testing to ensure `import litellm` will not fail #1099

Closed · ishaan-jaff closed this 9 months ago

ishaan-jaff commented 9 months ago

The Feature

We had a commit recently that caused `import litellm` to fail on machines without tkinter.
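
A minimal sketch of the kind of test this issue asks for (the file and test names are illustrative, not from the litellm repo). Running the import in a subprocess guarantees a clean interpreter, so a module-level failure surfaces on every run instead of being masked by an already-populated `sys.modules` cache:

```python
# test_import.py -- hypothetical smoke test, not litellm's actual test suite.
import subprocess
import sys


def test_import_litellm():
    # Run the import in a fresh interpreter so module-level side effects
    # (like a stray GUI import) fail here rather than on user machines.
    result = subprocess.run(
        [sys.executable, "-c", "import litellm"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, f"import litellm failed:\n{result.stderr}"
```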

Motivation, pitch

Twitter / LinkedIn details

No response

ishaan-jaff commented 9 months ago

cc @aseifert

aseifert commented 9 months ago

@ishaan-jaff there's another tkinter import here: https://github.com/BerriAI/litellm/blob/da9976441b76067b6fe4614ef45439f184bb7697/litellm/llms/openai.py#L1
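
For context, an unconditional top-level `import tkinter` is enough to break `import litellm` on headless machines, because minimal Python installs (e.g. Debian/Ubuntu without the `python3-tk` package) ship without the Tk bindings. A sketch of the failure mode and a defensive alternative (illustrative only; litellm does not actually need a GUI dependency):

```python
# At module top level, this one line makes the whole package unimportable
# on systems without Tk bindings:
#   import tkinter  # -> ModuleNotFoundError: No module named 'tkinter'

# Defensive pattern: defer the import to the call site, so the package
# itself always imports cleanly and only GUI callers pay the cost.
def show_dialog(message: str) -> None:
    try:
        import tkinter.messagebox as messagebox
    except ImportError as exc:
        raise RuntimeError("tkinter is not available on this system") from exc
    messagebox.showinfo("litellm", message)
```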

ishaan-jaff commented 9 months ago

Removed - thanks for raising. cc @krrishdholakia I think your VS Code is acting up 🤣

krrishdholakia commented 9 months ago

Yeah, I think so. I'm surprised this passed CircleCI. Open to feedback on how to prevent future issues @aseifert @ishaan-jaff
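
One way to surface this in CI (a sketch, not litellm's actual CircleCI setup): install the package into a throwaway virtual environment and attempt the import there, so packages present on a developer machine (like Tk) can't mask a missing dependency:

```python
# check_import.py -- hypothetical CI helper, not part of the litellm repo.
import subprocess
import sys
import tempfile
import venv
from pathlib import Path


def main() -> int:
    with tempfile.TemporaryDirectory() as tmp:
        env_dir = Path(tmp) / "venv"
        venv.create(env_dir, with_pip=True)
        bin_dir = "Scripts" if sys.platform == "win32" else "bin"
        python = str(env_dir / bin_dir / "python")
        # "." assumes the script runs from the repo root.
        subprocess.run([python, "-m", "pip", "install", "."], check=True)
        return subprocess.run([python, "-c", "import litellm"]).returncode


if __name__ == "__main__":
    sys.exit(main())
```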

aseifert commented 9 months ago

Thanks! Build is passing on render.com now.

Not sure how to prevent this. I mean, you COULD just set up auto-deploy to Render (a free deployment) and then check for failing builds :-) Not the most elegant way, but it IS a way ;-)

ishaan-jaff commented 9 months ago

We do this now - we auto-deploy litellm to Railway: https://litellm-api.up.railway.app/

Builds are monitored to check for regressions.