BerriAI / litellm

Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: Add linting / code quality check to ensure `load_dotenv()` is not called #4895

Open · ishaan-jaff opened this issue 1 month ago

ishaan-jaff commented 1 month ago

The Feature

We've had 1-2 issues filed because `load_dotenv()` was being called. We should ensure that we don't add code in the future that calls it. It would really help to have some linting / code quality checks around this.

https://github.com/BerriAI/litellm/issues/4888
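One way to enforce this in CI would be a small AST-based check that fails the build if any `load_dotenv()` call appears in the package. This is just a sketch of the idea, not an official litellm script; the scanned directory name (`litellm`) and the output format are assumptions:

```python
import ast
import sys
from pathlib import Path


def find_load_dotenv_calls(source: str, filename: str) -> list[str]:
    """Return a list of `file:line` locations where load_dotenv() is called.

    Catches both `load_dotenv()` and `dotenv.load_dotenv()` style calls.
    """
    hits = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Bare name: load_dotenv(); attribute access: dotenv.load_dotenv()
        name = (
            func.id if isinstance(func, ast.Name)
            else func.attr if isinstance(func, ast.Attribute)
            else None
        )
        if name == "load_dotenv":
            hits.append(f"{filename}:{node.lineno}: load_dotenv() call found")
    return hits


if __name__ == "__main__":
    # Assumed package layout: scan every .py file under litellm/
    failures = []
    for path in Path("litellm").rglob("*.py"):
        failures.extend(find_load_dotenv_calls(path.read_text(), str(path)))
    if failures:
        print("\n".join(failures))
        sys.exit(1)  # non-zero exit fails the CI job
```

A check like this could run as a CI step alongside the existing lint jobs, or be wired into a pre-commit hook.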

Motivation, pitch

-

Twitter / LinkedIn details

No response

K-dash commented 1 month ago

@ishaan-jaff Hi!

The functionality this issue asks for is something I also wanted (and thought there would be demand for), so I've created a Flake8 plugin to address it.

https://github.com/K-dash/flake8-import-guard (Please refer to the README.md for detailed functionality)

If my plugin meets your requirements, would you consider incorporating it into this project? Also, if you have any questions, please feel free to ask!