Closed krrishdholakia closed 1 year ago
Tagging @saum7800 who is currently working on this as well!
Thank you for the PR @krrishdholakia! Excited to see a contribution from you. Feel free to join Discord and the open-source-llms channel, where we are discussing how to make this change. A few more things to consider in addition to what @neubig mentioned:
Your current changes would be scoped to the ChatGPT agent, whereas we should probably have a unified interface from which generation for a specified model can happen.
The instruction parser, model_retriever, and dataset_generator files also handle exceptions specific to OpenAI. I am not sure how your code changes might react to error codes from different base LLMs.
Hey @saum7800,
Thanks for the feedback.
Happy to make any changes, increase coverage on our end as required.
Can confirm this works for OpenAI. Will run through testing on other models/providers now.
Tested on Anthropic; can confirm this works when you set `ANTHROPIC_API_KEY` in the `.env`.
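For reference, a minimal sketch of how a provider key from the `.env` ends up visible to litellm, which reads keys from the process environment. The placeholder key value and the manual `setdefault` (in place of a dotenv loader) are illustrative assumptions, not the project's actual setup code.

```python
import os

# In practice the .env file would contain one line per provider, e.g.:
#   ANTHROPIC_API_KEY=sk-ant-...
# and a dotenv loader would export it into os.environ at startup.
# Here we set a placeholder directly to illustrate the mechanism.
os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-placeholder")

# litellm looks up the provider key from the environment when a
# claude-* model is requested, so no code change is needed per provider.
assert os.environ.get("ANTHROPIC_API_KEY") is not None
```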
Regarding Errors:
We currently provide support for 3 exceptions across all providers (here's our implementation logic: https://github.com/BerriAI/litellm/blob/cc4be8dd73497b02f4a9443b12c514737a657cae/litellm/utils.py#L1420).
I'll add coverage for the remaining 3 tomorrow, after reviewing the relevant HTTP status codes and docs and running tests on our end.
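The mapping idea above can be sketched as follows. This is a hypothetical illustration of status-code-to-exception mapping with a generic fallback, not litellm's actual implementation (see the linked `utils.py` for that); the class and function names here are made up for the example.

```python
class APIError(Exception):
    """Generic fallback when no exact match exists for a provider error."""

class AuthenticationError(APIError):
    pass

class InvalidRequestError(APIError):
    pass

class RateLimitError(APIError):
    pass

# HTTP status code -> OpenAI-style exception class.
# Anything unlisted falls back to the generic APIError.
STATUS_TO_EXCEPTION = {
    400: InvalidRequestError,
    401: AuthenticationError,
    429: RateLimitError,
}

def map_provider_error(status_code: int, message: str) -> APIError:
    """Translate a provider-specific error into a unified exception type."""
    exc_cls = STATUS_TO_EXCEPTION.get(status_code, APIError)
    return exc_cls(message)

print(type(map_provider_error(401, "bad key")).__name__)  # AuthenticationError
print(type(map_provider_error(418, "unknown")).__name__)  # APIError
```

The point of the fallback is that callers written against the OpenAI exception hierarchy keep working even when a new provider returns a status code with no exact match.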
@neubig There is no change in the way API keys need to be set, unless I'm missing something?
However, it would make sense to add documentation explaining how users can use different providers. I'm happy to add that too.
Yes, could you please add just a brief mention and possibly a link to the litellm doc regarding API keys? No need to list all of them on prompt2model
Hi @saum7800 @neubig,
Support for all OpenAI exceptions is now added (where there is no exact match, we default to the OpenAI `APIError` type).
I've also updated the readme with a link to the list of supported providers.
Please let me know if there's anything else required for this PR.
Awesome, thank you for the contribution @krrishdholakia !
Hi @neubig
Thanks for your comment on another PR. I came across prompt2model and would love to help out if possible using LiteLLM (https://github.com/BerriAI/litellm).
I added support for the providers listed above by replacing `ChatCompletion.create` with `completion` and `ChatCompletion.acreate` with `acompletion`. The code is pretty similar to the OpenAI class, as litellm follows the same pattern as the openai-python SDK.
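The call-site change described above can be sketched like this. The before/after calls are shown as comments rather than executed, since they require live API keys; the request shape is the part the example asserts on.

```python
# Both interfaces take the same chat-format arguments, so the only
# change at each call site is the function being invoked.
messages = [{"role": "user", "content": "Hello"}]

# Before (OpenAI-only, sync and async):
#   response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
#   response = await openai.ChatCompletion.acreate(model="gpt-3.5-turbo", messages=messages)

# After (provider-agnostic via litellm; the model string selects the provider):
#   response = litellm.completion(model="gpt-3.5-turbo", messages=messages)
#   response = await litellm.acompletion(model="claude-2", messages=messages)

# The messages payload itself is unchanged between the two.
assert messages[0]["role"] == "user"
```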
Would love to know if this helps.
Happy to add additional tests / update documentation, if the initial PR looks good to you.