neulab / prompt2model

prompt2model - Generate Deployable Models from Natural Language Instructions

Adding support for Anthropic, Cohere, TogetherAI, Aleph Alpha, Huggingface Inference Endpoints, etc. #324

Closed · krrishdholakia closed this issue 1 year ago

krrishdholakia commented 1 year ago

Hi @neubig

Thanks for your comment on another PR. I came across prompt2model and would love to help out if possible using LiteLLM (https://github.com/BerriAI/litellm).

I added support for the providers listed above by replacing `ChatCompletion.create` with `completion` and `ChatCompletion.acreate` with `acompletion`. The code is pretty similar to the OpenAI class, as litellm follows the same pattern as the openai-python SDK.
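Roughly, the swap looks like this (a minimal sketch; the prompt and model name are placeholders, not code from this PR):

```python
from litellm import acompletion, completion

messages = [{"role": "user", "content": "Generate one example."}]

# Before: openai.ChatCompletion.create(...) / openai.ChatCompletion.acreate(...)
# After: litellm's drop-in equivalents, which take the same arguments
# and return an OpenAI-style response object
response = completion(model="gpt-3.5-turbo", messages=messages)

async def agenerate():
    # async variant, replacing ChatCompletion.acreate
    return await acompletion(model="gpt-3.5-turbo", messages=messages)

print(response["choices"][0]["message"]["content"])
```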

Would love to know if this helps.

Happy to add additional tests / update documentation, if the initial PR looks good to you.

viswavi commented 1 year ago

Tagging @saum7800 who is currently working on this as well!

saum7800 commented 1 year ago

Thank you for the PR, @krrishdholakia! Excited to see a contribution from you. Feel free to join the Discord and the open-source-llms channel, where we are discussing how to make this change. A few more things to consider, in addition to what @neubig mentioned:

  1. Your current changes live in the ChatGPT agent, whereas we should probably have a unified interface through which generation for any specified model can happen (see the sketch after this list).

  2. The instruction parser, model_retriever, and dataset_generator files also handle exceptions specific to OpenAI. I am not sure how your code changes would react to error codes from different base LLMs.
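For point 1, something along these lines is what I have in mind (all class and method names here are hypothetical, not existing prompt2model code):

```python
from abc import ABC, abstractmethod

from litellm import completion


class GenerationAgent(ABC):
    """Hypothetical provider-agnostic interface each backend would implement."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...


class LiteLLMAgent(GenerationAgent):
    """One backend that routes to any supported provider via litellm."""

    def __init__(self, model: str):
        self.model = model  # e.g. "gpt-3.5-turbo" or "claude-2"

    def generate(self, prompt: str) -> str:
        response = completion(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]
```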

krrishdholakia commented 1 year ago

Hey @saum7800,

Thanks for the feedback.

Happy to make any changes and increase test coverage on our end as required.

krrishdholakia commented 1 year ago

Can confirm this works for OpenAI. Will run through testing on other models/providers now.

krrishdholakia commented 1 year ago

Tested on Anthropic: can confirm this works when you set `ANTHROPIC_API_KEY` in the `.env`.
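For reference, the test was essentially this pattern (the model name is illustrative; assumes the key was loaded from `.env`, e.g. with python-dotenv):

```python
import os

from litellm import completion

# Assumes ANTHROPIC_API_KEY was loaded into the environment from .env
assert os.environ.get("ANTHROPIC_API_KEY"), "set ANTHROPIC_API_KEY first"

response = completion(
    model="claude-2",  # litellm routes claude-* model names to Anthropic
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response["choices"][0]["message"]["content"])
```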

krrishdholakia commented 1 year ago

Regarding Errors:

We currently provide support for 3 exceptions across all providers (here's our implementation logic - https://github.com/BerriAI/litellm/blob/cc4be8dd73497b02f4a9443b12c514737a657cae/litellm/utils.py#L1420)

I'll add coverage for the remaining 3 tomorrow, after reviewing the relevant HTTP status codes and docs and running tests on our end.
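The mapping is conceptually something like this (hypothetical helper name; the linked `utils.py` has the real logic, and the exception types come from the pre-1.0 openai-python SDK's `openai.error` module):

```python
from openai.error import (
    APIError,
    AuthenticationError,
    RateLimitError,
    ServiceUnavailableError,
)


def map_provider_error(status_code: int, message: str) -> Exception:
    """Translate a provider's HTTP status code into an OpenAI-style exception.

    Sketch of the idea only; see the linked utils.py for the actual logic.
    """
    if status_code == 401:
        return AuthenticationError(message)
    if status_code == 429:
        return RateLimitError(message)
    if status_code == 503:
        return ServiceUnavailableError(message)
    return APIError(message)  # fallback when there is no exact match
```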

krrishdholakia commented 1 year ago

@neubig There is no change in the way API keys need to be set, unless I'm missing something?

However, it would make sense to add documentation explaining how users can use different providers. I'm happy to add that too.
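For the docs, I'm thinking of something like this (model strings and env var names follow litellm's conventions; treat the exact values as assumptions):

```python
import os

from litellm import completion

messages = [{"role": "user", "content": "hi"}]

# Each provider just needs its own key in the environment plus a model string
os.environ["OPENAI_API_KEY"] = "sk-..."         # placeholder
completion(model="gpt-3.5-turbo", messages=messages)

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder
completion(model="claude-instant-1", messages=messages)

os.environ["COHERE_API_KEY"] = "..."            # placeholder
completion(model="command-nightly", messages=messages)
```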

neubig commented 1 year ago

Yes, could you please add just a brief mention and possibly a link to the litellm docs regarding API keys? No need to list all of them on prompt2model.

krrishdholakia commented 1 year ago

Hi @saum7800 @neubig,

Support for all OpenAI exceptions is now added (if we don't have an exact match, it defaults to the OpenAI `APIError` type).
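On the caller side, that means existing OpenAI-style error handling keeps working, with `APIError` as the catch-all (a sketch, assuming the mapped types subclass the openai-python exceptions):

```python
from litellm import completion
from openai.error import APIError, RateLimitError

try:
    response = completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
    )
except RateLimitError:
    ...  # back off and retry, exactly as with OpenAI
except APIError:
    ...  # default type when a provider error has no exact match
```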

I've also updated the README with a link to the list of supported providers.

Please let me know if there's anything else required for this PR.

saum7800 commented 1 year ago

Awesome, thank you for the contribution @krrishdholakia !