sweep-ai[bot] opened 1 year ago
The implementation is not correct. Please refer to the LangChain documentation: https://python.langchain.com/docs/
Specifically the LiteLLM section in the docs: https://python.langchain.com/docs/integrations/chat/litellm
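For reference, the usage shown on that docs page looks roughly like the sketch below. The import paths reflect the LangChain docs at the time and may have moved in newer releases; the `litellm_kwargs` helper is hypothetical, and actually invoking the chat model requires a provider API key:

```python
# Sketch of the ChatLiteLLM pattern from the LangChain LiteLLM docs.
# litellm_kwargs is a hypothetical helper that just gathers the
# keyword arguments ChatLiteLLM is documented to accept.

def litellm_kwargs(model: str, temperature: float = 0.7) -> dict:
    return {"model": model, "temperature": temperature}

try:
    from langchain.chat_models import ChatLiteLLM
    from langchain.schema import HumanMessage

    chat = ChatLiteLLM(**litellm_kwargs("gpt-3.5-turbo", 0.1))
    # chat([HumanMessage(content="Hello")])  # needs an API key to run
except ImportError:
    pass  # langchain/litellm not installed in this environment
```

Note that `ChatLiteLLM` takes a model *name* routed through LiteLLM, which is the point the issue is making about the incorrect implementation.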
Hi @jontstaz,
I decided to make the following changes:
| File Path | Proposed Changes |
|---|---|
| `gpt_engineer/ai.py` | Modify `gpt_engineer/ai.py` with contents:<br>• In the `create_chat_model` function, replace the current way of creating a `LiteLLMChatModel` instance with the correct implementation per the LangChain documentation: initialize it with the `model` parameter set to the path of the LiteLLM model file and the `temperature` parameter set to the desired temperature.<br>• In the `get_tokenizer` function, replace the current way of creating a `LiteLLMTokenizer` instance with the correct implementation per the LangChain documentation: initialize it with the `model` parameter set to the path of the LiteLLM model file. |
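In dispatch terms, the proposed change amounts to something like the sketch below. The stub classes stand in for the real LangChain chat-model classes, and the model-name checks are illustrative assumptions, not gpt-engineer's actual logic:

```python
# Hypothetical sketch of a create_chat_model dispatch for gpt_engineer/ai.py.
# ChatOpenAIStub / ChatLiteLLMStub are stand-ins for the LangChain classes.

class ChatOpenAIStub:
    def __init__(self, model: str, temperature: float):
        self.model, self.temperature = model, temperature

class ChatLiteLLMStub:
    def __init__(self, model: str, temperature: float):
        self.model, self.temperature = model, temperature

def create_chat_model(model: str, temperature: float):
    """Return the chat-model instance appropriate for `model`."""
    if model in ("gpt-4", "gpt-3.5-turbo"):
        return ChatOpenAIStub(model=model, temperature=temperature)
    # Anything else is routed through LiteLLM, which proxies many providers.
    return ChatLiteLLMStub(model=model, temperature=temperature)
```

The design point is a single factory keyed on the model name, so callers never need to know which backend class is constructed.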
Hey @jontstaz, thanks for using litellm. We'd love to invite you to our community, and we're happy to create a dedicated support channel for you as well, to make sure we can solve any issues/feature requests you have.
❌ Could not find files to change
Please join our Discord to report this issue.
Description

This PR modifies the AI class in `gpt_engineer/ai.py` to support alternative LLMs/models such as LiteLLM and Llama API, in addition to the existing support for GPT-4 and GPT-3.5. It also updates the documentation to reflect these changes.

Summary of Changes

- Modified the `create_chat_model` function to handle LiteLLM and Llama API. The function now creates instances of the appropriate chat model based on the given model name and temperature.
- Modified the `get_tokenizer` function to handle LiteLLM and Llama API. The function now returns the appropriate tokenizer for each model.
- Updated `docs/intro/quick_overview.md` to mention the support for LiteLLM and Llama API in the AI class.
- Updated `docs/api_reference.rst` to include the new functions added for LiteLLM and Llama API.

Please review and merge this PR to enable support for alternative LLMs/models in the AI class.
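As a rough illustration of the `get_tokenizer` change, a tokenizer dispatch could look like the following. The encoding names follow tiktoken conventions, but the mapping and the `gpt2` fallback are assumptions for the sketch, not the actual gpt-engineer code:

```python
# Hypothetical get_tokenizer sketch: map known OpenAI models to a
# tiktoken encoding name, and fall back to a default for other models.
ENCODING_BY_MODEL = {
    "gpt-4": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",
}

def get_tokenizer(model: str) -> str:
    """Return a tokenizer/encoding name for `model` (assumed fallback: gpt2)."""
    return ENCODING_BY_MODEL.get(model, "gpt2")
```

A real implementation would hand this name to a tokenizer library rather than returning a string, but the lookup-with-fallback shape is the same.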
Fixes #1.
To check out this PR branch, run the following command in your terminal:
To get Sweep to edit this pull request, leave a comment below or in the code. Leaving a comment in the code will only modify that file, but commenting below can change the entire PR.