jontstaz / gpt-engineer-local-llm

Fork of GPT Engineer to work with local LLMs (WIP)
https://gpt-engineer.readthedocs.io
MIT License

Modify AI class to support alternative LLMs/Models #3

Open sweep-ai[bot] opened 1 year ago

sweep-ai[bot] commented 1 year ago

Description

This PR modifies the AI class in gpt_engineer/ai.py to support alternative LLMs/Models such as LiteLLM and Llama API, in addition to the existing support for GPT-4 and GPT-3.5. It also updates the documentation to reflect these changes.

Summary of Changes

Please review and merge this PR to enable support for alternative LLMs/Models in the AI class.

Fixes #1.


To checkout this PR branch, run the following command in your terminal:

git checkout sweep/modify-ai-class

To get Sweep to edit this pull request, leave a comment below or in the code. A comment in the code will only modify that file, whereas a comment below can change the entire PR.

jontstaz commented 1 year ago

The implementation is not correct. Please refer to the LangChain documentation: https://python.langchain.com/docs/

Specifically the LiteLLM section in the docs: https://python.langchain.com/docs/integrations/chat/litellm

sweep-ai[bot] commented 1 year ago

> The implementation is not correct. Please refer to the LangChain documentation: https://python.langchain.com/docs/
>
> Specifically the LiteLLM section in the docs: https://python.langchain.com/docs/integrations/chat/litellm

Hi @jontstaz,

I decided to make the following changes:

File Path: gpt_engineer/ai.py

Proposed Changes:
* In the create_chat_model function, replace the current implementation for creating an instance of LiteLLMChatModel with the correct implementation as per the LangChain documentation. The LiteLLMChatModel should be initialized with the model parameter set to the path of the LiteLLM model file and the temperature parameter set to the desired temperature.
* In the get_tokenizer function, replace the current implementation for creating an instance of LiteLLMTokenizer with the correct implementation as per the LangChain documentation. The LiteLLMTokenizer should be initialized with the model parameter set to the path of the LiteLLM model file.
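Based on the LangChain LiteLLM integration page linked in the thread, the corrected factory could look something like the sketch below. This is illustrative, not the repo's actual code: the `resolve_backend` helper and its prefix-based routing heuristic are assumptions, and the import paths assume a recent `langchain_community` release.

```python
def resolve_backend(model_name: str) -> str:
    """Route a model name to a backend: native OpenAI models go through
    ChatOpenAI; anything else is delegated to LiteLLM."""
    if model_name.startswith(("gpt-4", "gpt-3.5")):
        return "openai"
    return "litellm"


def create_chat_model(model_name: str, temperature: float):
    """Build a LangChain chat model for the given model name.

    Imports are deferred so the function only requires the
    integration package it actually uses.
    """
    if resolve_backend(model_name) == "openai":
        from langchain_community.chat_models import ChatOpenAI
        return ChatOpenAI(model=model_name, temperature=temperature)
    # ChatLiteLLM accepts any LiteLLM-supported model string,
    # e.g. "ollama/llama2" for a locally served model.
    from langchain_community.chat_models import ChatLiteLLM
    return ChatLiteLLM(model=model_name, temperature=temperature)
```

The returned object supports the usual LangChain chat-model interface, so the rest of the AI class can stay backend-agnostic.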
krrishdholakia commented 12 months ago

Hey @jontstaz, thanks for using litellm. We'd love to invite you to our community.

We're also happy to create a dedicated support channel for you, to make sure we can solve any issues or feature requests you have.

sweep-ai[bot] commented 12 months ago

❌ Could not find files to change

Please join our Discord to report this issue.
