haseeb-heaven / code-interpreter

An innovative open-source Code Interpreter with GPT, Gemini, Claude, and LLaMa models.
https://pypi.org/project/open-code-interpreter/
MIT License

Offline-Model via LM Studio does not work as expected / local-model confusion #10

Closed: toovy closed this issue 3 months ago

toovy commented 3 months ago

Hi, the readme says this project works with a local LM Studio server. The docs say you have to edit `config/offline-model.config`, which does not exist; the file with a similar purpose is `config/local-model.config`, so I assume that's what is meant.

When trying to run `python interpreter.py -md 'code' -m 'local-model' -dc`, it asks for the `.env` file with keys for the remote LLMs. Creating an empty `.env`, or a `.env` with empty keys, does not change anything. Looks like a bug to me. It would be great to be able to use this project without proprietary models. Thanks, BR Tobias
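For context: LM Studio's local server speaks an OpenAI-compatible HTTP API, by default at `http://localhost:1234/v1`, and does not require a real API key. Below is a minimal sketch of talking to it directly; the endpoint and payload shape are LM Studio defaults, not this repo's actual code, and the prompt and placeholder key are illustrative.

```python
# Minimal sketch: query LM Studio's local OpenAI-compatible server directly.
# Assumes LM Studio is running its server on the default port 1234.
import requests

response = requests.post(
    "http://localhost:1234/v1/chat/completions",  # LM Studio default endpoint
    json={
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": [{"role": "user", "content": "print hello world in python"}],
        "temperature": 0.1,
    },
    headers={"Authorization": "Bearer not-needed"},  # placeholder, no real key required
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

If this works but `interpreter.py -m 'local-model'` does not, the problem is in the interpreter's key handling rather than the LM Studio setup.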

haseeb-heaven commented 3 months ago

Okay, yes: it currently assumes you already have a `.env` file from previous sessions with the online models. Will work on this bug ASAP. Thanks.
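A plausible shape for the fix described here is to require API keys only when an online model is selected, so local models skip the `.env` check entirely. This is a hypothetical sketch: names like `load_api_keys`, `ONLINE_MODELS`, and the key variable names are illustrative, not the repo's actual identifiers.

```python
# Hypothetical sketch of the fix: only insist on a .env file and API keys
# when the selected model is a hosted/online one.
import os
from dotenv import load_dotenv

# Illustrative set of hosted model names; the real project reads these from config.
ONLINE_MODELS = {"gpt-3.5-turbo", "gpt-4", "gemini-pro", "claude-3"}

def load_api_keys(model_name: str) -> None:
    if model_name not in ONLINE_MODELS:
        # Local models (e.g. served by LM Studio) need no keys or .env file.
        return
    load_dotenv()  # only online models require the .env file
    if not os.getenv("OPENAI_API_KEY") and not os.getenv("GEMINI_API_KEY"):
        raise ValueError("No API key found in .env for the selected online model.")
```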

haseeb-heaven commented 3 months ago

Fixed this bug in the latest PR.