Closed · bcosculluela closed this issue 3 months ago
Okay, I will take a look at this issue today.
Thanks! In my case, the issue was solved by setting `model = "ollama/llama2"` and removing the `custom_llm_provider` variable. Just in case it helps! 😄
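For reference, a minimal sketch of what that working configuration might look like when routed through litellm (which code-interpreter appears to use for model calls). The `api_base` URL and the prompt are assumptions for illustration, not values taken from the repo:

```python
# Sketch: calling a local Ollama model via litellm without forcing
# custom_llm_provider. The "ollama/" prefix tells litellm which provider
# to use, so no OpenAI API key is required.
from litellm import completion

response = completion(
    model="ollama/llama2",                  # provider inferred from the prefix
    messages=[{"role": "user", "content": "Write a Python hello world."}],
    api_base="http://localhost:11434",      # default Ollama endpoint (assumption)
)
print(response.choices[0].message.content)
```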
Fixed this bug in this PR: https://github.com/haseeb-heaven/code-interpreter/pull/14
Hello! Regarding this issue: I am currently using LM Studio, and it does not work with a local model. As far as I can see in the code, in interpreter_lib.py, line 324, the variable `custom_llm_provider` is set to 'openai', so it expects the OpenAI API key. What should the value of this variable be when using open-source LLMs such as Mistral?
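To illustrate the failure mode being described (a sketch under stated assumptions, not the repo's actual code): hard-coding `custom_llm_provider="openai"` sends litellm down the OpenAI code path, which demands an OpenAI API key even though the model is served locally. The model name and the `api_base` URL below (LM Studio's usual default) are assumptions:

```python
# Sketch of the reported problem: forcing the provider to "openai" triggers
# the OpenAI API-key check even for a local LM Studio server.
from litellm import completion

response = completion(
    model="local-model",                    # placeholder model name (assumption)
    custom_llm_provider="openai",           # hard-coded provider -> OpenAI key required
    api_base="http://localhost:1234/v1",    # LM Studio's default server URL (assumption)
    messages=[{"role": "user", "content": "Hello"}],
)
```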