OpenCodeInterpreter is a suite of open-source code generation systems aimed at bridging the gap between large language models and sophisticated proprietary systems like the GPT-4 Code Interpreter. It significantly enhances code generation capabilities by integrating execution and iterative refinement functionalities.
Currently I feel forced to use Hugging Face for inference, which is not an option for what I am trying to do.
Is it possible to use OpenCodeInterpreter completely locally by also running the models locally, for example with Ollama or AnythingLLM?
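For context on what "running the models locally" could look like: if the OpenCodeInterpreter model were available locally in Ollama (e.g. pulled or imported from a GGUF conversion — the model tag `opencodeinterpreter-ds` below is a placeholder assumption, not a confirmed registry name), inference could be pointed at Ollama's local HTTP API instead of Hugging Face. A minimal sketch, assuming a default Ollama install listening on `localhost:11434`:

```python
import json
import urllib.request

def build_ollama_request(model: str, prompt: str,
                         host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Hypothetical model tag; the actual local model name depends on how the
# OpenCodeInterpreter weights were imported into Ollama.
req = build_ollama_request("opencodeinterpreter-ds", "Write a Python hello world.")

# Sending the request requires a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Wiring this into OpenCodeInterpreter itself would still require replacing its Hugging Face inference calls with a client like the one above; this sketch only shows the local endpoint side.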