Open anodynos opened 4 months ago
With the release of new LLMs dedicated to coding (e.g. CodeGemma), it would be great to be able to connect to a choice of LLMs running locally.
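For context, many local runtimes (e.g. Ollama, or the llama.cpp server) expose an OpenAI-compatible chat endpoint, so "connecting to a local LLM" could be as small as pointing the existing OpenAI client code at a different base URL. A minimal stdlib-only sketch — the endpoint URL and the `codegemma` model name are assumptions about a locally running Ollama instance, not part of this project:

```python
import json
import urllib.request

# Assumed local endpoint: Ollama's OpenAI-compatible server on its
# default port (adjust for llama.cpp server, LM Studio, etc.).
LOCAL_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a local model."""
    return {
        "model": model,  # e.g. a model pulled into Ollama, such as "codegemma"
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local_llm(model: str, prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{LOCAL_BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request/response shape matches OpenAI's API, supporting this would mostly mean making the base URL and model name configurable rather than hard-coding OpenAI's endpoint.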
Do you plan to support local models powered by a GPU, instead of having to send our code to OpenAI and pay for GPT-3.5/4?

Depending on your organization's preferences, sending your code to OpenAI may not be necessary: CodiumAI offers on-premises solutions. However, local models, such as those running on your edge machine, have not yet reached the desired quality. Nonetheless, we are closely monitoring advancements in this area.