Currently, there are a few concerns mainly related to 'ollama' availability in the CDE:
If a user installs an extension in the CDE where ollama is not available, they will be asked to install it with sudo permissions:
In general, running containers as root is a significant security risk. In addition, containers are supposed to be immutable, so installing anything inside the container is not recommended: after a restart all the packages will vanish. What is more, containers run using arbitrary user IDs (this provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node).
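As a quick illustration of the constraints above, the following sketch can be run from a workspace terminal. It shows both points: the container runs under an arbitrary non-root UID, and ollama is absent unless it was baked into the image (the exact UID and paths depend on the cluster):

```shell
# Sketch: inspect the running CDE container from a workspace terminal.
# OpenShift normally assigns an arbitrary, non-root UID to the container.
echo "running as UID $(id -u) (0 would mean root)"

# If ollama was not baked into the container image, it will not be on PATH,
# and installing it at runtime is pointless: the change is lost on restart.
if command -v ollama >/dev/null 2>&1; then
  echo "ollama available at: $(command -v ollama)"
else
  echo "ollama not available in this container"
fi
```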
Steps to reproduce:
Create a workspace based on the 'Ollama' getting started sample
Once the workspace has started, install the extension
SUCCESS: the extension detects that Ollama is running
Download and configure the Granite model via the wizard
SUCCESS: Continue dropdown is updated with the 'granite' model
Ask 'Who are you?' in the chat
ERROR: an OpenAI model is used:
I am an AI language model developed by OpenAI. I can answer your questions, summarize long pieces of text, and generate human-like text based on my training data. I am designed to assist with a wide range of tasks, from writing emails to tutoring in math.
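When the chat answers as an OpenAI model, it helps to query the local Ollama API directly, bypassing the extension, to see which models are actually served. This is a sketch assuming Ollama's default port 11434; the actual host and port depend on how Ollama runs in the workspace:

```shell
# List the models the local Ollama instance actually serves (default port 11434).
# If this works but the chat still answers as an OpenAI model, the extension is
# most likely routing requests to a different backend despite detecting Ollama.
curl -s --max-time 5 http://localhost:11434/api/tags \
  || echo "ollama API not reachable on localhost:11434"
```

If the granite model shows up in `/api/tags` yet the chat responds as OpenAI, the problem is in the extension's model selection rather than in Ollama itself.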
In terms of Dev Spaces, if ollama is expected to be used, it should be defined as part of the devfile, as described in https://developers.redhat.com/articles/2024/08/12/integrate-private-ai-coding-assistant-ollama
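Following the referenced article, the fragment below sketches the general shape of such a devfile: ollama runs as its own container component, so nothing has to be installed into the tooling container at runtime. The image, memory limit, and component name here are illustrative assumptions, not values taken from the article:

```yaml
# Illustrative devfile fragment (values are assumptions, not a tested config):
schemaVersion: 2.2.0
metadata:
  name: workspace-with-ollama
components:
  - name: ollama
    container:
      image: docker.io/ollama/ollama:latest   # runs the Ollama server as a sidecar
      memoryLimit: 6Gi
      endpoints:
        - name: ollama
          targetPort: 11434                   # Ollama's default API port
          exposure: internal
```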
To sum up, the extension detects ollama and seems to configure the granite model, but in reality another model is used, as the steps above show.