The Gemma2B model is provided by Ollama, vLLM, and possibly others. Originally, we included the Ollama version as ragna.assistants.Gemma2B, but that name would conflict if vLLM support were added later. To prevent that possibility, I renamed it to ragna.assistants.OllamaGemma2B. The same issue applies to other models provided by Ollama, but only Gemma2B is currently included in Ragna.
It would be helpful to get others' opinions and ideas on this naming scheme.
@pmeier mentioned that we should make a decision before the next Ragna release.
I have an RFD in #450 that, if accepted, would make this discussion obsolete. We would have an OllamaAssistant and a VllmAssistant and just provide the respective model as a parameter.
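To illustrate the idea, here is a minimal sketch of what the per-provider-assistant-with-model-parameter design could look like. The class names, constructor signature, and model strings are assumptions for illustration, not the actual Ragna API or the contents of the RFD:

```python
# Hypothetical sketch of the RFD proposal: one assistant class per
# provider, with the model passed in as a parameter. Names are
# illustrative assumptions, not Ragna's real API.
from dataclasses import dataclass


@dataclass
class OllamaAssistant:
    """Assistant backed by an Ollama server; model is a parameter."""

    model: str


@dataclass
class VllmAssistant:
    """Assistant backed by a vLLM server; model is a parameter."""

    model: str


# The same underlying model no longer causes a class-name conflict,
# because the provider is encoded in the class, not the model name:
ollama_gemma = OllamaAssistant(model="gemma:2b")
vllm_gemma = VllmAssistant(model="google/gemma-2b")
```

With this shape, adding a new provider means adding one class rather than one class per model, which is what makes the OllamaGemma2B-style naming discussion moot.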