Is there any way we can have support for Codestral Mamba? llama.cpp doesn't support Codestral Mamba (and neither does Ollama).
So I'm checking here whether there is another viable solution that could be implemented to support Codestral Mamba.
First you'd need to deploy the model somehow; neither Emacs nor gptel can help with that. Once it's running, gptel can support it, as long as the deployment exposes an HTTP API endpoint.
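For example, if you deploy the model behind an OpenAI-compatible HTTP API (e.g. with an inference server such as vLLM), a minimal sketch of the gptel side could look like this. The host, port, and model name here are assumptions about your deployment, not something gptel provides:

```elisp
;; A minimal sketch: register a locally deployed Codestral Mamba
;; endpoint with gptel, assuming an OpenAI-compatible server is
;; listening on localhost:8000 (host, port, and model name are
;; placeholders for your own deployment).
(gptel-make-openai "Codestral-Mamba"
  :host "localhost:8000"
  :protocol "http"
  :endpoint "/v1/chat/completions"
  :stream t
  :models '("codestral-mamba"))
```

After that, the backend shows up alongside the others in gptel's menu, so no Codestral-specific support is needed in gptel itself.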
Codestral, while producing great output quality, is really slow.
Codestral Mamba looks to improve on speed without sacrificing result quality: https://mistral.ai/news/codestral-mamba/