Closed: langyajiekou closed this issue 7 months ago.
Thanks for the suggestion, that would be a great feature.
Give me a couple of days... 🙂
@langyajiekou I've now added support for local models that are accessible via the API endpoint that Ollama exposes. Please test it out and let me know if you run into any issues.
The quality of the outputs is of course highly dependent on the capability of the local model, but I've tested Mistral 7B and Gemma and both generated reasonable scenarios.
📝 A note for anyone reading this - Ollama is only supported when it and the Streamlit app are running on the same local machine.
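For anyone wondering what "the API endpoint that Ollama exposes" looks like in practice, here is a minimal sketch of calling a locally running Ollama server from Python. It assumes Ollama is serving on its default port (11434) on the same machine; the model name, prompt, and function name are just placeholders for illustration, not the app's actual code.

```python
# Minimal sketch: send a prompt to a local Ollama server via its HTTP API.
# Assumes Ollama is running on the same machine on the default port 11434
# and that the chosen model (e.g. "mistral") has already been pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "mistral") -> str:
    """Send a single prompt to the local Ollama API and return the generated text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # With "stream": False, Ollama returns one JSON object whose "response"
    # field holds the full completion.
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("Write a short test scenario for a login page."))
```

Because the endpoint is plain HTTP on localhost, this only works when the app and Ollama share the same machine, which is why the note above applies.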
How can I add support for a local Ollama model?