This makes it possible to use Prompta as a client for local LLMs. One way to do this is to run Ollama behind LiteLLM, which exposes an OpenAI-compatible API that Prompta can talk to. For example:
```sh
# (run each command in a separate terminal)
ollama serve                                # start the Ollama server
ollama pull llama2-uncensored               # download the model
litellm --api_base http://localhost:11434   # start the OpenAI-compatible proxy
```
You can then set the base URL in Prompta's settings to `http://<your-ollama-ip>:8000/v1`, set the model name to `ollama/llama2-uncensored`, and chat with llama2-uncensored through Prompta.
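To confirm the proxy is working before pointing Prompta at it, you can send a test request to LiteLLM's OpenAI-compatible chat completions endpoint. This is a minimal sketch, assuming LiteLLM is listening on its default port 8000 on the same machine:

```sh
# test chat completion routed through LiteLLM to the local Ollama model
# (assumes LiteLLM is running on localhost:8000, as started above)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama/llama2-uncensored",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

If this returns a JSON completion, Prompta should work with the same base URL and model name.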