BluePhi09 closed this issue 1 day ago
This should be easy:

```python
from langchain.llms import Ollama

# Point at the local Ollama server (default port 11434)
ollama = Ollama(base_url='http://localhost:11434', model="llama2")
prompt = "Your prompt goes here"
response = ollama(prompt)
```

Cheers!
```python
from langchain.llms import Ollama

# Connect to a locally running Ollama instance serving llama2
ollama = Ollama(base_url='http://localhost:11434', model="llama2")

prompt = "Your prompt goes here"
response = ollama(prompt)  # returns the model's completion as a string
print(response)
```
Where do we add this?
Same question: how do we use this to run Ollama with SuperAGI? Thank you.
You can now just use Ollama's OpenAI-compatible API.
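For anyone looking for a concrete example, here is a minimal sketch using the official `openai` Python client against Ollama's OpenAI-compatible endpoint, assuming Ollama is serving on its default port; the `api_key` value is a placeholder, since Ollama does not validate it but the client requires one:

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible endpoint under /v1 on its default port
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # placeholder; Ollama ignores the key
)

response = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "Your prompt goes here"}],
)
print(response.choices[0].message.content)
```

Any tool that lets you override the OpenAI base URL (which should include SuperAGI, if it exposes that setting) can be pointed at this endpoint the same way.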
Is there any way to use Ollama to host the LLMs?
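Yes, hosting is what Ollama does: `ollama pull llama2` downloads the weights and `ollama serve` runs the local server (the desktop app starts it automatically). Once it's running, any client can call its native REST API. A minimal sketch, assuming the default port and the llama2 model already pulled:

```python
import requests

# Ollama's native generate endpoint; stream=False returns a single JSON object
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Your prompt goes here", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text
```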