Open LisaGsGithub opened 1 month ago
Thank you for your work. I am having issues with an unstable connection when using the Ollama API. Is it possible to use your code with a local Ollama model?
Hi @LisaGsGithub,
thank you very much!
What do you mean by "Ollama API"? Are you running the Ollama application (https://ollama.com/download) on a web server, or have you installed it locally on your PC/laptop?
Let me guess what the issue is:
I am running the Ollama server locally on my laptop, and I develop and test the ollama-instructor repository against that local server (http://localhost:11434). So yes, you can use ollama-instructor or the ollama client itself with a local Ollama model (e.g. llama3.2).
Example for the Ollama client:

    from ollama import Client

    client = Client(host="http://localhost:11434")

    response = client.chat(
        model='llama3.2',
        messages=[
            {
                'role': 'user',
                'content': 'Why is the sky blue?'
            }
        ]
    )

    print(response['message']['content'])
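
By the way, since your problem sounds like a connection issue: it can help to first check that the local server is actually reachable and that the model is pulled. A minimal sketch of my own (not part of ollama-instructor), assuming the Ollama server listens on the default port 11434:

    from ollama import Client

    # Connect to the locally running Ollama server (default address).
    client = Client(host="http://localhost:11434")

    # list() returns the models available locally; if this call raises an
    # error, the server is not running or not reachable at that address.
    print(client.list())

If the model you want to use (e.g. llama3.2) does not show up there, pull it first on the command line with ollama pull llama3.2.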
As ollama-instructor is just a wrapper around the Ollama client, it is designed to be used with the Ollama server. Example for the use of ollama-instructor:
    from ollama_instructor.ollama_instructor_client import OllamaInstructorClient
    from pydantic import BaseModel

    class Person(BaseModel):
        name: str
        age: int

    client = OllamaInstructorClient(...)

    response = client.chat_completion(
        model='phi3',
        pydantic_model=Person,
        messages=[
            {
                'role': 'user',
                'content': 'Jason is 30 years old.'
            }
        ]
    )

    print(response['message']['content'])
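
To address the unstable connection directly: since ollama-instructor only wraps the Ollama client, you should be able to point it at your local server in the same way as above. A rough sketch, assuming the host argument is passed through to the underlying client (please double-check the constructor of your installed version):

    from ollama_instructor.ollama_instructor_client import OllamaInstructorClient
    from pydantic import BaseModel

    class Person(BaseModel):
        name: str
        age: int

    # Assumption: the wrapper accepts the same host argument as ollama.Client
    # and talks to the local server instead of a remote API.
    client = OllamaInstructorClient(host="http://localhost:11434")

    response = client.chat_completion(
        model='llama3.2',  # any model you have pulled locally
        pydantic_model=Person,
        messages=[{'role': 'user', 'content': 'Jason is 30 years old.'}]
    )
    print(response['message']['content'])

That way no external API is involved at all, so an unstable connection to a remote service should no longer matter.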
Did I answer your question, or did I misinterpret it?