If I make a request to the ollama-service at http://localhost:11434 (from the host), I get a correct response.
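For example, a chat call of this shape through the Python client returns a normal answer (the prompt is just an illustration):

```python
# Working call from the host: Ollama's API published on localhost:11434
from ollama import Client

client = Client(host='http://localhost:11434')
response = client.chat(
    model='llama3',
    messages=[{'role': 'user', 'content': 'Hello'}],
)
print(response['message']['content'])
```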
If I make the same request from another Docker container, through this FastAPI app:
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from ollama import Client

# The client targets the compose service name instead of localhost
client = Client(host='http://ollama-service:11434')

app = FastAPI()


class UserInput(BaseModel):
    input: str


@app.post("/process")
def process_input(user_input: UserInput):
    model_name = "llama3"
    messages = [
        {
            'role': 'user',
            'content': user_input.input,
        },
    ]
    try:
        # Forward the user input to Ollama and return the model's reply
        response = client.chat(model=model_name, messages=messages)
        result = response['message']['content']
        return {"answer": result}
    except Exception as e:
        raise HTTPException(
            status_code=500,
            detail=f"Error processing the request with Ollama: {str(e)}",
        )
```
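The endpoint is called like this (the host port the FastAPI container is published on is an assumption):

```python
# Hypothetical client call; assumes the app is published on host port 8000
import requests

r = requests.post('http://localhost:8000/process', json={'input': 'Hello'})
print(r.json())
```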
I get the error:

```
model 'llama3' not found, try pulling it first
```
If I run this FastAPI app outside of Docker and replace 'ollama-service' with 'localhost' in the client host, everything works fine.
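That is, the only change is the host the client is constructed with:

```python
# Same app running directly on the host; only the client host differs
client = Client(host='http://localhost:11434')
```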
Docker compose file
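A minimal sketch of the kind of compose setup in play (the service name ollama-service comes from the code above; the image, ports, volumes, and build context are assumptions):

```yaml
# Sketch only: ollama-service matches the client code; everything else is assumed
services:
  ollama-service:
    image: ollama/ollama
    ports:
      - "11434:11434"              # also published on the host, hence the working localhost test
    volumes:
      - ollama-data:/root/.ollama  # where pulled models are stored

  fastapi-app:
    build: .                       # assumed: Dockerfile below
    ports:
      - "8000:8000"
    depends_on:
      - ollama-service

volumes:
  ollama-data:
```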
Dockerfile
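And a minimal sketch of the FastAPI app's Dockerfile (base image and filenames are assumptions):

```dockerfile
# Sketch only: assumes main.py holds the FastAPI app shown above
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # fastapi, uvicorn, ollama
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```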