Closed: Claudioappassionato closed this issue 5 months ago.
Are you sure that your Ollama instance runs on this IP and port?
You can change the Ollama instance domain / IP in the docker-compose file.
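For illustration (a sketch, assuming the dev compose file quoted later in this thread, which substitutes the OLLAMA_HOST shell variable into the backend environment), pointing the stack at an Ollama instance on another machine could look like this; the IP is just an example:

OLLAMA_HOST=http://192.168.0.109:11434 docker compose up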
Just so you don't make the same mistake as me (not used to using Docker): Docker won't automagically connect that IP to your localhost 🤦🏼
Edit to be more helpful to future people:
OLLAMA_HOST=0.0.0.0 ollama serve
to listen on all interfaces.

For me, I was already hosting Ollama with OLLAMA_HOST=0.0.0.0, but I still had to manually pull all-minilm:v2 for the app to work, so I think there's still something up.
Also, this embedding model should really be configurable, I think? #21
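For reference, the manual pull mentioned in the previous comment, plus a check that the model is actually present, are standard ollama CLI invocations:

ollama pull all-minilm:v2
ollama list | grep all-minilm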
I changed this hard-coded string in llm_backends.go to include the v2 suffix and it worked:
func NewOllamaEmbeddingLLM() (*ollama.LLM, error) {
    modelName := "all-minilm:v2" // <-- add "v2"
    return NewOllama(modelName)
}
ohh, good spotting, PR submitted - https://github.com/nilsherzig/LLocalSearch/pull/25
Ah fuck, I'm so sorry for that :/
I really need some e2e tests for this project, it's so easy to miss something like that. Thanks :)
@nilsherzig you're pretty damn wonderful! Stuck on this for an hour, one pull... back in business. 🤟 THX @sammcj @texuf @eduardvercaemer @Claudioappassionato
Yeah that’s an amazing response! Thanks so much. Awesome project!
thanks and sorry for skipping your pull request haha
Sorry, but how can I figure out exactly what my Ollama IP is? In my Docker settings I found this: Resources > Network > "Configure the way Docker containers interact with the network", Docker subnet default: 192.168.65.0/24
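A quick way to answer this question: Ollama speaks plain HTTP on port 11434 and its root endpoint replies with a status string, so you can test from the host; note that containers normally reach the host via host.docker.internal (the default used in the compose file later in this thread), not via an address in the 192.168.65.0/24 Docker subnet:

curl http://localhost:11434/
# expected reply: Ollama is running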
I changed a lot of configs to more reasonable defaults since your first comment. Could you try pulling the repo again and checking if it works now? Good chance your problem got resolved :)
Sorry, but I'm not very good with code. I try to do my best. But if you explain to me step by step what I need to change, you'll make me happy.
Try deleting the whole project from your PC and running the tutorial / setup steps again :)
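A minimal sketch of that clean re-setup, assuming the usual git + docker compose workflow for this repo (adjust the path to wherever you cloned it):

cd ..
rm -rf LLocalSearch
git clone https://github.com/nilsherzig/LLocalSearch.git
cd LLocalSearch
docker compose up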
OLLAMA_HOST=0.0.0.0 ollama serve
I am still having this issue, even with the updated code. I don't know what to change or add.
You don't have to censor 192...* IPs, those are local ones behind your router. Please show me your Ollama start command; I assume you're not listening on the right interface.
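One way to see which interface Ollama is bound to, using standard Linux tooling (ss ships with most distributions):

ss -ltn | grep 11434
# 127.0.0.1:11434 means localhost only; 0.0.0.0:11434 or *:11434 means all interfaces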
Is this what you are looking for: [screenshot]
or this: [screenshot]
Or this: [screenshot]
Can you put text instead of images? 😊💕
Oh sorry, I felt like images were just good descriptors.
package utils

import (
    "context"
    "fmt"
    "log/slog"
    "os"

    "github.com/google/uuid"
    "github.com/ollama/ollama/api"
    "github.com/tmc/langchaingo/llms/ollama"
)

func NewOllamaEmbeddingLLM() (*ollama.LLM, error) {
    modelName := "all-minilm:v2"
    return NewOllama(modelName)
}

func NewOllama(modelName string) (*ollama.LLM, error) {
    return ollama.New(
        ollama.WithModel(modelName),
        ollama.WithServerURL(os.Getenv("OLLAMA_HOST")),
        ollama.WithRunnerNumCtx(16000),
    )
}
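Since NewOllama reads the server URL from the OLLAMA_HOST environment variable, one quick sanity check (a sketch, assuming the compose service is named backend as in the file below) is to confirm the variable actually reached the container:

docker compose exec backend env | grep OLLAMA_HOST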
This is part of the dev docker-compose YAML file:
services:
  backend:
    volumes:
      - ./backend/:/app/
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://host.docker.internal:11434}
      - CHROMA_DB_URL=${CHROMA_DB_URL:-http://chromadb:8000}
      - SEARXNG_DOMAIN=${SEARXNG_DOMAIN:-http://searxng:8080}
      - MAX_ITERATIONS=${MAX_ITERATIONS:-30}
    networks:
      - llm_network_dev

  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - ./:/app/
    ports:
      - '3000:5173'
    networks:
      - llm_network_dev

  chromadb:
    image: chromadb/chroma
    networks:
      - llm_network_dev
    # attach: false
    # logging:
    #   driver: none
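To test connectivity from inside the backend container with this config, something like the following may help; it assumes the backend image ships wget (or curl), which is not guaranteed:

docker compose exec backend sh -c 'wget -qO- "$OLLAMA_HOST"'
# expected reply: Ollama is running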
Eureka, it works hahaha. Thank you with all my heart, I'm happy that someone helped me. The only thing is that it's very slow and the page jumps up and down during the search. Anyway, thanks.
I don't think the issue is downloading MiniLM, because it is just a sentence transformer: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
Update 1: I fixed the issue. To solve it, I had to have Ollama actively running and a model downloaded ahead of time.
Update 2: It seems like the model is not responding at all: [screenshot]
Or this: [screenshot]
I've tried everything and I don't know what to do anymore. I always get this error: Model all-minilm:v2 does not exist and could not be pulled: Post "http://192.168.0.109:11434/api/pull": dial tcp 192.168.0.109:11434: connect: connection refused
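For what it's worth, "connection refused" means nothing answered on 192.168.0.109:11434 at all. Two standard checks that may narrow it down:

# on the machine hosting Ollama, make it listen on all interfaces:
OLLAMA_HOST=0.0.0.0 ollama serve

# then, from the machine running Docker, hit Ollama's model-list endpoint:
curl http://192.168.0.109:11434/api/tags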