rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs
https://www.farfalle.dev/
Apache License 2.0
2.2k stars · 164 forks

500: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.','type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}} #56

Open · vvstubbs opened 1 week ago

vvstubbs commented 1 week ago

As the title indicates, it says I don't have OpenAI quota, but I don't want to use OpenAI or anything else outside my own system. I have a local Ollama with many local models, which works with standard chat clients, and I have a local SearXNG; everything is in-house except the internet access needed for search. My network has everything local software could ever need: the latest Ollama engine, two types of databases, Redis, Memcached, etc., all running bare metal. I don't understand where I'm going wrong. Below is my compose file.

networks:
  default:
    driver: bridge
  docker4:
    name: docker4
    external: true
    driver: bridge

services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile
    restart: always
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://192.168.0.191:11434}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - BING_API_KEY=${BING_API_KEY}
      - SERP_API_KEY=${SERP_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}
      - SEARCH_PROVIDER=${SEARCH_PROVIDER:-searxng}
      - SEARXNG_BASE_URL=${SEARXNG_BASE_URL:-http://host.docker.internal:8080}
      - REDIS_URL=${REDIS_URL:-redis://192.168.0.200:6379}
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
    extra_hosts:
      - "host.docker.internal:host-gateway"

  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile
    restart: always
    environment:
      - NEXT_PUBLIC_API_URL=http://192.168.0.191:8000
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=true
    ports:
      - "3009:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/

  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    networks:
      - docker4
    ports:
      - "8082:8080"
    volumes:
      - ./searxng:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=http://127.0.0.1:8082
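
(A quick sanity check, not part of the original report: with the searxng service publishing "8082:8080", a sketch like the one below, assuming curl is available on the host, confirms the instance actually answers on the host side. The IP is taken from the .env further down.)

# Sketch: confirm SearXNG responds on the published host port 8082.
curl -sI http://192.168.0.190:8082 | head -n 1    # expect an HTTP 200 (or 3xx) status line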

The .env file:

OLLAMA_HOST=http://192.168.0.191:11434
OPENAI_API_KEY=
TAVILY_API_KEY=
BING_API_KEY=
SERP_API_KEY=
GROQ_API_KEY=
SEARXNG_BASE_URL=http://192.168.0.190:8082
REDIS_URL=redis://192.168.0.200:6379
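
(Another aside that is not from the thread itself: a minimal sketch for confirming the two non-empty endpoints above are reachable from the Docker host, assuming curl and redis-cli are installed there.)

# Sketch: verify the .env endpoints respond from the Docker host.
curl -s http://192.168.0.191:11434                 # expect: "Ollama is running"
redis-cli -u redis://192.168.0.200:6379 ping       # expect: PONG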

The response from http://192.168.0.191:11434:

Ollama is running
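
(That banner only shows the HTTP server is up. As a further check of my own, not something the reporter ran, Ollama's /api/tags endpoint lists the locally pulled models:)

# Sketch: list the models this Ollama instance can actually serve.
curl -s http://192.168.0.191:11434/api/tags        # JSON listing of installed models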
rashadphz commented 1 week ago

Have you switched to local mode in the UI?

vvstubbs commented 1 week ago

Well, I do feel a bit stupid now. In my defense, I was reading that "switch" the opposite way around. It is now using my local Ollama, but after a while it returns 500: and nothing else, so I'm presuming a timeout.
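
(One way to test the timeout theory, my addition rather than advice given in the thread: time a generation request directly against Ollama, bypassing Farfalle entirely. "llama3" is a placeholder for whatever model is actually pulled locally.)

# Sketch: measure raw model latency via Ollama's /api/generate endpoint.
# "llama3" is a placeholder model name; substitute your own.
time curl -s http://192.168.0.191:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Say hi", "stream": false}'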

rashadphz commented 3 days ago

Hmm, that's weird. Can you try running ollama serve and then making a query from Farfalle? There might be something in the ollama serve logs to look at.
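
(For readers following along, a minimal sketch of that suggestion, assuming the standard Ollama environment variables: OLLAMA_DEBUG=1 turns on verbose logging, and OLLAMA_HOST sets the bind address so the backend container can still reach the server at 192.168.0.191.)

# Sketch: run Ollama in the foreground with debug logging, then retry the query
# from Farfalle and watch this terminal for errors or stalls.
OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0:11434 ollama serve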