Open — root-reindeer-flotilla opened this issue 1 month ago
Hello @root-reindeer-flotilla! I'm here to assist you with any bugs or questions you have about Quivr. Let's get this sorted out together!
I found a similar open issue that might help you resolve your problem: "[Bug]: Not able to login. Dead home page." The suggested solution involves several steps:
1. **Environment Configuration**: Ensure that `NEXT_PUBLIC_FRONTEND_URL` in your environment configuration matches your application's public URL, including the correct protocol and port. For your setup, this means including `http://` and specifying port `3000` if that's what you're using.
2. **Browser Settings**: Make sure JavaScript is enabled in your browser. Check the browser's console for any JavaScript errors that might reveal more about the issue.
3. **Network Requests**: Use your browser's developer tools to monitor network requests when attempting to log in. Look for any requests that fail or return error responses.
4. **Service Status**: Confirm that all related services, including the Docker containers and Supabase, are running correctly: check that the Docker containers are up and that the Supabase services are accessible.
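The service-status step above can be sketched as a quick probe from any machine on your LAN. This is a sketch, not part of the Quivr tooling; the IP and ports are the ones mentioned in this thread, so adjust them to your setup:

```shell
# Probe each published port. A TCP connection that completes (even with an
# HTTP 404) counts as reachable; "NOT reachable" means connection refused
# or timed out, which points at Docker port publishing or a firewall.
check_ports() {
  HOST_IP=10.10.10.101   # assumed server IP from this thread
  for PORT in 3000 5050 54321; do
    if curl -s -o /dev/null --max-time 5 "http://${HOST_IP}:${PORT}"; then
      echo "port ${PORT}: reachable"
    else
      echo "port ${PORT}: NOT reachable"
    fi
  done
}
```

Run `check_ports`; if 5050 and 54321 answer but 3000 does not, the problem is isolated to the frontend container.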
Additionally, another issue might be related to the configuration of your environment variables in the `.env` file. Specifically, `NEXT_PUBLIC_BACKEND_URL` and `NEXT_PUBLIC_SUPABASE_URL` should be set to the IP address of your server instead of `localhost`. Here is the corrected configuration:

```
NEXT_PUBLIC_BACKEND_URL=http://10.10.10.101:5050
NEXT_PUBLIC_SUPABASE_URL=http://10.10.10.101:54321
```

Make sure to update these values in your `.env` file and restart the services for the changes to take effect [1][2].
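One caveat about restarting: in the compose file quoted later in this thread, the `NEXT_PUBLIC_*` values are passed as build args, and Next.js inlines them into the frontend bundle at build time, so a plain restart may not pick up the new values. A sketch of the full cycle, assuming Docker Compose v2 syntax:

```shell
# Rebuild the frontend image so the new NEXT_PUBLIC_* build args are
# baked into the Next.js bundle, then bring everything back up.
rebuild_and_restart() {
  docker compose down
  docker compose build frontend
  docker compose up -d
}
```

With the older standalone binary, substitute `docker-compose` for `docker compose`.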
For the frontend appearing incomplete, ensure that all necessary environment variables are correctly set and that your Docker configuration is accurate. Verify the `dockerfile` path and build context in the `docker-compose.yml` file. Also, check the deployment logs for any errors during the build and deployment process [3].
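For checking those logs, a minimal sketch (Docker Compose v2 syntax; service names are the ones from the compose file quoted later in this thread):

```shell
# Tail recent logs from the two services involved in the login flow.
show_recent_logs() {
  docker compose logs --tail=100 frontend backend-api
}
```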
By following these steps, you should be able to identify and resolve the issues preventing login and causing the frontend to appear incomplete.
To continue talking to Dosu, mention @dosu.
@dosu (I am aware this is an AI)
Changing those did not do anything.
I think you misunderstood. It's still on my local network, and the internal IP address is 10.10.10.101. The backend/Supabase IP is still local to the frontend, i.e. http://10.10.10.101:5050 is the same host as http://localhost:5050.
My working theory is that docker isn't properly forwarding ports from the internal containers but I could be completely wrong. Any other ideas to try?
The issue with Docker not properly forwarding ports from the internal containers, preventing login and causing the frontend to appear incomplete, could be due to several reasons. Here are some potential causes and solutions:
1. **Port Configuration**: Ensure that the ports are correctly mapped and not being blocked by any firewall or network settings. The `docker-compose.yml` file shows the following port mappings: `3000:3000`, `5050:5050`, `5555:5555`, `6379:6379`, and `5678:5678` (from `docker-compose.dev.yml`).
2. **Environment Variables**: The frontend service relies on several environment variables (`NEXT_PUBLIC_BACKEND_URL`, `NEXT_PUBLIC_SUPABASE_URL`, etc.). Ensure these variables are correctly set and accessible.
3. **Network Configuration**: The services are connected via a custom bridge network (`quivr-network`). Ensure that this network is properly created and that there are no conflicts with other networks.
4. **Service Dependencies**: The frontend service depends on the backend API service. Ensure that the backend API service is running and healthy. The health check for the backend API is configured to check `http://localhost:5050/healthz`.
5. **Docker Host Configuration**: The `extra_hosts` configuration is used to map `host.docker.internal` to `host-gateway`. Ensure that this mapping is correct and that the host gateway is accessible.
6. **Volume Mounts**: Ensure that the volume mounts are correctly set up, especially for services like `backend-api`, `notifier`, `worker`, `beat`, and `flower`, which have volumes mounted from the host into the containers so that the latest code changes are reflected inside them.
Here is the relevant part of the `docker-compose.yml` and `docker-compose.dev.yml` files for reference:
```yaml
services:
  frontend:
    image: quivr-frontend-prebuilt
    pull_policy: never
    build:
      context: frontend
      dockerfile: Dockerfile
      args:
        - NEXT_PUBLIC_ENV=local
        - NEXT_PUBLIC_BACKEND_URL=${NEXT_PUBLIC_BACKEND_URL}
        - NEXT_PUBLIC_SUPABASE_URL=${NEXT_PUBLIC_SUPABASE_URL}
        - NEXT_PUBLIC_SUPABASE_ANON_KEY=${NEXT_PUBLIC_SUPABASE_ANON_KEY}
        - NEXT_PUBLIC_CMS_URL=${NEXT_PUBLIC_CMS_URL}
        - NEXT_PUBLIC_FRONTEND_URL=${NEXT_PUBLIC_FRONTEND_URL}
        - NEXT_PUBLIC_AUTH_MODES=${NEXT_PUBLIC_AUTH_MODES}
    container_name: web
    depends_on:
      - backend-api
    restart: always
    ports:
      - 3000:3000

  backend-api:
    image: quivr-backend-api:latest
    pull_policy: if_not_present
    env_file:
      - .env
    build:
      context: backend
      dockerfile: Dockerfile
    container_name: backend-api
    extra_hosts:
      - "host.docker.internal:host-gateway"
    healthcheck:
      test: [ "CMD", "curl", "http://localhost:5050/healthz" ]
    command: >
      /bin/bash -c "python -m uvicorn quivr_api.main:app --host 0.0.0.0 --log-level info --reload --port 5050 --loop uvloop"
    restart: always
    ports:
      - 5050:5050

  notifier:
    image: backend-api:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    env_file:
      - .env
    container_name: notifier
    volumes:
      - ./backend/:/app/
    command: >
      /bin/bash -c "python /app/worker/quivr_worker/celery_monitor.py"
    restart: always
    depends_on:
      - redis
      - worker

  worker:
    image: backend-api:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    env_file:
      - .env
    container_name: worker
    volumes:
      - ./backend/:/app/
    command: >
      /bin/bash -c "python -m celery -A quivr_worker.celery_worker worker -l info -E"
    restart: always
    depends_on:
      - redis

  redis:
    image: redis:latest
    container_name: redis
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: always
    ports:
      - 6379:6379

  beat:
    image: backend-api:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    env_file:
      - .env
    container_name: beat
    command: >
      /bin/bash -c "python -m celery -A quivr_worker.celery_worker beat -l info"
    volumes:
      - ./backend/:/app/
    restart: always
    depends_on:
      - redis

  flower:
    image: backend-api:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    env_file:
      - .env
    container_name: flower
    command: >
      /bin/bash -c "python -m celery -A quivr_worker.celery_worker flower -l info --port=5555"
    volumes:
      - ./backend/:/app/
    restart: always
    depends_on:
      - redis
      - worker
      - beat
    ports:
      - 5555:5555
```
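One detail in the quoted healthcheck worth tightening: without `-f`, `curl` exits 0 even when the backend returns an HTTP error status, so the container can report healthy while `/healthz` is actually failing. A hedged alternative (the `interval`/`timeout`/`retries` values below are illustrative, not from the original file):

```yaml
healthcheck:
  # -f makes curl exit non-zero on HTTP 4xx/5xx, so a failing /healthz
  # actually marks the container unhealthy.
  test: [ "CMD", "curl", "-f", "http://localhost:5050/healthz" ]
  interval: 10s
  timeout: 5s
  retries: 5
```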
Check these configurations and ensure that all services are running correctly and that the ports are accessible from your local network setup with IP `10.10.10.101` [1][2].
Hi, you mentioned that you once got everything working on your local machine. Do you mind sharing more details of what models you used (such as ChatGPT, Ollama) and how you added the models to Quivr?
What happened?
I installed Quivr on my local Windows machine with Docker Desktop. I got everything to work yesterday after a few hours. I then decided to move my install to my server so I can access it over my network. I'm using Proxmox, on which I set up a Debian container with Docker installed. I set up everything mostly the same, but I'm getting this very weird frontend bug: I try to log in with admin@quivr.app / admin, and nothing happens at all.
The Supabase install works well, and even 10.10.10.101:5050/docs#/ works; I can visit it like normal.
Does anyone have any ideas?
Relevant log output
No response