QuivrHQ / quivr

Open-source RAG framework for building GenAI second brains 🧠. Build a productivity assistant (RAG) ⚡️🤖 and chat with your docs (PDF, CSV, ...) and apps using Langchain, GPT 3.5 / 4 Turbo, Anthropic, VertexAI, Ollama, Groq, and private LLMs, then share it with your users. An efficient retrieval-augmented generation framework.
https://quivr.com

[Bug]: I have deployed it on the cloud server, but the interface keeps throwing errors. #1652

Closed: xiaoqib closed this issue 8 months ago

xiaoqib commented 10 months ago

What happened?

A bug happened! The error message is: Error occurred while fetching your chats

Relevant log output

root@iZ6we7nwen3zaknisrym6xZ:~/quivr# docker-compose -f docker-compose.local.yml up
redis is up-to-date
traefik is up-to-date
beat is up-to-date
worker is up-to-date
web is up-to-date
backend-core is up-to-date
flower is up-to-date
Attaching to redis, traefik, beat, worker, web, backend-core, flower
backend-core    | INFO:     Will watch for changes in these directories: ['/code']
backend-core    | WARNING:  "workers" flag is ignored when reloading is enabled.
backend-core    | INFO:     Uvicorn running on http://0.0.0.0:5050 (Press CTRL+C to quit)
backend-core    | INFO:     Started reloader process [1] using StatReload
backend-core    | INFO:     Started server process [7]
backend-core    | INFO:     Waiting for application startup.
backend-core    | [INFO] Downloading pandoc from https://github.com/jgm/pandoc/releases/download/3.1.9/pandoc-3.1.9-1-amd64.deb ...
backend-core    | 2023-11-16 19:01:03,993:INFO - Downloading pandoc from https://github.com/jgm/pandoc/releases/download/3.1.9/pandoc-3.1.9-1-amd64.deb ...
backend-core    | [INFO] Unpacking pandoc-3.1.9-1-amd64.deb to tempfolder...
backend-core    | 2023-11-16 19:01:05,825:INFO - Unpacking pandoc-3.1.9-1-amd64.deb to tempfolder...
backend-core    | [INFO] Copying pandoc to /root/bin ...
backend-core    | 2023-11-16 19:01:06,892:INFO - Copying pandoc to /root/bin ...
backend-core    | [INFO] Making /root/bin/pandoc executeable...
backend-core    | 2023-11-16 19:01:07,004:INFO - Making /root/bin/pandoc executeable...
backend-core    | [INFO] Copying copyright to /root/bin ...
backend-core    | 2023-11-16 19:01:07,005:INFO - Copying copyright to /root/bin ...
backend-core    | INFO:     Application startup complete.
backend-core    | INFO:     172.18.0.2:56616 - "GET /.git/config HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:56618 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core    | INFO:     172.18.0.2:45804 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:34654 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:49182 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core    | INFO:     172.18.0.2:46670 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:46670 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:54740 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:54740 - "HEAD /favicon.ico HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:45068 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:45068 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:55704 - "GET /robots.txt HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:55704 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:43258 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:49698 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:49698 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:40356 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:47018 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:50948 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:50948 - "GET /Public/home/js/check.js HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:50948 - "GET /static/admin/javascript/hetong.js HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:34062 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core    | INFO:     172.18.0.2:50430 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:50430 - "GET /Public/home/js/check.js HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:50430 - "GET /static/admin/javascript/hetong.js HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:55734 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:55734 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:57138 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:57138 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:50866 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:50866 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:36076 - "HEAD / HTTP/1.1" 405 Method Not Allowed
backend-core    | INFO:     172.18.0.2:52514 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:38860 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:54758 - "GET / HTTP/1.1" 200 OK
backend-core    | INFO:     172.18.0.2:54758 - "GET /favicon.ico HTTP/1.1" 404 Not Found
backend-core    | INFO:     172.18.0.2:55762 - "GET / HTTP/1.1" 200 OK
flower          | 2023-11-16 19:00:59,260:INFO - Visit me at http://0.0.0.0:5555
flower          | 2023-11-16 19:00:59,265:INFO - Broker: redis://redis:6379/0
flower          | 2023-11-16 19:00:59,268:INFO - Registered tasks: 
flower          | ['celery.accumulate',
flower          |  'celery.backend_cleanup',
flower          |  'celery.chain',
flower          |  'celery.chord',
flower          |  'celery.chord_unlock',
flower          |  'celery.chunks',
flower          |  'celery.group',
flower          |  'celery.map',
flower          |  'celery.starmap',
flower          |  'celery_task.create_embedding_for_document',
flower          |  'celery_worker.remove_onboarding_more_than_x_days_task',
flower          |  'process_crawl_and_notify',
flower          |  'process_file_and_notify']
flower          | 2023-11-16 19:00:59,272:INFO - Connected to redis://redis:6379/0
flower          | 2023-11-16 19:01:00,298:WARNING - Inspect method registered failed
flower          | 2023-11-16 19:01:00,301:WARNING - Inspect method active_queues failed
flower          | 2023-11-16 19:01:00,307:WARNING - Inspect method scheduled failed
flower          | 2023-11-16 19:01:00,308:WARNING - Inspect method stats failed
flower          | 2023-11-16 19:01:00,312:WARNING - Inspect method active failed
flower          | 2023-11-16 19:01:00,313:WARNING - Inspect method reserved failed
beat            | [2023-11-16 19:00:58,577: INFO/MainProcess] beat: Starting...
beat            | [2023-11-17 00:00:00,002: INFO/MainProcess] Scheduler: Sending due task remove_onboarding_more_than_x_days_task (celery_worker.remove_onboarding_more_than_x_days_task)
beat            | [2023-11-18 00:00:00,000: INFO/MainProcess] Scheduler: Sending due task remove_onboarding_more_than_x_days_task (celery_worker.remove_onboarding_more_than_x_days_task)
redis           | 1:C 16 Nov 2023 19:00:53.811 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis           | 1:C 16 Nov 2023 19:00:53.811 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis           | 1:C 16 Nov 2023 19:00:53.811 * Redis version=7.2.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis           | 1:C 16 Nov 2023 19:00:53.811 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis           | 1:M 16 Nov 2023 19:00:53.811 * monotonic clock: POSIX clock_gettime
redis           | 1:M 16 Nov 2023 19:00:53.812 * Running mode=standalone, port=6379.
redis           | 1:M 16 Nov 2023 19:00:53.812 * Server initialized
redis           | 1:M 16 Nov 2023 19:00:53.812 * Ready to accept connections tcp
redis           | 1:M 16 Nov 2023 19:55:49.455 * 100 changes in 300 seconds. Saving...
redis           | 1:M 16 Nov 2023 19:55:49.455 * Background saving started by pid 19
redis           | 19:C 16 Nov 2023 19:55:49.459 * DB saved on disk
redis           | 19:C 16 Nov 2023 19:55:49.459 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 16 Nov 2023 19:55:49.556 * Background saving terminated with success
redis           | 1:M 16 Nov 2023 20:55:50.073 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 16 Nov 2023 20:55:50.074 * Background saving started by pid 20
redis           | 20:C 16 Nov 2023 20:55:50.077 * DB saved on disk
redis           | 20:C 16 Nov 2023 20:55:50.077 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 16 Nov 2023 20:55:50.174 * Background saving terminated with success
redis           | 1:M 16 Nov 2023 21:55:51.093 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 16 Nov 2023 21:55:51.094 * Background saving started by pid 21
redis           | 21:C 16 Nov 2023 21:55:51.097 * DB saved on disk
redis           | 21:C 16 Nov 2023 21:55:51.098 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 16 Nov 2023 21:55:51.194 * Background saving terminated with success
redis           | 1:M 16 Nov 2023 22:55:52.015 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 16 Nov 2023 22:55:52.016 * Background saving started by pid 22
redis           | 22:C 16 Nov 2023 22:55:52.019 * DB saved on disk
redis           | 22:C 16 Nov 2023 22:55:52.019 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 16 Nov 2023 22:55:52.116 * Background saving terminated with success
redis           | 1:M 16 Nov 2023 23:55:53.090 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 16 Nov 2023 23:55:53.090 * Background saving started by pid 23
redis           | 23:C 16 Nov 2023 23:55:53.093 * DB saved on disk
redis           | 23:C 16 Nov 2023 23:55:53.094 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 16 Nov 2023 23:55:53.190 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 00:55:54.058 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 00:55:54.058 * Background saving started by pid 24
redis           | 24:C 17 Nov 2023 00:55:54.062 * DB saved on disk
redis           | 24:C 17 Nov 2023 00:55:54.062 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 00:55:54.159 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 01:55:55.035 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 01:55:55.035 * Background saving started by pid 25
redis           | 25:C 17 Nov 2023 01:55:55.039 * DB saved on disk
redis           | 25:C 17 Nov 2023 01:55:55.039 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 01:55:55.136 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 02:55:56.000 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 02:55:56.001 * Background saving started by pid 26
redis           | 26:C 17 Nov 2023 02:55:56.004 * DB saved on disk
redis           | 26:C 17 Nov 2023 02:55:56.005 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 02:55:56.101 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 03:55:57.094 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 03:55:57.095 * Background saving started by pid 27
redis           | 27:C 17 Nov 2023 03:55:57.098 * DB saved on disk
redis           | 27:C 17 Nov 2023 03:55:57.098 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 03:55:57.195 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 04:55:58.014 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 04:55:58.014 * Background saving started by pid 28
redis           | 28:C 17 Nov 2023 04:55:58.018 * DB saved on disk
redis           | 28:C 17 Nov 2023 04:55:58.018 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 04:55:58.115 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 05:55:59.064 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 05:55:59.064 * Background saving started by pid 29
redis           | 29:C 17 Nov 2023 05:55:59.067 * DB saved on disk
redis           | 29:C 17 Nov 2023 05:55:59.067 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 05:55:59.164 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 06:56:00.036 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 06:56:00.036 * Background saving started by pid 30
redis           | 30:C 17 Nov 2023 06:56:00.040 * DB saved on disk
redis           | 30:C 17 Nov 2023 06:56:00.040 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 06:56:00.136 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 07:56:01.054 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 07:56:01.054 * Background saving started by pid 31
redis           | 31:C 17 Nov 2023 07:56:01.058 * DB saved on disk
redis           | 31:C 17 Nov 2023 07:56:01.058 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 07:56:01.154 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 08:56:02.024 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 08:56:02.025 * Background saving started by pid 32
redis           | 32:C 17 Nov 2023 08:56:02.028 * DB saved on disk
redis           | 32:C 17 Nov 2023 08:56:02.029 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 08:56:02.125 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 09:56:03.064 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 09:56:03.064 * Background saving started by pid 33
redis           | 33:C 17 Nov 2023 09:56:03.067 * DB saved on disk
redis           | 33:C 17 Nov 2023 09:56:03.068 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 09:56:03.164 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 10:56:04.062 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 10:56:04.062 * Background saving started by pid 34
redis           | 34:C 17 Nov 2023 10:56:04.066 * DB saved on disk
redis           | 34:C 17 Nov 2023 10:56:04.066 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 10:56:04.163 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 11:56:05.034 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 11:56:05.034 * Background saving started by pid 35
worker          | /usr/local/lib/python3.11/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
worker          | absolutely not recommended!
worker          | 
worker          | Please specify a different user using the --uid option.
worker          | 
worker          | User information: uid=0 euid=0 gid=0 egid=0
worker          | 
worker          |   warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
worker          |  
worker          |  -------------- celery@c365f830436b v5.2.7 (dawn-chorus)
worker          | --- ***** ----- 
worker          | -- ******* ---- Linux-5.15.0-86-generic-x86_64-with-glibc2.31 2023-11-16 19:00:58
worker          | - *** --- * --- 
worker          | - ** ---------- [config]
worker          | - ** ---------- .> app:         celery_worker:0x7ff576fa7dd0
worker          | - ** ---------- .> transport:   redis://redis:6379/0
worker          | - ** ---------- .> results:     redis://redis:6379/0
worker          | - *** --- * --- .> concurrency: 2 (prefork)
worker          | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
worker          | --- ***** ----- 
worker          |  -------------- [queues]
worker          |                 .> celery           exchange=celery(direct) key=celery
worker          |                 
worker          | 
worker          | [tasks]
worker          |   . celery_task.create_embedding_for_document
worker          |   . celery_worker.remove_onboarding_more_than_x_days_task
worker          |   . process_crawl_and_notify
worker          |   . process_file_and_notify
worker          | 
worker          | [2023-11-16 19:00:59,130: INFO/MainProcess] Connected to redis://redis:6379/0
worker          | [2023-11-16 19:00:59,135: INFO/MainProcess] mingle: searching for neighbors
worker          | [2023-11-16 19:01:00,158: INFO/MainProcess] mingle: all alone
worker          | [2023-11-16 19:01:00,168: INFO/MainProcess] celery@c365f830436b ready.
worker          | [2023-11-16 19:01:04,271: INFO/MainProcess] Events of group {task} enabled by remote.
worker          | [2023-11-17 00:00:00,008: INFO/MainProcess] Task celery_worker.remove_onboarding_more_than_x_days_task[46261aef-9722-48cf-9bfe-7869445ce991] received
worker          | [2023-11-17 00:00:00,477: INFO/ForkPoolWorker-2] HTTP Request: DELETE https://kmcofyegyrjeparmwfzg.supabase.co/rest/v1/onboardings?creation_time=lt.2023-11-10%2000%3A00%3A00.034599 "HTTP/1.1 200 OK"
worker          | [2023-11-17 00:00:00,482: INFO/ForkPoolWorker-2] Task celery_worker.remove_onboarding_more_than_x_days_task[46261aef-9722-48cf-9bfe-7869445ce991] succeeded in 0.47273045600013575s: None
worker          | [2023-11-18 00:00:00,002: INFO/MainProcess] Task celery_worker.remove_onboarding_more_than_x_days_task[a8b739ee-7ff7-45b9-b3c1-9ceff81974e4] received
worker          | [2023-11-18 00:00:00,323: INFO/ForkPoolWorker-2] HTTP Request: DELETE https://kmcofyegyrjeparmwfzg.supabase.co/rest/v1/onboardings?creation_time=lt.2023-11-11%2000%3A00%3A00.016979 "HTTP/1.1 200 OK"
worker          | [2023-11-18 00:00:00,324: INFO/ForkPoolWorker-2] Task celery_worker.remove_onboarding_more_than_x_days_task[a8b739ee-7ff7-45b9-b3c1-9ceff81974e4] succeeded in 0.3214102079946315s: None
redis           | 35:C 17 Nov 2023 11:56:05.038 * DB saved on disk
redis           | 35:C 17 Nov 2023 11:56:05.038 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 11:56:05.135 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 12:56:06.006 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 12:56:06.006 * Background saving started by pid 36
redis           | 36:C 17 Nov 2023 12:56:06.010 * DB saved on disk
redis           | 36:C 17 Nov 2023 12:56:06.010 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 12:56:06.106 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 13:56:07.054 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 13:56:07.054 * Background saving started by pid 37
redis           | 37:C 17 Nov 2023 13:56:07.058 * DB saved on disk
redis           | 37:C 17 Nov 2023 13:56:07.058 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 13:56:07.154 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 14:56:08.099 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 14:56:08.099 * Background saving started by pid 38
redis           | 38:C 17 Nov 2023 14:56:08.103 * DB saved on disk
redis           | 38:C 17 Nov 2023 14:56:08.103 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 14:56:08.200 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 15:56:09.014 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 15:56:09.015 * Background saving started by pid 39
redis           | 39:C 17 Nov 2023 15:56:09.018 * DB saved on disk
redis           | 39:C 17 Nov 2023 15:56:09.018 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 15:56:09.115 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 16:56:10.014 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 16:56:10.014 * Background saving started by pid 40
redis           | 40:C 17 Nov 2023 16:56:10.017 * DB saved on disk
redis           | 40:C 17 Nov 2023 16:56:10.018 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 16:56:10.114 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 17:56:11.031 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 17:56:11.032 * Background saving started by pid 41
redis           | 41:C 17 Nov 2023 17:56:11.035 * DB saved on disk
redis           | 41:C 17 Nov 2023 17:56:11.035 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 17:56:11.132 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 18:56:12.001 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 18:56:12.001 * Background saving started by pid 42
redis           | 42:C 17 Nov 2023 18:56:12.005 * DB saved on disk
redis           | 42:C 17 Nov 2023 18:56:12.006 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 18:56:12.102 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 19:56:13.046 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 19:56:13.046 * Background saving started by pid 43
redis           | 43:C 17 Nov 2023 19:56:13.049 * DB saved on disk
redis           | 43:C 17 Nov 2023 19:56:13.050 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 19:56:13.146 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 20:56:14.047 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 20:56:14.047 * Background saving started by pid 44
redis           | 44:C 17 Nov 2023 20:56:14.051 * DB saved on disk
redis           | 44:C 17 Nov 2023 20:56:14.051 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 20:56:14.148 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 21:56:15.091 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 21:56:15.092 * Background saving started by pid 45
redis           | 45:C 17 Nov 2023 21:56:15.095 * DB saved on disk
redis           | 45:C 17 Nov 2023 21:56:15.095 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 21:56:15.192 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 22:56:16.012 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 22:56:16.012 * Background saving started by pid 46
redis           | 46:C 17 Nov 2023 22:56:16.015 * DB saved on disk
redis           | 46:C 17 Nov 2023 22:56:16.016 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 22:56:16.112 * Background saving terminated with success
redis           | 1:M 17 Nov 2023 23:56:17.081 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 17 Nov 2023 23:56:17.081 * Background saving started by pid 47
redis           | 47:C 17 Nov 2023 23:56:17.084 * DB saved on disk
redis           | 47:C 17 Nov 2023 23:56:17.085 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 17 Nov 2023 23:56:17.181 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 00:56:18.052 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 00:56:18.053 * Background saving started by pid 48
redis           | 48:C 18 Nov 2023 00:56:18.056 * DB saved on disk
redis           | 48:C 18 Nov 2023 00:56:18.057 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 00:56:18.153 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 01:56:19.049 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 01:56:19.049 * Background saving started by pid 49
redis           | 49:C 18 Nov 2023 01:56:19.053 * DB saved on disk
redis           | 49:C 18 Nov 2023 01:56:19.054 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 01:56:19.150 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 02:56:20.023 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 02:56:20.024 * Background saving started by pid 50
redis           | 50:C 18 Nov 2023 02:56:20.027 * DB saved on disk
redis           | 50:C 18 Nov 2023 02:56:20.028 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 02:56:20.124 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 03:56:21.089 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 03:56:21.089 * Background saving started by pid 51
redis           | 51:C 18 Nov 2023 03:56:21.092 * DB saved on disk
redis           | 51:C 18 Nov 2023 03:56:21.093 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 03:56:21.189 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 04:56:22.038 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 04:56:22.038 * Background saving started by pid 52
redis           | 52:C 18 Nov 2023 04:56:22.041 * DB saved on disk
redis           | 52:C 18 Nov 2023 04:56:22.042 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 04:56:22.138 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 05:56:23.005 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 05:56:23.005 * Background saving started by pid 53
redis           | 53:C 18 Nov 2023 05:56:23.009 * DB saved on disk
redis           | 53:C 18 Nov 2023 05:56:23.010 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 05:56:23.106 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 06:56:24.094 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 06:56:24.095 * Background saving started by pid 54
redis           | 54:C 18 Nov 2023 06:56:24.098 * DB saved on disk
redis           | 54:C 18 Nov 2023 06:56:24.099 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 06:56:24.195 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 07:56:25.022 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 07:56:25.022 * Background saving started by pid 55
redis           | 55:C 18 Nov 2023 07:56:25.026 * DB saved on disk
redis           | 55:C 18 Nov 2023 07:56:25.026 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 07:56:25.123 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 08:56:26.009 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 08:56:26.009 * Background saving started by pid 56
redis           | 56:C 18 Nov 2023 08:56:26.013 * DB saved on disk
redis           | 56:C 18 Nov 2023 08:56:26.013 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 08:56:26.110 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 09:56:27.041 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 09:56:27.041 * Background saving started by pid 57
redis           | 57:C 18 Nov 2023 09:56:27.045 * DB saved on disk
redis           | 57:C 18 Nov 2023 09:56:27.045 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 09:56:27.142 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 10:56:28.039 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 10:56:28.039 * Background saving started by pid 58
redis           | 58:C 18 Nov 2023 10:56:28.043 * DB saved on disk
redis           | 58:C 18 Nov 2023 10:56:28.043 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 10:56:28.140 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 11:56:29.004 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 11:56:29.004 * Background saving started by pid 59
redis           | 59:C 18 Nov 2023 11:56:29.007 * DB saved on disk
redis           | 59:C 18 Nov 2023 11:56:29.008 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 11:56:29.104 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 12:56:30.052 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 12:56:30.052 * Background saving started by pid 60
redis           | 60:C 18 Nov 2023 12:56:30.055 * DB saved on disk
redis           | 60:C 18 Nov 2023 12:56:30.056 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 12:56:30.152 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 13:56:31.074 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 13:56:31.074 * Background saving started by pid 61
redis           | 61:C 18 Nov 2023 13:56:31.078 * DB saved on disk
redis           | 61:C 18 Nov 2023 13:56:31.078 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 13:56:31.175 * Background saving terminated with success
redis           | 1:M 18 Nov 2023 14:56:32.002 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 14:56:32.002 * Background saving started by pid 62
redis           | 62:C 18 Nov 2023 14:56:32.006 * DB saved on disk
redis           | 62:C 18 Nov 2023 14:56:32.006 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 14:56:32.103 * Background saving terminated with success
traefik         | time="2023-11-16T19:00:54Z" level=info msg="Configuration loaded from flags."
traefik         | time="2023-11-17T15:27:01Z" level=error msg="Error while Peeking first byte: read tcp 172.18.0.2:443->118.123.105.93:47976: read: connection timed out"
traefik         | time="2023-11-17T16:44:01Z" level=error msg="Error while Peeking first byte: read tcp 172.18.0.2:443->121.62.19.203:57936: read: connection timed out"
web             | yarn run v1.22.19
web             | $ next start
web             | - ready started server on [::]:3000, url: http://localhost:3000
web             | react-i18next:: You will need to pass in an i18next instance by using initReactI18next
web             | i18next: languageChanged en
web             | i18next: initialized {
web             |   debug: true,
web             |   initImmediate: true,
web             |   ns: [ 'translation' ],
web             |   defaultNS: 'translation',
web             |   fallbackLng: [ 'en' ],
web             |   fallbackNS: false,
web             |   supportedLngs: false,
web             |   nonExplicitSupportedLngs: false,
web             |   load: 'all',
web             |   preload: false,
web             |   simplifyPluralSuffix: true,
web             |   keySeparator: '.',
web             |   nsSeparator: ':',
web             |   pluralSeparator: '_',
web             |   contextSeparator: '_',
web             |   partialBundledLanguages: false,
web             |   saveMissing: false,
web             |   updateMissing: false,
web             |   saveMissingTo: 'fallback',
web             |   saveMissingPlurals: true,
web             |   missingKeyHandler: false,
web             |   missingInterpolationHandler: false,
web             |   postProcess: false,
web             |   postProcessPassResolved: false,
web             |   returnNull: false,
web             |   returnEmptyString: true,
web             |   returnObjects: false,
web             |   joinArrays: false,
web             |   returnedObjectHandler: false,
web             |   parseMissingKeyHandler: false,
web             |   appendNamespaceToMissingKey: false,
web             |   appendNamespaceToCIMode: false,
web             |   overloadTranslationOptionHandler: [Function: handle],
web             |   interpolation: {
web             |     escapeValue: false,
web             |     format: [Function: bound format],
web             |     prefix: '{{',
web             |     suffix: '}}',
web             |     formatSeparator: ',',
web             |     unescapePrefix: '-',
web             |     nestingPrefix: '$t(',
web             |     nestingSuffix: ')',
web             |     nestingOptionsSeparator: ',',
web             |     maxReplaces: 1000,
web             |     skipOnVariables: true
web             |   },
web             |   lng: 'en',
web             |   resources: {
web             |     en: {
web             |       brain: [Object],
web             |       chat: [Object],
web             |       config: [Object],
web             |       contact: [Object],
web             |       explore: [Object],
web             |       home: [Object],
web             |       invitation: [Object],
web             |       login: [Object],
web             |       logout: [Object],
web             |       monetization: [Object],
web             |       translation: [Object],
web             |       upload: [Object],
web             |       user: [Object],
web             |       delete_or_unsubscribe_from_brain: [Object],
web             |       knowledge: {},
web             |       external_api_definition: [Object]
web             |     },
web             |     es: {
web             |       brain: [Object],
web             |       chat: [Object],
web             |       config: [Object],
web             |       contact: [Object],
web             |       explore: [Object],
web             |       home: [Object],
web             |       invitation: [Object],
web             |       login: [Object],
web             |       logout: [Object],
web             |       monetization: [Object],
web             |       translation: [Object],
web             |       upload: [Object],
web             |       user: [Object],
web             |       delete_or_unsubscribe_from_brain: [Object],
web             |       knowledge: {},
web             |       external_api_definition: [Object]
web             |     },
web             |     fr: {
web             |       brain: [Object],
web             |       chat: [Object],
web             |       config: [Object],
web             |       contact: [Object],
web             |       explore: [Object],
web             |       home: [Object],
web             |       invitation: [Object],
web             |       login: [Object],
web             |       logout: [Object],
web             |       monetization: [Object],
web             |       translation: [Object],
web             |       upload: [Object],
web             |       user: [Object],
web             |       delete_or_unsubscribe_from_brain: [Object],
web             |       knowledge: {},
web             |       external_api_definition: [Object]
web             |     },
web             |     ptbr: {
web             |       brain: [Object],
web             |       chat: [Object],
web             |       config: [Object],
web             |       contact: [Object],
web             |       explore: [Object],
web             |       home: [Object],
web             |       invitation: [Object],
web             |       login: [Object],
web             |       logout: [Object],
web             |       monetization: [Object],
web             |       translation: [Object],
web             |       upload: [Object],
web             |       user: [Object],
web             |       delete_or_unsubscribe_from_brain: [Object],
web             |       knowledge: {},
web             |       external_api_definition: [Object]
web             |     },
web             |     ru: {
web             |       brain: [Object],
web             |       chat: [Object],
web             |       config: [Object],
web             |       contact: [Object],
web             |       explore: [Object],
web             |       home: [Object],
web             |       invitation: [Object],
web             |       login: [Object],
web             |       logout: [Object],
web             |       monetization: [Object],
web             |       translation: [Object],
web             |       upload: [Object],
web             |       user: [Object],
web             |       delete_or_unsubscribe_from_brain: [Object],
web             |       knowledge: {},
web             |       external_api_definition: [Object]
web             |     },
web             |     zh_cn: {
web             |       brain: [Object],
web             |       chat: [Object],
web             |       config: [Object],
web             |       contact: [Object],
web             |       explore: [Object],
web             |       home: [Object],
web             |       invitation: [Object],
web             |       login: [Object],
web             |       logout: [Object],
web             |       monetization: [Object],
web             |       translation: [Object],
web             |       upload: [Object],
web             |       user: [Object],
web             |       delete_or_unsubscribe_from_brain: [Object],
web             |       knowledge: {},
web             |       external_api_definition: [Object]
web             |     }
web             |   },
web             |   ignoreJSONStructure: true
web             | }
redis           | 1:M 18 Nov 2023 15:56:33.089 * 1 changes in 3600 seconds. Saving...
redis           | 1:M 18 Nov 2023 15:56:33.090 * Background saving started by pid 63
redis           | 63:C 18 Nov 2023 15:56:33.093 * DB saved on disk
redis           | 63:C 18 Nov 2023 15:56:33.094 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis           | 1:M 18 Nov 2023 15:56:33.190 * Background saving terminated with success

Twitter / LinkedIn details

No response

dosubot[bot] commented 10 months ago

🤖

Hello @xiaoqib,

I hope you're doing well! I'm Dosu, a friendly bot here to assist you with bugs, answer your questions, and guide you to contribute to the Quivr repository.

While we wait for a human maintainer to join us, I'll be reviewing your posted issue. I'll get back to you shortly with a detailed response. Hang tight!

dosubot[bot] commented 10 months ago

🤖

Hello @xiaoqib,

Nice to meet you! I'm Dosu, an AI assistant here to help you with your issues, answer your questions, and guide you to be a contributor to our Quivr repository. While we wait for a human maintainer to assist you further, feel free to ask me anything. Looking forward to our interaction!

From your description, it seems like there are a few issues that need to be addressed.

Firstly, the "Error occurred while fetching your chats" message suggests that there might be an issue with the chat_router in the backend-core service. This router is responsible for handling chat-related requests. The 404 Not Found and 405 Method Not Allowed errors indicate that the server is unable to find the requested resource or the HTTP method used is not supported for the requested resource. This could be due to incorrect endpoints being hit or the wrong HTTP methods being used. You might want to check your application's API calls to ensure they are correct.

Secondly, the warnings about memory overcommit in the Redis service suggest that your server might be running out of memory. Redis saves changes to the database every 3600 seconds as per your configuration, and if the changes are too large to fit into memory, it could cause issues. You might want to consider reducing the save interval or increasing the server's memory if possible.

Lastly, running the worker with superuser privileges is generally not recommended due to potential security risks. It's advisable to run services with the least privileges necessary. You might want to consider changing this in your configuration.

Regarding the interaction between the backend-core service and the Redis service, as per the provided context, the backend-core service depends on the Redis service as indicated in the docker-compose.dev.yml file. This means that the backend-core service will not start until the Redis service is up and running. However, the exact nature of the interaction (e.g., data exchange, function calls, etc.) is not specified in the provided context. For more detailed information about how backend-core interacts with Redis, you would need to look at the application code within the backend service.
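
The startup ordering described above can be sketched as a compose fragment. Note this is an illustrative assumption, not Quivr's actual docker-compose.dev.yml: the healthcheck-gated `depends_on` makes the backend wait until Redis actually answers `PING`, not merely until its container starts.

```yaml
# Illustrative fragment; service names match the thread, values are assumptions.
services:
  redis:
    image: redis:latest
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  backend-core:
    depends_on:
      redis:
        condition: service_healthy
```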

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

xiaoqib commented 10 months ago

@dosu-bot
EMAIL=apple.jp@yitouchina.cn
DOMAIN_NAME=bot.house360.co.jp
API_DOMAIN_NAME=api.house360.co.jp

The configuration for the FRONTEND .env file is as follows:

NEXT_PUBLIC_ENV=production
NEXT_PUBLIC_BACKEND_URL=https://bot.house360.co.jp:5050/
NEXT_PUBLIC_FRONTEND_URL=https://bot.house360.co.jp:*/
NEXT_PUBLIC_SUPABASE_URL=xxxxx
NEXT_PUBLIC_SUPABASE_ANON_KEY=xxxxxx

NEXT_PUBLIC_GROWTHBOOK_CLIENT_KEY=
NEXT_PUBLIC_GROWTHBOOK_URL=

NEXT_PUBLIC_JUNE_API_KEY=
NEXT_PUBLIC_GA_ID=

NEXT_PUBLIC_E2E_URL=http://localhost:3003/
NEXT_PUBLIC_E2E_EMAIL=
NEXT_PUBLIC_E2E_PASSWORD=

NEXT_PUBLIC_CMS_URL=https://cms.quivr.app/

NEXT_PUBLIC_STRIPE_PRICING_TABLE_ID=
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=
NEXT_PUBLIC_STRIPE_MANAGE_PLAN_URL=

NEXT_PUBLIC_AUTH_MODES=magic_link,password

What is wrong with my configuration?

dosubot[bot] commented 10 months ago

🤖

From the information you've provided, there are several potential issues that could be causing the errors you're experiencing. Let's go through them one by one:

  1. 404 Not Found and 405 Method Not Allowed errors: These errors typically indicate that the server is unable to find the requested resource or the HTTP method used is not allowed. This could be due to incorrect routing or misconfiguration in your backend service. Please ensure that the routes and methods in your backend service match those being requested by the frontend.

  2. Memory overcommit warnings from the Redis service: This warning is usually thrown when the system does not have enough memory to fulfill a request by Redis. You might want to consider increasing the memory allocated to your Docker containers or optimizing your application to use less memory.

  3. Running worker with superuser privileges: It's generally not recommended to run services with superuser privileges due to potential security risks. You might want to consider running your services with a non-root user.

Regarding your FRONTEND .env file configuration, it seems to be correctly set up. However, please ensure that all the URLs and keys are correct and accessible. Also, make sure that the NEXT_PUBLIC_BACKEND_URL is correctly pointing to your backend service and that the service is running and accessible.

Please note that without access to your backend service configuration and the specific error messages, it's difficult to provide a more precise diagnosis. If you could provide more details about your backend service configuration and the specific error messages, it would be helpful in identifying the root cause of the issues.


xiaoqib commented 10 months ago

@dosu-bot How do I make sure that NEXT_PUBLIC_BACKEND_URL correctly points to the backend service and that the service is running and accessible? Which files do I need to look at, or which property in which .env file needs to be confirmed?

dosubot[bot] commented 10 months ago

🤖

To confirm that NEXT_PUBLIC_BACKEND_URL is correctly pointing to your backend service, you should check the .frontend_env.example file in the root of the project. This file contains the NEXT_PUBLIC_BACKEND_URL variable, which should be set to the URL where your backend service is running. In your case, it's currently set to https://bot.house360.co.jp:5050/.

To verify that the backend service is running and accessible, you can use a tool like curl or wget in your terminal, or a web-based tool like Postman. Send a request to the URL specified in NEXT_PUBLIC_BACKEND_URL and check the response. If the service is running and accessible, you should receive a successful HTTP response.

Here's an example using curl:

curl -I https://bot.house360.co.jp:5050/

If the service is running, you should see an HTTP/1.1 200 OK (or similar) response. If not, you might see a different status code, or no response at all, indicating that the service is not accessible.
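
The same check can be scripted. Below is a minimal Python sketch using only the standard library; the URL in the comment is the one from your .env and is an example, not a guaranteed endpoint:

```python
import urllib.error
import urllib.request

def backend_status(url: str, timeout: float = 5.0):
    """Return the HTTP status code, or None if the service is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        # The server answered, just not with a success status; the code is still useful.
        return exc.code
    except (urllib.error.URLError, OSError):
        # DNS failure, connection refused, TLS error, or timeout.
        return None

# Example: backend_status("https://bot.house360.co.jp:5050/")
```

A return value of None corresponds to "connection refused" or "no response at all" in the curl check above.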

For more information, you can refer to the env.md and .frontend_env.example files in the Quivr repository.

Regarding the Dockerfile for the backend-core service, it's structured as follows:

  1. It starts with a slim version of Python 3.11 as the base image.
  2. It accepts an argument DEV_MODE and sets it as an environment variable.
  3. It installs several dependencies including the GEOS library, Rust, and others. After the installation, it cleans up the package list.
  4. It adds Rust binaries to the PATH.
  5. It sets the working directory to /code.
  6. It copies the requirements file into the Docker image.
  7. It upgrades pip and installs the Python dependencies from the requirements file. If DEV_MODE is set to true, it also installs debugpy.
  8. It copies the rest of the application into the Docker image.
  9. It sets the default command to start the application using uvicorn with the specified options.

You can find the Dockerfile for the backend-core service in the backend/Dockerfile file in the Quivr repository.


dosubot[bot] commented 10 months ago

🤖

Based on the information you've provided and the context from the Quivr repository, the "Connection refused" error could be due to several reasons:

  1. The backend service might not be running or might not have started successfully. You can check the logs of the backend-core service to see if it started successfully and is running on port 5050. Use the command docker logs backend-core to check the logs.

  2. There might be a network issue. The backend-core service might not be reachable from the frontend service. You can check the network configuration in your Docker Compose file. Use the command docker network inspect <network-name> to check the network configuration.

  3. The backend-core service might not be listening on port 5050. You can check the application code or the Dockerfile of the backend-core service to see if it's configured to listen on port 5050.

In the Quivr codebase, there is a specific configuration that needs to be set for the backend service to listen on port 5050. In the main.py file, the uvicorn.run() function should be set to listen on port 5050:

if __name__ == "__main__":
    # run main.py to debug backend
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=5050)  # Set the port number here

Also, in the docker-compose.yml file, the backend-core service should be configured to expose and map to port 5050:

backend-core:
  ...
  ports:
    - 5050:5050

And in the Dockerfile, the application should be set to run on port 5050:

CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "5050", "--workers", "6"]

Please check these configurations and let me know if you still encounter the issue.


xiaoqib commented 10 months ago

1. Output of docker logs backend-core:

root@iZ6we7nwen3zaknisrym6xZ:~/quivr/backend# docker logs backend-core
INFO:     Will watch for changes in these directories: ['/code']
WARNING:  "workers" flag is ignored when reloading is enabled.
INFO:     Uvicorn running on http://0.0.0.0:5050 (Press CTRL+C to quit)
INFO:     Started reloader process [1] using StatReload
INFO:     Started server process [7]
INFO:     Waiting for application startup.
[INFO] Downloading pandoc from https://github.com/jgm/pandoc/releases/download/3.1.9/pandoc-3.1.9-1-amd64.deb ...
2023-11-16 19:01:03,993:INFO - Downloading pandoc from https://github.com/jgm/pandoc/releases/download/3.1.9/pandoc-3.1.9-1-amd64.deb ...
[INFO] Unpacking pandoc-3.1.9-1-amd64.deb to tempfolder...
2023-11-16 19:01:05,825:INFO - Unpacking pandoc-3.1.9-1-amd64.deb to tempfolder...
[INFO] Copying pandoc to /root/bin ...
2023-11-16 19:01:06,892:INFO - Copying pandoc to /root/bin ...
[INFO] Making /root/bin/pandoc executeable...
2023-11-16 19:01:07,004:INFO - Making /root/bin/pandoc executeable...
[INFO] Copying copyright to /root/bin ...
2023-11-16 19:01:07,005:INFO - Copying copyright to /root/bin ...
INFO:     Application startup complete.
INFO:     172.18.0.2:56616 - "GET /.git/config HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:56618 - "HEAD / HTTP/1.1" 405 Method Not Allowed
INFO:     172.18.0.2:45804 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:34654 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:49182 - "HEAD / HTTP/1.1" 405 Method Not Allowed
INFO:     172.18.0.2:46670 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:46670 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:54740 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:54740 - "HEAD /favicon.ico HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:45068 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:45068 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:55704 - "GET /robots.txt HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:55704 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:43258 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:49698 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:49698 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:40356 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:47010 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:47018 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:50948 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:50948 - "GET /Public/home/js/check.js HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:50948 - "GET /static/admin/javascript/hetong.js HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:34062 - "HEAD / HTTP/1.1" 405 Method Not Allowed
INFO:     172.18.0.2:50430 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:50430 - "GET /Public/home/js/check.js HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:50430 - "GET /static/admin/javascript/hetong.js HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:55734 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:55734 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:57138 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:57138 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:50866 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:50866 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:36076 - "HEAD / HTTP/1.1" 405 Method Not Allowed
INFO:     172.18.0.2:52514 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:38860 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:54758 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:54758 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     172.18.0.2:55762 - "GET / HTTP/1.1" 200 OK
INFO:     172.18.0.2:33338 - "GET / HTTP/1.1" 200 OK

These are the service logs. How should I modify the configuration files?

xiaoqib commented 10 months ago

@dosu-bot Regarding Redis running out of memory: can it prevent the backend-core service from working properly? How do I check Redis's memory requirements?

dosubot[bot] commented 10 months ago

🤖

Based on the information provided in the Quivr repository, there are no specific configuration settings for the Redis service, including memory requirements. The Redis service is defined in the docker-compose.dev.yml file with a basic configuration that doesn't include any specific memory settings.

However, the warning about memory overcommit you're seeing in your logs could potentially cause issues with the backend-core service if Redis runs out of memory. Redis, like many data stores, requires enough memory to store all the data you're using it for. If it runs out of memory, it could start evicting data or even fail to write new data, which could cause issues with any service that relies on it.

To check the memory requirements of the Redis service, you can use the INFO memory command in Redis, which provides information about memory usage. This can give you an idea of how much memory Redis is currently using and whether you need to allocate more.

If you find that Redis is using more memory than you have available, you may need to adjust your Redis configuration to better suit your memory constraints. This could involve setting a maxmemory limit, enabling data eviction policies, or optimizing your data structures for memory efficiency.
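
To make the INFO memory output easier to act on, a small parser can pull out the relevant fields. This is a generic sketch of the key:value format that `redis-cli INFO` emits, not Quivr code:

```python
def parse_redis_info(info_text: str) -> dict:
    """Parse `redis-cli INFO` output (key:value lines) into a dict."""
    fields = {}
    for line in info_text.splitlines():
        line = line.strip()
        # Section headers start with '#'; data lines are 'key:value'.
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            fields[key] = value
    return fields

# Sample modeled on the shape of a real INFO memory section:
sample = """# Memory
used_memory:1048576
used_memory_human:1.00M
maxmemory:0
maxmemory_policy:noeviction"""

info = parse_redis_info(sample)
print(info["used_memory_human"], info["maxmemory_policy"])  # prints: 1.00M noeviction
```

A `maxmemory` of 0 means no limit is configured, and the default `noeviction` policy makes Redis refuse writes once memory is exhausted, which is the failure mode described above.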

Please note that these are general Redis performance tuning strategies and may not be applicable to your specific use case. You may need to adjust these strategies based on your specific application requirements and infrastructure constraints.

I hope this helps! If you have any more questions or need further clarification, feel free to ask.


dosubot[bot] commented 10 months ago

🤖

Hello @dosubot[bot],

Nice to meet you! I'm Dosu, a friendly bot here to assist you with your Quivr-related issues and queries. While you're waiting for a human maintainer, I'm here to help you with bug troubleshooting, answering questions, and even guiding you on how to contribute to the project. Let's get started!

To check the memory usage of the Redis service in Quivr, you can use the docker stats command followed by the ID of your Redis container. This command will provide real-time statistics about the Docker container, including memory usage. Here's how you can do it:

docker stats <container_id>

Replace <container_id> with the ID of your Redis container.

If you find that the Redis service is using more memory than expected, you can limit its memory usage by adding a mem_limit directive under the redis service in your docker-compose file. Here's an example of how to do it:

redis:
  image: redis:latest
  container_name: redis
  restart: unless-stopped
  mem_limit: 500m
  ports:
    - 6379:6379

In this example, the memory usage of the Redis service is limited to 500 megabytes. You can adjust this value according to your needs.

As for the default memory allocation for the Redis service in the Quivr codebase, I wasn't able to find this information in the repository. It's possible that this information is available elsewhere or I may have missed it. If you have any more questions or need further clarification, feel free to ask.


dosubot[bot] commented 10 months ago

🤖

From the logs you've provided, it seems like there are a few files that the server is trying to access but can't find, hence the 404 errors. These files include '/favicon.ico', '/Public/home/js/check.js', and '/static/admin/javascript/hetong.js'.

The '/favicon.ico' is a standard file that browsers look for when loading a website. If your application doesn't need a favicon, this error can be safely ignored. However, if you want to get rid of this error, you can add a favicon.ico file to your server's root directory.

The '/Public/home/js/check.js' and '/static/admin/javascript/hetong.js' files seem to be JavaScript files that your application is trying to load. If these files are necessary for your application, you should ensure that they exist in the specified paths. If they are not necessary, you might want to check your application's code to see why it's trying to load these files and possibly remove those references if they are not needed.

As for the error message "Error occurred while fetching your chats", it's hard to say exactly what the issue is without more information. It could be a problem with the server's connection to the database, an issue with the chat data itself, or a bug in the code that fetches the chat data. You might want to check the part of your application that handles chat data to see if you can find any issues.

Regarding the 405 Method Not Allowed errors, these are occurring because your server is receiving HEAD requests but doesn't have a handler set up for them. HEAD is a type of HTTP request method that asks for a response identical to that of a GET request, but without the response body. If your application doesn't need to handle HEAD requests, these errors can be safely ignored. If you do want to handle HEAD requests, you'll need to set up a handler for them in your server's code.
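
For illustration, here is how a HEAD handler differs from GET in a plain Python standard-library server. Quivr's backend is actually FastAPI, where the equivalent is registering the route for both methods, but the principle is the same; this handler class is a self-contained sketch, not Quivr code:

```python
from http.server import BaseHTTPRequestHandler

class RootHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_HEAD(self):
        # Same status and headers as GET, but no body; defining this handler
        # is what turns the 405 Method Not Allowed into a 200 for HEAD probes.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
```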

I hope this helps! If you have any more information or if there's anything else you're unsure about, feel free to ask.


dosubot[bot] commented 10 months ago

🤖

Based on the information you've provided and the context from the Quivr repository, there are a few things that could be causing the issue with the core chat service.

Firstly, the error message "No chat_id provided" suggests that the chat_id is not being passed correctly to the update_chat function. This could be due to a problem in the code where this function is being called. You might want to check the part of your code where you're calling this function and ensure that the chat_id is being correctly passed.

Secondly, the issue could be related to missing or incorrect environment variables. The environment variables in the .env files are crucial for the correct operation of the Quivr application. For example, the SUPABASE_URL and SUPABASE_SERVICE_KEY are used to configure the connection to the Supabase backend service, and the OPENAI_API_KEY is used to configure the OpenAI service. If any of these variables are missing or incorrect, it could cause issues with the core chat service.

In a similar issue found in the Quivr repository, the problem was related to missing API keys for the analytics tools. You might want to check your .env files and ensure that all the necessary API keys are present and correct. You can get these keys from the respective service providers.

Lastly, it's worth noting that the PG_DATABASE_URL in your Backend/.env file is set to notimplementedyet. If you're using a PostgreSQL database, you'll need to replace this with the correct database URL.

I hope this helps! If you're still having trouble, could you provide more information about the exact error message you're seeing and the steps you're taking when the error occurs? This will help in diagnosing the problem.


dosubot[bot] commented 10 months ago

🤖

Yes, the environment variables GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_CLOUD_PROJECT, RESEND_API_KEY, and RESEND_CONTACT_SALES_TO are indeed mandatory for the operation of Quivr. They are used to authenticate your application with Google Cloud services and the Resend service, as well as to identify your project and specify the recipient's email address for sales inquiries. Here's how you can set them:

  1. GOOGLE_APPLICATION_CREDENTIALS: This should be set to the path of the JSON file that contains your service account key. This is used to authenticate your application with Google Cloud services.

  2. GOOGLE_CLOUD_PROJECT: This should be set to the ID of your Google Cloud project. This is used to identify your project when interacting with Google Cloud services.

  3. RESEND_API_KEY: This should be set to your Resend API key. This is used to authenticate your application with the Resend service.

  4. RESEND_CONTACT_SALES_TO: This should be set to the recipient's email address for sales inquiries using the "RESEND" service.

Here's an example of how these variables could be set:

GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/service-account-key.json
GOOGLE_CLOUD_PROJECT=your-google-cloud-project-id
RESEND_API_KEY=your-resend-api-key
RESEND_CONTACT_SALES_TO=sales-inquiries@example.com

Please replace the values after the equals sign (=) with your actual values.

In addition to these, there are other environment variables in the backend/.env file that are crucial for Quivr's operation, such as SUPABASE_URL, SUPABASE_SERVICE_KEY, OPENAI_API_KEY, JWT_SECRET_KEY, CELERY_BROKER_URL, RESEND_EMAIL_ADDRESS, RESEND_CONTACT_SALES_FROM, PREMIUM_MAX_BRAIN_NUMBER, PREMIUM_MAX_BRAIN_SIZE, and PREMIUM_DAILY_CHAT_CREDIT. These should also be correctly set according to their respective purposes.
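
A quick way to verify such a list before startup is a small script. The variable names below are taken from the list above; the helper itself is a hypothetical sketch, not part of Quivr:

```python
import os

# Names from the required-variable list above (subset for brevity).
REQUIRED_VARS = [
    "SUPABASE_URL",
    "SUPABASE_SERVICE_KEY",
    "OPENAI_API_KEY",
    "JWT_SECRET_KEY",
    "CELERY_BROKER_URL",
]

def missing_env(required, environ=None):
    """Return the names of required variables that are unset or empty."""
    environ = os.environ if environ is None else environ
    return [name for name in required if not environ.get(name)]

missing = missing_env(REQUIRED_VARS)
if missing:
    print("Missing required env vars:", ", ".join(missing))
```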

For more information, you can refer to the .backend_env.example and env.md files in the Quivr repository.


BabaSarf commented 10 months ago

Change to this:
NEXT_PUBLIC_ENV=prod
NEXT_PUBLIC_BACKEND_URL=https://api.bot.house360.co.jp
NEXT_PUBLIC_FRONTEND_URL=https://bot.house360.co.jp/:*

xiaoqib commented 10 months ago

Change to this: NEXT_PUBLIC_ENV=prod NEXT_PUBLIC_BACKEND_URL=https://api.bot.house360.co.jp NEXT_PUBLIC_FRONTEND_URL=https://bot.house360.co.jp/:*

I don't have the domain https://api.bot.house360.co.jp; should it be api.house360.co.jp?

github-actions[bot] commented 9 months ago

Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.