zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://privategpt.dev
Apache License 2.0

How to run privateGPT in kubernetes with HA (2 replicas)? #1561

Open minixxie opened 7 months ago

minixxie commented 7 months ago

Discussed in https://github.com/imartinez/privateGPT/discussions/1558

Originally posted by **minixxie**, January 30, 2024:

Hello, first of all thank you so much for providing this awesome project!

I'm able to run this in Kubernetes, but when I try to scale out to 2 replicas (2 pods), the documents ingested are not shared between the pods. I found that the data is persisted in the "local_data/" folder, so following the docs I spun up Qdrant and changed settings.yaml as follows:

```
qdrant:
  #path: local_data/private_gpt/qdrant
  prefer_grpc: false
  host: qdrant.qdrant.svc.cluster.local
```

The pod log shows that the check against Qdrant was successful:

```
08:54:27.979 [INFO ] httpx - HTTP Request: GET http://qdrant.qdrant.svc.cluster.local:6333/collections/make_this_parameterizable_per_api_call "HTTP/1.1 200 OK"
```

After I ingested a doc inside the 1st pod:

```
worker@private-gpt-58fccb48c6-l2m4q:/home/worker/app$ curl -X POST --url "http://localhost:8080/v1/ingest/text" --header "Content-Type: application/json" --header "Accept: application/json" --data '{"file_name": "Student winter uniform requirements","text": "Boys students need to wear white long sleeves shirt, and gray long pants. While girl students need to wear pale blue long sleeves shirt, and dark blue skirt. Both boys and girls need to wear a tie."}'
{"object":"list","model":"private-gpt","data":[{"object":"ingest.document","doc_id":"750a86fd-896c-4fd9-af59-fa0905a5fed9","doc_metadata":{"file_name":"Student winter uniform requirements"}}]}
```

I'm able to get the doc from the list endpoint on the same pod:

```
worker@private-gpt-58fccb48c6-l2m4q:/home/worker/app$ curl -X GET --url "http://localhost:8080/v1/ingest/list" --header "Accept: application/json"
{"object":"list","model":"private-gpt","data":[{"object":"ingest.document","doc_id":"750a86fd-896c-4fd9-af59-fa0905a5fed9","doc_metadata":{"file_name":"Student winter uniform requirements"}}]}
```

However, the list endpoint on the 2nd pod returns empty:

```
worker@private-gpt-58fccb48c6-f9fj4:/home/worker/app$ curl -X GET --url "http://localhost:8080/v1/ingest/list" --header "Accept: application/json"
{"object":"list","model":"private-gpt","data":[]}
```

Does this mean the pods are not sharing the data in the vector database? Is there any way to run privateGPT in HA mode, so that all replicas share the same set of ingested documents?

Docker image I'm using: 3x3cut0r/privategpt:0.2.0

```
3x3cut0r/privategpt    0.2.0    0bfaeacab058    5 hours ago    linux/arm64    6.3 GiB    4.7 GiB
```

OS: macOS, MacBook Pro (Apple M2)
Runtime: colima

```
PROFILE    STATUS     ARCH       CPUS    MEMORY    DISK      RUNTIME           ADDRESS
default    Running    aarch64    4       8GiB      100GiB    containerd+k3s
```
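For reference, a minimal sketch of the vector-store section of settings.yaml when pointing privateGPT at an external Qdrant service rather than the embedded store under local_data/; the `vectorstore` block and the `port` key are assumptions based on the project's example settings and should be checked against the version in use:

```
# Sketch only: the vectorstore block and the port key are assumptions
# based on privateGPT's example settings; verify against your version.
vectorstore:
  database: qdrant
qdrant:
  #path: local_data/private_gpt/qdrant   # default embedded, per-pod store
  host: qdrant.qdrant.svc.cluster.local
  port: 6333
  prefer_grpc: false
```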
minixxie commented 7 months ago

It seems no one else has come across this issue? Or am I the only person running it with 2 pods? Today I tried again and can see data saved in the Qdrant database, but when I check the list of saved docs, it sometimes returns empty (from the 2nd pod):

```
curl -X GET --url "http://private-gpt.local/v1/ingest/list" --header "Accept: application/json"
{"object":"list","model":"private-gpt","data":[{"object":"ingest.document","doc_id":"227ea4a8-863f-47d3-9cbf-75aa2bebc447","doc_metadata":{"file_name":"b"}},{"object":"ingest.document","doc_id":"6392894d-1da3-4d77-abd1-7c65e5a33535","doc_metadata":{"file_name":"a"}}]}

curl -X GET --url "http://private-gpt.local/v1/ingest/list" --header "Accept: application/json"
{"object":"list","model":"private-gpt","data":[]}
```

After some investigation, I found that there is a data file on each pod's local disk that stores the doc:

```
# in POD 1
worker@private-gpt-55cb54b557-2rp2g:/home/worker/app/local_data/private_gpt$ grep -l 6392894d-1da3-4d77-abd1-7c65e5a33535 *
docstore.json

# in POD 2
worker@private-gpt-55cb54b557-vfhdb:/home/worker/app/local_data/private_gpt$ grep -l 6392894d-1da3-4d77-abd1-7c65e5a33535 *
## <empty result>
```

Does anyone know how to avoid this, so that state is shared across the 2 pods? Thanks very much.

minixxie commented 7 months ago

I was able to make it run with 2 pods by changing the document store and index store to use MongoDB: https://github.com/imartinez/privateGPT/compare/main...minixxie:privateGPT:dev

Not sure if this is the correct way of making it stateless.
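For illustration only: upstream privateGPT had no MongoDB options in its settings at the time, so the keys below are hypothetical and merely sketch the kind of configuration a fork like the one linked above implies, i.e. backing the document store and index store with a shared MongoDB instead of per-pod JSON files:

```
# Hypothetical sketch, not real upstream settings keys: a shared document
# store and index store would need roughly this kind of configuration so
# that every replica reads the same metadata.
nodestore:
  database: mongodb                                         # assumed key
mongodb:
  uri: mongodb://mongodb.mongodb.svc.cluster.local:27017    # assumed service
  database: private_gpt
```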

Anhui-tqhuang commented 6 months ago

@minixxie hey, I recently added support for using pgvector as the vector store: https://github.com/imartinez/privateGPT/pull/1624, which might be shareable across replicas.

Also, have you used a shared PV/PVC or other shared storage for the doc store or index store?
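One way to act on the shared-storage suggestion is to mount the same ReadWriteMany volume at local_data/ in every replica, so docstore.json and index_store.json are shared. The manifest below is an untested sketch; the claim name, labels, and mount path (taken from the pod prompts earlier in this thread) are assumptions, and the storage class must support ReadWriteMany (e.g. NFS or CephFS):

```
# Untested sketch: share local_data/ between replicas via a RWX volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: private-gpt-local-data
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-gpt
spec:
  replicas: 2
  selector:
    matchLabels: {app: private-gpt}
  template:
    metadata:
      labels: {app: private-gpt}
    spec:
      containers:
      - name: private-gpt
        image: 3x3cut0r/privategpt:0.2.0
        volumeMounts:
        - name: local-data
          # path where docstore.json / index_store.json were found above
          mountPath: /home/worker/app/local_data
      volumes:
      - name: local-data
        persistentVolumeClaim:
          claimName: private-gpt-local-data
```

Note that a shared file is only a partial fix: concurrent ingests from two pods could still race on the same docstore.json, which is why an external store (MongoDB, pgvector, etc.) is likely the more robust option.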