zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://privategpt.dev
Apache License 2.0
54.27k stars 7.3k forks

chatGPT ui Error #1671

Open spsach opened 8 months ago

spsach commented 8 months ago

I got the privateGPT 2.0 app working. It is able to answer questions from the LLM without using loaded files. I am also able to upload a PDF file without any errors. However, when I submit a query or ask it to summarize the document, it comes up with no response and just shows me the name of the uploaded file as the source.

felciano commented 8 months ago

I'm seeing the same

Vivek-C-Shah commented 8 months ago

Can you share the PDF? It seems there's some issue with it.

spsach commented 8 months ago

Here is the pdf file. I had used the same file with another RAG implementation and it worked fine.

Thanks for your help

Satya

spsach commented 8 months ago

I tried loading another PDF file and it is still the same. The LM Studio log shows that the LLM generated the output, but it does not get displayed in the app with the “Query Files” option.

Thanks for your help


PS C:\Users\satya\privateGPT> c:\MinGW\bin\make.exe run
poetry run python -m private_gpt
15:41:24.721 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'vllm']
15:41:38.209 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=openailike
15:41:40.014 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=local
15:41:48.975 [INFO ] llama_index.indices.loading - Loading all indices.
15:41:49.210 [INFO ] private_gpt.ui.ui - Mounting the gradio UI, at path=/
INFO:     Started server process [22740]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
[... repeated "GET /assets/..." and /queue requests omitted ...]
15:42:24.482 [INFO ] private_gpt.server.ingest.ingest_service - Deleting the ingested document=5e6ba9c7-7da5-4df7-b877-309437361c97 in the doc and index store
[... further "Deleting the ingested document=..." lines omitted ...]
INFO:     ::1:56981 - "POST /upload HTTP/1.1" 200 OK
15:47:49.688 [INFO ] private_gpt.server.ingest.ingest_service - Ingesting file_names=['ontology101.pdf']
Parsing nodes: 100%| 1/1 [00:00<00:00, 70.66it/s]
Generating embeddings: 100%| 23/23 [00:01<00:00, 16.66it/s]
[... repeated parsing/embedding progress bars omitted ...]
15:48:18.863 [INFO ] private_gpt.server.ingest.ingest_service - Finished ingestion file_name=['ontology101.pdf']
15:48:19.107 [ERROR ] asyncio - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\satya\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\satya\AppData\Local\Programs\Python\Python311\Lib\asyncio\proactor_events.py", line 165, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
[... static-asset, /run/predict, and /queue requests omitted ...]
15:49:16.358 [INFO ] httpx - HTTP Request: POST http://localhost:1234/v1/chat/completions "HTTP/1.1 200 OK"
15:50:16.500 [WARNING ] llama_index.chat_engine.types - Encountered exception writing response to history: timed out
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process [22740]
Makefile:36: recipe for target 'run' failed
make.exe: *** [run] Error 1

LM Studio server log:

[2024-03-04 15:38:39.880] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-03-04 15:38:39.890] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2024-03-04 15:38:39.891] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-03-04 15:38:39.892] [INFO] [LM STUDIO SERVER] -> GET  http://localhost:1234/v1/models
[2024-03-04 15:38:39.893] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
[2024-03-04 15:38:39.895] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
[2024-03-04 15:38:39.896] [INFO] [LM STUDIO SERVER] Logs are saved into C:\tmp\lmstudio-server-log.txt
[2024-03-04 15:49:16.302] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-03-04 15:49:16.304] [INFO] Received POST request to /v1/chat/completions with body: { "messages": [ { "role": "system", "content": "Context information is below. [... retrieved chunks from ontology101.pdf omitted ...] You can only answer questions about the provided context. If you know the answer but it is not based in the provided context, don't provide the answer, just state the answer is not in the context provided." }, { "role": "user", "content": "What is an Ontology as defined in ontologu101.pdf" } ], "model": "Llama", "stream": true, "temperature": 0.1 }
[2024-03-04 15:49:16.318] [INFO] [LM STUDIO SERVER] Context Overflow Policy is: Rolling Window
[2024-03-04 15:49:16.334] [INFO] [LM STUDIO SERVER] Streaming response... (if the model is busy processing the prompt, it will finish first)
[2024-03-04 15:50:16.423] [INFO] [LM STUDIO SERVER] Client disconnected. Stopping generation...
[2024-03-04 15:51:25.642] [INFO] Finished streaming response

nunomlucio commented 7 months ago

Same here, the error message in LM Studio is:

" Client disconnected. Stopping generation... (if the model is busy processing the prompt, it will finish first))"

As a result, privategpt only displays the docs that were used for the reply and nothing more.

I thought it would be on the LM Studio side, that a timeout value of 60 seconds existed somewhere, but they replied on Discord that no such limit exists in LM Studio and it should be on the client side (privategpt).

I tried adding things like KEEP_ALIVE and REQUEST_TIMEOUT to the vllm yaml file, but nothing has worked so far... but... I'm a noob....
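The client-side-timeout theory matches both logs above: the UI gave up reading roughly 60 seconds after the request while LM Studio was still generating, and the server only noticed once it kept writing. A stdlib-only sketch of that failure mode (a toy socket server stands in for the LLM backend; no real privateGPT or LM Studio code is involved):

```python
import socket
import threading
import time

def slow_server(sock):
    # Toy stand-in for a slow LLM backend: accept a connection,
    # "generate" for a second, then try to send the answer.
    while True:
        conn, _ = sock.accept()
        time.sleep(1.0)
        try:
            conn.sendall(b"late answer")
        except OSError:
            pass  # client already hung up: the "Client disconnected" case
        conn.close()

def query_with_timeout(port, timeout):
    # Toy stand-in for the UI's HTTP client with a fixed read timeout.
    client = socket.create_connection(("127.0.0.1", port), timeout=timeout)
    try:
        return client.recv(64)
    except socket.timeout:
        return None  # gave up waiting: the UI shows only the source name
    finally:
        client.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(5)
port = server.getsockname()[1]
threading.Thread(target=slow_server, args=(server,), daemon=True).start()

print(query_with_timeout(port, timeout=0.2))  # None: timed out mid-generation
print(query_with_timeout(port, timeout=5.0))  # b'late answer'
```

With a read timeout shorter than generation time the client gets nothing and the server sees a reset, which is exactly the `ConnectionResetError` / "Client disconnected" pair in the logs; a generous timeout receives the full reply.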

spsach commented 7 months ago

I had to increase the timeout to 300 in the llm_component.py file. I was using ollama. It resolved the problem for me:

ollama_settings = settings.ollama
self.llm = Ollama(
    model=ollama_settings.llm_model,
    base_url=ollama_settings.api_base,
    request_timeout=300,
)

nunomlucio commented 7 months ago

I had to increase the timeout to 300 in the llm_component.py file. I was using ollama. It resolved the problem for me:

ollama_settings = settings.ollama
self.llm = Ollama(
    model=ollama_settings.llm_model,
    base_url=ollama_settings.api_base,
    request_timeout=300,
)

You are a lifesaver!!!

In my case it was also in llm_component.py:

openai_settings = settings.openai
....

At the end I just added:

    timeout=300,
)

Done!
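Both fixes above share one idea: pass a longer client-side timeout into whichever LLM constructor llm_component.py builds (Ollama's `request_timeout`, or `timeout` for the OpenAI-like path). A minimal, library-free sketch of carrying that value from settings instead of hardcoding it; the names here are illustrative, not privateGPT's actual settings schema:

```python
from dataclasses import dataclass

# Illustrative settings object; privateGPT's real settings schema differs.
@dataclass
class LLMClientSettings:
    model: str = "Llama"
    api_base: str = "http://localhost:1234/v1"
    request_timeout: float = 120.0  # seconds; slow local models often need > 60

def build_llm_kwargs(settings: LLMClientSettings) -> dict:
    # Keyword arguments to hand to the LLM client constructor.
    return {
        "model": settings.model,
        "base_url": settings.api_base,
        "request_timeout": settings.request_timeout,
    }

kwargs = build_llm_kwargs(LLMClientSettings(request_timeout=300.0))
print(kwargs["request_timeout"])  # 300.0
```

Keeping the value in settings means the timeout can be raised per deployment without patching llm_component.py again.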