wsxqaza12 / RAG_LangChain_streamlit


Hello, I'm also running into a problem when submitting a query #7

Open ssa567832 opened 1 month ago

ssa567832 commented 1 month ago

First of all, thank you for the tutorial. I've hit the same problem as in that issue, and I saw the suggested fix is to change the URL to http://127.0.0.1:8080/v1

[screenshot: Untitled 2] [screenshot: Untitled]

Is it simply a matter of changing it like this?

[screenshot]

It still errors out, so I'm here to ask. Thanks again for the tutorial, it's very easy to follow!
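For context, the suggested change is just pointing the client at the server's OpenAI-compatible /v1 prefix. A small sketch of normalizing the base URL (the helper name is hypothetical, not part of the repo's code):

```python
def ensure_v1(base_url: str) -> str:
    """Append the OpenAI-compatible /v1 prefix if it is missing."""
    base = base_url.rstrip("/")
    return base if base.endswith("/v1") else base + "/v1"

print(ensure_v1("http://127.0.0.1:8080/"))  # http://127.0.0.1:8080/v1
```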

wsxqaza12 commented 1 month ago

Hi @ssa567832, thanks for the question. Could you paste the full error message here?

ssa567832 commented 1 month ago

I built everything following your llama.cpp tutorial, and thanks again for that guide! First, here is the result when the URL is http://127.0.0.1:8080/:

Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 140, in <module>
    boot()
  File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 134, in boot
    response = query_llm_direct(query)
  File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 71, in query_llm_direct
    result = llm_chain.invoke({"query": query})
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\base.py", line 166, in invoke
    raise e
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\llm.py", line 126, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\llm.py", line 138, in generate
    return self.llm.generate_prompt(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 599, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 456, in generate
    raise e
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 446, in generate
    self._generate_with_cache(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 671, in _generate_with_cache
    result = self._generate(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_openai\chat_models\base.py", line 537, in _generate
    response = self.client.create(messages=message_dicts, **params)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\resources\chat\completions.py", line 606, in create
    return self._post(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 921, in request
    return self._request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 976, in _request
    return self._retry_request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 976, in _request
    return self._retry_request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 986, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

Here is the result with http://127.0.0.1:8080/v1:

C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class LLMChain was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use RunnableSequence, e.g., prompt | llm instead.
  warn_deprecated(
2024-06-17 16:36:37.850 Uncaught app exception
Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 69, in map_httpcore_exceptions
    yield
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http_proxy.py", line 207, in handle_request
    return self._connection.handle_request(proxy_request)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 143, in handle_request
    raise exc
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 113, in handle_request
    ) = self._receive_response_headers(**kwargs)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 186, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 238, in _receive_event
    raise RemoteProtocolError(msg)
httpcore.RemoteProtocolError: Server disconnected without sending a response.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 952, in _request
    response = self._client.send(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 914, in send
    response = self._send_handling_auth(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: Server disconnected without sending a response.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 140, in <module>
    boot()
  File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 134, in boot
    response = query_llm_direct(query)
  File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 71, in query_llm_direct
    result = llm_chain.invoke({"query": query})
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\base.py", line 166, in invoke
    raise e
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\llm.py", line 126, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain\chains\llm.py", line 138, in generate
    return self.llm.generate_prompt(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 599, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 456, in generate
    raise e
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 446, in generate
    self._generate_with_cache(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\language_models\chat_models.py", line 671, in _generate_with_cache
    result = self._generate(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_openai\chat_models\base.py", line 537, in _generate
    response = self.client.create(messages=message_dicts, **params)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\resources\chat\completions.py", line 606, in create
    return self._post(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 921, in request
    return self._request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 976, in _request
    return self._retry_request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 976, in _request
    return self._retry_request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
  File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 986, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

The LLM server I built serves on http://127.0.0.1:8080/ and I haven't changed anything, as shown in the first screenshot attached above.
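An APIConnectionError usually means nothing answered on the port at all. Before digging into LangChain, it can help to confirm something is actually listening where llama.cpp should be; a stdlib-only sketch (the host and port are assumptions matching the URL above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check whether anything is listening where the llama.cpp server should be.
if not port_open("127.0.0.1", 8080):
    print("Nothing is listening on 127.0.0.1:8080; is the server running?")
```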

wsxqaza12 commented 1 month ago

@ssa567832 My guess is that it's a version issue; I'll test it in the next day or two.

ssa567832 commented 1 month ago

Thank you very much.

wsxqaza12 commented 4 weeks ago

@ssa567832 It runs fine on my end. Environment: Ubuntu 20.04.6 LTS

langchain==0.2.5
streamlit==1.35.0
unstructured==0.14.6
unstructured[pdf]==0.3.12
chromadb==0.5.0
sentence-transformers==3.0.1
langchain-community==0.2.5
langchain-openai==0.1.8

Since your error includes httpx.RemoteProtocolError: Server disconnected without sending a response., my guess is that the Streamlit app cannot reach llama.cpp. Which environments are your llama.cpp and Streamlit running in, respectively?
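One detail worth noting: the failing stack for the /v1 URL passes through httpcore's http_proxy.py, which suggests the request is being routed through an HTTP proxy (often set by VPN or corporate software) rather than straight to loopback. A sketch of excluding localhost from the proxy before starting the app; the environment-variable mechanism is standard, but whether it fixes this particular case is an assumption:

```python
import os
import urllib.request

# Keep loopback traffic out of any system-wide HTTP(S) proxy; proxies that
# intercept 127.0.0.1 are a common cause of dropped local connections.
os.environ["NO_PROXY"] = "localhost,127.0.0.1"
os.environ["no_proxy"] = "localhost,127.0.0.1"

print(urllib.request.getproxies())  # remaining proxy settings, if any
```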

Also, do you have a VPN enabled? You could try http://localhost:8080/v1; if that doesn't work, please provide a minimal reproducible example for me to test.
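As a starting point for a minimal reproducible example, here is a sketch that bypasses LangChain entirely and builds a raw request against the /v1/chat/completions route that llama.cpp's server exposes (the model name and prompt are placeholders):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local server."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = build_chat_request("http://127.0.0.1:8080", "local-model", "Hello")
# urllib.request.urlopen(req) would send it; if this fails the same way,
# the problem is the server or the network, not LangChain.
```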