Open · ssa567832 opened this issue 1 month ago
Hi @ssa567832, thanks for the question. Could you paste the full error message here?
I built everything by following your llama.cpp tutorial, and thanks again for writing it!
First, this is the result when the URL is http://127.0.0.1:8080/:
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 140, in
And this is the result with http://127.0.0.1:8080/v1:
C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use RunnableSequence, e.g., `prompt | llm` instead.
warn_deprecated(
2024-06-17 16:36:37.850 Uncaught app exception
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 69, in map_httpcore_exceptions
yield
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
raise exc from None
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
response = connection.handle_request(
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http_proxy.py", line 207, in handle_request
return self._connection.handle_request(proxy_request)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\connection.py", line 101, in handle_request
return self._connection.handle_request(request)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 143, in handle_request
raise exc
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 113, in handle_request
) = self._receive_response_headers(**kwargs)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 186, in _receive_response_headers
event = self._receive_event(timeout=timeout)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpcore\_sync\http11.py", line 238, in _receive_event
raise RemoteProtocolError(msg)
httpcore.RemoteProtocolError: Server disconnected without sending a response.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\openai\_base_client.py", line 952, in _request
response = self._client.send(
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 914, in send
response = self._send_handling_auth(
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\httpx\_transports\default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.RemoteProtocolError: Server disconnected without sending a response.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\RAG_streamlit\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "C:\Users\N000192076\Desktop\RAG_LangChain_streamlit\rag_engine.py", line 140, in
The llm I built serves on http://127.0.0.1:8080/ and I haven't changed anything, as shown in the first screenshot attached above.
@ssa567832 I suspect it's a version issue; I'll run a test in the next day or two.
Thank you very much.
@ssa567832 It runs fine on my end. Environment: Ubuntu 20.04.6 LTS
langchain==0.2.5
streamlit==1.35.0
unstructured==0.14.6
unstructured[pdf]==0.3.12
chromadb==0.5.0
sentence-transformers==3.0.1
langchain-community==0.2.5
langchain-openai==0.1.8
Since your error ends with httpx.RemoteProtocolError: Server disconnected without sending a response., I suspect the Streamlit app cannot reach llama.cpp. Which environments are your llama.cpp and Streamlit running in, respectively?
Also, are you behind a VPN? Try http://localhost:8080/v1; if that doesn't work, please provide a minimal reproducible example that I can test.
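One detail worth checking: the traceback above passes through httpcore's http_proxy.py, which means httpx routed the request through a system proxy, and a proxy or VPN intercepting 127.0.0.1 traffic would produce exactly this kind of disconnect. A stdlib-only probe (hypothetical helper, no third-party dependencies) can check whether the server is reachable with proxies bypassed:

```python
import os
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> str:
    """Fetch url while bypassing any proxy settings; return a short verdict."""
    # Report proxy variables that could hijack traffic to 127.0.0.1.
    proxies = {k: v for k, v in os.environ.items()
               if k.lower() in ("http_proxy", "https_proxy", "all_proxy")}
    if proxies:
        print("proxy settings detected:", proxies)
    # An empty ProxyHandler disables all proxies for this opener.
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
    try:
        with opener.open(url, timeout=timeout) as resp:
            return f"ok: HTTP {resp.status}"
    except urllib.error.HTTPError as exc:
        # The server answered, just not with 2xx; it is reachable.
        return f"http error: {exc.code}"
    except (urllib.error.URLError, OSError) as exc:
        return f"unreachable: {exc}"

if __name__ == "__main__":
    # llama.cpp's OpenAI-compatible server lists models under /v1/models.
    print(probe("http://127.0.0.1:8080/v1/models"))
```

If this prints `ok` but the Streamlit app still fails, setting `NO_PROXY=127.0.0.1,localhost` in the app's environment is a reasonable next step.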
First of all, thanks for the tutorial. I ran into the same problem as this issue, and I saw that the suggested fix is changing the URL to http://127.0.0.1:8080/v1.
Is that the only change needed?
It still fails, which is why I'm asking here. Thanks again for the tutorial; it is very easy to follow!
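For reference, a hedged sketch of what that URL change usually looks like in code, assuming the app builds its model through langchain-openai (variable names here are hypothetical, and the api_key is a dummy value since llama.cpp's server does not check it by default):

```python
# The /v1 suffix selects llama.cpp's OpenAI-compatible API.
BASE_URL = "http://127.0.0.1:8080/v1"

try:
    from langchain_openai import ChatOpenAI

    # api_key is a placeholder; the key point is base_url ending in /v1.
    llm = ChatOpenAI(base_url=BASE_URL, api_key="not-needed", temperature=0)
except ImportError:
    llm = None  # langchain-openai not installed in this environment
```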