wsxqaza12 / RAG_LangChain_streamlit


Error message when submitting a question #1

Closed CallieHsu closed 6 months ago

CallieHsu commented 6 months ago

Hi @wsxqaza12, thank you for generously sharing such a detailed RAG tutorial; I have learned a lot from it. While working through this repo, I ran into a small problem, described below:

Steps to reproduce

git clone https://github.com/wsxqaza12/RAG_LangChain_streamlit.git
cd RAG_LangChain_streamlit
conda create -n RAG_streamlit python=3.10
conda activate RAG_streamlit
pip install -r requirements.txt
streamlit run rag_engine.py

After uploading the sample PDF, the following error appears when I submit a question:

(Screenshot: 2024-02-16 15-25-17)

error message

$ streamlit run rag_engine.py
  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://

/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
2024-02-16 15:10:10.594 Uncaught app exception
Traceback (most recent call last):
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "/home/callie/RAG_LangChain_streamlit/rag_engine.py", line 144, in <module>
    boot()
  File "/home/callie/RAG_LangChain_streamlit/rag_engine.py", line 136, in boot
    response = query_llm(st.session_state.retriever, query)
  File "/home/callie/RAG_LangChain_streamlit/rag_engine.py", line 65, in query_llm
    result = qa_chain({'question': query, 'chat_history': st.session_state.messages})
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 166, in _call
    answer = self.combine_docs_chain.run(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 543, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 136, in _call
    output, extra_return_dict = self.combine_docs(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
    raise e
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
    self._generate_with_cache(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache
    return self._generate(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 438, in _generate
    response = self.client.create(messages=message_dicts, **params)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 663, in create
    return self._post(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
  File "/home/callie/anaconda3/envs/RAG_streamlit/lib/python3.10/site-packages/openai/_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: File Not Found

wsxqaza12 commented 6 months ago

@CallieHsu Thanks for the question. This error occurs mainly because the app cannot find the LLM: in your setup the LLM URL is http://127.0.0.1:8080/, but since we are going through the OpenAI-compatible interface here, the correct URL is http://127.0.0.1:8080/v1. See the llama.cpp server documentation for details. Hope this helps :)
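
For reference, a minimal sketch of the fix (an assumption, not the repo's exact code: the traceback suggests the app uses langchain_openai.ChatOpenAI, and the model name and API key below are placeholders, since llama.cpp's server ignores the key but the client requires a value):

from langchain_openai import ChatOpenAI

# Point the OpenAI-compatible client at the local llama.cpp server.
# The trailing /v1 is required; without it the server returns 404,
# which surfaces as openai.NotFoundError: File Not Found.
llm = ChatOpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="sk-no-key-required",  # dummy value for the local server
    model="local-model",           # placeholder; llama.cpp serves whichever model it loaded
)

print(llm.invoke("Hello").content)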

CallieHsu commented 6 months ago

@wsxqaza12 Thank you for the detailed reply. After correcting the URL, everything runs without any problem. Thanks again for sharing :smiley: