[Closed] raoxinyi closed this issue 1 year ago
When you uploaded the file, did the upload complete fully?
Yes, the file was fully uploaded.
@raoxinyi I ran into the same problem. Did you find a solution in the end?
No, it was never resolved.
When you uploaded the file, did the upload complete fully?
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/gradio/routes.py", line 289, in run_predict
    output = await app.blocks.process_api(
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 990, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 832, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "app.py", line 215, in predict
    resp = knowladge_based_chat_llm.get_knowledge_based_answer(
  File "app.py", line 127, in get_knowledge_based_answer
    vector_store = FAISS.load_local('faiss_index', self.embeddings)
  File "/usr/local/lib/python3.8/dist-packages/langchain/vectorstores/faiss.py", line 449, in load_local
    index = faiss.read_index(
  File "/usr/local/lib/python3.8/dist-packages/faiss/swigfaiss_avx2.py", line 10206, in read_index
    return _swigfaiss_avx2.read_index(*args)
RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open faiss_index/index.faiss for reading: No such file or directory

After a successful setup, sending a question raises the error above and the page just shows "error".
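The "No such file or directory" error means `FAISS.load_local('faiss_index', ...)` runs before the index was ever written to disk. A minimal pre-flight check could avoid the raw RuntimeError; this is a sketch with a hypothetical helper name, assuming LangChain's FAISS wrapper, which saves a docstore file `index.pkl` alongside `index.faiss`:

```python
import os

def faiss_index_ready(index_dir: str) -> bool:
    """True only when both files that FAISS.load_local reads are on disk."""
    return all(
        os.path.isfile(os.path.join(index_dir, name))
        for name in ("index.faiss", "index.pkl")
    )

# In predict(), one could bail out with a readable message instead:
# if not faiss_index_ready('faiss_index'):
#     return "Please build the knowledge-base vectors first."
```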
It still looks like the file upload didn't complete.
Which files need to be uploaded exactly? At the moment I have followed the tutorial, downloaded the text2vec and chatglm archives locally, and extracted them.
Has this been resolved?
The original code has a problem:

  File "/media/will/E10/python/LangChain-ChatGLM-Webui/bot/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 454, in load_local
    index = faiss.read_index(
  File "/media/will/E10/python/LangChain-ChatGLM-Webui/bot/lib/python3.10/site-packages/faiss/swigfaiss_avx2.py", line 10208, in read_index
    return _swigfaiss_avx2.read_index(*args)
RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open faiss_index/index.faiss for reading: No such file or directory
I ported the privateGPT code over: drop faiss and use chroma instead. With model_name 'GanymedeNil/text2vec-base-chinese' the index dimension is 512.

Replace the previous code with:

    qa = RetrievalQA.from_chain_type(llm=self.llm,
                                     chain_type="stuff",
                                     retriever=self.retriever,
                                     return_source_documents=not self.args.hide_source)
    res = qa(query)

That way ingest.py can ingest documents in batches and persist the vectors to the db. (langchain must be the latest version.)
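For reference, the batch ingest flow described above boils down to splitting each document into overlapping chunks before embedding them into the store. A minimal stand-in splitter (a sketch with a hypothetical `split_text` helper, not the actual privateGPT ingest.py code):

```python
from typing import List

def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    """Cut text into fixed-size chunks that overlap, so context at chunk
    boundaries is not lost when each chunk is embedded independently."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so consecutive chunks share `overlap` chars
    return chunks
```

Each chunk would then be embedded and added to the chroma store; the overlap size is a tuning knob, not a fixed requirement.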
Isn't this just because the "Generate knowledge-base vector file" button was never clicked, so index.faiss was never created and cannot be read? What does it have to do with faiss vs. chroma…
Correct.
root@aivrs01:/home/slifeai/workspace/LangChain# python3 app.py 2
No sentence-transformers model found with name /home/slifeai/workspace/LangChain/model/text2vec-large-chinese/text2vec-large-chinese. Creating a new one with MEAN pooling.
No sentence-transformers model found with name /home/slifeai/workspace/LangChain/model/text2vec-large-chinese/text2vec-large-chinese. Creating a new one with MEAN pooling.
No compiled kernel found.
Compiling kernels : /root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int8/quantization_kernels_parallel.c
Compiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 /root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int8/quantization_kernels_parallel.c -shared -o /root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int8/quantization_kernels_parallel.so
Load kernel : /root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int8/quantization_kernels_parallel.so
Setting CPU quantization kernel threads to 24
Using quantization cache
Applying quantization to glm layers
The dtype of attention mask (torch.int64) is not bool
Running on local URL: http://0.0.0.0:7861
To create a public link, set share=True in launch().
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/opt/conda/lib/python3.8/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/opt/conda/lib/python3.8/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/opt/conda/lib/python3.8/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/opt/conda/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/opt/conda/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "app.py", line 215, in predict
resp = knowladge_based_chat_llm.get_knowledge_based_answer(
File "app.py", line 127, in get_knowledge_based_answer
vector_store = FAISS.load_local('faiss_index', self.embeddings)
File "/opt/conda/lib/python3.8/site-packages/langchain/vectorstores/faiss.py", line 449, in load_local
index = faiss.read_index(
File "/opt/conda/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 10206, in read_index
return _swigfaiss_avx2.read_index(*args)
RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open faiss_index/index.faiss for reading: No such file or directory
This is the error I get after running app.py, when I send a question on the page. Please take a look, thanks.
Two possibilities:

1. The "Vectorize knowledge-base files" button was not clicked.
2. faiss is not installed properly.
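Both possibilities can be checked programmatically before starting the app. This is a sketch with a hypothetical `diagnose` helper, assuming the default 'faiss_index' directory used by app.py:

```python
import importlib.util
import os

def diagnose(index_dir: str = "faiss_index") -> dict:
    """Report which of the two failure causes applies."""
    return {
        # faiss-cpu and faiss-gpu both install a module named 'faiss'
        "faiss_installed": importlib.util.find_spec("faiss") is not None,
        # this is the file the vectorization button is supposed to create
        "index_present": os.path.isfile(os.path.join(index_dir, "index.faiss")),
    }
```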
It was indeed this button. Thanks a lot, excellent work.
Setting CPU quantization kernel threads to 6
Using quantization cache
Applying quantization to glm layers
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/gradio/routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "app.py", line 198, in predict
    print(file_obj.name)
AttributeError: 'NoneType' object has no attribute 'name'
^C Keyboard interruption in main thread... closing server.
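This last error is different from the FAISS one: Gradio passes `None` to the handler when no file was attached, and app.py line 198 dereferences `.name` unconditionally. A minimal guard (sketch with a hypothetical helper; the real predict() signature is assumed, not shown in this thread):

```python
def uploaded_file_name(file_obj):
    """Gradio hands the handler None when nothing was uploaded; guard before .name."""
    if file_obj is None:
        return None
    return file_obj.name
```

In predict(), returning an early "please upload a file" message when this yields None would avoid the AttributeError.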