netease-youdao / QAnything

Question and Answer based on Anything.
https://qanything.ai
GNU Affero General Public License v3.0

[BUG] <Errors when running; Chinese text in the logs is garbled> #106

Open waniani opened 7 months ago

waniani commented 7 months ago

是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?

该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?

当前行为 | Current Behavior

I installed under Ubuntu on WSL2 (Windows 11). When running it after installation I hit errors: the terminal log shows error output from the LLM service. The front end is reachable, but sending a question pops up an error message.

期望行为 | Expected Behavior

I hope this issue can be resolved so that it runs normally.

运行环境 | Environment

OS: Ubuntu 22.04 / Windows 11 WSL2
NVIDIA Driver:   537.58
CUDA: 12.2
Docker Compose:  1.29.2
NVIDIA GPU Memory: 16GB

QAnything日志 | QAnything logs

qanything-container-local | The CJS build of Vite's Node API is deprecated. See https://vitejs.dev/guide/troubleshooting.html#vite-cjs-node-api-deprecated for more details.
qanything-container-local |   Local:   http://localhost:5052/qanything
qanything-container-local |   Network: http://172.21.0.6:5052/qanything
qanything-container-local | The front-end service is ready!...(7/8)
qanything-container-local | I0205 13:49:29.703147 113 grpc_server.cc:377] Thread started for CommonHandler
qanything-container-local | I0205 13:49:29.711328 113 infer_handler.cc:629] New request handler for ModelInferHandler, 0
qanything-container-local | I0205 13:49:29.711478 113 infer_handler.h:1025] Thread started for ModelInferHandler
qanything-container-local | I0205 13:49:29.723535 113 infer_handler.cc:629] New request handler for ModelInferHandler, 0
qanything-container-local | I0205 13:49:29.723561 113 infer_handler.h:1025] Thread started for ModelInferHandler
qanything-container-local | I0205 13:49:29.725644 113 stream_infer_handler.cc:122] New request handler for ModelStreamInferHandler, 0
qanything-container-local | I0205 13:49:29.725668 113 infer_handler.h:1025] Thread started for ModelStreamInferHandler
qanything-container-local | I0205 13:49:29.725672 113 grpc_server.cc:2450] Started GRPCInferenceService at 0.0.0.0:9001
qanything-container-local | I0205 13:49:29.725824 113 http_server.cc:3555] Started HTTPService at 0.0.0.0:9000
qanything-container-local | I0205 13:49:29.771217 113 http_server.cc:185] Started Metrics Service at 0.0.0.0:9002
qanything-container-local | I0205 13:49:44.700985 113 http_server.cc:3449] HTTP request: 0 /v2/health/ready
qanything-container-local | The embedding and rerank service is ready!...(7.5/8)
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |     model, tokenizer = adapter.load_compress_model(
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/fastchat/model/model_adapter.py", line 111, in load_compress_model
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |     return load_compress_model(
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/fastchat/model/compression.py", line 189, in load_compress_model
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |     tmp_state_dict = torch.load(
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1028, in load
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |     return _legacy_load(opened_file, map_location, pickle_module, pickle_load_args)
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1246, in _legacy_load
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr |     magic_number = pickle_module.load(f, pickle_load_args)
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr | _pickle.UnpicklingError: invalid load key, 'v'.
qanything-container-local |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
qanything-container-local |                                  Dload  Upload   Total   Spent    Left  Speed
qanything-container-local | 100    13  100    13    0     0   1362      0 --:--:-- --:--:-- --:--:--  1444
qanything-container-local | The llm service is starting up, it can be long... you have time to make a coffee :)
qanything-container-local | (the line above repeats several times, interleaved with its garbled Chinese counterpart)
qanything-container-local | (garbled Chinese line, roughly: an Error was found in the LLM log file /workspace/qanything_local/logs/debug_logs/fastchat_logs/fschat_model_worker_7801.log, please check it)
qanything-container-local | (the same UnpicklingError traceback as above is printed again, ending in:)
qanything-container-local | 2024-02-05 21:49:31 | ERROR | stderr | _pickle.UnpicklingError: invalid load key, 'v'.

复现方法 | Steps To Reproduce

Installed following the official manual: sudo bash ./run.sh -c local -i 0 -b hf -m Qwen-7B-QAnything -t qwen-7b-qanything

Then the error above occurred.
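For context on the error in the logs: `_pickle.UnpicklingError: invalid load key, 'v'` from `torch.load` typically means the file being opened is not real binary weights but a git-lfs *pointer* file, a short text file beginning with `version https://git-lfs...` (the `'v'` is its first byte), left behind by a clone done without git-lfs. A minimal check, assuming this is the cause; the helper name and model path below are illustrative, not part of QAnything:

```shell
# check_lfs_pointers: flag weight files that are git-lfs text pointers
# instead of real binaries (real weights do not start with "version").
check_lfs_pointers() {
  for f in "$1"/*.bin "$1"/*.safetensors; do
    [ -f "$f" ] || continue
    if head -c 7 "$f" | grep -q '^version'; then
      echo "git-lfs pointer, not real weights: $f"
    fi
  done
}

# Example (path is an assumption -- adjust to your checkout):
check_lfs_pointers /path/to/QAnything/assets/custom_models/Qwen-7B-QAnything
```

If this prints anything, the model needs to be re-downloaded with git-lfs enabled (or with `git lfs pull` inside the model folder).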

备注 | Anything else?

No response

songkq commented 7 months ago

@waniani Where did you download the Qwen-7B-QAnything model? (huggingface / wisemodel / modelscope)

Try running MiniChat-2-3B as follows, and upload the complete log file logs/debug_logs/fastchat_logs/fschat_model_worker_7801.log for debugging:

## Step 1. Download a public LLM model (e.g., MiniChat-2-3B) and save it to "/path/to/QAnything/assets/custom_models"
cd /path/to/QAnything/assets/custom_models
git clone https://huggingface.co/GeneZC/MiniChat-2-3B

## Step 2. Execute the service startup command. Here "-b hf" selects the Hugging Face transformers backend,
## which by default loads the model in 8-bit but runs bf16 inference to save VRAM.
cd /path/to/QAnything
bash ./run.sh -c local -i 0 -b hf -m MiniChat-2-3B -t minichat
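One caveat about Step 1: if `git clone` runs without git-lfs, the large weight files come down as tiny text pointers, which would reproduce the `invalid load key, 'v'` error above. A guarded sketch (the apt hint assumes Debian/Ubuntu; the clone commands are the same ones from Step 1, shown commented as they need network access):

```shell
# Enable git-lfs before cloning Hugging Face model repos; print a hint
# instead of failing if it is not installed.
if command -v git-lfs >/dev/null 2>&1; then
  git lfs install --skip-repo   # register the LFS smudge/clean filters globally
  echo "git-lfs ready: clone the model repo now"
  # cd /path/to/QAnything/assets/custom_models
  # git clone https://huggingface.co/GeneZC/MiniChat-2-3B
  # (cd MiniChat-2-3B && git lfs pull)   # make sure real weight files were fetched
else
  echo "git-lfs missing: install it first, e.g. sudo apt-get install git-lfs"
fi
```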
waniani commented 7 months ago

qanything-container-local | The MiniChat-2-3B folder does not exist under QAnything/assets/custom_models/. Please check your setup.

The message says the model does not exist. Where should I download it from?

Jackiexiao commented 6 months ago

https://github.com/netease-youdao/QAnything/issues/50#issuecomment-1905137061

CzsGit commented 6 months ago

How do I fix the garbled Chinese? I am seeing the same thing.

tcexeexe commented 4 months ago

> How do I fix the garbled Chinese? I am seeing the same thing.

I am also hitting the garbled-Chinese problem.

qanything-container-local | ====================================================
qanything-container-local | **** (garbled Chinese, roughly: Important notice) ****
qanything-container-local | ====================================================
qanything-container-local |
qanything-container-local | (garbled Chinese, roughly: the FasterTransformer backend only supports Nvidia RTX 30/40-series GPUs; the detected GPU is NVIDIA GeForce RTX 2080 Ti, which is not in the supported list)
qanything-container-local | (garbled Chinese, roughly: automatically falling back to the huggingface backend)
qanything-container-local | (garbled Chinese, roughly: detected 22528 MiB of GPU memory, enough to run a 7B model)
qanything-container-local | The triton server for embedding and reranker will start on 1 GPUs
qanything-container-local | The -t folder does not exist under QAnything/assets/custom_models/. Please check your setup.

ysun commented 4 months ago

I hit this too. At the start the script still displays Chinese correctly; only later in the run does the Chinese stop displaying.
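The `���` garbling reported throughout this thread is consistent with the container or terminal lacking a UTF-8 locale, so multibyte Chinese output gets mangled when printed. A hedged sketch of the usual workaround (assumes the logs themselves are valid UTF-8; `C.UTF-8` ships with Ubuntu 22.04 base images):

```shell
# Force a UTF-8 locale in the shell that launches the service so Chinese
# log lines render correctly.
export LANG=C.UTF-8
export LC_ALL=C.UTF-8
locale | grep LC_ALL   # verify the override took effect
# then rerun, e.g.: sudo bash ./run.sh -c local -i 0 -b hf -m Qwen-7B-QAnything -t qwen-7b-qanything
```

For the docker-compose setup, the same effect can be attempted by passing the locale into the container environment (e.g. an `environment: [LANG=C.UTF-8, LC_ALL=C.UTF-8]` entry in the compose file); whether that is sufficient here is an assumption, since the garbling could also happen inside the startup scripts themselves.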