THUDM / VisualGLM-6B

Chinese and English multimodal conversational language model
Apache License 2.0

Cannot start VisualGLM: process is suddenly "Killed" while loading the checkpoint #141

Closed Maoweicao closed 1 year ago

Maoweicao commented 1 year ago

As the title says. Environment: Ubuntu MATE 20.04 LTS, a Tesla M40 24G compute card with a 24-core E5-2651 v2 and 16 GB RAM, CUDA 11.7. ChatGLM-6B runs fine on this machine.

```
[2023-06-23 17:15:15,775] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/home/maoweicao/miniconda3/envs/visual-glm/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/maoweicao/miniconda3/envs/visual-glm/lib/python3.10/site-packages/torchvision/image.so: undefined symbol: _ZN3c104impl8GPUTrace13gpuTraceStateE'
If you don't plan on using image functionality from torchvision.io, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have libjpeg or libpng installed before building torchvision from source?
  warn(
[2023-06-23 17:15:25,284] [INFO] building VisualGLMModel model ...
[2023-06-23 17:15:25,323] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-06-23 17:15:25,326] [INFO] [RANK 0] You are using model-only mode. For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
/home/maoweicao/miniconda3/envs/visual-glm/lib/python3.10/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")
[2023-06-23 17:16:09,071] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 7810582016
[2023-06-23 17:16:31,534] [INFO] [RANK 0] global rank 0 is loading checkpoint /home/maoweicao/.sat_models/visualglm-6b/1/mp_rank_00_model_states.pt
已杀死
```

("已杀死" means "Killed".)

A very strange message.

Maoweicao commented 1 year ago

Then I checked /var/syslog/, which showed this:

```
total-vm:73812400kB, anon-rss:15380324kB, file-rss:47552kB, shmem-rss:4096kB, UID:1000 pgtables:36084kB oom_score_adj:0
```

Maybe the README should state minimum hardware requirements? After all, not everyone has access to an A100 cluster.
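For context, a back-of-the-envelope calculation (my own arithmetic, not from the repo): the startup log above reports 7,810,582,016 parameters, so just materializing the weights in fp32 needs far more memory than 16 GB of RAM, which is consistent with the OOM kill:

```python
# Parameter count reported by SAT in the startup log above
n_params = 7_810_582_016

bytes_fp32 = n_params * 4  # 4 bytes per float32 weight
bytes_fp16 = n_params * 2  # 2 bytes per float16 weight

gib = 2**30
print(f"fp32 weights: {bytes_fp32 / gib:.1f} GiB")  # ~29.1 GiB
print(f"fp16 weights: {bytes_fp16 / gib:.1f} GiB")  # ~14.5 GiB
```

This is why the `half()` change and the larger swap area discussed below make the difference on a 16 GB machine.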

Maoweicao commented 1 year ago

Got it: too little memory was reserved. After expanding the swap area to 256 GB and tweaking the webui code a bit, it runs normally.

Maoweicao commented 1 year ago

Confirmed: it runs successfully on an M40 machine. Here is my configuration:

(neofetch output; ASCII logo omitted)

```
maoweicao@maoweicao-virtual-machine
-----------------------------------
OS: Ubuntu 20.04.6 LTS x86_64
Host: VMware7,1 None
Kernel: 5.15.0-75-generic
Uptime: 14 hours, 53 mins
Packages: 1987 (dpkg), 5 (snap)
Shell: bash 5.0.17
Resolution: 1024x768
Terminal: /dev/pts/0
CPU: Intel Xeon E5-2651 v2 (24) @ 1.799GHz
GPU: NVIDIA Tesla M40
Memory: 2852MiB / 15955MiB
```


Information that may help:

Adjusting the swap size: https://blog.csdn.net/qq_38327769/article/details/109775989

Adding half() to the model so it can run:

1. Open the \model folder in the download directory and find infer_util.py.
2. Edit line 27: change `model = model.cuda()` to `model = model.half().cuda()`.
3. Run the launch command, and make sure your network connection is stable (otherwise you may get a 443 SSL EOF error).
4. Open your browser and start using it!
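The swap expansion from the linked tutorial can be sketched roughly as below. This is a generic Ubuntu swap-file recipe, not from the repo; the path /swapfile and the 256G size are assumptions (the size is what worked for me, pick one that fits your disk):

```shell
# Create a large swap file (size is an assumption; this thread used 256G)
sudo fallocate -l 256G /swapfile
sudo chmod 600 /swapfile   # swap files must not be world-readable
sudo mkswap /swapfile      # format it as swap space
sudo swapon /swapfile      # enable it immediately

# Verify the new swap area is active
swapon --show

# Optional: keep it enabled across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```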