THUDM / VisualGLM-6B

Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
Apache License 2.0

CUDA Error: no kernel image is available for execution on the device #247

Open · vergil-ong opened 1 year ago

vergil-ong commented 1 year ago

```
[2023-08-23 09:29:44,487] [INFO] [RANK 0] replacing layer 27 attention with lora
[2023-08-23 09:29:45,906] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 7811368448
[2023-08-23 09:29:57,249] [INFO] [RANK 0] global rank 0 is loading checkpoint finetune-visualglm-6b-08-16-16-49/1500/mp_rank_00_model_states.pt
[2023-08-23 09:30:14,586] [INFO] [RANK 0] > successfully loaded finetune-visualglm-6b-08-16-16-49/1500/mp_rank_00_model_states.pt
[2023-08-23 09:30:16,798] [INFO] [RANK 0] > Quantizing model weight to 4 bits
Traceback (most recent call last):
  File "/opt/python/VisualGLM-6B/app_web.py", line 61, in <module>
    quantize(model.transformer, 4)
  File "/opt/conda/lib/python3.10/site-packages/sat/quantization/kernels.py", line 274, in quantize
    replace_linear(model)
  File "/opt/conda/lib/python3.10/site-packages/sat/quantization/kernels.py", line 272, in replace_linear
    replace_linear(sub_module)
  File "/opt/conda/lib/python3.10/site-packages/sat/quantization/kernels.py", line 272, in replace_linear
    replace_linear(sub_module)
  File "/opt/conda/lib/python3.10/site-packages/sat/quantization/kernels.py", line 272, in replace_linear
    replace_linear(sub_module)
  [Previous line repeated 1 more time]
  File "/opt/conda/lib/python3.10/site-packages/sat/quantization/kernels.py", line 241, in replace_linear
    setattr(module, name, QuantizedColumnParallelLinear(
  File "/opt/conda/lib/python3.10/site-packages/sat/quantization/kernels.py", line 172, in __init__
    self.weight = compress_int4_weight(self.weight)
  File "/opt/conda/lib/python3.10/site-packages/sat/quantization/kernels.py", line 81, in compress_int4_weight
    kernels.int4WeightCompression(
  File "/opt/conda/lib/python3.10/site-packages/cpm_kernels/kernels/base.py", line 48, in __call__
    func = self._prepare_func()
  File "/opt/conda/lib/python3.10/site-packages/cpm_kernels/kernels/base.py", line 40, in _prepare_func
    self._module.get_module(), self._func_name
  File "/opt/conda/lib/python3.10/site-packages/cpm_kernels/kernels/base.py", line 24, in get_module
    self._module[curr_device] = cuda.cuModuleLoadData(self._code)
  File "/opt/conda/lib/python3.10/site-packages/cpm_kernels/library/base.py", line 94, in wrapper
    return f(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/cpm_kernels/library/cuda.py", line 233, in cuModuleLoadData
    checkCUStatus(cuda.cuModuleLoadData(ctypes.byref(module), data))
  File "/opt/conda/lib/python3.10/site-packages/cpm_kernels/library/cuda.py", line 216, in checkCUStatus
    raise RuntimeError("CUDA Error: %s" % cuGetErrorString(error))
RuntimeError: CUDA Error: no kernel image is available for execution on the device
```

Quantization runs fine on my local 3090. On the server, which has an M40, running the sat model directly for inference also works, but the error above is raised during quantization.

Could you tell me what the problem is here? Could it be caused by the graphics card?
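In case it helps the diagnosis: this error usually means the CUDA kernels being loaded were not compiled for the GPU's architecture. The Tesla M40 is a Maxwell card (compute capability 5.2), while the RTX 3090 is Ampere (8.6), so a kernel binary that runs on the 3090 may have no image for the M40. A minimal check of what architecture the server GPU reports, assuming PyTorch is installed (`describe_cuda_device` is just an illustrative helper, not part of VisualGLM):

```python
import torch

def describe_cuda_device(index: int = 0) -> str:
    """Return the name and compute capability (sm_XX) of a visible CUDA device."""
    if not torch.cuda.is_available():
        return "No CUDA device visible"
    major, minor = torch.cuda.get_device_capability(index)
    # "no kernel image is available" means the loaded kernels were not
    # compiled for this sm_XX architecture.
    return f"{torch.cuda.get_device_name(index)}: sm_{major}{minor}"

print(describe_cuda_device())
```

If the server prints a compute capability that the installed kernel package was not built for, that would explain why plain inference (which uses PyTorch's own kernels) works while 4-bit quantization (which loads custom CUDA modules) fails.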