ChatGLM-6B: An Open Bilingual Dialogue Language Model
Apache License 2.0
40.69k stars, 5.22k forks
[BUG/Help] OSError nvidia/cublas/lib/libcublas.so.11: symbol cublasLtGetStatusString, version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference #921
Is there an existing issue for this?
Current Behavior
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 172, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: symbol cublasLtGetStatusString, version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "cli_demo.py", line 8, in <module>
    tokenizer = AutoTokenizer.from_pretrained("/home/aistudio/ChatGLM-6B/", trust_remote_code=True)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 635, in from_pretrained
    pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 915, in from_pretrained
    return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/transformers/configuration_utils.py", line 553, in from_pretrained
    return cls.from_dict(config_dict, **kwargs)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/transformers/configuration_utils.py", line 696, in from_dict
    config = cls(**config_dict)
  File "/home/aistudio/.cache/huggingface/modules/transformers_modules/configuration_chatglm.py", line 102, in __init__
    **kwargs
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/transformers/configuration_utils.py", line 336, in __init__
    import torch
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 217, in <module>
    _load_global_deps()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 178, in _load_global_deps
    _preload_cuda_deps()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 158, in _preload_cuda_deps
    ctypes.CDLL(cublas_path)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/nvidia/cublas/lib/libcublas.so.11: symbol cublasLtGetStatusString, version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference
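One reading of the dlopen failure above, offered as a hypothesis rather than a confirmed diagnosis: the libcublasLt.so.11 that the dynamic loader resolves does not export the versioned symbol cublasLtGetStatusString expected by the pip-shipped libcublas.so.11 (pulled in by torch 1.13.1 via the nvidia-cublas-cu11 package), which usually means a second, older copy of libcublasLt.so.11 is being found first on the search path. A small diagnostic sketch, with the site-packages path copied from the traceback and the other search locations being guesses for this AI Studio image:

```python
# Diagnostic sketch only; the site-packages path is copied from the traceback above,
# and the extra search locations are guesses for this environment.
import ctypes
import glob
import os

site = "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages"
pip_lt = os.path.join(site, "nvidia/cublas/lib/libcublasLt.so.11")

# Does the pip-installed libcublasLt actually export the missing symbol?
lt = ctypes.CDLL(pip_lt)
print("pip libcublasLt has cublasLtGetStatusString:",
      hasattr(lt, "cublasLtGetStatusString"))

# Are there other copies of libcublasLt.so.11 that could shadow it?
for pattern in ("/usr/lib/x86_64-linux-gnu/libcublasLt.so.11*",
                "/usr/local/cuda*/lib64/libcublasLt.so.11*",
                os.path.join(site, "*", "libs", "libcublasLt.so.11*")):
    for hit in glob.glob(pattern):
        print("other copy:", hit)

print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", ""))
```

If the pip copy does export the symbol but another copy sits earlier on LD_LIBRARY_PATH, that mismatch would explain the error.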
Expected Behavior
No response
Steps To Reproduce
python cli_demo.py
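As the CUDA check under Environment below shows, the failure does not require ChatGLM-6B at all: a bare import of torch in this environment already runs torch's _preload_cuda_deps() and raises the same OSError, so the minimal reproduction is simply:

```python
# Minimal reproduction in this environment (per the Environment section below):
# importing torch runs _load_global_deps()/_preload_cuda_deps(), which dlopens
# the pip-installed libcublas.so.11 and fails with the OSError shown above.
import torch

print(torch.cuda.is_available())
```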
Environment
- OS: Linux (Baidu AI Studio)
- Python: Python 3.7.4
- Transformers: 4.27.1
- PyTorch: 1.13.1
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): returns the following error instead:
Traceback (most recent call last):
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 172, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: symbol cublasLtGetStatusString, version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 217, in <module>
    _load_global_deps()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 178, in _load_global_deps
    _preload_cuda_deps()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/torch/__init__.py", line 158, in _preload_cuda_deps
    ctypes.CDLL(cublas_path)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/nvidia/cublas/lib/libcublas.so.11: symbol cublasLtGetStatusString, version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference
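A possible workaround to try, assuming the pip-installed copy of libcublasLt.so.11 does export the symbol (see the diagnostic sketch above): preload that copy before torch's own _preload_cuda_deps() runs, so the matching library wins symbol resolution. This is a sketch of one common mitigation for this class of mismatch, not a confirmed fix for this report:

```python
# Workaround sketch, not a confirmed fix: preload the libcublasLt.so.11 that ships
# with the pip nvidia-cublas-cu11 package so a stale system copy cannot be picked
# up when torch later dlopens libcublas.so.11. Path copied from the traceback.
import ctypes

site = "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages"
ctypes.CDLL(site + "/nvidia/cublas/lib/libcublasLt.so.11", mode=ctypes.RTLD_GLOBAL)

import torch  # noqa: E402  (import deliberately placed after the preload)

print(torch.cuda.is_available())
```

An equivalent approach is to put that directory at the front of LD_LIBRARY_PATH before starting Python; if instead the pip copy itself lacks the symbol, reinstalling a matching nvidia-cublas-cu11 would be the next thing to try.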
More info: the pip-installed packages are:
conda list:
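The pip and conda listings themselves are not included above. A quick way to capture just the versions most relevant to this error (the package names are the usual pip distributions for torch 1.13 and this environment; adjust if they differ):

```python
# Sketch: print versions of the packages most relevant to this report.
# pkg_resources ships with setuptools, so it works on Python 3.7 as well.
import pkg_resources

for pkg in ("torch", "transformers", "nvidia-cublas-cu11",
            "nvidia-cuda-runtime-cu11", "paddlepaddle-gpu"):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, "not installed")
```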