Guodongchang opened 1 year ago
Using: `conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia`. For the rationale, see: https://zhuanlan.zhihu.com/p/619427217
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
It fails with the following error:
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: | Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with the existing python installation in your environment:
Specifications:
Your python: python=3.8
If python is on the left-most side of the chain, that's the version you've asked for. When python appears to the right, that indicates that the thing on the left is somehow not available for the python version you are constrained to. Note that conda will not change your python version to a different minor version unless you explicitly specify that.
The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package setuptools conflicts for:
python=3.8 -> pip -> setuptools
torchvision -> setuptools
pytorch -> jinja2 -> setuptools

Package typing conflicts for:
pytorch -> typing_extensions -> typing[version='>=3.7.4']
pytorch -> typing

Package flit-core conflicts for:
torchvision -> typing_extensions -> flit-core[version='>=3.6,<4']
pytorch -> typing_extensions -> flit-core[version='>=3.6,<4']

Package libcxxabi conflicts for:
pytorch -> libcxx[version='>=4.0.1'] -> libcxxabi==4.0.1[build='hcfea43d_1|hebd6815_0']
python=3.8 -> libcxx[version='>=4.0.1'] -> libcxxabi==4.0.1[build='hcfea43d_1|hebd6815_0']

Package pytorch conflicts for:
torchaudio -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|2.0.1|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0|1.5.1']
torchvision -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|2.0.1|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0|1.5.1|1.7.1.|1.3.1.']
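The conflict report above says the solver cannot satisfy the CUDA 11.8 PyTorch packages against the existing python=3.8 pins. A common workaround (a sketch, not an official fix; the environment name `torch-cu118` and Python 3.10 are assumptions) is to install into a fresh environment so the solver is not constrained by the old environment's pins:

```shell
# Hypothetical workaround: create a clean conda environment with a newer
# Python, then run the same install command inside it.
conda create -n torch-cu118 python=3.10 -y
conda activate torch-cu118
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
```

Any supported Python minor version should work here; the point is only that the new environment carries no conflicting pre-existing packages.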
When launching with `python cli_demo.py chatglm`, it errors with the following output:
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "G:\JittorLLMs\cli_demo.py", line 8, in <module>
    model = models.get_model(args)
  File "G:\JittorLLMs\models\__init__.py", line 46, in get_model
    return module.get_model(args)
  File "G:\JittorLLMs\models\chatglm\__init__.py", line 48, in get_model
    return ChatGLMMdoel(args)
  File "G:\JittorLLMs\models\chatglm\__init__.py", line 21, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(os.path.dirname(__file__), trust_remote_code=True)
  File "C:\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 642, in from_pretrained
    tokenizer_class = get_class_from_dynamic_module(
  File "C:\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 363, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
  File "C:\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 237, in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)
  File "C:\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 134, in check_imports
    raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: icetk. Run `pip install icetk`
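The final ImportError comes from transformers scanning the model's custom code for imports that are not installed. A minimal sketch of that kind of check (the helper `missing_packages` is hypothetical, not the actual transformers API) shows why installing `icetk` resolves it:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of packages that cannot be imported.

    This loosely mirrors what transformers' check_imports does for a
    modeling file loaded with trust_remote_code=True: any package the
    custom code imports but the environment lacks is reported.
    """
    return [n for n in names if importlib.util.find_spec(n) is None]

# The ChatGLM custom tokenizer code imports icetk; if it is absent,
# the fix is simply `pip install icetk` in the same environment.
missing = missing_packages(["icetk"])
if missing:
    print("Missing packages:", missing)
```

Note that the install must go into the same Python environment shown in the traceback (`C:\Python\Python310` here), or the check will still fail.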