The CUDA build of PyTorch is not installed. Test it with this snippet:
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
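If the first line prints False (or the second call raises), the installed torch is a CPU-only build. A quick extra check using plain torch attributes (nothing project-specific) is the build string:

import torch
# A CPU-only wheel usually reports a version like "2.2.1+cpu" and a None CUDA version.
print(torch.__version__)   # expect something like "2.2.1+cu121" for a CUDA build
print(torch.version.cuda)  # expect "12.1"; None means no CUDA support compiled in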
Got it, thanks.
pip install torch==2.2.1+cu121 torchaudio==2.2.1+cu121 torchvision==0.17.1+cu121 -f https://mirror.sjtu.edu.cn/pytorch-wheels/torch_stable.html -i https://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host mirrors.aliyun.com
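If a CPU-only torch is already installed in the venv, it may help to remove it first so the +cu121 wheels actually replace it (standard pip usage, nothing specific to this repo):

pip uninstall -y torch torchaudio torchvision

Then re-run the install command above and repeat the torch.cuda.is_available() check.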
You need matching versions of torch, torchaudio, and torchvision.
Use this fork branch for Windows; it was tested end to end with no issues: https://github.com/gpt-omni/mini-omni/pull/44
I will close it for now, please feel free to re-open.
After running this command: python server.py --ip '0.0.0.0' --port 60808
Traceback (most recent call last):
  File "I:\mini-omni\inference.py", line 666, in <module>
    test_infer()
  File "I:\mini-omni\inference.py", line 516, in test_infer
    fabric, model, text_tokenizer, snacmodel, whispermodel = load_model(ckpt_dir, device)
  File "I:\mini-omni\inference.py", line 350, in load_model
    snacmodel = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval().to(device)
  File "i:\mini-omni\venv\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
  File "i:\mini-omni\venv\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "i:\mini-omni\venv\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "i:\mini-omni\venv\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "i:\mini-omni\venv\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
  File "i:\mini-omni\venv\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(
  File "i:\mini-omni\venv\lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Win10, RTX 3090. Dependencies were installed according to the README.
How can I fix this?
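For reference, the traceback shows the crash happens when SNAC.from_pretrained(...).eval().to(device) tries to move weights to CUDA while the installed torch has no CUDA support. Besides installing the CUDA build as suggested above, a minimal defensive sketch (an illustration, not mini-omni's actual code) would fall back to CPU instead of raising, at the cost of much slower inference:

import torch

# Use the GPU only when the installed torch build actually supports CUDA
# and a device is visible; otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"using device: {device}, torch {torch.__version__}")
# snacmodel = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval().to(device)  # as in inference.py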