Coobiw / MPP-LLaVA

Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {SFT/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-training-like MLLM on an RTX 3090/4090 with 24GB.

Qwen7B-chat/None downloaded from Hugging Face #9

Status: Open. molyswu opened this issue 10 months ago

molyswu commented 10 months ago

  line 317, in set_module_tensor_to_device
    new_value = value.to(device)
  File "D:\anaconda3\envs\minigpt4qwen\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Installed via pip: torch-2.1.0+cu118, torchaudio-2.1.0+cu118, torchvision-0.16.0+cu118

Coobiw commented 10 months ago

Thanks for your interest! What does the "Qwen7B-chat/None/None" in the issue title refer to? As for the error you describe, I think it is a CUDA version problem. You can try:

import torch
torch.cuda.is_available()  # should return True if this PyTorch build can see your GPU
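
A slightly fuller diagnostic along the same lines (standard PyTorch attributes, nothing specific to this repo) makes it obvious whether the installed wheel is actually a CUDA build:

```python
import torch

# Report which PyTorch build is installed: a CPU-only wheel has
# torch.version.cuda == None, which is exactly what triggers
# "Torch not compiled with CUDA enabled".
print("torch version  :", torch.__version__)      # e.g. 2.1.0+cu118 vs 2.1.0+cpu
print("built with CUDA:", torch.version.cuda)     # None for CPU-only builds
print("cuda available :", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device         :", torch.cuda.get_device_name(0))
```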

For library and dependency versions, you can refer to the requirements file.

For this problem, I'd suggest referring to: https://stackoverflow.com/questions/57814535/assertionerror-torch-not-compiled-with-cuda-enabled-in-spite-upgrading-to-cud
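
If `torch.version.cuda` turns out to be `None`, the usual fix from that Stack Overflow thread is to replace the CPU-only wheel with a CUDA build. Below is a minimal sketch of that check, assuming the CUDA 11.8 wheels and package versions mentioned above (the index URL in the comment is PyTorch's standard cu118 wheel index):

```python
import torch

# If the installed wheel is CPU-only, reinstall matching CUDA wheels, e.g.:
#   pip uninstall torch torchvision torchaudio
#   pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 \
#       --index-url https://download.pytorch.org/whl/cu118
if torch.version.cuda is None:
    raise SystemExit("CPU-only PyTorch build detected; reinstall a +cu118 wheel as above.")
print(f"CUDA {torch.version.cuda} build, GPU visible: {torch.cuda.is_available()}")
```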