OpenBMB / MiniCPM-V

MiniCPM-V 2.6: A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone
Apache License 2.0

Installed CUDA, but there is a CUDA version mismatch; the code couldn't run without CUDA #51

Closed fanatic-revolver closed 6 months ago

fanatic-revolver commented 6 months ago

I installed PyTorch with conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia, but an error message showed up: RuntimeError: cutlassF: no kernel found to launch! It indicates that my CUDA version is not correct. I downloaded the model from ModelScope.

(Translated from Chinese:) I installed the dependencies from requirements.txt, but when running the code the system told me to install CUDA. After installing CUDA it still errors, saying my CUDA version does not match yours. I downloaded the model myself and am using it.
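The cutlassF error above usually means the fused attention kernel has no implementation for the detected GPU and dtype; older cards such as the GTX 10xx series (compute capability 6.x) lack bfloat16 support. A minimal diagnostic sketch, assuming PyTorch is installed (the bf16_capable helper and its threshold are illustrative assumptions, not part of MiniCPM-V):

```python
# Quick check of the GPU's compute capability to decide which dtype to use.
# bf16_capable is a hypothetical helper for this thread, not MiniCPM-V API.

def bf16_capable(major: int) -> bool:
    """Ampere (compute capability 8.x) and newer GPUs support bfloat16;
    older cards such as the GTX 10xx series (6.x) do not."""
    return major >= 8

if __name__ == "__main__":
    import torch  # guarded so the helper above is usable without a GPU

    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        dtype = "bfloat16" if bf16_capable(major) else "float16"
        print(f"compute capability {major}.{minor}: use {dtype}")
    else:
        print("No CUDA device visible to PyTorch")
```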

iceflame89 commented 6 months ago

Which GPU type are you using, an A100, a 3090, or something else?

fanatic-revolver commented 6 months ago

Neither. I am using an outdated GPU, one that starts with 1xxx. Does it require a better GPU to use this model locally?

fanatic-revolver commented 6 months ago

Will it support API usage one day?

iceflame89 commented 6 months ago

Older GPUs should use fp16; try this:

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)
# For Nvidia GPUs that support BF16 (like A100, H100, RTX3090)
#model = model.to(device='cuda', dtype=torch.bfloat16)

# For Nvidia GPUs that do NOT support BF16 (like V100, T4, RTX2080)
model = model.to(device='cuda', dtype=torch.float16)

More detail: https://huggingface.co/openbmb/MiniCPM-V-2#usage
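The dtype choice in the snippet above can also be made at runtime instead of by commenting lines in and out, since PyTorch exposes torch.cuda.is_bf16_supported(). A sketch, assuming a CUDA-enabled PyTorch install (select_dtype is a hypothetical helper, not MiniCPM-V API):

```python
# Sketch: pick bfloat16 or float16 automatically based on the GPU.
# select_dtype is a hypothetical helper introduced for this example.

def select_dtype(bf16_ok: bool) -> str:
    """Return the torch dtype name to load the model with."""
    return "bfloat16" if bf16_ok else "float16"

if __name__ == "__main__":
    import torch
    from transformers import AutoModel

    # torch.cuda.is_bf16_supported() reports whether the current GPU can use bf16
    dtype = getattr(torch, select_dtype(torch.cuda.is_bf16_supported()))
    model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2',
                                      trust_remote_code=True)
    model = model.to(device='cuda', dtype=dtype)
```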

fanatic-revolver commented 6 months ago

OK, but my computer still says:

    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

The GPU also has no more than 12GB of memory, so it will not run. But thanks for now.
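The "Torch not compiled with CUDA enabled" assertion usually means a CPU-only PyTorch build was installed, which is separate from the GPU memory limit. A quick sketch for telling the two apart (diagnose is a hypothetical helper for illustration, not part of PyTorch or MiniCPM-V):

```python
# Sketch: interpret torch build info for the "Torch not compiled with CUDA
# enabled" assertion. diagnose() is a hypothetical helper for this thread.

def diagnose(built_cuda_version, cuda_available: bool) -> str:
    """Map torch build info to a likely cause."""
    if built_cuda_version is None:
        # CPU-only wheel: reinstall with the pytorch-cuda conda package
        return "cpu-only build"
    if not cuda_available:
        return "CUDA build, but no usable GPU or driver"
    return "ok"

if __name__ == "__main__":
    import torch

    # torch.version.cuda is None for CPU-only builds
    print(torch.__version__, torch.version.cuda)
    print(diagnose(torch.version.cuda, torch.cuda.is_available()))
```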

fanatic-revolver commented 6 months ago

Will it support API usage one day? (Translated from Chinese:) Will API-style calls be possible some day, i.e. a way that does not occupy local GPU memory?