Which GPU type are you using: A100, 3090, or something else?
Neither, I am using an older GPU from the 1xxx series. Does it require a better GPU to run this model locally?
Will it support API usage one day?
Older GPUs should use fp16, try this:
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)
# For Nvidia GPUs that support BF16 (like A100, H100, RTX3090)
# model = model.to(device='cuda', dtype=torch.bfloat16)
# For Nvidia GPUs that do NOT support BF16 (like V100, T4, RTX2080)
model = model.to(device='cuda', dtype=torch.float16)
More detail: https://huggingface.co/openbmb/MiniCPM-V-2#usage
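For completeness, here is a minimal end-to-end fp16 sketch built on the loading code above, assuming a local image file (the image path and question are placeholders, and the model.chat call follows the usage section linked above, so check that page if the signature has changed):

import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the model in fp16 for GPUs without BF16 support
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)
model = model.to(device='cuda', dtype=torch.float16)
model.eval()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)

# 'example.jpg' is a placeholder; use any local RGB image
image = Image.open('example.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': 'What is in this image?'}]

# Chat-style inference as shown on the model card; res holds the answer text
res, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7,
)
print(res)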
OK, but my computer still says:
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
The GPU also has less than 12GB of memory, so it will not run, but thanks for now.
Will it support API usage one day? That is, a way of calling the model through an API that does not use local GPU memory?
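Regarding the "Torch not compiled with CUDA enabled" assertion above: it usually means the installed PyTorch wheel is a CPU-only build rather than a problem with the model itself. A minimal sanity check, as a sketch using standard PyTorch calls:

import torch

# False here means the wheel has no CUDA support; reinstall a CUDA-enabled
# build that matches the local NVIDIA driver.
print(torch.cuda.is_available())
# CUDA version the wheel was built against; None for CPU-only builds.
print(torch.version.cuda)
if torch.cuda.is_available():
    # Name of the GPU that PyTorch actually sees.
    print(torch.cuda.get_device_name(0))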
I installed PyTorch with conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia, but an error message showed up: RuntimeError: cutlassF: no kernel found to launch! It indicates that my CUDA version is not correct. I downloaded the model from ModelScope.
I installed the dependencies from requirements.txt, but when running the code the system told me I needed to install CUDA. After installing CUDA it still raised an error saying my CUDA version does not match yours. I downloaded the model myself and am using it.
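A note on the cutlassF error above: it is typically raised by PyTorch's fused scaled-dot-product-attention kernels when no kernel is available for the installed GPU/CUDA combination, not by the model code itself. As a hedged workaround sketch (not an official fix from the maintainers), the fused kernels can be disabled so attention falls back to the plain math implementation, and the installed versions can be printed for comparison:

import torch

# Force the math fallback for scaled dot product attention; the flash and
# memory-efficient kernels are a common source of "no kernel found to launch"
# on older GPUs or mismatched CUDA builds.
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_math_sdp(True)

# Report the PyTorch version and the CUDA version it was built with, to
# compare against the locally installed toolkit and driver.
print(torch.__version__, torch.version.cuda)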