Meituan-AutoML / MobileVLM

Strong and Open Vision Language Assistant for Mobile Devices
Apache License 2.0
890 stars 64 forks

Is there a Chinese README? Fails to run on two RTX A4000s with an error #32

Closed life2048 closed 4 months ago

life2048 commented 4 months ago

I am using the VLM inference code from the README, and it errors out at the end of the run. Do any parameters need adjusting? RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)

weifei7 commented 4 months ago

Please make sure your torch build is the CUDA version. If it still does not work, you can add `device_map='cuda'` here -> `tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.load_8bit, args.load_4bit, device_map='cuda')`
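For context on why this helps: the error happens when model weights and inputs end up on different GPUs (e.g. a sharded `device_map` placing layers on `cuda:1` while the input sits on `cuda:0`), and passing `device_map='cuda'` pins everything to one device. Below is a toy sketch of the device check PyTorch effectively performs before running a kernel; `FakeTensor` and `conv_forward` are illustrative stand-ins, not PyTorch APIs, so this runs without any GPU:

```python
class FakeTensor:
    """Toy stand-in for a tensor that records which device holds it."""
    def __init__(self, device):
        self.device = device


def conv_forward(weight, x):
    """Mimic PyTorch's device check before a convolution kernel runs."""
    if weight.device != x.device:
        raise RuntimeError(
            "Expected all tensors to be on the same device, but found at "
            f"least two devices, {weight.device} and {x.device}!"
        )
    return FakeTensor(x.device)


# Sharded placement: weight landed on cuda:1, input on cuda:0 -> mismatch
try:
    conv_forward(FakeTensor("cuda:1"), FakeTensor("cuda:0"))
except RuntimeError as e:
    print(e)  # same kind of mismatch message as in the issue

# Pinning everything to one device (what device_map='cuda' achieves) works
out = conv_forward(FakeTensor("cuda:0"), FakeTensor("cuda:0"))
print(out.device)
```

The real fix is equivalent to the one-device case above: load the whole model onto a single GPU so no cross-device operation can occur.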

er-muyue commented 4 months ago

Hi, we are closing this issue due to inactivity. We hope your question has been resolved. If you have any further concerns, please feel free to re-open it or open a new issue. Thanks!