Closed BruceChen15 closed 1 year ago
You can put the model on the GPU.
Hi, do you mean that I can revise the code at lines 33-34 like below? Or should I just run the command CUDA_VISIBLE_DEVICES=1 python inference_video.py? thx
model = MplugOwlForConditionalGeneration.from_pretrained(
pretrained_ckpt,
torch_dtype=torch.bfloat16,
device_map={'': 0},
)
You can modify it like this, and try running with CUDA_VISIBLE_DEVICES=1 python inference_video.py
Thank you, problem solved.
Did you observe the NaN phenomenon?
Yes... I found that x.half() becomes NaN after some iterations. And I have tried ways 2 and 3 that you provided, but they did not work.
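(For context, the x.half() NaN is usually a float16 overflow: float16's largest finite value is 65504, so any larger activation becomes inf, and subsequent arithmetic on inf produces NaN. A minimal sketch with NumPy, since NumPy supports float16 but not bfloat16:)

```python
import numpy as np

# float16 tops out at 65504; casting a larger float32 value overflows to inf.
x32 = np.float32(70000.0)        # representable in float32
x16 = np.float16(x32)            # overflows to inf in float16
print(x16)                       # inf

# inf then poisons later arithmetic, e.g. inf - inf is NaN:
print(np.float16(np.inf) - np.float16(np.inf))  # nan

# bfloat16 keeps float32's 8-bit exponent range, so the same magnitude
# survives the cast -- in PyTorch: torch.tensor(70000.).bfloat16()
```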
What about bfloat16?
Do you test it on inference?
I changed the code like below, and got a "bfloat16 not implemented for conv_depthwise3d" error.
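(A common workaround when one op lacks a half/bfloat16 kernel is to run just that op in float32 and cast the result back; in PyTorch that would look like out = conv(x.float()).to(x.dtype). A self-contained sketch of the pattern with NumPy, not the repo's actual code:)

```python
import numpy as np

def float32_fallback(op, x):
    """Run `op` in float32, then cast the result back to x's original dtype.

    Mirrors the PyTorch idiom for ops without a low-precision kernel:
        out = op(x.float()).to(x.dtype)
    """
    orig_dtype = x.dtype
    return op(x.astype(np.float32)).astype(orig_dtype)

# e.g. pretend `lambda a: a * 3.0` is the unsupported depthwise conv:
x = np.array([1.5, 2.5], dtype=np.float16)
y = float32_fallback(lambda a: a * 3.0, x)
print(y, y.dtype)  # [4.5 7.5] float16
```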
Have you solved this problem? I also got the NaN problem when running video_inference.
Not yet
See #101 .
Hi, first of all, thank you for providing such amazing work. But when I try to run the video inference code you provided, I get these errors. Can you figure it out? thx