Closed: zychyz95822 closed this issue 1 year ago
I think the problem can be solved by replacing line 237 (processed_img = emb_roi2im([idAudio], imgs, bbxs, prediction, device)) with: processed_img = emb_roi2im([idAudio], imgs, bbxs, prediction.cpu(), 'cpu').
However, I have not tested this modification. Could you try it and tell me whether it works?
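(For readability, the same change as a before/after snippet; the call and all names are copied from this thread, and the comments are my reading of what the change does:)

```python
# inf_demo.py, line 237 -- original call, which makes emb_roi2im move the
# full-resolution frames onto the GPU:
processed_img = emb_roi2im([idAudio], imgs, bbxs, prediction, device)

# Proposed replacement: move the prediction to the CPU and pass 'cpu' as the
# target device, so the frames are assembled in host memory instead:
processed_img = emb_roi2im([idAudio], imgs, bbxs, prediction.cpu(), 'cpu')
```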
I have solved the problem of insufficient GPU memory by reducing the video resolution. I also tried your modification and it works as well, thank you!
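(For others hitting the same limit, a rough sketch of that resolution-reduction workaround using OpenCV; the file names and the 0.5 scale factor are illustrative and not from the repo:)

```python
import cv2

# Downscale the input video before running the pipeline to cut GPU memory use.
cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) * 0.5)   # halve width
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) * 0.5)  # halve height
out = cv2.VideoWriter("input_half.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

ok, frame = cap.read()
while ok:
    out.write(cv2.resize(frame, (w, h)))
    ok, frame = cap.read()

cap.release()
out.release()
```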
I have reduced the batch size for face_detection, but that does not seem to be enough when running av_hubert. Is there any way to fix it?

Traceback (most recent call last):
  File "inf_demo.py", line 280, in <module>
    synt_demo(fa, device, model, args)
  File "inf_demo.py", line 237, in synt_demo
    processed_img = emb_roi2im([idAudio], imgs, bbxs, prediction, device)
  File "/data/home/ss/TalkLip/utils/data_avhubert.py", line 174, in emb_roi2im
    imgs[i] = imgs[i].float().to(device)
RuntimeError: CUDA out of memory. Tried to allocate 23.75 GiB (GPU 0; 14.76 GiB total capacity; 960.37 MiB already allocated; 4.07 GiB free; 9.62 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
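(A note on the error text: the failed allocation of 23.75 GiB exceeds the card's entire 14.76 GiB capacity, so the max_split_size_mb hint in the message cannot help here; only keeping the frame assembly on the CPU, as suggested above, avoids the allocation. For genuine fragmentation cases, a minimal sketch of the setting the message refers to, where the 128 MiB value is an illustrative guess:)

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when CUDA is initialized, so set it at the
# very top of the entry script, before torch touches the GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the variable is set
```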