harlanhong / CVPR2022-DaGAN

Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
https://harlanhong.github.io/publications/dagan.html

Error while running demo.py #68

Open sangeun99 opened 1 year ago

sangeun99 commented 1 year ago

Hello, thanks for your code!

I am running demo.py, but it fails with "RuntimeError: PytorchStreamReader failed reading file data/2: invalid header or archive is corrupted".

My shell command was:

```bash
CUDA_VISIBLE_DEVICES=0 python demo.py \
    --config config/vox-adv-256.yaml \
    --driving_video path-video \
    --source_image path-img \
    --checkpoint modules/depth/models/DaGAN_vox_adv_256.pth.tar \
    --relative \
    --adapt_scale \
    --kp_num 15 \
    --generator DepthAwareGenerator
```

And the error output was as shown in the attached image.

I downloaded your pre-trained model from OneDrive. Is there a problem with the checkpoint file (DaGAN_vox_adv_256.pth.tar), or is it an issue with my environment?
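
A quick sanity check I can think of is simply trying to deserialize the file; this is a minimal sketch (the path is just where I placed the checkpoint, adjust as needed):

```python
# Minimal sketch (not part of the repo): check whether the downloaded
# checkpoint can be deserialized at all. A truncated or corrupted download
# raises the same "invalid header or archive is corrupted" error.
import torch

ckpt_path = "modules/depth/models/DaGAN_vox_adv_256.pth.tar"  # adjust to your path

try:
    ckpt = torch.load(ckpt_path, map_location="cpu")
    print("Checkpoint loaded OK, top-level keys:", list(ckpt.keys()))
except RuntimeError as e:
    print("Checkpoint appears corrupted or incomplete:", e)
```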

And one more question about evaluating the model: do you have any code to generate the mp4s for testing? Could you provide it?

Thanks in advance

harlanhong commented 1 year ago

I will check it in the next few days.

sangeun99 commented 1 year ago

@harlanhong Thanks for your prompt reply. I figured out that it was my fault: the file got corrupted while I was uploading it!

Just a few more questions about evaluation: to compute SSIM or PSNR, is the driving video used as the ground truth? And do you have any code to generate the mp4s for testing? Could you provide it?

Thanks a lot

harlanhong commented 1 year ago

Yes, for the evaluation, the driving videos are taken as the ground truth in the same-identity experiments. As for the evaluation code, I will organize and release it once things are less busy. In the meantime, you can compute the metrics frame by frame.
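
Something like the following sketch, using imageio and scikit-image; this is not the official evaluation script, just an illustration of the per-frame comparison:

```python
# Sketch: frame-by-frame SSIM/PSNR between a generated video and the
# driving (ground-truth) video. Uses imageio + scikit-image.
# Note: channel_axis requires a recent scikit-image (>= 0.19).
import imageio
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(generated_path, ground_truth_path):
    gen = imageio.mimread(generated_path, memtest=False)
    gt = imageio.mimread(ground_truth_path, memtest=False)
    psnr_vals, ssim_vals = [], []
    for g, t in zip(gen, gt):
        g = np.asarray(g)[..., :3]  # drop alpha channel if present
        t = np.asarray(t)[..., :3]
        psnr_vals.append(peak_signal_noise_ratio(t, g, data_range=255))
        ssim_vals.append(structural_similarity(t, g, channel_axis=-1, data_range=255))
    return float(np.mean(psnr_vals)), float(np.mean(ssim_vals))

psnr, ssim = evaluate_pair("result.mp4", "driving.mp4")  # example file names
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```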

sangeun99 commented 1 year ago

@harlanhong Thanks for your help! I want to try the evaluation myself: is it okay to use the first frame of the driving video as the source image and evaluate the reconstruction?
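
For reference, this is roughly how I planned to grab the source image (a short sketch with imageio; the file names are just examples):

```python
# Sketch: save the first frame of the driving video as the source image
# for a self-reconstruction (same-identity) test. File names are examples.
import imageio

reader = imageio.get_reader("driving.mp4")
first_frame = reader.get_data(0)
imageio.imwrite("source.png", first_frame)
reader.close()
```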

And also, I'd like to train DaGAN on my own dataset. I have already set up two folders, 'train' and 'test', under the data root. Do I need to create the pairs list myself? Can I just generate it randomly and train the model? Are there any rules for building the pairs list, and how many pairs should it contain? I have 4585 videos in my train folder.
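
For context, this is roughly what I had in mind for generating a random pairs list (a sketch assuming a FOMM-style CSV with 'source' and 'driving' columns; I'm not sure this is exactly what the DaGAN code expects):

```python
# Sketch: build a random pairs list CSV from the videos under train/,
# assuming a FOMM-style format with 'source' and 'driving' columns.
import csv
import os
import random

train_dir = "path/to/dataroot/train"   # my own path, adjust as needed
num_pairs = 100                        # arbitrary number of pairs

videos = sorted(os.listdir(train_dir))
with open("pairs_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["source", "driving"])
    for _ in range(num_pairs):
        src, drv = random.sample(videos, 2)  # two different videos per pair
        writer.writerow([src, drv])
```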