NVlabs / few-shot-vid2vid

Pytorch implementation for few-shot photorealistic video-to-video translation.

Again the same: Pretrained network G has fewer layers #52

Closed. BertyWooster closed this issue 4 years ago.

BertyWooster commented 4 years ago

After training, I ran the model on a sample image and sequence and got the error mentioned in the title ("Pretrained network G has fewer layers").

I can't fix it, and I can't understand the cause. Sorry for the question, but I couldn't follow the answer given in the previous issue.

LamLuong commented 4 years ago

Have you created the corresponding test_keypoints files? You should use the default folder structure:

- driving frames at ./dataset/face/test_images/0001
- driving frame keypoints at ./dataset/face/test_keypoints/0001
- reference image at ./dataset/face/test_images/0002
- reference keypoints at ./dataset/face/test_keypoints/0002

Then use the test command from the GitHub README.
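
As a side note, here is a minimal sketch (not part of the repo) for checking that layout before running the test command. The folder names simply mirror the ones listed above, and `check_layout` is a hypothetical helper:

```python
import os

# Expected few-shot-vid2vid test layout, as described in the comment above:
#   ./dataset/face/test_images/0001     driving frames
#   ./dataset/face/test_keypoints/0001  driving-frame keypoints
#   ./dataset/face/test_images/0002     reference image(s)
#   ./dataset/face/test_keypoints/0002  reference keypoints
REQUIRED_DIRS = [
    "dataset/face/test_images/0001",
    "dataset/face/test_keypoints/0001",
    "dataset/face/test_images/0002",
    "dataset/face/test_keypoints/0002",
]

def check_layout(root="."):
    """Report which of the required test folders are missing or empty."""
    for rel in REQUIRED_DIRS:
        path = os.path.join(root, rel)
        if not os.path.isdir(path):
            print(f"MISSING: {path}")
        elif not os.listdir(path):
            print(f"EMPTY:   {path}")
        else:
            print(f"OK:      {path} ({len(os.listdir(path))} files)")

if __name__ == "__main__":
    check_layout()
```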

BertyWooster commented 4 years ago

Thank you for your reply! Here is a link to my project on Google Drive: https://drive.google.com/open?id=11OxheleCCejMJPm90QGVZKIbdxywTZ19

I downloaded the example dataset with `python scripts/download_datasets.py`.

Then I ran training with `python train.py --args`.

And finally I hit the exception when running `python test.py --args`.

Sorry, how can I create the test_keypoints files?

LamLuong commented 4 years ago

Oh, so you don't have keypoints; few-shot-vid2vid cannot run without them. few-shot-vid2vid is based on vid2vid (https://github.com/NVIDIA/vid2vid). Clone that project, go to the vid2vid folder, and create a datasets/face folder. Put your test image folder inside datasets/face and rename it to test_img. Remember that test_img must contain two subfolders, 0001 and 0002, with your driving and reference images. Then run `python face_landmark_detection.py test`; it will generate the corresponding test_keypoints folder.
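
For context, that landmark step amounts to running a 68-point facial landmark detector over every frame and saving the points as one .txt file per image. Below is a rough sketch of the idea, not the repo's face_landmark_detection.py script; it assumes dlib with the publicly available shape_predictor_68_face_landmarks.dat model, and the paths just mirror the folder names above:

```python
import os
import glob
import numpy as np
import dlib

# Assumed paths, mirroring the folder names from the comment above.
IMG_ROOT = "datasets/face/test_img"        # 0001 (driving) and 0002 (reference) subfolders
KP_ROOT  = "datasets/face/test_keypoints"  # will receive matching subfolders of .txt files

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

for img_path in sorted(glob.glob(os.path.join(IMG_ROOT, "*", "*.jpg"))):
    img = dlib.load_rgb_image(img_path)
    faces = detector(img, 1)
    if len(faces) == 0:
        print(f"no face found in {img_path}, skipping")
        continue

    # Take the first detected face and extract its 68 (x, y) landmarks.
    shape = predictor(img, faces[0])
    points = np.array([[p.x, p.y] for p in shape.parts()], dtype=np.int64)

    # Save as <sequence>/<frame>.txt, one landmark per row.
    seq = os.path.basename(os.path.dirname(img_path))
    out_dir = os.path.join(KP_ROOT, seq)
    os.makedirs(out_dir, exist_ok=True)
    name = os.path.splitext(os.path.basename(img_path))[0] + ".txt"
    np.savetxt(os.path.join(out_dir, name), points, fmt="%d")
```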

BertyWooster commented 4 years ago

Oh ... thank you very much! You helped me a lot!