NVlabs / few-shot-vid2vid

Pytorch implementation for few-shot photorealistic video-to-video translation.

How to run inference on the face dataset? #17

Closed: PyxAI closed this issue 4 years ago

PyxAI commented 4 years ago

I trained the network with the example script and ran the test successfully.

Now I want to run it on my own video. I split the video into frames and passed that path via the seq_path argument, but that seems not to be enough: no new frames are added to the results path. The output is only:

dataset [FaceDataset] was created
---------- Networks initialized -------------
---------- Optimizers initialized -------------
Pretrained network G has fewer layers; The following are not initialized:
['flow_network_temp.conv_flow', 'flow_network_temp.conv_w', 'flow_network_temp.down_flow', 'flow_network_temp.res_flow', 'flow_network_temp.up_flow']
model [Vid2VidModel] was created

Do I need to create the checkpoints myself? How do I go about inputting a sequence and a reference image to output a new video sequence?
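One possible cause of the silent, empty output described above is that the data loader finds no usable sequence in the folder passed to seq_path. The sketch below is a quick sanity check for that folder; the naming conventions it assumes (consecutively numbered frames with a numeric index in the filename) are an assumption for illustration, not taken from the repo's dataset code.

```python
# Sanity-check a frames directory before passing it as seq_path.
# Assumption (not from the repo): frames are image files whose names
# contain a consecutive numeric index, e.g. frame_0001.jpg, frame_0002.jpg.
import os
import re


def check_sequence(frames_dir, exts=(".jpg", ".png")):
    """Return a short diagnostic string for a directory of video frames."""
    names = sorted(
        n for n in os.listdir(frames_dir)
        if os.path.splitext(n)[1].lower() in exts
    )
    if not names:
        return "no image frames found"
    nums = []
    for n in names:
        m = re.search(r"(\d+)", os.path.splitext(n)[0])
        if not m:
            return f"frame {n!r} has no numeric index"
        nums.append(int(m.group(1)))
    # Flag any frame whose index does not follow its predecessor by 1.
    gaps = [b for a, b in zip(nums, nums[1:]) if b != a + 1]
    if gaps:
        return f"sequence has gaps before frames {gaps}"
    return f"ok: {len(names)} frames"
```

Running this on the seq_path folder at least rules out an empty or mis-named frame sequence before digging into the model code.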

TheLukaDragar commented 4 years ago

I have the same issue in Colab.

seta-quynhbui commented 4 years ago

I have a problem with the training process. Can you share your pre-trained models with me, @PyxAI?

seta-quynhbui commented 4 years ago

I think you need facial landmarks for all images.
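To make the suggestion above concrete: the face dataset pairs each frame with a keypoint file, so landmarks have to be extracted for every frame beforehand. The sketch below is one way to do that; the per-frame `.txt` naming, the "x y per line" format, and the use of the face_alignment package (1adrianb/face-alignment, which the repo's preprocessing pipeline is commonly paired with) are assumptions here, not quoted from the repo.

```python
# Sketch: produce one landmark file per frame so the dataset can find them.
# Assumptions (not from this thread): keypoint files share the frame's
# basename with a .txt extension, one "x y" pair per line.
import os


def save_keypoints(path, points):
    """Write landmarks as one 'x y' line per point."""
    with open(path, "w") as f:
        for x, y in points:
            f.write(f"{x} {y}\n")


def load_keypoints(path):
    """Read landmarks back as a list of (x, y) float tuples."""
    with open(path) as f:
        return [tuple(float(v) for v in line.split()) for line in f]


def extract_all(frames_dir, keypoints_dir):
    """Run a 68-point landmark detector over every frame in frames_dir."""
    # Heavy imports kept local so the helpers above stay dependency-free.
    import cv2
    import face_alignment

    fa = face_alignment.FaceAlignment(
        face_alignment.LandmarksType._2D, device="cpu")
    os.makedirs(keypoints_dir, exist_ok=True)
    for name in sorted(os.listdir(frames_dir)):
        img = cv2.imread(os.path.join(frames_dir, name))[..., ::-1]  # BGR -> RGB
        preds = fa.get_landmarks(img)  # list of (68, 2) arrays, or None
        if preds is None:
            raise RuntimeError(f"no face detected in {name}")
        base = os.path.splitext(name)[0]
        save_keypoints(os.path.join(keypoints_dir, base + ".txt"), preds[0])
```

If a frame raises the "no face detected" error, that frame (and the whole run) would also fail inside the data loader, which may explain silently empty results.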

tab-1 commented 4 years ago

I want to train on my own video. Do I need to create the checkpoints myself for all images?

tbbjymm21 commented 4 years ago

Could you share the pre-trained models, @PyxAI? Thank you very much!