NVlabs / few-shot-vid2vid

PyTorch implementation of few-shot photorealistic video-to-video translation.

Very bad results #42

Closed hamzatrq closed 3 years ago

hamzatrq commented 4 years ago

Getting very bad results even for the first 50 epochs.

[Result images attached at 15, 30, and 50 epochs.]

From the start my f_flow value is very high, but I am not sure whether it has something to do with the issue and, if so, what exactly is wrong.

Log File: loss_log.txt

macriluke commented 4 years ago

Have you also observed that the raw_image is sometimes a different person from the reference and target faces?

I get decent results up to 50 or so epochs, and then the model slowly degrades as it approaches 100 epochs.

I have noticed very bad results when the target face is angled opposite the reference face. Training with `--no_flip` may help, but I'm not sure.

My dataset is broken into frame sequences of single faces, so the reference face and target face are always the same person, and the background doesn't change drastically between them. That is why it is very odd that raw_image includes a face that isn't the same person as the reference and target.

FredMusoro commented 4 years ago

It seems higher versions of PyTorch have strange warping behavior that causes the model to diverge. Are you perhaps using a PyTorch version higher than 1.2.0? Downgrading fixed the problem for me...
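One plausible explanation for version-dependent warping (an assumption on my part, not confirmed in this thread) is the `F.grid_sample` behavior change in PyTorch 1.3.0, where the `align_corners` default flipped from `True` to `False`. Passing it explicitly makes flow warping consistent across versions; a minimal sketch of such a warp:

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image with a pixel-space optical-flow field via bilinear sampling.

    Assumption: pinning align_corners=True reproduces the pre-1.3.0
    grid_sample behavior that this repo was presumably developed against.
    """
    n, _, h, w = image.shape
    # Base sampling grid in normalized [-1, 1] coordinates (identity mapping).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and displace the grid.
    offsets = torch.stack(
        (flow[:, 0] / ((w - 1) / 2), flow[:, 1] / ((h - 1) / 2)), dim=-1
    )
    # Explicit align_corners avoids the silent default change across versions.
    return F.grid_sample(image, grid + offsets, align_corners=True)
```

With `align_corners=True`, a zero flow field samples exactly at pixel centers, so warping with zero flow returns the input unchanged.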

macriluke commented 4 years ago

> It seems higher versions of PyTorch have strange warping behavior that causes the model to diverge. Are you perhaps using a PyTorch version higher than 1.2.0? Downgrading fixed the problem for me...

As for myself, I'm using torch 1.0.0, and I still get poor warps.

tcwang0509 commented 3 years ago

This repo is now deprecated. Please refer to the new Imaginaire repo: https://github.com/NVlabs/imaginaire.