I extracted tens of thousands of faces from a variety of videos of my target model into data_src, recorded videos of my own face from various angles for data_dst, and extracted faces from those as well.
After roughly 1 million training iterations at a batch size of about 4 using SAEHD, I exported a dfm file.
However, when I loaded it into DeepFaceLive, the result was completely different from what I saw in the SAEHD training preview.
It looked as if the model had not been trained at all.
The sample dfm files provided yield very good results, but the dfm file I created does not. Do I need more training?