@Clorr was working on GAN but he's not available nowadays it seems. Dfaker's results are blurry and don't look good at all in my view.
Are changes only needed in convert/merge to get the higher resolution?
Having better-defined images as output means the model has to learn smaller features, which in turn requires a bigger model.
I was aware of the faceswap-GAN version, and I intend to add it. Thanks for the @dfaker link, I was not aware of it!
@Clorr Hey there, feel free to ask about any bits you'd like offered as a pull request. I keep meaning to send over a patch for a multi-threaded version of https://github.com/dfaker/df/blob/master/training_data.py#L56, which can be a major bottleneck if you've got a fast enough GPU.
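For illustration, here is a minimal sketch of one way to hide minibatch-generation latency: wrap any Python generator in a background thread that prefetches batches into a queue. This is not dfaker's actual patch, and `minibatch` in the usage comment is a hypothetical generator name; it just shows the idea of overlapping CPU-side warping with GPU training.

```python
# Sketch only: a background-thread prefetcher for any minibatch generator.
import queue
import threading

class BackgroundGenerator:
    def __init__(self, generator, prefetch=4):
        self.queue = queue.Queue(maxsize=prefetch)   # bounded so memory stays capped
        self.generator = generator
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        for item in self.generator:
            self.queue.put(item)      # blocks when the queue is full
        self.queue.put(None)          # sentinel: source generator is exhausted

    def __iter__(self):
        while True:
            item = self.queue.get()
            if item is None:
                return
            yield item

# Hypothetical usage: wrap the existing minibatch generator before the training loop.
# for warped, target in BackgroundGenerator(minibatch(images, batch_size=64)):
#     model.train_on_batch(warped, target)
```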
@dfaker Thanks, I haven't checked your code in depth yet, but maybe I can make a plugin from it. I'll try to look at the end of the coming week.
I'd really like to try the 128x128 output size when the code is ready, but I've got an overclocked 1070 with 8 GB rather than the 11 GB on the 1080 Ti that the code was made on. I'm hoping the script will run on a 1070 with some tweaking.
If I want to change to 128x128 px, which parameters in the model should I change? Only model.py? I mean the non-GAN model.
@ruah1984 That depends on how complex you want the model to be and how much your GPU can handle memory-wise. More features and a higher ENCODER_DIM for the Encoder, as well as more layers and features for the Decoder, mean better output quality. But more features translate to more variables that need to be stored, i.e. a bigger memory footprint. From my tests, leaving the Decoder as it is now yields blurry images, but changing it (adding a layer) increases memory usage. I'm really not sure what the ideal model would look like...
So there is not one definitive way to change Model_Original.py for 128x128px – there are quite a few. The only one that produced good output images for me was one that needed tons of GPU memory. Tell me what GPU you have and I can post the model structure here. You also need to edit the TrainingDataGenerator class and the random_warp function in lib/training_data.py so that the internal image processing can handle 128x128px images. And if you want to convert some images you also need to take a look at Convert_Adjust.py and Convert_Masked.py. I did not test converting; I spent most of my time training and testing different models. A rough sketch of both changes follows below.
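To make that more concrete, here is a rough Keras sketch of what a 128x128 variant of the original encoder/decoder could look like. The filter counts, ENCODER_DIM value and the extra decoder upscale block are illustrative, and standard UpSampling2D is used instead of the repo's PixelShuffler layer to keep the snippet self-contained; it is not the exact Model_Original.py structure.

```python
# Illustrative 128x128 autoencoder sketch (not the repo's exact Model_Original.py).
from keras.layers import Input, Dense, Flatten, Reshape, Conv2D, LeakyReLU, UpSampling2D
from keras.models import Model

IMAGE_SHAPE = (128, 128, 3)   # was (64, 64, 3) in the original model
ENCODER_DIM = 1024            # raise this (e.g. 2048) for more capacity, at a memory cost

def conv(filters):
    def block(x):
        x = Conv2D(filters, kernel_size=5, strides=2, padding='same')(x)
        return LeakyReLU(0.1)(x)
    return block

def upscale(filters):
    def block(x):
        x = UpSampling2D()(x)
        x = Conv2D(filters, kernel_size=3, padding='same')(x)
        return LeakyReLU(0.1)(x)
    return block

def Encoder():
    inp = Input(shape=IMAGE_SHAPE)
    x = conv(128)(inp)            # 64x64
    x = conv(256)(x)              # 32x32
    x = conv(512)(x)              # 16x16
    x = conv(1024)(x)             # 8x8
    x = Dense(ENCODER_DIM)(Flatten()(x))
    x = Dense(8 * 8 * 512)(x)
    x = Reshape((8, 8, 512))(x)
    x = upscale(512)(x)           # 16x16 bottleneck shared by both decoders
    return Model(inp, x)

def Decoder():
    inp = Input(shape=(16, 16, 512))
    x = upscale(256)(inp)         # 32x32
    x = upscale(128)(x)           # 64x64
    x = upscale(64)(x)            # 128x128 -- the extra block the 64px model does not need
    x = Conv2D(3, kernel_size=5, padding='same', activation='sigmoid')(x)
    return Model(inp, x)
```

The large Dense layers around the bottleneck are where most of the extra memory goes, which matches the "tons of GPU memory" caveat above.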
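And for the training-data side, a sketch of how the random warp could be scaled up so it emits 128x128 pairs instead of 64x64. The grid size, crop margins and the simple resized target are illustrative assumptions; the repo's actual random_warp / TrainingDataGenerator code differs (for example, it builds the target with a similarity transform rather than a plain crop).

```python
# Sketch only: a random_warp variant producing 128x128 patches instead of 64x64.
import cv2
import numpy as np

def random_warp_128(image):
    """Warp a 256x256 aligned face into a (warped, target) pair of 128x128 images."""
    assert image.shape == (256, 256, 3)
    # 5x5 control grid over the central region of the face
    grid = np.linspace(128 - 80, 128 + 80, 5)
    mapx = np.broadcast_to(grid, (5, 5)).copy()
    mapy = mapx.T.copy()
    # jitter the control points to create a random warp
    mapx += np.random.normal(size=(5, 5), scale=5)
    mapy += np.random.normal(size=(5, 5), scale=5)
    # upsample the sparse grid to a dense map, then crop 128x128
    # (the 64px model would resize to 80x80 and crop 64x64 here)
    interp_mapx = cv2.resize(mapx, (160, 160))[16:144, 16:144].astype('float32')
    interp_mapy = cv2.resize(mapy, (160, 160))[16:144, 16:144].astype('float32')
    warped = cv2.remap(image, interp_mapx, interp_mapy, cv2.INTER_LINEAR)
    # simplified target: the un-jittered central crop, resized to 128x128
    target = cv2.resize(image[48:208, 48:208], (128, 128))
    return warped, target
```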
@subzerofun, I watched a YouTube video about https://github.com/alexjc/neural-enhance and found that the resolution can be improved. I'm not sure whether the process would be to train the model as usual and then run the output through neural-enhance as a second pass, but the resolution can definitely be improved, similar to what movie makers do now. Currently, after merging the face, I use DaVinci Resolve 14 with its Face Refinement tool to touch things up, but I found the Face Refinement concept is the same as what we are doing here, and its face detection is worse than what we have in our face extraction.
I will get a new setup by next week. The specs: Windows 10, 16 GB RAM at 2933 MHz, NVIDIA GTX 1080 Ti with 11 GB GDDR5X, Intel i7 8700K (6 cores/12 threads, 12 MB cache, overclocked up to 4.6 GHz across all cores).
This setup will be used for neural network study in the future. I'm a rookie right now, but I want to know what I can change in the current scripts to improve the resolution within my new hardware's capabilities.
By the way, for any future changes here, I would like to contribute and help validate them with my new setup.
Putting this here as well: https://github.com/deepfakes/faceswap/issues/221
@oatssss has completed GAN128, so I will close this issue here. It can be reopened if we want "high resolution" images again in the future.
Can we include a 128x128 input/output size to get better face resolution? I'm not a programmer, but I found that the latest dfaker and faceswap-GAN have included it.
Please see the links below: https://github.com/dfaker/df and https://github.com/shaoanlu/faceswap-GAN/blob/master/FaceSwap_GAN_v2_sz128_train.ipynb
Hope the new GAN version will be released soon. Good luck and thanks to the team.