Closed — luoling1993 closed this issue 3 years ago
Please see the README section on preparing your own data: https://github.com/eladrich/pixel2style2pixel#preparing-your-data. After defining your data paths, transforms, and experiment type, you can train on your image-to-image task. We also provide example training commands that you can refer to.
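For reference, registering a custom dataset in pSp is done by adding an entry to configs/data_configs.py. A minimal sketch is below — the 'my_task' name and all paths are placeholders, and the repo's 'transforms' key (which points to a transforms_config class) is elided here for brevity:

```python
# Sketch of a custom entry for configs/data_configs.py.
# 'my_task' and all paths are placeholders, not real repo values.
# In the actual repo each entry also has a 'transforms' key pointing
# to a transforms_config class, omitted here to keep this self-contained.
DATASETS = {
    'my_task': {
        'train_source_root': '/path/to/train/source',  # input images
        'train_target_root': '/path/to/train/target',  # ground-truth images
        'test_source_root': '/path/to/test/source',
        'test_target_root': '/path/to/test/target',
    },
}
```

The dataset_type flag passed to scripts/train.py then selects which entry to use.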
When I read the README, I understood how to prepare my data, but I still have some confusion about the pretrained models.
Regarding id_lambda: if you're not working on facial images, then you can try setting id_lambda=0.
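Conceptually, id_lambda just weights the identity-loss term in the total training objective, so setting it to 0 drops that term entirely. A rough sketch of the idea — the function name and default weights here are illustrative, not the repo's exact code:

```python
def total_loss(l2, lpips, id_loss,
               l2_lambda=1.0, lpips_lambda=0.8, id_lambda=0.1):
    # Weighted sum of the reconstruction terms (illustrative weights).
    loss = l2_lambda * l2 + lpips_lambda * lpips
    # With id_lambda=0 the identity term contributes nothing,
    # so the facial-recognition loss is effectively disabled.
    if id_lambda > 0:
        loss += id_lambda * id_loss
    return loss
```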
Regarding the pre-trained models, please see Issue #76.
After reading Issue #76 and the code, do I understand you correctly?

ir_se50 is trained on facial datasets. Because its parameters are updated during training, I can use this pretrained model for my tasks, or replace it with a ResNet-IR trained on ImageNet.

The StyleGAN2 decoder is trained on the FFHQ dataset and is NOT updated by default. I can set train_decoder=True to fine-tune the decoder.

Yes, you understood correctly.
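The train_decoder behavior can be thought of as choosing which parameter groups the optimizer sees: the encoder is always trained, and the decoder's parameters are included only when the flag is set. A simplified sketch of that logic — not the repo's exact code:

```python
def trainable_params(encoder_params, decoder_params, train_decoder=False):
    # The encoder is always optimized.
    params = list(encoder_params)
    # The StyleGAN2 decoder stays frozen unless train_decoder is set,
    # in which case its parameters are handed to the optimizer too.
    if train_decoder:
        params += list(decoder_params)
    return params
```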
Thank you very much for your answers!
I have an image-to-image task, and I have image pairs: one has a watermark, the other doesn't. I want to use this project to train a model to remove the watermark. How do I use this project?