leehomyc / Faster-High-Res-Neural-Inpainting

High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis
http://www.harryyang.org/inpainting
MIT License

Which file is used for training? #3

Closed xuyifeng-nwpu closed 7 years ago

xuyifeng-nwpu commented 7 years ago

Thank you for sharing your code.

Can you tell me which Lua file is used for training on the dataset?

leehomyc commented 7 years ago

I did not include the code to train the content network, but it is similar to Context Encoders. You can use their code to train on ImageNet (https://people.eecs.berkeley.edu/~pathak/context_encoder/).
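
For reference, a minimal sketch (not this repo's actual training script, which is not released) of a Context Encoders-style L2 reconstruction criterion in Torch; the tensor names and sizes are placeholders, and Context Encoders additionally combines this term with an adversarial loss:

```lua
-- Minimal sketch only: Context Encoders-style L2 reconstruction loss in Torch.
-- `prediction` and `groundTruth` are placeholder names for the content
-- network's output over the masked region and the corresponding ground-truth crop.
require 'nn'

local criterion = nn.MSECriterion()

local function reconstructionLoss(prediction, groundTruth)
  -- mean squared error over the hole region
  return criterion:forward(prediction, groundTruth)
end

-- dummy usage with random tensors
local prediction = torch.rand(3, 64, 64)
local groundTruth = torch.rand(3, 64, 64)
print(reconstructionLoss(prediction, groundTruth))
```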

xuyifeng-nwpu commented 7 years ago

@leehomyc The joint loss function in your paper differs from that of the Context Encoders. Can I directly use their training code (https://people.eecs.berkeley.edu/~pathak/context_encoder/)?

Or should I modify the training Lua code where the joint loss function is defined? Thanks.

leehomyc commented 7 years ago

No, you don't need to modify it. The model for the content constraint is trained on ImageNet using Context Encoders.
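
As a hedged illustration only (placeholder weights and dummy values, not code from this repo): the paper's joint loss combines the content term, a neural-patch texture term, and a total-variation term during the test-time optimization, which is why the Context Encoders training code can be reused unchanged for the content network:

```lua
-- Sketch of how the paper's joint objective combines its three terms.
-- The weights and values below are placeholders, not this repo's settings.
require 'torch'

local function jointLoss(contentTerm, textureTerm, tvTerm, textureWeight, tvWeight)
  -- contentTerm: agreement with the pre-trained content network's prediction
  -- textureTerm: neural-patch similarity in VGG feature space
  -- tvTerm:      total-variation smoothness
  return contentTerm + textureWeight * textureTerm + tvWeight * tvTerm
end

-- dummy values just to show how the terms are combined
print(jointLoss(1.0, 0.5, 0.1, 1e-3, 1e-4))
```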