researchmm / PEN-Net-for-Inpainting

[CVPR'2019] PEN-Net: Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting
https://arxiv.org/abs/1904.07475
MIT License
361 stars · 77 forks

Pretrained model #13

Open XiaoYangon opened 4 years ago

XiaoYangon commented 4 years ago

Hello, thanks for releasing the pretrained model. I downloaded the Places2 model and ran test.py on the Places2 validation data with all other parameters unchanged, but the results are a little poor and I don't know why. Can you tell me what results you got in your experiments?
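One way to make "a little poor" concrete when comparing against the authors' numbers is to measure PSNR between each inpainted output and its ground-truth image, which is one of the metrics the paper reports. Below is a minimal sketch of that computation; it is not code from this repo, and the file layout for loading images is up to you.

```python
import numpy as np

def psnr(gt: np.ndarray, pred: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of the same shape.

    gt/pred are arrays in [0, max_val]; higher PSNR means closer images.
    """
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0.0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Averaging this over the whole validation set gives a number you can compare directly with the paper's Places2 table, instead of judging single outputs by eye.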

hastaluegoph commented 4 years ago

@Purdandelion Me too. I tested it on Places2, but the results turned out to be a little worse than expected; maybe I should retrain the model.

zhengbowei commented 4 years ago

Hi, everybody! I want to test the pretrained model, but I don't know how to prepare "zip_root": "../datazip" or what to pass for "-l". Thank you for your answer!
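The config key "zip_root" suggests the dataloader reads images out of a .zip archive rather than loose files. The sketch below packs a folder of images into such an archive; the directory layout and archive name are assumptions for illustration, not the layout this repo actually requires (check the repo's dataset code for the expected structure, and its README for what "-l" means).

```python
import os
import zipfile

def pack_images_to_zip(image_dir: str, zip_path: str) -> int:
    """Pack every file under image_dir into zip_path; return the file count.

    Paths inside the archive are stored relative to image_dir so the zip
    is portable. ZIP_STORED (no compression) keeps image loading fast.
    """
    os.makedirs(os.path.dirname(zip_path) or ".", exist_ok=True)
    count = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_STORED) as zf:
        for root, _dirs, files in os.walk(image_dir):
            for name in sorted(files):
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(full, image_dir))
                count += 1
    return count

# Hypothetical usage matching the "zip_root": "../datazip" config value:
# pack_images_to_zip("places2_val_images", "../datazip/places2.zip")
```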