ChrisChen1023 / HINT

HINT: High-quality INpainting Transformer with Enhanced Attention and Mask-aware Encoding

The performance of the algorithm falls far short of what the paper claims. #6

Closed: CharlesNord closed this issue 7 months ago

CharlesNord commented 8 months ago

Did you upload the correct pre-trained model? Why is the algorithm performing so poorly?

test

ChrisChen1023 commented 7 months ago

> Did you upload the correct pre-trained model? Why is the algorithm performing so poorly?
>
> test

Hi there,

Thanks for your interest in our work. We did release the correct pre-trained model. We do not use the above images to showcase performance in our paper. The artifacts likely stem from our training strategy, in which irregular masks can cover the whole image during training. We are working on solving this in our next project. Cheers.
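
For readers unfamiliar with that setup: "irregular masks" usually means randomly drawn brush strokes, which at large widths can occlude nearly the entire image. Below is a minimal sketch of a common stroke-based irregular-mask generator of that kind. It is an illustrative assumption, not the mask code actually used in this repository; every function name and parameter is hypothetical.

```python
# A minimal sketch of irregular-mask generation via random brush strokes.
# NOT the HINT repository's actual mask code; names/parameters are illustrative.
import numpy as np
import cv2


def random_irregular_mask(height=256, width=256, max_strokes=12,
                          max_vertices=8, max_brush_width=40, seed=None):
    """Return a float mask of shape (height, width); 1 marks masked pixels."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((height, width), dtype=np.uint8)
    for _ in range(int(rng.integers(1, max_strokes + 1))):
        # Start each stroke at a random point and walk in random directions.
        x, y = int(rng.integers(0, width)), int(rng.integers(0, height))
        brush = int(rng.integers(10, max_brush_width))
        for _ in range(int(rng.integers(1, max_vertices + 1))):
            angle = rng.uniform(0, 2 * np.pi)
            length = int(rng.integers(10, 60))
            nx = int(np.clip(x + length * np.cos(angle), 0, width - 1))
            ny = int(np.clip(y + length * np.sin(angle), 0, height - 1))
            cv2.line(mask, (x, y), (nx, ny), color=1, thickness=brush)
            x, y = nx, ny
    return mask.astype(np.float32)


if __name__ == "__main__":
    m = random_irregular_mask(seed=0)
    # With many wide strokes the masked ratio can approach 100%,
    # which matches the "masks cover the whole image" training regime.
    print(f"masked ratio: {m.mean():.2%}")
```

A model trained mostly on such near-total occlusions can behave differently on the small, structured holes shown in the screenshots above, which would be consistent with the artifacts reported here.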