huangwenwenlili / spa-former

Code for the paper titled "Sparse Self-Attention Transformer for Image Inpainting".
MIT License

Can pre-trained models be shared? #2

Open tanbuzheng opened 4 months ago

tanbuzheng commented 4 months ago

Hi, I would like to use your approach as a comparison method in my work. Could you share the pre-trained models?

Looking forward to your reply!

wangupupup commented 2 months ago

+1