liuqk3 / PUT

Papers: 'Transformer based Pluralistic Image Completion with Reduced Information Loss' (TPAMI 2024) and 'Reduce Information Loss in Transformers for Pluralistic Image Inpainting' (CVPR 2022)
MIT License
173 stars · 15 forks

About train #5

Open 1997Jessie opened 2 years ago

1997Jessie commented 2 years ago

Does pvqvae.yaml train first, and then transformer.yaml second? Is that right? I feel confused and don't know how to start.

liuqk3 commented 2 years ago

Hi @1997Jessie, thanks for your interest. You are right: you need to train P-VQVAE first, and then train the Transformer (with P-VQVAE fixed).
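To make the two-stage order concrete, here is a minimal PyTorch sketch, not the repo's actual models or training script (those are driven by pvqvae.yaml and transformer.yaml): a toy autoencoder stands in for P-VQVAE in stage 1, and in stage 2 its weights are frozen while only a small Transformer is optimized on its features.

```python
# Hypothetical two-stage sketch: toy stand-ins, not the repo's P-VQVAE/Transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F

# ---- Stage 1: train the autoencoder (stand-in for P-VQVAE) ----
encoder = nn.Conv2d(3, 8, kernel_size=4, stride=4)          # 32x32 image -> 8x8 feature map
decoder = nn.ConvTranspose2d(8, 3, kernel_size=4, stride=4)  # 8x8 feature map -> 32x32 image
ae_params = list(encoder.parameters()) + list(decoder.parameters())
ae_opt = torch.optim.Adam(ae_params, lr=1e-3)

x = torch.randn(2, 3, 32, 32)                                # fake image batch
ae_loss = F.mse_loss(decoder(encoder(x)), x)                 # reconstruction objective
ae_opt.zero_grad(); ae_loss.backward(); ae_opt.step()

# ---- Stage 2: freeze stage-1 weights, train only the Transformer ----
for p in ae_params:
    p.requires_grad_(False)
encoder.eval()

layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True)
transformer = nn.TransformerEncoder(layer, num_layers=2)
tf_opt = torch.optim.Adam(transformer.parameters(), lr=1e-4)

with torch.no_grad():                                        # encoder is fixed in stage 2
    feats = encoder(x).flatten(2).transpose(1, 2)            # (B, 64 tokens, 8 dims)
pred = transformer(feats)
tf_loss = F.mse_loss(pred, feats)                            # placeholder objective
tf_opt.zero_grad(); tf_loss.backward(); tf_opt.step()
```

In the actual repo the two stages correspond to the two config files you mentioned: run training once with pvqvae.yaml, then point transformer.yaml at the resulting P-VQVAE checkpoint so the second stage trains with it fixed.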

Best wishes.