jingyuanli001 / RFR-Inpainting

The source code for CVPR 2020 accepted paper "Recurrent Feature Reasoning for Image Inpainting"
MIT License

About code #32

Closed haolin512900 closed 2 years ago

haolin512900 commented 3 years ago

Hello, thank you for your great project! I come again! In modules/RFRNet.py, in the class RFRNet, what is the purpose of `self.RFRModule.att.att.att_scores_prev = None` and `self.RFRModule.att.att.masks_prev = None`? Looking forward to your reply.

jingyuanli001 commented 3 years ago

This part initializes the attention scores of the KCA module in each forward pass. During testing, these lines are not important, but during training, without them the attention scores will be messed up.
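For intuition, here is a minimal, hypothetical sketch (not the repository's actual classes; the scoring math is a placeholder) of the pattern: a stateful attention module whose cached scores must be cleared at the start of each forward pass so that one image's history does not leak into the next.

```python
import torch
import torch.nn as nn

class StatefulAttention(nn.Module):
    """Toy attention that caches its scores across the recurrent iterations of one image."""
    def __init__(self):
        super().__init__()
        self.att_scores_prev = None  # scores carried over from earlier iterations
        self.masks_prev = None

    def forward(self, feat, mask):
        # Placeholder scoring: softmax over spatial positions.
        scores = torch.softmax(feat.flatten(2), dim=-1)
        if self.att_scores_prev is not None:
            # Within one image, later iterations reuse the accumulated history.
            scores = 0.5 * scores + 0.5 * self.att_scores_prev
        self.att_scores_prev = scores.detach()
        self.masks_prev = mask
        return feat  # placeholder output

class TinyRFRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.att = StatefulAttention()

    def forward(self, x, mask, iters=3):
        # Reset the cached state so the first iteration on *this* image does not
        # see scores left over from the previous image/batch (the role of the
        # `... = None` lines being discussed).
        self.att.att_scores_prev = None
        self.att.masks_prev = None
        for _ in range(iters):  # recurrent feature reasoning loop
            x = self.att(x, mask)
        return x

net = TinyRFRNet()
out = net(torch.randn(1, 8, 16, 16), torch.ones(1, 1, 16, 16))
```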

haolin512900 commented 3 years ago

Thank you for your reply!

haolin512900 commented 3 years ago

But why are there three `att` in the code `self.RFRModule.att.att.att_scores_prev = None`? Why not just one `att`? For example, `self.RFRModule.att_scores_prev = None`.

xiankgx commented 3 years ago

The first `.att` is because the AttentionModule is part of the RFRModule.

The second `.att` is because KnowledgeConsistentAttention is part of the AttentionModule. The AttentionModule combines the KnowledgeConsistentAttention output with the standard (pre-attention) features through a learnable pointwise (kernel size 1) convolution layer (the combiner).

And finally, `att_scores_prev` and `masks_prev` in KnowledgeConsistentAttention are kind of like an initial state that is set up at the beginning of an inpainting process, a bit similar in idea to how we initialize the hidden state of the decoder in an LSTM/GRU model.

Just my 2 cents.
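To make the attribute path concrete, here is a heavily reduced sketch (class names taken from the thread, internals and the combiner attribute name are illustrative assumptions) of how the modules nest:

```python
import torch.nn as nn

class KnowledgeConsistentAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.att_scores_prev = None  # recurrent "state", reset once per image
        self.masks_prev = None

class AttentionModule(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.att = KnowledgeConsistentAttention()  # inner attention
        # 1x1 conv that fuses the attention output with the pre-attention
        # features (attribute name here is illustrative).
        self.combiner = nn.Conv2d(2 * channels, channels, kernel_size=1)

class RFRModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.att = AttentionModule()

class RFRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.RFRModule = RFRModule()

    def forward(self, x, mask):
        # RFRNet -> .RFRModule -> .att (AttentionModule) -> .att (KnowledgeConsistentAttention)
        self.RFRModule.att.att.att_scores_prev = None
        self.RFRModule.att.att.masks_prev = None
        # ... recurrent inpainting iterations would follow here ...
        return x
```

With this structure, `self.RFRModule.att_scores_prev = None` would only create a new, unused attribute on RFRModule; the state that actually matters lives two levels down, on the KnowledgeConsistentAttention instance.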

haolin512900 commented 3 years ago

Thank you for your reply! Good luck!