ewrfcas / MAE-FAR

Code for Learning Prior Feature and Attention Enhanced Image Inpainting (ECCV 2022)

Clarification on "TIPS: Now we recommend to use features from layer6 of the MAE instead of layer8 to enjoy superior performance." #20

Closed zhou13 closed 1 year ago

zhou13 commented 1 year ago

Hi @ewrfcas, first, I'd like to thank you for providing such simple and concise code. Great work on inpainting! I want to get some clarification on your new tip. I cannot find any place in the code where you use layer-8 MAE features. Could you point me to the line? Thank you.

ewrfcas commented 1 year ago

Hi, this conclusion was discovered in our subsequent research, so the related code and models have not been updated yet. We recommend using features from the 6th layer of the pre-trained MAE to train your new model for superior performance. Here is a figure showing the performance of different layers.

[figure: inpainting performance when conditioning on features from different MAE layers vs. the image output]

You can see that layer 8 slightly outperforms the image output (and the image output is not stable), but layer 6 performs best and is the most stable among all features.
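For readers who want to try this before the repo is updated, the sketch below shows one common way to grab an intermediate transformer block's output with a forward hook, so the tap can be switched from layer 8 to layer 6. The `TinyViTEncoder` here is a hypothetical stand-in, not the actual MAE-FAR model; the real code would hook the corresponding block of the pre-trained MAE ViT.

```python
import torch
import torch.nn as nn

class TinyViTEncoder(nn.Module):
    """Hypothetical stand-in for a ViT encoder: a plain stack of blocks."""
    def __init__(self, dim=32, depth=12, heads=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            for _ in range(depth)
        )

    def forward(self, x):
        for blk in self.blocks:
            x = blk(x)
        return x

def tap_layer_features(model, block_idx, x):
    """Return the output of model.blocks[block_idx] for input x.

    block_idx is 0-indexed, so the "6th layer" in the thread is block_idx=5.
    """
    feats = {}
    handle = model.blocks[block_idx].register_forward_hook(
        lambda mod, inp, out: feats.setdefault("f", out.detach())
    )
    with torch.no_grad():
        model(x)
    handle.remove()
    return feats["f"]

enc = TinyViTEncoder()
x = torch.randn(2, 16, 32)             # (batch, tokens, dim)
feat6 = tap_layer_features(enc, 5, x)  # features after the 6th block
```

The hook approach avoids modifying the encoder's `forward`, which is convenient when the MAE weights and code are loaded from an upstream checkpoint.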

zhou13 commented 1 year ago

Thank you for the input! Just to clarify, the released model uses the "Image" output as shown in the plot, right?

ewrfcas commented 1 year ago

The released model uses features from layer8.