microsoft / Relation-Aware-Global-Attention-Networks

We design an effective Relation-Aware Global Attention (RGA) module for CNNs to globally infer the attention.
MIT License

about embedding function shared #20

Open ZixuanLingit666 opened 3 years ago

ZixuanLingit666 commented 3 years ago

From the paper, we can see that the parameters of all the modules, both spatial and channel, are shared, which means the same weights are used at a given position for all training images.
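To make the sharing concrete, here is a minimal NumPy sketch (my own illustration, not code from this repo): a single 1x1-conv-style weight matrix `W` plays the role of a shared embedding function, applied at every spatial position of every input image, so no position or image gets its own parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
C_in, C_out, H, W_sp = 8, 4, 3, 3

# One shared embedding weight, reused everywhere (a 1x1 conv in effect).
W = rng.standard_normal((C_out, C_in))

def embed(x):
    """Apply the shared embedding: x is (C_in, H, W) -> (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', W, x)

img_a = rng.standard_normal((C_in, H, W_sp))
img_b = rng.standard_normal((C_in, H, W_sp))  # e.g. a differently distributed (occluded) image

# The same W produces both embeddings; only the inputs differ.
emb_a, emb_b = embed(img_a), embed(img_b)
print(emb_a.shape, emb_b.shape)
```

Because `W` is fixed after training, any robustness to a distribution shift such as occlusion has to come from the features themselves, not from per-position or per-image weights.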

But if the distribution of images changes, for example under occlusion, can the performance be maintained?