MenghaoGuo / PCT

Jittor implementation of PCT: Point Cloud Transformer

question about the attention map visualization #14

Closed amiltonwong closed 3 years ago

amiltonwong commented 3 years ago

Hi, @MenghaoGuo,

For Figure 1 in your paper, which self-attention layer is used to visualize the attention map? From your implementation, there are 4 self-attention layers (SA1, SA2, SA3, SA4) in the model.

Thanks~

MenghaoGuo commented 3 years ago

Hi, @amiltonwong. Like Vision Transformer, we visualize the attention map by using the mean value over all attention layers.
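
A minimal sketch of that averaging step (not the authors' code, and the variable names are assumptions): suppose the N x N attention matrices from the four SA layers have been collected during a forward pass; the visualization weights for a chosen query point are the mean of that point's row across the layers.

```python
import numpy as np

def mean_attention_map(attn_maps, query_idx):
    """attn_maps: list of (N, N) arrays, one per SA layer (SA1..SA4),
    with rows indexing query points and columns indexing key points.
    Returns an (N,) array: attention from the query point to every point,
    averaged over all layers."""
    stacked = np.stack(attn_maps, axis=0)   # (L, N, N)
    mean_attn = stacked.mean(axis=0)        # (N, N), averaged over layers
    return mean_attn[query_idx]             # (N,) row for the chosen query point

# Hypothetical usage with random, row-normalized attention matrices standing in
# for the ones captured from SA1-SA4 during inference; point 0 is the query.
if __name__ == "__main__":
    N = 1024
    attn_maps = [np.random.rand(N, N) for _ in range(4)]
    attn_maps = [a / a.sum(axis=-1, keepdims=True) for a in attn_maps]
    weights = mean_attention_map(attn_maps, query_idx=0)
    print(weights.shape)  # (1024,) -> can be rendered as per-point colors
```

The resulting per-point weights can then be mapped to colors on the point cloud to reproduce a figure like Figure 1.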

amiltonwong commented 3 years ago

@MenghaoGuo, OK, thanks~