MenghaoGuo / PCT

Jittor implementation of PCT: Point Cloud Transformer
666 stars · 80 forks

I want to know how you visualize your attention map #23

Open ja604041062 opened 3 years ago

ja604041062 commented 3 years ago

Hi! First of all, I want to thank you for the proposed method, which has benefited me a lot. I reproduced your code in PyTorch and tried to visualize the attention map for the part-segmentation task, but when I use a point on the right wing as the query point, it does not attend to the left wing the way your paper's visualization shows. So I would like to know how you produced the visualization in the paper.

In addition, another issue points out that the softmax dimension looks wrong: since your multiplication is Value * Attention, the softmax over the attention should arguably be along dim 1, not -1 (i.e. 2); please correct me if I am mistaken. Also, the softmax and the L1 norm use different dimensions (the softmax uses -1 but the L1 norm uses 1). Why?
Line 211: `self.softmax = nn.Softmax(dim=-1)`
Line 220: `attention = attention / (1e-9 + attention.sum(dim=1, keepdims=True))`
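For context, here is a minimal sketch of how I read the computation around those two lines; the shapes and the surrounding `torch.bmm` calls are my reconstruction, not a verbatim excerpt of the repo:

```python
import torch

# Hedged reconstruction of the SA-layer attention step (shapes are assumptions):
# x_q: (B, N, C), x_k: (B, C, N), x_v: (B, C, N)
def sa_attention(x_q, x_k, x_v):
    energy = torch.bmm(x_q, x_k)                # (B, N, N) raw similarities
    attention = torch.softmax(energy, dim=-1)   # Line 211: each row sums to 1
    attention = attention / (1e-9 + attention.sum(dim=1, keepdim=True))  # Line 220: each column sums to 1
    return torch.bmm(x_v, attention)            # (B, C, N): Value * Attention mixes over dim 1
```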

Also, I want to know how you do neighbor embedding in part segmentation. The paper says the number of output points is N, which means you did not downsample the points and still applied the SG (sampling and grouping) module twice. But when I reproduce that setup, I get CUDA out of memory on an RTX 2080 Ti (12 GB VRAM). Is my VRAM not big enough, or have I misunderstood the paper's description?

I'm looking forward to your reply, and thank you for your contribution.

MenghaoGuo commented 3 years ago

Hi, thanks for your attention.

  1. I visualize the attention map by computing, with the PCT model (including neighbor embedding), the relationships between different points.
  2. The norm dimension is correct. The first softmax does not play the role of normalization; it plays the role of eliminating the impact of scale. The second normalization is the one that actually normalizes. For a detailed explanation, please read this paper. (See also the sketch below this list.)
  3. 12G memory seems not enough for part segmentation. We conduct our experiments on a 3090 or RTX.
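To make point 2 concrete, here is a toy check (my own illustration, not code from the repo) of what each step normalizes:

```python
import torch

# After the softmax, each row of the (N, N) map sums to 1; after the L1 step,
# each column sums to 1, which is the axis that torch.bmm(x_v, attention) mixes over.
A = torch.softmax(torch.randn(1, 4, 4), dim=-1)
print(A.sum(dim=-1))                           # rows: ~1.0
A = A / (1e-9 + A.sum(dim=1, keepdim=True))
print(A.sum(dim=1))                            # columns: ~1.0
```
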
ja604041062 commented 3 years ago

Thanks for your reply! But you have 4 attention layers; which one are you visualizing? Do you visualize a specific layer? In my experiments, the 1st and 3rd attention layers only attend to neighboring points (the 1st much more widely than the 3rd), and the other attention layers focus on irrelevant areas. I don't know what the 2nd and 4th attention blocks mean. (I visualize the SPCT from your paper.)

MenghaoGuo commented 3 years ago

  1. We visualize the average of the 4 attention layers.
  2. Yes, I also tried visualizing all the attention layers of PCT and it produces similar results. However, we did not try to visualize SPCT.
ja604041062 commented 3 years ago

How do you visualize the average of the 4 attention layers? Do you element-wise add all four N*N attention maps, divide by 4, and visualize the result?
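(Presumably something like the following, where `attention1`..`attention4` are the four N*N maps; the names are my assumption:)

```python
# Hedged sketch: element-wise mean of the four per-layer attention maps
avg_attention = (attention1 + attention2 + attention3 + attention4) / 4.0
```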

OK, it seems that PCT is much stronger than SPCT, haha.

Thank you for your reply!

suyukun666 commented 3 years ago

> Thanks for your reply! But you have 4 attention layers; which one are you visualizing? […]

Hi! Can you share your visualization.py? I have tried many times but failed. Many thanks!

ja604041062 commented 3 years ago

```python
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from matplotlib import cm


def Visualize_Attention_Map(u, xyz, attention, axis):
    # (3, N) coordinates -> (N, 3) for plotting
    coordi = xyz.cpu().permute(1, 0)
    fig = plt.figure(dpi=100, frameon=False)
    ax = fig.add_subplot(projection='3d')

    if not axis:
        ax.set_axis_off()

    # min-max normalize the attention row so it maps onto the full colormap range
    the_fourth_dimension = attention.cpu().numpy()
    the_fourth_dimension = (the_fourth_dimension - the_fourth_dimension.min()) / \
                           (the_fourth_dimension.max() - the_fourth_dimension.min())
    colors = cm.cividis(the_fourth_dimension)

    # every point colored by its attention weight; the query point u drawn in red
    ax.scatter(coordi[:, 0], coordi[:, 1], coordi[:, 2], c=colors, marker='o', s=10)
    ax.scatter(coordi[u, 0], coordi[u, 1], coordi[u, 2], c='r', s=100)

    colmap = cm.ScalarMappable(cmap=cm.cividis)
    colmap.set_array(the_fourth_dimension)
    fig.colorbar(colmap, ax=ax)
    plt.show()
```

Inputs:
- `u` (0 to N-1): index of the query point you want to inspect
- `xyz` (3, N): the corresponding coordinates of the points
- `attention` (N,): the attention row for that query point
- `axis` (True or False): whether to draw the axes

I use matplotlib to plot the map, and here is the code. Tell me if you have any problems.

suyukun666 commented 3 years ago

> […] I use matplotlib to plot the map, and here is the code. Tell me if you have any problems.

Thanks!! But I have a problem. The attention tensor has shape (Batch, 256, 256) (the same for sa1, sa2, sa3, and sa4), but per your instructions the `attention` input should have size (N). How should I convert it?

ja604041062 commented 3 years ago

You can just call this function from your model. Here is my partial code:

```python
#################################### partial code for the model ####################################
x, attention1 = self.sa1(x)
x, attention2 = self.sa2(x)
x, attention3 = self.sa3(x)
x, attention4 = self.sa4(x)

#################################### full code for visualization ####################################
coordi = xyz[8].cpu()                # coordinates of sample 8 in the batch
for i in range(2048):
    atten = attention1[8, i, :]      # row i of the first attention map (the one I want to see)
    Visualize_Attention_Map(i, coordi, atten, False)
```

I simply return the attention maps and use a for loop to visualize the attention map for each point. Look at the inputs of Visualize_Attention_Map and you will see why the size of `attention` is N.

queenie88 commented 2 years ago

> You can just call this function from your model. Here is my partial code: […]

I used this code to visualize the attention map, but the results are not the same as in the paper. Did you manage to reproduce the paper's results? I look forward to your reply!

mmiku1 commented 2 years ago

Could you please release the part-segmentation code from your PyTorch reproduction? My own reproduction performs very poorly; if possible, I would really like to look at your code. Thank you very much.

mmiku1 commented 2 years ago

> Hi! First of all, I want to thank you for the proposed method, which has benefited me a lot. […] I'm looking forward to your reply, and thank you for your contribution.

Can you release the part-segmentation code you reproduced in PyTorch? My own reproduction performs very poorly. If possible, I hope to see your reproduced code. Thank you very much.