Serge-weihao / CCNet-Pure-Pytorch

Criss-Cross Attention (2d&3d) for Semantic Segmentation in pure Pytorch with a faster and more precise implementation.
MIT License

Segmentation fault (core dumped) #2

Open YLONl opened 4 years ago

YLONl commented 4 years ago

Has anyone else hit this error? I googled it and found some possible causes related to the core dump and the GPU/CUDA version, but I don't know how to solve it.

YLONl commented 4 years ago

I found the offending code: `import networks`.

Serge-weihao commented 4 years ago

what does the Traceback show?

YLONl commented 4 years ago

No Traceback shows, just that message.

kumartr commented 4 years ago

I would like to know more about the energy_H and energy_W variables: what do they compute, and how do they help? Also, how is the Criss-Cross Attention achieved in these lines?

```python
energy_H = (torch.bmm(proj_query_H, proj_key_H) + self.INF(m_batchsize, height, width)).view(m_batchsize, width, height, height).permute(0, 2, 1, 3)
energy_W = torch.bmm(proj_query_W, proj_key_W).view(m_batchsize, height, width, width)
concate = self.softmax(torch.cat([energy_H, energy_W], 3))
```

Serge-weihao commented 4 years ago

They aggregate the values from the same column (energy_H) and the same row (energy_W) as the query position. Since the query position itself appears in both the column and the row, self.INF masks one of the two overlapping copies so its attention weight becomes zero.
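To make the reshaping concrete, here is a minimal self-contained sketch of how the two energies could be formed (variable and function names are my own, not the repo's; it assumes query/key maps of shape `(B, C, H, W)`). Each column is treated as a length-H sequence and each row as a length-W sequence, and `inf_mask` plays the role of `self.INF` by putting `-inf` on the duplicated center position of the column affinities:

```python
import torch

def inf_mask(B, H, W, device=None):
    # (B*W, H, H) matrix with -inf on the diagonal and 0 elsewhere;
    # added to the column affinities so the duplicated self-position
    # gets zero weight after the softmax.
    return -torch.diag(torch.full((H,), float('inf'), device=device)) \
        .unsqueeze(0).repeat(B * W, 1, 1)

def cc_energies(q, k):
    # q, k: (B, C, H, W) query/key feature maps (C = reduced channel dim)
    B, C, H, W = q.shape
    # Column (height) direction: every column becomes a length-H sequence.
    q_H = q.permute(0, 3, 1, 2).contiguous().view(B * W, C, H).permute(0, 2, 1)  # (B*W, H, C)
    k_H = k.permute(0, 3, 1, 2).contiguous().view(B * W, C, H)                   # (B*W, C, H)
    energy_H = (torch.bmm(q_H, k_H) + inf_mask(B, H, W, q.device)) \
        .view(B, W, H, H).permute(0, 2, 1, 3)                                    # (B, H, W, H)
    # Row (width) direction: every row becomes a length-W sequence.
    q_W = q.permute(0, 2, 1, 3).contiguous().view(B * H, C, W).permute(0, 2, 1)  # (B*H, W, C)
    k_W = k.permute(0, 2, 1, 3).contiguous().view(B * H, C, W)                   # (B*H, C, W)
    energy_W = torch.bmm(q_W, k_W).view(B, H, W, W)                              # (B, H, W, W)
    return energy_H, energy_W
```

After this, `energy_H[b, h, w]` holds the H affinities of position `(h, w)` to its column (with the entry at index `h` masked to `-inf`), and `energy_W[b, h, w]` holds the W affinities to its row; concatenating them along the last dimension gives H+W scores per position for the softmax.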

kumartr commented 4 years ago

Thanks a lot, Serge, for your kind reply. I will take another look at your code with this insight. One more question: where are the H+W-1 'channels' of the attention maps computed?

Would it be possible to connect sometime over a short Zoom call to clarify a few other points? My email address is kumartr@gmail.com and my LinkedIn is https://www.linkedin.com/in/kumartr/

Serge-weihao commented 4 years ago

`concate = self.softmax(torch.cat([energy_H, energy_W], 3))` computes the attention maps. One overlapping position was pushed to -inf by self.INF, so the softmax assigns it exactly zero weight: each position gets H+W-1 non-zero values plus 1 zero value.
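A toy check of that count, with synthetic energies rather than the layer's real ones: one of the H column entries is masked to `-inf`, so after the softmax over the concatenated H+W scores exactly one weight is zero.

```python
import torch

H, W = 5, 7
# Synthetic per-position energies: H column affinities + W row affinities.
energy_H = torch.randn(H)
energy_H[2] = float('-inf')   # the masked duplicate of the query position
energy_W = torch.randn(W)

# Softmax over all H + W concatenated scores, as in the layer.
att = torch.softmax(torch.cat([energy_H, energy_W]), dim=0)
print((att > 0).sum().item())  # → 11, i.e. H + W - 1 non-zero weights
```

`exp(-inf)` evaluates to 0, so the masked entry contributes nothing to the softmax normalizer and gets weight exactly 0, leaving H+W-1 positive weights.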