oppo-us-research / NeuRBF


Having trouble understanding the code #13

Open tzslg opened 8 months ago

tzslg commented 8 months ago

I'm having a hard time reading the code and can't match it to the equations in the paper. Could you tell me which equations the code below corresponds to?

    def forward(self, x_g, point_idx, **kwargs):
        kernel_idx = self.forward_kernel_idx(x_g, point_idx, '0')
        rbf_out = self.forward_rbf(x_g, kernel_idx, '0')
        rbf_out = rbf_out / (rbf_out.detach().sum(-1, keepdim=True) + 1e-8)
        rbf_out = rbf_out[..., None]  # [p k_topk 1]
        rbf_out = torch.sin(rbf_out * self.pe_lc0_rbf_freqs[None, None])  # which equation is this?
        out = (self.lc0(kernel_idx) * rbf_out).sum(1)  # which equation is this?
        out_hg = self.hg0(x_g / self.cmax_gpu.flip(-1)[None])
        out = torch.cat([out_hg, out], -1) + self.lcb0[None]  # is this Eq. (5)?
        h = F.relu(out, inplace=True)
        for l in range(self.num_layers):
            h = self.backbone[l](h)
            if l == 0:
                h = torch.sin(h * self.pe_lc0_freqs[None]) + h  # Eq. (7)
            elif l != self.num_layers - 1:
                h = F.relu(h, inplace=True)
        return h

Also, could you explain what lc0, lcb0, and hg0 in the code mean, and which variables in the paper they correspond to?

Thanks to the authors. I've learned a lot from reading your paper and code, and I'd appreciate your guidance.

LansburyCH commented 8 months ago

rbf_out = torch.sin(rbf_out * self.pe_lc0_rbf_freqs[None, None]) corresponds to Eq. (5).

out = (self.lc0(kernel_idx) * rbf_out).sum(1) corresponds to Eq. (6); lc0 is the neural feature of each RBF, corresponding to w_i in the paper.

out = torch.cat([out_hg, out], -1) + self.lcb0[None] corresponds to the hybrid radial bases: it concatenates the features from the adaptive RBFs with the features from the grid-based part. lcb0 is a bias added after the concatenation; this variable is omitted in the paper.

hg0 is the encoding of the grid-based part; here a hash grid is used.
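
To make this mapping concrete, here is a minimal sketch in plain PyTorch of how the adaptive-RBF features and the grid-based features combine into the hybrid feature. All shapes, frequency values, and variable names are illustrative assumptions, not the repository's actual implementation:

    import torch

    # Illustrative shapes only; not the repository's actual tensors.
    p, k_topk, n_freqs, grid_dim = 4, 8, 4, 16

    rbf_out = torch.rand(p, k_topk)                             # RBF responses of the top-k kernels per point
    rbf_out = rbf_out / (rbf_out.sum(-1, keepdim=True) + 1e-8)  # normalize over the k kernels

    freqs = torch.linspace(1.0, 8.0, n_freqs)                   # sinusoid frequencies (Eq. (5))
    rbf_sin = torch.sin(rbf_out[..., None] * freqs)             # [p, k_topk, n_freqs]

    w_i = torch.rand(p, k_topk, n_freqs)                        # lc0: per-kernel neural features w_i
    adaptive_feat = (w_i * rbf_sin).sum(1)                      # weighted sum over kernels (Eq. (6))

    grid_feat = torch.rand(p, grid_dim)                         # hg0: grid-based (hash grid) features
    lcb0 = torch.zeros(grid_dim + n_freqs)                      # bias added after the concat
    hybrid = torch.cat([grid_feat, adaptive_feat], -1) + lcb0   # hybrid radial-basis feature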

hyperzy commented 8 months ago


Is there an explanation of the related abbreviations, e.g. lc, kw, ks, kc, sq (clip_kw_sq)?

hyperzy commented 8 months ago

I can roughly guess that kw, ks, kc stand for kernel weights, kernel sigma, and kernel center. There is also code like the following; could you explain which RBFs these cases correspond to?

        if rbf_type.endswith('_a'):
            ks.weight.data = torch.eye(in_dim)[None, ...].repeat(n_kernel, 1, 1).reshape(n_kernel, -1)
        elif rbf_type.endswith('_d'):
            ks.weight.data[:] = 1
        elif rbf_type.endswith('_s'):
            ks.weight.data[:] = 1
LansburyCH commented 8 months ago

'_a' denotes an anisotropic RBF (the shape parameter is a covariance matrix, i.e. it has an n-dimensional scale and a rotation); '_d' denotes a diagonally anisotropic RBF (the covariance matrix is diagonal, i.e. only an n-dimensional scale); '_s' denotes an isotropic RBF (the shape parameter is a scalar scale, i.e. the same scale in every dimension). In addition, kw stands for kernel width.
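
As a rough illustration of the difference between the three variants, the sketch below shows how such a shape parameter could enter the squared distance of a Gaussian-style RBF. The function rbf_sq_dist and the 'demo_*' type string are hypothetical and only mirror the naming pattern above; this is not the repository's API:

    import torch

    def rbf_sq_dist(x, kc, ks, rbf_type):
        """Squared 'distance' between points x [p, d] and a kernel center kc [d]
        under the three shape-parameter variants (illustrative only)."""
        diff = x - kc                                    # [p, d]
        if rbf_type.endswith('_a'):                      # anisotropic: full matrix (scale + rotation)
            d = x.shape[-1]
            A = ks.reshape(d, d)                         # ks stored flattened, as in the init above
            y = diff @ A.T
            return (y * y).sum(-1)
        elif rbf_type.endswith('_d'):                    # diagonally anisotropic: per-dimension scale
            return ((diff * ks) ** 2).sum(-1)
        elif rbf_type.endswith('_s'):                    # isotropic: a single scalar scale
            return (ks ** 2) * (diff * diff).sum(-1)
        raise ValueError(rbf_type)

    # usage with a made-up type name ending in '_s' (isotropic case)
    x = torch.rand(5, 3)
    kc = torch.rand(3)
    phi = torch.exp(-0.5 * rbf_sq_dist(x, kc, torch.tensor(2.0), 'demo_s'))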

tzslg commented 8 months ago


Thank you.