nnzhan / Graph-WaveNet


Some questions about the implementation of gcn in model.py. #15

Open · guokan987 opened 4 years ago

guokan987 commented 4 years ago

Hi, I have a question: GCN is usually formulated as AXW, but in your model.py it seems to become WXA. Why?

CYBruce commented 4 years ago

@guokan987 Hi! I'm confused about this code. Does it mean AX?

import torch
import torch.nn as nn

class nconv(nn.Module):
    def __init__(self):
        super(nconv, self).__init__()

    def forward(self, x, A):
        # x: (batch n, channels c, nodes v, time l); A: (nodes v, nodes w)
        # Contracts the node dimension of x with the FIRST index of A.
        x = torch.einsum('ncvl,vw->ncwl', (x, A))
        return x.contiguous()
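For reference, a quick check of what that einsum computes (my own sketch, not from the repo): because A's first index is the one contracted, the node vector is effectively multiplied by Aᵀ rather than A in the conventional (nodes × features) orientation.

import torch

n, c, v, l = 2, 3, 5, 4
x = torch.randn(n, c, v, l)
A = torch.randn(v, v)

out = torch.einsum('ncvl,vw->ncwl', x, A)

# Equivalent: move nodes to the trailing matmul position and right-multiply
# by A; right-multiplying a row vector by A is the same as applying A^T.
out_ref = torch.matmul(x.permute(0, 1, 3, 2), A).permute(0, 1, 3, 2)
print(torch.allclose(out, out_ref))  # True

So the snippet computes AᵀX rather than AX; whether that is what the paper intends is exactly the question discussed below.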
guokan987 commented 4 years ago

> @CYBruce's question above (quoting the `nconv` snippet: does it mean AX?)

I think X's size here is n×c×v×l, whereas the usual AX formulation expects input of size n×v×c×l, so the X in this code is effectively Xᵀ, which is why A and X swap places in the matrix multiply. The confusing part is that (AX)ᵀ = XᵀAᵀ, while the code computes XᵀA, i.e. (AᵀX)ᵀ. For the traffic graph, A is the in-degree (forward) direction and Aᵀ is the out-degree (backward) direction. Since the paper proposes a diffusion GCN that includes both A and Aᵀ, the result still looks correct for the diffusion case. However, the normalization of A should then be performed over columns, not rows (the dim in asym_adj() in util.py should be 0, not -1). So I see two ways to resolve this confusion (a sketch of both follows the list):

  1. As above, perform the normalization over columns;
  2. Revise the code `x = torch.einsum('ncvl,vw->ncwl',(x,A))` to `A = A.transpose(-1, -2)` followed by `x = torch.einsum('ncvl,vw->ncwl',(x,A))`.
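A minimal sketch of the two proposed fixes (my own illustration; `asym_adj_col` and `nconv_fixed` are hypothetical names, and the real asym_adj() in util.py may differ in detail):

import numpy as np
import torch

def asym_adj_col(adj):
    # Fix 1 (sketch): normalize A over columns (dim 0) instead of rows.
    adj = np.asarray(adj, dtype=np.float32)
    d = adj.sum(0)                          # column sums instead of row sums
    d_inv = np.where(d > 0, 1.0 / d, 0.0)
    return adj * d_inv                      # scale each column j by 1/d[j]

def nconv_fixed(x, A):
    # Fix 2 (sketch): transpose A first, so the einsum applies A (not A^T)
    # along the node dimension.
    A = A.transpose(-1, -2)
    return torch.einsum('ncvl,vw->ncwl', x, A).contiguous()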
wanzhixiao commented 3 years ago

> @guokan987's analysis above (quoting the `nconv` snippet and the two proposed fixes)

It looks like the author uses the same weight, via a single MLP applied after the diffusion GCN. I don't think that matches formula (6) or (7), where there are K terms and each term has its own weight. (See the sketch after the code below.)

def forward(self, x, support):
    out = [x]                                # order-0 term: X itself
    for a in support:
        x1 = self.nconv(x, a)                # first hop: A X
        out.append(x1)
        for k in range(2, self.order + 1):   # higher hops: A^k X
            x2 = self.nconv(x1, a)
            out.append(x2)
            x1 = x2

    h = torch.cat(out, dim=1)                # concatenate all hops on the channel dim
    h = self.mlp(h)                          # shared 1x1-conv MLP: the W in A X W
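For comparison, a per-hop-weight variant in the spirit of formulas (6)/(7) might look like the following (my own sketch, not the authors' code; `c_in`, `c_out`, `order`, and `n_supports` are assumed parameters):

import torch
import torch.nn as nn

class DiffusionGCNPerHop(nn.Module):
    # Sketch: one 1x1 conv per diffusion term, so each power A^k X gets
    # its own weight W_k instead of one shared MLP over the concatenation.
    def __init__(self, c_in, c_out, order=2, n_supports=2):
        super().__init__()
        self.order = order
        n_terms = 1 + n_supports * order     # identity term + k hops per support
        self.weights = nn.ModuleList(
            nn.Conv2d(c_in, c_out, kernel_size=(1, 1)) for _ in range(n_terms)
        )

    def forward(self, x, support):
        h = self.weights[0](x)               # W_0 X
        i = 1
        for a in support:
            xk = x
            for _ in range(self.order):
                xk = torch.einsum('ncvl,vw->ncwl', xk, a).contiguous()
                h = h + self.weights[i](xk)  # W_i (A^k X)
                i += 1
        return h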
guokan987 commented 3 years ago

> @wanzhixiao's comment above (quoting the gcn forward snippet and the point about shared weights)

This should be fine: the features from all K diffusion steps are concatenated into one tensor along the feature dimension, and the MLP mapping is then applied to that tensor, which accomplishes what the formula describes.
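A quick numeric check of that equivalence (my own sketch): slicing one wide 1×1 convolution's weight into per-hop blocks reproduces the per-hop W_k formulation exactly.

import torch
import torch.nn as nn

# Concatenating K hop features and applying one 1x1 conv equals summing
# K separate 1x1 convs with distinct weights, one per hop.
c, k, n, v, l = 4, 3, 2, 5, 6
hops = [torch.randn(n, c, v, l) for _ in range(k)]   # stand-ins for A^k X

mlp = nn.Conv2d(c * k, 8, kernel_size=(1, 1))
out_concat = mlp(torch.cat(hops, dim=1))

# Slice the shared weight into per-hop blocks W_0 .. W_{K-1}.
out_split = mlp.bias.view(1, -1, 1, 1) + sum(
    nn.functional.conv2d(h, mlp.weight[:, i * c:(i + 1) * c])
    for i, h in enumerate(hops)
)
print(torch.allclose(out_concat, out_split, atol=1e-5))  # True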