In HDGCN.py's HD_Gconv module, should

z = torch.einsum('n c t u, v u -> n c t v', x_down, A[i, j])

instead be

z = torch.einsum('n c t u, u v -> n c t v', x_down, A[i, j])

?

The initial HD-Graph for a given layer and subset is column-normalized: each column sums to 1 or 0 (the elements of torch.sum(A[i, j], dim=0) are either 0 or 1). The code above, however, aggregates using the rows of the graph, whose sums are not normalized and can be any value (0.333, 0.6, 3, etc.).

On the other hand, unlike a conventional fixed graph, every element of the HD-Graph is trainable, so perhaps there is no problem with the code as written?
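For reference, a minimal sketch of the difference between the two subscript orders, using a toy 3-vertex column-normalized matrix rather than the actual HD-Graph. With 'v u' the weights for output vertex v come from row v of A (equivalent to x @ A.T); with 'u v' they come from column v (equivalent to x @ A), which is the axis that the column normalization makes sum to 1:

import torch

V = 3
# Toy column-normalized adjacency: every column sums to 1.
A = torch.tensor([[0.5, 0.0, 1.0],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.0]])
assert torch.allclose(A.sum(dim=0), torch.ones(V))  # columns normalized
print(A.sum(dim=1))  # row sums: tensor([1.5000, 1.0000, 0.5000]) -- arbitrary

x = torch.randn(2, 4, 5, V)  # (n, c, t, u)

# Variant in the repo: output vertex v is weighted by ROW v of A.
z_row = torch.einsum('n c t u, v u -> n c t v', x, A)
assert torch.allclose(z_row, x @ A.T)

# Proposed variant: output vertex v is weighted by COLUMN v of A.
z_col = torch.einsum('n c t u, u v -> n c t v', x, A)
assert torch.allclose(z_col, x @ A)

So only the 'u v' form contracts along the normalized axis; the 'v u' form mixes features with row weights that may sum to any value.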