hkim716 opened this issue 3 years ago
Hi,
first of all, the line `loss = torch.autograd.Variable(F.mse_loss(pred, label), requires_grad=True)` looks really weird to me. In general, you do not need to wrap the loss into its own `Variable`; it should have `requires_grad=True` by default.
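For example (a minimal sketch with a stand-in linear model, not your actual code), the loss already carries a `grad_fn` as long as `pred` comes from a model with trainable parameters; wrapping it in a new `Variable` with `requires_grad=True` instead creates a fresh leaf tensor detached from the model's graph, which is likely why nothing was learning:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(4, 1)        # stand-in for any trainable model
pred = model(torch.randn(8, 4))
label = torch.randn(8, 1)

loss = F.mse_loss(pred, label)
print(loss.requires_grad)            # True, without any wrapping
loss.backward()                      # gradients flow into model.parameters()
```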
Second, you should verify that model parameters receive a gradient, e.g., by looking at `list(model.parameters())[0].grad`.
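One way to do that across all parameters (a small hypothetical helper; the name is mine):

```python
import torch

def check_gradients(model: torch.nn.Module) -> None:
    # Call right after loss.backward(); parameters whose .grad is None
    # (or all zeros) are not receiving any training signal.
    for name, param in model.named_parameters():
        if param.grad is None:
            print(f"{name}: no gradient")
        else:
            print(f"{name}: grad abs-sum = {param.grad.abs().sum().item():.4g}")
```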
Otherwise, your code looks correct to me, although I'm a bit confused about the actual task. It seems like you simply want to reconstruct node features, which (a) does not look particularly useful to me, and (b) might be difficult for GCN layers, as they will smooth out node features.
In order to identify any issues with GraphUNet, you should first verify that a simpler GNN, e.g., a few stacked `GCNConv` layers, already produces useful results.
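As a sketch of such a baseline (channel sizes are illustrative, assuming a single node-level regression target):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class SimpleGCN(torch.nn.Module):
    def __init__(self, in_channels: int, hidden_channels: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, 1)  # one regression value per node

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)
```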
Thanks Matt, I solved the problem by deleting `loss = torch.autograd.Variable(F.mse_loss(pred, label), requires_grad=True)` and adding

```python
loss = F.mse_loss(pred.float(), label.float())
loss.backward()
```

because it kept giving me errors when dtypes are not matched. Anyways, it is working now, and I have actually stacked `pyg_nn.GCNConv` layers with `MyData`, and it worked well with your kind help. :)
I need to reduce and then increase the dimension of the nodes (in terms of the number of nodes) so I can create an autoencoder-like architecture with graph data.
I have another question about the architecture of Graph-UNet. I can see that the encoder uses TopK pooling and `GCNConv` layers, but how does the decoder recover the number of nodes? I only see `GCNConv` layers stacked in the decoder part, and in my understanding, a `GCNConv` layer changes the number of node features, not the number of nodes.
And also, does the decoder actually have trainable parameters when it is unpooled?
The Graph-UNet is not really an auto-encoder, as it uses skip-connections between the different levels of coarsened graphs and maintains graph adjacency information from the encoder for the decoder part. There is no real bottleneck here. Unpooling is done by setting node features to zero for the filtered-out nodes and adding the initial features as skip-connections.
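In code, the unpooling step looks roughly like this (a sketch with my own variable names: `perm` holds the indices of the nodes kept by TopK pooling, and `x_skip` the saved pre-pooling features):

```python
import torch

def unpool(x_small: torch.Tensor, perm: torch.Tensor,
           x_skip: torch.Tensor) -> torch.Tensor:
    # Start from zeros for the full node set ...
    x_up = x_small.new_zeros(x_skip.size())
    # ... scatter the coarse node features back to their original positions ...
    x_up[perm] = x_small
    # ... and add the encoder features as a skip-connection.
    return x_up + x_skip
```

This also touches on the trainable-parameters question: the unpooling itself has no weights; the learning in the decoder happens in the `GCNConv` layers applied after each unpooling step.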
[num_global_features] to [num_nodes, num_node_features]. Note that you lose permutation-equivariance that way.
I'm trying to modify GraphUNet to do regression for the third-column elements in `pos[:, 2]` with my own datasets. `MyData` has a lot of graphs, but each graph has the same number of nodes and edges. `dataset[0]` looks like `Data(edge_index=[2, 74], pos=[74, 3], test_mask=[74, 1], train_mask=[74, 1])`. I would like to train my `Net`, but it does not update `pred` when I try to train with 10 epochs. When I print the epoch number and loss values, I think the network does not learn anything. Matt, can you help me out? Here is my code.