tayssirmoussa66 opened this issue 2 years ago
How is your loss_fn defined? I am not sure it is a good idea to use the attention_weights as a training loss.
Do you set the weights to be trainable, e.g. via model.train(), prior to the training step?
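For reference, a standard PyTorch training step looks roughly like this (a minimal sketch; model, optimizer, loss_fn, and the data batch are assumed to already exist):

```python
model.train()             # enable training-mode behavior (dropout, batchnorm, etc.)
optimizer.zero_grad()     # clear gradients accumulated from the previous step
pred = model(data.x, data.edge_index)
loss = loss_fn(pred, data.y)
loss.backward()           # backpropagate so the parameters receive gradients
optimizer.step()          # update the trainable weights
```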
My model is used to predict reactivity scores between each pair of atoms in a chemical reaction. My loss_fn(input, target) is defined like this: the target y is a vector that contains 1 if the bond between two atoms has changed from reactant to product and 0 otherwise, and the input is the attention_weights calculated by the model.
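If those weights come from GATConv, they are presumably obtained via its return_attention_weights flag, along these lines (a sketch with toy data, not the original poster's code):

```python
import torch
from torch_geometric.nn import GATConv

x = torch.randn(4, 16)                     # toy example: 4 atoms, 16 features each
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])  # toy bonds as a directed edge list
conv = GATConv(16, 32, heads=1)
# alpha holds one attention coefficient per edge (self-loops included by default)
out, (att_edge_index, alpha) = conv(x, edge_index, return_attention_weights=True)
```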
I think it might make more sense to define your own head for this, e.g.:
from torch_geometric.utils import softmax

out = MLP(torch.cat([x[edge_index[0]], x[edge_index[1]]], dim=-1))
return softmax(out, edge_index[1])
and then use a multi-class loss rather than a multi-label loss.
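A minimal sketch of what such a head could look like (the EdgeScorer name and the layer sizes are illustrative, not from this thread; x would be the node embeddings produced by whatever layers precede it):

```python
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.utils import softmax

class EdgeScorer(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        # MLP over the concatenated features of the two endpoints of each edge
        self.mlp = Sequential(
            Linear(2 * in_channels, hidden_channels),
            ReLU(),
            Linear(hidden_channels, 1),
        )

    def forward(self, x, edge_index):
        src, dst = edge_index
        edge_feat = torch.cat([x[src], x[dst]], dim=-1)  # [num_edges, 2 * in_channels]
        score = self.mlp(edge_feat).squeeze(-1)          # one raw score per edge
        # normalize scores over all edges that share the same target node
        return softmax(score, dst)
```

Since the scores are normalized over the edges pointing at each node, a per-node multi-class objective (e.g. cross-entropy over candidate bonds) fits naturally.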
@rusty1s do you mean I don't need to use a GATConv layer?
Yes, if you are only interested in defining an edge score, using GATConv
for this might be overkill.
🐛 Describe the bug
I am training a GCN model using PyTorch Geometric that calculates the attention weights between each pair of nodes, but my loss does not update during training. This is my model:

and this is the train function:

The loss remains exactly the same every epoch. Please help me: what is happening? Why does the model not train during the training steps?
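One way to narrow this down (a debugging sketch, not from the original post): after loss.backward(), check whether gradients actually reach the model's parameters; if every gradient is None or zero, the loss is computed from tensors that are detached from the trainable weights:

```python
loss.backward()
for name, param in model.named_parameters():
    grad = None if param.grad is None else param.grad.abs().sum().item()
    print(f'{name}: requires_grad={param.requires_grad}, grad sum={grad}')
```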
Environment

- PyG version:
- PyTorch version:
- OS:
- Python version:
- CUDA/cuDNN version:
- How you installed PyTorch and PyG (conda, pip, source):
- Any other relevant information (e.g., version of torch-scatter):