Changed variable names to alleviate confusion and improve architecture flexibility.
- generalize `batch_norms` -> `feature_layers`
- specify `_init_conv` for GAT
- specify `_init_conv` for SchNet
The core of the architecture is this for loop, which applies one pass through a `graph_conv` followed by one pass through a `feature_layer`:
```python
for conv, feat_layer in zip(self.graph_convs, self.feature_layers):
    c = conv(x=x, **conv_args)   # graph convolution pass
    x = F.relu(feat_layer(c))    # feature layer pass followed by activation
```
Graph convolutions and feature layers are stacked in the `_init_conv` method. Flexible `feature_layers` can stack `BatchNorm` layers (the default inherited from `Base`), be effectively skipped using `torch.nn.Identity` as in the `SCFStack`, or do more advanced multi-heading as in the `GATStack`.
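As an illustration only (a minimal sketch, not the actual implementation; the class name, dimensions, and the `GCNConv` choice are placeholders), an `_init_conv` that builds the two parallel module lists might look like:

```python
import torch
from torch_geometric.nn import GCNConv


class ExampleStack(torch.nn.Module):
    """Hypothetical stand-in for a Base-derived stack; all names are placeholders."""

    def __init__(self, hidden_dim: int = 64, num_conv_layers: int = 3):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_conv_layers = num_conv_layers
        self._init_conv()

    def _init_conv(self):
        # Stack one graph convolution and one feature layer per level.
        self.graph_convs = torch.nn.ModuleList()
        self.feature_layers = torch.nn.ModuleList()
        for _ in range(self.num_conv_layers):
            self.graph_convs.append(GCNConv(self.hidden_dim, self.hidden_dim))
            # Base-style default feature layer: BatchNorm.
            # A SchNet-style stack could append torch.nn.Identity() instead,
            # and a GAT-style stack could append its own multi-head handling.
            self.feature_layers.append(torch.nn.BatchNorm1d(self.hidden_dim))
```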
If additional flexibility is desired, the `F.relu` component of the for loop can be folded into the `feature_layer` by initializing it as a `Sequential` module.
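For example (a hedged sketch; `hidden_dim` is a placeholder, and whether the activation is then dropped from the loop depends on how the rest of the stack is written), a feature layer carrying its own activation could be built as:

```python
import torch

hidden_dim = 64  # placeholder feature width

# Fold normalization and activation into a single feature layer, so the
# forward loop could call x = feat_layer(conv(x=x, **conv_args)) directly
# without an explicit F.relu.
feature_layer = torch.nn.Sequential(
    torch.nn.BatchNorm1d(hidden_dim),
    torch.nn.ReLU(),
)
```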