Open · John-Atha opened 1 year ago
Interesting. I guess for now you can resolve it by writing your model as:

```python
import torch
from torch_geometric.nn import Linear, SAGEConv


class GNNEncoder(torch.nn.Module):
    def __init__(self, hidden_channels, out_channels):
        super().__init__()
        # Only this layer uses lazy initialization; it maps the (unknown)
        # input dimension to a fixed hidden_channels.
        self.lin = Linear(-1, hidden_channels)
        # With concrete input sizes, the LSTM aggregator can be built eagerly.
        self.conv1 = SAGEConv(hidden_channels, hidden_channels, aggr="lstm")
        self.conv2 = SAGEConv(hidden_channels, out_channels, aggr="lstm")

    def forward(self, x, edge_index):
        x = self.lin(x).relu()
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index)
        return x
```
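A torch-only sketch of why this workaround helps, using `torch.nn.LazyLinear` and `torch.nn.LSTM` as stand-ins for PyG's lazy `Linear` and the LSTM aggregator (an analogy, not the actual PyG code path): only the first layer infers its input size lazily, so every size-dependent module after it sees a concrete `hidden_channels`:

```python
import torch

hidden_channels = 32

# LazyLinear infers in_features at the first forward pass, analogous to
# PyG's Linear(-1, hidden_channels).
lin = torch.nn.LazyLinear(hidden_channels)

# Downstream, the LSTM gets fixed sizes and can be constructed eagerly;
# an LSTM with hidden_size=-1 could never be built.
lstm = torch.nn.LSTM(hidden_channels, hidden_channels)

x = torch.randn(10, 7)        # 7 input features, unknown to lin until now
h = lin(x).relu()             # lin.weight materializes here with shape (32, 7)
out, _ = lstm(h.unsqueeze(1)) # (seq_len=10, batch=1, hidden=32)
print(out.shape)              # torch.Size([10, 1, 32])
```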
Will also look into supporting `-1` for LSTM-style aggregation.
I am working on the heterogeneous link prediction example from the official PyG GitHub repository, experimenting with various configurations of the GNNEncoder, built mainly from SAGEConv layers. As described in the Heterogeneous Graph Learning docs, I am using the `to_hetero` function together with the lazy initialization feature (`in_channels=-1`) in the `SAGEConv` layers. When I try to use the `lstm` aggregator on the SAGEConv layers (`aggr="lstm"`), I keep getting the following error:
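For reference, the failing size check can be sketched in plain Python. This is a simplified mirror of the validation `torch.nn.LSTM` performs at construction (an illustration, not PyTorch's actual source):

```python
def validate_lstm_sizes(hidden_size: int, proj_size: int = 0) -> None:
    # Simplified sketch of torch.nn.LSTM's constructor checks.
    if proj_size < 0:
        raise ValueError("proj_size should be a positive integer or zero "
                         "to disable projections")
    if proj_size >= hidden_size:
        raise ValueError("proj_size has to be smaller than hidden_size")


# With lazy initialization, in_channels=-1 ends up as the LSTM's
# hidden_size, so the default proj_size=0 already fails the check:
try:
    validate_lstm_sizes(hidden_size=-1)
except ValueError as err:
    print(err)  # proj_size has to be smaller than hidden_size
```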
As far as I can understand, the error is caused because `hidden_size` is equal to -1 (due to the lazy initialization) while `proj_size` is equal to 0 by default. Given that `proj_size` must be non-negative and strictly smaller than `hidden_size` at the same time, is there a way to use the lazy initialization feature for the SAGEConv layer with the `lstm` aggregator?

Environment
- torch-geometric==2.1.0.post1
- torch==1.12.1
- OS: macOS Monterey version 12.6
- Python version: 3.9.6
- Installation method (conda, pip, source): pip
- torch-scatter==2.0.9
- torch-sparse==0.6.15