net_params['in_dim'] = torch.unique(dataset.train[0][0].ndata['feat'], dim=0).size(0)  # node_dim (feat is an integer)  # code in main
......
in_dim_node = net_params['in_dim']  # node_dim (feat is an integer)  # graphtransformer/nets/SBMs_node_classification/graph_transformer_net.py line:19
......
self.embedding_h = nn.Embedding(in_dim_node, hidden_dim)  # node feat is an integer  # graphtransformer/nets/SBMs_node_classification/graph_transformer_net.py line:45
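For context, my understanding of the lines above is that ndata['feat'] in the SBM datasets holds one integer per node, so torch.unique(..., dim=0).size(0) counts the distinct feature values and that count becomes the embedding vocabulary size. A minimal sketch of that flow, using a made-up feature tensor in place of the real dataset:

import torch
import torch.nn as nn

# Stand-in for dataset.train[0][0].ndata['feat']: one integer label per node
# (hypothetical values; the real SBM features are integers in a small range).
feat = torch.tensor([0, 2, 1, 1, 0, 2, 2, 0])

# Number of distinct feature values -> used as net_params['in_dim']
in_dim_node = torch.unique(feat, dim=0).size(0)    # here: 3

# nn.Embedding treats in_dim_node as a vocabulary size: it builds a lookup
# table with rows 0 .. in_dim_node-1 and indexes it with the raw integers.
hidden_dim = 8
embedding_h = nn.Embedding(in_dim_node, hidden_dim)
h = embedding_h(feat)                              # shape: [num_nodes, hidden_dim]
print(in_dim_node, h.shape)                        # 3 torch.Size([8, 8])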
Why is the number of unique node feature values used as the in_dim_node of the embedding? What do the node features in the SBM datasets represent? When I supply my own node features, I get an error because some of their values are greater than in_dim_node. A sketch of what I think is happening is below.
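The error looks like an ordinary nn.Embedding index-out-of-range: the lookup table only has in_dim_node rows, so any feature value >= in_dim_node cannot be looked up. A sketch of the failure and of one possible workaround (the tensors here are hypothetical, not taken from the repo):

import torch
import torch.nn as nn

hidden_dim = 8
in_dim_node = 3                        # vocabulary size computed from the original features
embedding_h = nn.Embedding(in_dim_node, hidden_dim)

new_feat = torch.tensor([0, 1, 5])     # hypothetical new features; 5 >= in_dim_node
try:
    embedding_h(new_feat)
except IndexError as e:
    print("lookup fails:", e)          # index out of range in self

# One possible fix for integer-valued features: size the embedding from the
# new features instead of the old ones.
in_dim_new = int(new_feat.max().item()) + 1
embedding_h = nn.Embedding(in_dim_new, hidden_dim)
h = embedding_h(new_feat)              # now works, shape [3, hidden_dim]

# If the new features are continuous rather than categorical, a linear
# projection would replace the embedding lookup entirely.
proj = nn.Linear(1, hidden_dim)
h_cont = proj(new_feat.float().unsqueeze(-1))
print(h.shape, h_cont.shape)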