Open Fiffy-lin opened 2 years ago
Unfortunately, HGTConv currently does not support bipartite graphs, i.e., graphs where the source and destination nodes are separate sets (and may differ in count or feature dimension). This is a reasonable ask and should be implemented, in my opinion. For example, HGTConv should allow the input to be a pair of tensors (x_src, x_dst).
As for the other question, converting a heterogeneous DGLBlock graph to a homogeneous one is indeed not easy at the moment. Let me get back to you once I have time.
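To make the proposed (x_src, x_dst) convention concrete, here is a minimal pure-Python sketch (not DGL code; the function name and signature are hypothetical) of a bipartite mean-aggregation conv, where source and destination nodes have separate feature lists that may differ in length:

```python
def bipartite_mean_conv(edges, x_src, x_dst):
    """Mean-aggregate source features onto each destination node.

    edges : list of (src_idx, dst_idx) pairs
    x_src : feature vectors for the source nodes
    x_dst : feature vectors for the destination nodes; a destination
            with no incoming edges keeps its own feature vector
    """
    n_dst = len(x_dst)
    dim = len(x_src[0])
    sums = [[0.0] * dim for _ in range(n_dst)]
    counts = [0] * n_dst
    for s, d in edges:
        for k in range(dim):
            sums[d][k] += x_src[s][k]
        counts[d] += 1
    out = []
    for d in range(n_dst):
        if counts[d]:
            out.append([v / counts[d] for v in sums[d]])
        else:
            out.append(list(x_dst[d]))  # isolated dst node: keep its feature
    return out

# Three source nodes feed two destination nodes:
out = bipartite_mean_conv(
    edges=[(0, 0), (1, 0), (2, 1)],
    x_src=[[1.0], [3.0], [5.0]],
    x_dst=[[0.0], [0.0]],
)
print(out)  # [[2.0], [5.0]]
```

The point of the pair-of-tensors signature is exactly this asymmetry: the module never assumes that source and destination features share a count or a dimension.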
Thanks a lot for your reply; I'm going to try other implementations.
🐛 Bug
Hi there,
I wish to use HGTConv on the blocks generated by a dataloader.
My graph has two node types, named A and B. It seems HGTConv accepts only homogeneous graph input, as it assigns the node embedding tensor (not a dict) directly to the graph's ndata.
So I tried to convert my block to a homogeneous graph using dgl.to_homogeneous(), but the result has 4 node types and twice as many nodes. To make it work despite the doubled node count, I have to concatenate my node input twice, like h = torch.cat([h, h]), and adjust num_ntypes = num_ntypes * 2, which is quite weird.
I know that in a block the source and destination nodes are marked differently, but I have no clue how to use HGTConv properly when the input is a block.
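To illustrate where the 4 types and doubled node count come from, here is a pure-Python sketch (not DGL's actual implementation; the helper name is made up) of flattening a bipartite block: each node type gets one slot on the source side and one on the destination side, and destination nodes are counted again even though they are a subset of the sources:

```python
def flatten_block_types(src_counts, dst_counts):
    """Return (ntype_names, node_counts) for a flattened block.

    src_counts / dst_counts : dicts mapping node-type name -> node count
    on the source / destination side of the block.
    """
    ntypes, counts = [], []
    for side, table in (("src", src_counts), ("dst", dst_counts)):
        for name, n in table.items():
            ntypes.append(f"{side}/{name}")  # each type appears per side
            counts.append(n)
    return ntypes, counts

# A block sampled from a graph with node types A and B:
ntypes, counts = flatten_block_types({"A": 10, "B": 8}, {"A": 3, "B": 2})
print(ntypes)       # ['src/A', 'src/B', 'dst/A', 'dst/B']
print(sum(counts))  # 23 -- every dst node is counted again as a src node
```

This is why the workaround needed h = torch.cat([h, h]) and num_ntypes * 2: the flattened graph carries a source copy and a destination copy of every type.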
Any help is appreciated.
To Reproduce
Expected behavior
Environment
How you installed DGL (conda, pip, source): pip
Additional context