wengong-jin / hgraph2graph

Hierarchical Generation of Molecular Graphs using Structural Motifs

Why are the inputs of HierVAE.decoder different from that of other models? #12

WhatAShot opened this issue 4 years ago (status: Open)

WhatAShot commented 4 years ago

The decoder in HierVAE is called as `self.decoder((root_vecs, root_vecs, root_vecs), graphs, tensors, orders)`, i.e., only the root vectors are fed in (repeated three times). But the decoders in the other models take inputs like `self.decoder((x_root_vecs, x_tree_vecs, x_graph_vecs), y_graphs, y_tensors, y_orders)`.

wengong-jin commented 4 years ago

This is due to the difference between a generative model and a graph translation model. In a VAE, the latent space must be a fixed-size vector, while in a graph translation model the decoder can be conditioned on a sequence of vectors. That sequence is fed into the decoder's attention layers. Note that the generative model does not have attention layers.
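
For readers comparing the two call signatures, here is a minimal sketch of the shapes involved. The tensor sizes and variable names below are illustrative assumptions, not the repository's actual API; the point is only that the VAE latent is a single fixed-size vector per molecule, while the translation decoder keeps variable-length per-node sequences for its attention layers.

```python
import torch

# Illustrative shapes only; the variable names mirror the issue, but the
# sizes here are assumptions, not values taken from the repo.
batch, latent_size, num_nodes = 32, 250, 40

# --- HierVAE (generative) ---
# The encoder output is compressed into one fixed-size latent vector per
# molecule (via the reparameterization trick), so the decoder can only be
# conditioned on that single vector. Passing (root_vecs, root_vecs, root_vecs)
# routes the same latent to every level of the hierarchical decoder.
root_vecs = torch.randn(batch, latent_size)             # z ~ q(z|x), fixed size
vae_decoder_inputs = (root_vecs, root_vecs, root_vecs)

# --- Graph translation ---
# The decoder is conditioned on the source molecule itself, so it can keep
# one vector per motif/atom and attend over the whole variable-length
# sequence with its attention layers; no fixed-size bottleneck is needed.
x_root_vecs  = torch.randn(batch, latent_size)
x_tree_vecs  = torch.randn(batch, num_nodes, latent_size)   # one vector per motif
x_graph_vecs = torch.randn(batch, num_nodes, latent_size)   # one vector per atom
translation_decoder_inputs = (x_root_vecs, x_tree_vecs, x_graph_vecs)
```

In the VAE setting, the reparameterized latent is the only interface between encoder and decoder, so repeating `root_vecs` three times simply feeds the same latent to each decoding level; with no per-node sequence available, attention layers would have nothing variable-length to attend over.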

KatarinaYuan commented 2 years ago

Hi. I'm quite confused about why it has to be a fixed-size vector, since Graph-VAE allows a sequence of vectors, one for each node in the graph. Could you please explain why a generative model can only use a fixed-size vector?