Open goznxn opened 3 years ago
Hi,
The decoder needs to predict the first (root) motif at the start of decoding, and it is easier to do this by feeding the first motif's embedding to the decoder. This is similar to RNN encoder-decoders in machine translation, where the encoder's final hidden state is fed to the decoder as its initial hidden state.
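The idea above can be sketched in plain Python. This is an illustrative toy, not the hgraph2graph API: after message passing over the motif tree, information from every motif has been aggregated into the root (first) motif, so its embedding can stand in for the whole molecule and seed the decoder. All function and variable names here are hypothetical.

```python
def message_passing(motif_feats, edges, n_rounds=3):
    """Toy sum-aggregation message passing over a motif tree.

    motif_feats: list of per-motif feature vectors (lists of floats).
    edges: list of (src, dst) pairs; messages flow src -> dst.
    """
    h = [list(f) for f in motif_feats]
    dim = len(motif_feats[0])
    for _ in range(n_rounds):
        new_h = [list(v) for v in h]
        for src, dst in edges:
            for d in range(dim):
                new_h[dst][d] += h[src][d]
        h = new_h
    return h


def molecule_embedding(motif_feats, edges):
    """Use the root motif's embedding (index 0) as the molecule embedding."""
    h = message_passing(motif_feats, edges)
    return h[0]


# Three motifs in a chain, edges oriented toward the root (motif 0):
# after a few rounds, motif 0's embedding has absorbed contributions
# from every other motif, so it summarizes the molecule.
emb = molecule_embedding([[1.0], [2.0], [3.0]], [(1, 0), (2, 1)])
```

In the real model the aggregation is a learned message-passing network and the decoder is initialized from this root embedding; the toy above only shows why the root node's state can act as a summary of the whole graph.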
On Fri, Feb 19, 2021 at 10:16 AM Zhenxiang Gao notifications@github.com wrote:
In the encoder section, after getting all motif embeddings, why does the model choose the first motif embedding to construct the molecule embedding?
Can the root motif embedding reflect properties of the entire molecular structure? If we use just the root motif embedding as input to regression models, ...