Closed: xiaolinpan closed this issue 4 years ago.
Thanks for your interest in our code.
Thanks for your explanation!

```python
def _reward(nodes, edges):

    def _preference(smi):
        val = 0
        # mol = Chem.MolFromSmiles(smi)
        # set val = 1 if mol is preferred
        return val

    R_t = np.zeros((self.batch_size, self.dim_R))
    for j in range(self.batch_size):
        try:
            R_smi = self._vec_to_mol(nodes[j], edges[j], atom_list, train=True)
            R_t[j, 0] = 1
            if self.dim_R == 2: R_t[j, 1] = _preference(R_smi)
        except:
            pass

    return R_t
```
The function `_reward` only returns 0. Are you missing something? Your paper says it returns 0 or 1.
No. It returns 1 when it successfully processes the line `R_smi = self._vec_to_mol(nodes[j], edges[j], atom_list, train=True)`.
thank you!!!
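As an aside, the commented-out lines in `_preference` hint at where a custom preference reward would go. Below is a minimal sketch, assuming RDKit is available and using a molecular-weight cutoff as a purely hypothetical preference criterion (neither the cutoff nor the `Descriptors` import is part of the original code):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def _preference(smi):
    # Hypothetical preference: return 1 if RDKit can parse the SMILES
    # and the molecular weight is under 500 Da. Both checks are
    # illustrative assumptions, not part of the original repository.
    val = 0
    mol = Chem.MolFromSmiles(smi)
    if mol is not None and Descriptors.MolWt(mol) < 500.0:
        val = 1
    return val
```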
Thank you for sharing your code! I have a question about the `_decoder_edge` function.

```python
def _decoder_edge(vec):
    vec = tf.layers.dense(vec, (self.n_node + self.n_dummy) * (self.n_node + self.n_dummy) * (self.dim_edge + 1))
    vec = tf.reshape(vec, [batch_size, self.n_node + self.n_dummy, self.n_node + self.n_dummy, self.dim_edge + 1])
    logit = (vec + tf.transpose(vec, perm=[0, 2, 1, 3])) / 2
    probs = tf.nn.softmax(logit)[:, :, :, :-1] * tf.reshape(1 - tf.eye(self.n_node + self.n_dummy), [1, self.n_node + self.n_dummy, self.n_node + self.n_dummy, 1])
    return probs
```

Could you tell me why `logit` and `probs` are calculated like this? I want to convert this code to PyTorch. Thank you!
Hello! Have you successfully converted this code to PyTorch?
Sorry, two years have passed and I have forgotten the project and lost the code.
OK, thanks a lot. I'll try it myself.
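For anyone who lands here later: reading the TensorFlow snippet above, averaging `vec` with its transpose over the two node axes forces the edge logits to be symmetric (edge (i, j) and edge (j, i) get the same score, as an undirected molecular graph requires); the softmax runs over `dim_edge + 1` classes, where the extra last class acts as "no edge" and is dropped by `[:, :, :, :-1]`; and multiplying by `1 - eye` zeroes the diagonal so no self-loops are predicted. Here is a minimal, untested PyTorch sketch along these lines, where the module name, constructor arguments, and `dim_in` are my own assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical PyTorch port of _decoder_edge; the class name and the
# dim_in argument are assumptions, not taken from the original repository.
class DecoderEdge(nn.Module):
    def __init__(self, dim_in, n_node, n_dummy, dim_edge):
        super().__init__()
        self.n = n_node + n_dummy
        self.dim_edge = dim_edge
        self.fc = nn.Linear(dim_in, self.n * self.n * (dim_edge + 1))

    def forward(self, vec):
        batch_size = vec.shape[0]
        vec = self.fc(vec)
        vec = vec.reshape(batch_size, self.n, self.n, self.dim_edge + 1)
        # Average vec with its node-axis transpose so the logit for edge
        # (i, j) equals the logit for edge (j, i): an undirected graph.
        logit = (vec + vec.permute(0, 2, 1, 3)) / 2
        # Softmax over dim_edge + 1 classes; the last class plays the
        # role of "no edge" and is discarded after normalization.
        probs = F.softmax(logit, dim=-1)[:, :, :, :-1]
        # Zero out the diagonal so no self-loop edges are predicted.
        mask = (1.0 - torch.eye(self.n, device=vec.device)).reshape(1, self.n, self.n, 1)
        return probs * mask
```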