seokhokang / graphvae_approx

Efficient Learning of Non-Autoregressive Graph Variational Autoencoders for Molecular Graph Generation

decoder_edge #4

Closed: xiaolinpan closed this issue 4 years ago

xiaolinpan commented 4 years ago

Thank you for sharing your code! I have a question about the `_decoder_edge` function.

```python
def _decoder_edge(vec):
    vec = tf.layers.dense(vec, (self.n_node + self.n_dummy) * (self.n_node + self.n_dummy) * (self.dim_edge + 1))
    vec = tf.reshape(vec, [batch_size, self.n_node + self.n_dummy, self.n_node + self.n_dummy, self.dim_edge + 1])

    logit = (vec + tf.transpose(vec, perm=[0, 2, 1, 3])) / 2

    probs = tf.nn.softmax(logit)[:, :, :, :-1] * tf.reshape(1 - tf.eye(self.n_node + self.n_dummy), [1, self.n_node + self.n_dummy, self.n_node + self.n_dummy, 1])

    return probs
```

Could you tell me why `logit` and `probs` are calculated like this? I want to convert this code to PyTorch, thank you!

seokhokang commented 4 years ago

Thanks for your interest in our code.

  1. The edge matrix (and its prediction) must be symmetric because the graphs are undirected. For this reason, "logit" is computed as the average of "vec" and the transpose of "vec", which makes it symmetric.
  2. "probs" is the softmax transformation of "logit" for (categorical) edge prediction. The diagonal elements of "probs" have no meaning because they do not correspond to edges, so they are forced to zero. See the PyTorch sketch below.
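
For converting this to PyTorch: below is a minimal, untested sketch of the same symmetrize/softmax/mask logic. `EdgeDecoder` and `in_dim` are illustrative names, and it assumes the last of the `dim_edge + 1` channels is the extra "no edge" category (which is why it is sliced off).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeDecoder(nn.Module):
    """Sketch of _decoder_edge in PyTorch (names and sizes are illustrative)."""
    def __init__(self, in_dim, n_node, n_dummy, dim_edge):
        super().__init__()
        self.n_total = n_node + n_dummy
        self.dim_edge = dim_edge
        # counterpart of tf.layers.dense
        self.dense = nn.Linear(in_dim, self.n_total * self.n_total * (dim_edge + 1))

    def forward(self, vec):
        vec = self.dense(vec)
        vec = vec.view(-1, self.n_total, self.n_total, self.dim_edge + 1)
        # symmetrize: edges are undirected, so average with the transpose
        logit = (vec + vec.permute(0, 2, 1, 3)) / 2
        # softmax over edge categories, then drop the last ("no edge") channel
        probs = F.softmax(logit, dim=-1)[:, :, :, :-1]
        # zero the diagonal: self-edges carry no meaning
        mask = 1 - torch.eye(self.n_total, device=vec.device)
        return probs * mask.view(1, self.n_total, self.n_total, 1)
```

Usage would be something like `probs = EdgeDecoder(latent_dim, n_node, n_dummy, dim_edge)(z)` for a batch of latent vectors `z`.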
xiaolinpan commented 4 years ago

Thanks for your explanation!

```python
def _reward(nodes, edges):

    def _preference(smi):
        val = 0
        # mol = Chem.MolFromSmiles(smi)
        # set val = 1 if mol is preferred
        return val

    R_t = np.zeros((self.batch_size, self.dim_R))
    for j in range(self.batch_size):
        try:
            R_smi = self._vec_to_mol(nodes[j], edges[j], atom_list, train=True)
            R_t[j, 0] = 1
            if self.dim_R == 2: R_t[j, 1] = _preference(R_smi)
        except:
            pass

    return R_t
```

The function `_reward` only returns 0. Are you missing something? Your paper says it returns 0 or 1.

seokhokang commented 4 years ago

No. It returns 1 when it succeeds in processing the line `R_smi = self._vec_to_mol(nodes[j], edges[j], atom_list, train=True)`: `R_t[j, 0]` is set to 1 right after that call, and if `_vec_to_mol` raises an exception (i.e., the generated graph is not a valid molecule), the `except` branch leaves the entry at 0.
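
To make the control flow concrete, here is a standalone illustration of the same 0-or-1 validity reward using RDKit. `validity_reward` is a hypothetical helper, not the repo's actual `_vec_to_mol` (which builds the molecule from the node/edge tensors); it only mirrors the try/except pattern.

```python
from rdkit import Chem

def validity_reward(smi):
    """Hypothetical helper: return 1.0 if smi is a valid molecule, else 0.0.

    Mirrors the try/except pattern in _reward: success -> 1, failure -> 0.
    """
    try:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:  # MolFromSmiles returns None on parse failure
            raise ValueError("invalid SMILES")
        return 1.0
    except Exception:
        return 0.0
```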

xiaolinpan commented 4 years ago

Thank you!!!

YeolYao commented 2 years ago

> Thank you for sharing your code! I have a question about the `_decoder_edge` function. [...] Could you tell me why `logit` and `probs` are calculated like this? I want to convert this code to PyTorch, thank you!

Hello! Have you successfully converted this code to PyTorch?

xiaolinpan commented 2 years ago

Sorry, two years have passed and I have forgotten the project and lost the code.

YeolYao commented 2 years ago

OK, thanks a lot. I'll try it myself.