LPDI-EPFL / masif

MaSIF - Molecular surface interaction fingerprints. Geometric deep learning to decipher patterns in molecular surfaces.
Apache License 2.0

defining mu and rho for conv layers #76

Open byungukP opened 3 months ago

byungukP commented 3 months ago

According to the source code (masif/source/masif_modules), the mus and sigmas for rho and theta are not defined consistently across the convolutional layers. For layer 1 and the l2 block, the initial polar coordinates are used as expected: mu_rho and mu_theta are initialized from mu_rho_initial and mu_theta_initial, and sigma_rho/sigma_theta from the corresponding sigma_rho_init/sigma_theta_init values. In the l3 and l4 blocks, however, the order gets mixed up: sigma_rho_l3 is initialized with mu_theta_initial and named "mu_theta_l3", while mu_theta_l3 receives the sigma_rho initialization and the name "sigma_rho_l3" (and likewise for l4). Is this a trivial mistake, or is there a reason why you defined the variables this way?

        # layer 1: one mu/sigma pair per feature channel, initialized from the
        # initial polar coordinates (mu_rho_initial, mu_theta_initial)
        for i in range(self.n_feat):
            self.mu_rho.append(
                tf.Variable(mu_rho_initial, name="mu_rho_{}".format(i))
            )  # 1, n_gauss
            self.mu_theta.append(
                tf.Variable(mu_theta_initial, name="mu_theta_{}".format(i))
            )  # 1, n_gauss
            self.sigma_rho.append(
                tf.Variable(
                    np.ones_like(mu_rho_initial) * self.sigma_rho_init,
                    name="sigma_rho_{}".format(i),
                )
            )  # 1, n_gauss
            self.sigma_theta.append(
                tf.Variable(
                    (np.ones_like(mu_theta_initial) * self.sigma_theta_init),
                    name="sigma_theta_{}".format(i),
                )
            )  # 1, n_gauss
        # l2 block: same ordering as layer 1
        if n_conv_layers > 1:
            self.mu_rho_l2 = tf.Variable(
                mu_rho_initial, name="mu_rho_{}".format("l2")
            )
            self.mu_theta_l2 = tf.Variable(
                mu_theta_initial, name="mu_theta_{}".format("l2")
            )
            self.sigma_rho_l2 = tf.Variable(
                np.ones_like(mu_rho_initial) * self.sigma_rho_init,
                name="sigma_rho_{}".format("l2"),
            )
            self.sigma_theta_l2 = tf.Variable(
                (np.ones_like(mu_theta_initial) * self.sigma_theta_init),
                name="sigma_theta_{}".format("l2"),
            )
        if n_conv_layers > 2:
            self.mu_rho_l3 = tf.Variable(
                mu_rho_initial, name="mu_rho_{}".format("l3")
            )
            # NOTE: attribute is sigma_rho_l3, but the initializer is
            # mu_theta_initial and the variable name is "mu_theta_l3"
            self.sigma_rho_l3 = tf.Variable(
                mu_theta_initial, name="mu_theta_{}".format("l3")
            )
            # NOTE: attribute is mu_theta_l3, but it gets the sigma_rho
            # initialization and the variable name "sigma_rho_l3"
            self.mu_theta_l3 = tf.Variable(
                np.ones_like(mu_rho_initial) * self.sigma_rho_init,
                name="sigma_rho_{}".format("l3"),
            )
            self.sigma_theta_l3 = tf.Variable(
                (np.ones_like(mu_theta_initial) * self.sigma_theta_init),
                name="sigma_theta_{}".format("l3"),
            )
        if n_conv_layers > 3:
            self.mu_rho_l4 = tf.Variable(
                mu_rho_initial, name="mu_rho_{}".format("l4")
            )
            # NOTE: same swap as in the l3 block above
            self.sigma_rho_l4 = tf.Variable(
                mu_theta_initial, name="mu_theta_{}".format("l4")
            )
            self.mu_theta_l4 = tf.Variable(
                np.ones_like(mu_rho_initial) * self.sigma_rho_init,
                name="sigma_rho_{}".format("l4"),
            )
            self.sigma_theta_l4 = tf.Variable(
                (np.ones_like(mu_theta_initial) * self.sigma_theta_init),
                name="sigma_theta_{}".format("l4"),
            )
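
For reference, this is what I would have expected the l3 block to look like if it simply mirrored the l1/l2 pattern, with each attribute's initializer and variable name matching the attribute itself. This is just a sketch of my reading, not a tested patch, and the l4 block would be analogous:

        # sketch only: l3 variables initialized the same way as in layers 1 and 2
        if n_conv_layers > 2:
            self.mu_rho_l3 = tf.Variable(
                mu_rho_initial, name="mu_rho_{}".format("l3")
            )
            self.mu_theta_l3 = tf.Variable(
                mu_theta_initial, name="mu_theta_{}".format("l3")
            )
            self.sigma_rho_l3 = tf.Variable(
                np.ones_like(mu_rho_initial) * self.sigma_rho_init,
                name="sigma_rho_{}".format("l3"),
            )
            self.sigma_theta_l3 = tf.Variable(
                (np.ones_like(mu_theta_initial) * self.sigma_theta_init),
                name="sigma_theta_{}".format("l3"),
            )

If the current ordering is intentional, it would be great to know how it interacts with the learned Gaussian kernels downstream, since the variable names no longer reflect what the attributes hold.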