mtang724 / NWR-GAE


Code logic problem #3

Closed · iDestro closed this issue 1 year ago

iDestro commented 1 year ago

The inner for loop makes no sense, and the outer for loop only executes once.

```python
def reconstruction_neighbors(self, FNN_generator, neighbor_indexes, neighbor_dict, from_layer, to_layer, device):
        '''
         Reconstruction Neighbors
         INPUT:
         -----------------------
         FNN_generator    :    FNN decoder
         neighbor_indexes     :   new neighbor indexes after hungarian matching
         neighbor_dict    :    specific neighbors a node have
         from_layer     :    from which layer K
         to_layer     :   decode to which layer K-1
         device    :    CPU or GPU
         OUTPUT:
         -----------------------
         loss   :   reconstruction loss
         new index    :   new indexes after hungarian matching
        '''
        local_index_loss = 0
        sampled_embeddings_list, mark_len_list = self.sample_neighbors(neighbor_indexes, neighbor_dict, to_layer)
        for i, neighbor_embeddings1 in enumerate(sampled_embeddings_list):
            # Generating h^k_v, reparameterization trick
            index = neighbor_indexes[i]
            mask_len1 = mark_len_list[i]
            mean = from_layer[index].repeat(self.sample_size, 1)
            mean = self.mlp_mean(mean)
            sigma = from_layer[index].repeat(self.sample_size, 1)
            sigma = self.mlp_sigma(sigma)
            std_z = self.m.sample().to(device)
            var = mean + sigma.exp() * std_z
            nhij = FNN_generator(var, device)
            generated_neighbors = nhij
            # Calculate 2-Wasserstein distance
            sum_neighbor_norm = 0
            for indexi, generated_neighbor in enumerate(generated_neighbors):
                sum_neighbor_norm += torch.norm(generated_neighbor) / math.sqrt(self.out_dim)
            generated_neighbors = torch.unsqueeze(generated_neighbors, dim=0).to(device)
            target_neighbors = torch.unsqueeze(torch.FloatTensor(neighbor_embeddings1), dim=0).to(device)
            hun_loss, new_index = hungarian_loss(generated_neighbors, target_neighbors, mask_len1, self.pool)
            local_index_loss += hun_loss
            return local_index_loss, new_index
```

Could you give some explanation of these two problems?

Thanks a lot~

mtang724 commented 1 year ago

Hi iDestro,

Thank you so much for your good questions! Yes, you are right: the outer for loop only ran once because I accidentally indented the return statement while cleaning the code for the GitHub release. Thank you for pointing that out; I will update the code shortly. The original code (used for the paper) is intended to loop through the whole sampled_embeddings_list. I also did a quick test with the return un-indented on Texas, Wisconsin, and Chameleon (the smallest datasets), and the results match the state-of-the-art results we report, as expected, since the un-indented code is our original experimental code.
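The effect of that indentation can be illustrated with a minimal, hypothetical sketch (toy loss values, not the repo's real tensors): with the return inside the loop, only the first sampled neighbor set ever contributes to the accumulated loss.

```python
# Toy illustration of the indented-return bug: the loop body runs once.

def sum_losses_buggy(losses):
    total = 0
    for loss in losses:
        total += loss
        return total  # returns after the first iteration

def sum_losses_fixed(losses):
    total = 0
    for loss in losses:
        total += loss
    return total  # returns after the whole loop has been accumulated

losses = [0.5, 1.5, 2.0]
print(sum_losses_buggy(losses))  # 0.5 -- only the first item contributes
print(sum_losses_fixed(losses))  # 4.0 -- every item contributes
```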

The inner loop is for the Appendix D experiment, an additional experiment that tests the approximation power of our reconstruction method; you can find the details on pages 15-16. We deleted the code that records the approximation power, which is why the loop now looks like it comes from nowhere. Thanks again for pointing out this part. I am considering deleting the inner loop.
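Concretely, the inner loop now computes a statistic that nothing downstream reads, so removing it is behavior-preserving. A hedged sketch (plain-Python stand-ins, not the repo's torch code; the final sum is a placeholder for hungarian_loss):

```python
import math

def loss_with_dead_code(generated_neighbors, out_dim):
    # Dead code: the scaled-norm statistic is computed but never used.
    sum_neighbor_norm = 0
    for neighbor in generated_neighbors:
        sum_neighbor_norm += math.sqrt(sum(x * x for x in neighbor)) / math.sqrt(out_dim)
    return sum(sum(n) for n in generated_neighbors)  # placeholder loss

def loss_without_dead_code(generated_neighbors, out_dim):
    return sum(sum(n) for n in generated_neighbors)  # identical placeholder loss

neighbors = [[3.0, 4.0], [1.0, 0.0]]
assert loss_with_dead_code(neighbors, 2) == loss_without_dead_code(neighbors, 2)
```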

Please let me know if I addressed your question, and feel free to ask if you have further questions. Thank you again!

Best

iDestro commented 1 year ago

Okay, that addresses my question.
