DaehanKim / vgae_pytorch

This repository implements the Variational Graph Auto-Encoder (VGAE) by Thomas Kipf.
MIT License

Recreating the adjacency matrix using the VGAE concept #7

Closed akpas001 closed 1 year ago

akpas001 commented 2 years ago

Hi, I have been trying to recreate the adjacency matrix of my sparse graph using the same VGAE concept, but I am not able to recreate it. Do you think any preprocessing is necessary for such sparse graphs? Please let me know. I am attaching the data and code for your reference, along with the results I am able to reproduce with this code. Please feel free to go through the code and suggest any necessary changes. Thank you!
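(For reference, by "the same VGAE concept" I mean the usual inner-product decoder over the latent node embeddings, roughly as in this sketch; the function name is just illustrative and not taken from my attached code.)

```python
import torch

def decode_adjacency(z, threshold=0.5):
    """Standard VGAE inner-product decoder: edge probability for every
    node pair from the latent embeddings z of shape (num_nodes, latent_dim)."""
    adj_prob = torch.sigmoid(z @ z.t())       # P(A_ij = 1) = sigmoid(z_i . z_j)
    return (adj_prob > threshold).float()     # hard reconstruction of A
```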

P.S.: The graphs are unidirectional and do not have self-loops.

Attachments: [states.zip](https://github.com/DaehanKim/vgae_pytorch/files/8883726/states.zip), adjacency_pred_vgae.txt

(Attached result screenshots: image (1) – image (4))

DaehanKim commented 2 years ago

What is the purpose of reconstructing your adjacency matrix? Since your graph is sparse, there is not much training signal for many nodes, which results in inaccurate edge reconstruction. You may need some auxiliary approaches to model your dataset.
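One thing worth checking (a sketch, assuming you use the standard binary cross-entropy reconstruction loss): with a very sparse adjacency you usually need to re-weight positive edges, otherwise the decoder can reach a low loss by predicting almost no edges at all. Roughly:

```python
import torch
import torch.nn.functional as F

def weighted_recon_loss(adj_logits, adj_label):
    """Reconstruction loss with positive-edge re-weighting for sparse graphs.
    adj_logits: (N, N) raw decoder scores; adj_label: (N, N) 0/1 target."""
    n = adj_label.shape[0]
    num_pos = adj_label.sum()
    num_neg = n * n - num_pos
    pos_weight = num_neg / num_pos            # up-weight the rare positive edges
    norm = (n * n) / (2.0 * num_neg)          # keep the loss scale comparable
    return norm * F.binary_cross_entropy_with_logits(
        adj_logits, adj_label, pos_weight=pos_weight)
```

The `pos_weight` and `norm` values are computed from the label adjacency itself, so nothing dataset-specific is hard-coded.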

akpas001 commented 2 years ago

I am trying to create a custom policy for my reinforcement learning agent to train with, and I am generating this data from my reinforcement learning environment. What kind of auxiliary approaches should I be using? Can you throw some light on them?

DaehanKim commented 2 years ago

Why don't you use the true adjacency matrix as a reward signal, instead of reconstructing it? I don't have much to say about auxiliary approaches since I have no idea about your task. Can you elaborate more on that?

akpas001 commented 2 years ago

So, what exactly I am doing is training the encoder to be a feature extractor using the VGAE concept, and I plan to use the trained encoder as the feature extractor in my custom policy network, which a reinforcement learning agent will then train. For that to work, my variational auto-encoder needs to reconstruct the graph properly, which is not happening in my case.
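To make the plan concrete, here is a rough sketch of what I have in mind (all names below are placeholders, not from my actual code): freeze the trained encoder and use its node embeddings as the state representation for a small policy head.

```python
import torch
import torch.nn as nn

class GraphFeatureExtractor(nn.Module):
    """Hypothetical wrapper: a frozen, pretrained VGAE encoder used as a
    state featurizer for the policy network (names are placeholders)."""
    def __init__(self, trained_encoder, emb_dim, action_dim):
        super().__init__()
        self.encoder = trained_encoder
        for p in self.encoder.parameters():   # freeze the pretrained encoder
            p.requires_grad = False
        self.policy_head = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))

    def forward(self, features, adj_norm):
        # assume the encoder returns node embeddings (mean of the latent Gaussian);
        # pool them into a single graph-level state vector for the policy head
        z = self.encoder(features, adj_norm)
        state = z.mean(dim=0)
        return self.policy_head(state)
```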

The environment returns the reward based on the predicted next_state, so I cannot feed the next_state itself as a reward signal. Earlier I was feeding the state and action to the network to predict both the next_state and the reward, but I ran into the same issue there: I could not reproduce a next_state similar to what the environment's step function returns for a given action. So I am trying this as an alternative approach.