Open tianshangaga opened 5 years ago
Hi,
In general, this code builds a static symbolic computation graph first and then runs the forward/backward passes over it. This is similar to TensorFlow's logic.
Regarding the two functions you are asking about: SetupGraphInput sets up the input, similar to populating 'feed_dict' in TensorFlow, while BuildNet builds that static symbolic computation graph itself.
Let me know if you have further questions.
Hi, thanks for sharing the code! Do you have any documents or tutorials that explain how to use the factor_graph.h library to build a custom factor graph network?
Thanks
Hi,
There is a (possibly outdated) document: https://www.cc.gatech.edu/~hdai8/graphnn/html/annotated.html
Basically, you add new operators to the computation graph through the 'af' function: https://github.com/Hanjun-Dai/graphnn/blob/bdf51e66231d51bc2b9a560b2be255bc642d4a03/include/nn/factor_graph.h#L334
But since I'm not maintaining the code base further, I would suggest using PyTorch or TensorFlow for developing new models.
Thanks for your answer!
Sorry, I have another question. In your paper, you explain that we can use two separate nets: the first for embedding, which iterates (for example) 4 times, and the second for computing the Q value function. But in this code I can only see one net. Did you merge them? How?
Thanks.
It is trained jointly. For example, in MVC, everything up to this line is for embedding the graph, and after that we feed the embedding into another MLP to calculate the Q function.
I find it hard to understand how the neural network code works, e.g., the functions 'QNet::SetupGraphInput()' and 'QNet::BuildNet()'. Could you give more detailed instructions about the code? Thanks a lot!