I would find it useful to measure the Pearson correlation between a GFlowNet's log reward and a proxy model's log prediction, just as was done in Learning GFlowNets From Partial Episodes For Improved Convergence And Stability (https://arxiv.org/abs/2209.12782) and in Generative Flow Networks as Entropy-Regularized RL (https://arxiv.org/abs/2310.12934), which followed the former's setup. Both works provide GitHub repositories (https://github.com/GFNOrg/gflownet/tree/subtb and https://github.com/d-tiapkin/gflownet-rl/tree/main); see the _computecorrelation function in gflownet.py.
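For reference, the metric itself is small: once you have the model's terminal-state log-probabilities and the corresponding log rewards, it is just their Pearson correlation. A minimal sketch (the function name and input format here are my own, not taken from either repository):

```python
import numpy as np

def pearson_log_correlation(log_probs, log_rewards):
    """Pearson correlation between a GFlowNet's log-probabilities of
    terminal states and the corresponding log rewards.

    log_probs:   sequence of log P(x) under the GFlowNet (over all paths)
    log_rewards: sequence of log R(x) for the same terminal states x
    """
    lp = np.asarray(log_probs, dtype=float)
    lr = np.asarray(log_rewards, dtype=float)
    lp = lp - lp.mean()
    lr = lr - lr.mean()
    return float((lp * lr).sum() / np.sqrt((lp ** 2).sum() * (lr ** 2).sum()))
```

The hard part is producing log_probs, since P(x) must be summed over all generation paths leading to x; that is what the rest of this post is about.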
However, right now I do not see a simple way to reproduce this metric, due to significant differences between the original implementation (https://github.com/GFNOrg/gflownet/tree/master) and the current one. The first problem is searching all paths to a molecule, which is implemented in the _get_mol_pathgraph function; I do not see an analogue in this repository (though perhaps it could be copy-pasted from the previous one). The second problem is the different input/output formats of the neural networks.
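In case it helps discussion: once the path graph for a molecule is available (which is what _get_mol_pathgraph provides in the original repo), summing the forward policy's probability over all paths reduces to a log-space dynamic program over the state DAG, so only the graph construction itself really needs porting. A sketch under that assumption (all names and the edge format are hypothetical, not from either repository):

```python
import math
from collections import defaultdict

def _logaddexp(a, b):
    # numerically stable log(exp(a) + exp(b))
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def log_prob_of_state(edges, source, target):
    """log P(target) = log sum over all source->target paths of
    prod P_F(s' | s), computed by dynamic programming in log space.

    edges: dict mapping state -> list of (next_state, log_pf) pairs,
           describing a DAG of partial constructions.
    """
    # Topological order via DFS, so each state is finalized before its children.
    order, seen = [], set()
    def dfs(s):
        if s in seen:
            return
        seen.add(s)
        for t, _ in edges.get(s, []):
            dfs(t)
        order.append(s)
    dfs(source)
    order.reverse()

    # Accumulate log-probability mass flowing into each state.
    log_p = defaultdict(lambda: -math.inf)
    log_p[source] = 0.0
    for s in order:
        for t, lpf in edges.get(s, []):
            log_p[t] = _logaddexp(log_p[t], log_p[s] + lpf)
    return log_p[target]
```

For example, a diamond-shaped graph where the source splits probability 0.5/0.5 over two intermediate states that both lead deterministically to the target recovers log P(target) = 0.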
I would be grateful for any ideas!