ngushchin / EntropicOTBenchmark

Entropic Optimal Transport Benchmark (NeurIPS 2023).
https://arxiv.org/abs/2306.10161
MIT License

Where are the baseline methods implemented? #2

Open JTT94 opened 1 year ago

JTT94 commented 1 year ago

Hello,

Thank you for putting this together. I am also looking into benchmarking and reproducibility. I can see the ENOT [1] implementation, but where are the implementations of the other baselines (SB-FBSDE, etc.)?

[1] Nikita Gushchin, Alexander Kolesov, Alexander Korotin, Dmitry Vetrov, and Evgeny Burnaev. Entropic neural optimal transport via diffusion processes. arXiv preprint arXiv:2211.01156, 2022.

Best, James

ngushchin commented 11 months ago

Hello,

We used the official implementation from https://github.com/ghliu/SB-FBSDE. We only slightly modified the functions `build_prior_sampler` and `build_data_sampler` in their `data.py`, adding our benchmark samplers as described in our README. The hyperparameters we used are listed in Appendix D of our preprint: https://arxiv.org/pdf/2306.10161.pdf.
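Roughly, the modification has this shape (a minimal sketch, not our exact code: the `eot_benchmark` problem name, the wrapper class, and the stub sampler below are placeholders, and the real samplers come from our benchmark package as described in the README; only `build_prior_sampler` / `build_data_sampler` in `data.py` are the actual touch points):

```python
# Illustrative sketch only: the problem name, wrapper class, and stub
# sampler are placeholders for the benchmark API; the actual change is
# confined to build_prior_sampler / build_data_sampler in data.py.
import torch


class BenchmarkSamplerWrapper:
    """Adapts a benchmark sampler to the `.sample()` interface that
    SB-FBSDE's training loop draws batches from."""

    def __init__(self, sampler, batch_size, device="cpu"):
        self.sampler = sampler        # benchmark distribution sampler
        self.batch_size = batch_size
        self.device = device

    def sample(self):
        # Draw one batch of points from the benchmark distribution.
        x = self.sampler.sample(self.batch_size)
        return torch.as_tensor(x, dtype=torch.float32, device=self.device)


class GaussianStub:
    """Stands in for a real benchmark sampler in this sketch."""

    def __init__(self, dim):
        self.dim = dim

    def sample(self, n):
        return torch.randn(n, self.dim)


def build_prior_sampler(opt, batch_size):
    # New branch routing the benchmark problem to our samplers; the
    # original SB-FBSDE cases (Gaussian prior, etc.) stay unchanged.
    if opt.problem_name == "eot_benchmark":  # placeholder option name
        return BenchmarkSamplerWrapper(GaussianStub(opt.dim), batch_size)
    raise NotImplementedError(f"unknown problem: {opt.problem_name}")
```

`build_data_sampler` gets the analogous branch for the target distribution; everything else in the SB-FBSDE training code is untouched.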

Best, Nikita.

JTT94 commented 9 months ago

Thanks. Just to confirm: in your reported results, you did not use a GP for MLE-SB but the same network as for ENOT? Did you use the same networks for all approaches? Were there any other modifications to the other repos?

ngushchin commented 9 months ago

Hi, sorry for the delayed reply. Yes, for MLE-SB we used the same network as for ENOT because the results with the GP version were substantially worse. For all other methods, we used the networks and hyperparameters specified in the appendix; usually this was the network from the authors' code for the corresponding paper. We did not modify any other repos beyond adding code for our benchmark samplers.