simondemeule closed this issue 3 years ago
From the discussion, I think the ideas we have for the moment are:
- Using different priors
- Studying how the data is organized between the prior and the manifold in the output space
- Using it for data augmentation
- Studying the loss as a function of the divergence and analyzing the results
- Using reinforcement learning
Here are links to the main paper and code.
Another research idea I just had that could be easily implemented and very interesting, related to trying different priors: try matching target distributions that are far from the prior, and see how adding capacity to the neural network allows it to better match the distribution.
Just noticed there are two divergences missing from the repo (Jeffreys and Neyman). It could be worth taking a look to see whether we could implement them ourselves.
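As a starting point, here is a minimal sketch of the two missing divergences for discrete distributions, using their standard definitions (Jeffreys as the symmetrized KL, Neyman as the reversed chi-squared). This is just a sanity-check implementation, not the variational/f-GAN formulation the repo would need; the function names and the `eps` smoothing are our own choices.

```python
import numpy as np

def jeffreys_divergence(p, q, eps=1e-12):
    """Jeffreys divergence: D_KL(p || q) + D_KL(q || p),
    which simplifies to sum_i (p_i - q_i) * log(p_i / q_i)."""
    p = np.asarray(p, dtype=float) + eps  # eps avoids log(0) / division by zero
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum((p - q) * np.log(p / q)))

def neyman_chi2_divergence(p, q, eps=1e-12):
    """Neyman chi-squared divergence: sum_i (p_i - q_i)^2 / p_i
    (Pearson's chi-squared with the roles of p and q reversed)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float)
    return float(np.sum((p - q) ** 2 / p))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(jeffreys_divergence(p, q))
print(neyman_chi2_divergence(p, q))
```

Both quantities are zero when the distributions coincide, and Jeffreys is symmetric in its arguments, which is easy to verify numerically before porting either one into the repo's divergence interface.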
This is a general thread for some ideas we could investigate for our report.